Medicine Has a Problem With Information. Here's Why That's Bad for Patients.
Some good reasons, some bad…
N.b. I am a second-year medical student. My views are currently high on theory and low on clinical reality, and I expect them to evolve as I enter the hospital. However, medicine has a deep trust problem that worries me more than being wrong. When physicians reflexively dismiss technologies patients find valuable, or when the gap between what’s technically possible and what’s clinically “allowed” becomes too wide, patients disengage from the system entirely. I see this on a daily basis in the startup ecosystem, and it only takes one unicorn to hit mass-market appeal with anti-medicine messaging. Transparency is the only way to extend a hand to these patients who feel alienated by the establishment. We cannot regain their trust if we hide how we think. I am starting that work now.
This essay is part of that project: examining my reasoning publicly, inviting critique, building a track record of honest thinking. If I’m wrong about data abundance, I want to be proven wrong with evidence, not dismissed with appeals to authority. And if I’m right, I want the argument to be strong enough that it moves the field forward.
The trust problem starts with meeting patients where they are—at home, online, thinking through these questions themselves. This is how we begin.
We had a class one day about medical tourism. The professor opened with a simple exercise: raise your hand if you’d get a whole-body MRI if someone offered you one for free.
About half the class raised their hands.
What followed was a carefully constructed ninety minutes on the dangers of overdiagnosis—the incidental findings that lead to unnecessary biopsies, the anxiety of ambiguous results, the inequities of boutique medicine. As one practical example, they showed us [company redacted]’s website and had us critique how it is allegedly marketed to worried wealthy people. (Side note: as someone who worked in marketing, I can say that naive consumers have no sense of marketing funnels and do not understand how bottom-of-funnel messaging differs from top-of-funnel messaging. For anyone in marketing, it is no surprise at all that a company’s website focuses on mid-to-bottom-of-funnel messaging. Additionally, for anyone who has ever built a business, one of the first lessons you learn is to start at the top of the market: variables are minimized, you don’t need to worry about scale, and you generate enough cash flow to improve the core product before moving downmarket with economies of scale. This is Business 101.)
At the end, the professors asked us to vote again on the same question. I think they expected fewer hands—that we’d been educated out of our naive enthusiasm for more information.
The class fell into silence, broken only by a handful of awkward, quiet, honest laughs as we all looked around to see more hands up now than at the beginning.
The lead professor’s face was a mix of shock and disappointment. I was trying not to grin. The professor sees the anxiety whole body MRIs cause. I like that they represent a patient taking agency over their health. Both are true.
At the beginning of class, I had already reached out to one of the executives from the referenced company—a medical doctor himself. (As a side note, if running JANUS taught me anything, it’s that you can just message decision-makers directly and cut through the marketing noise. This is probably the most valuable skill I learned, and I am eternally grateful to that doctor, and so many others, who graciously shared their time and reinforced that this is a positive action.) He responded immediately. He told me he’d had similar experiences in his M1 year decades ago—except back then it was about lung cancer screening CTs, and everyone was still using paper charts. The technology was new, the establishment was skeptical, and medical students were being taught to distrust it.
This was clearly a discussion worth having. But the decades of elapsed time suggest it needs to be had differently, and assessed more earnestly.
What follows is an attempt to begin that earnest, open discussion publicly.
There’s a peculiar sensibility in medicine; as soon as a friend and I picked up on it, we began calling it an “anti-abundance mentality.”
It’s a reflex that greets every new data source like continuous monitors, whole-body imaging, or patient portals with suspicion rather than curiosity. The default assumption is that more information is dangerous until proven otherwise.
This strikes tech people, like Roon, as backwards. In tech, data abundance is obviously good. More signals mean better models, faster iteration, clearer patterns. The marginal cost of storage approaches zero, so why wouldn’t you capture everything?
But healthcare has internalized scarcity as virtue. To be fair, that is for reasons that aren’t entirely stupid. Yet some still are.
Why Doctors Learned to Say No
Medical training is an extended exercise in restraint. You learn early that every intervention carries risk—radiation exposure, false positives that trigger cascading procedures, the anxiety of knowing something ambiguous about your body. Resources are genuinely finite in ways tech resources aren’t: operating room time, specialist availability, someone’s actual kidney.
More importantly, you’re taught that ordering lots of tests is the mark of a bad doctor. It suggests you don’t know what you’re looking for. The impressive attending is the one who needs fewer data points to reach the right diagnosis. Clinical acumen is demonstrated through parsimony.
This makes sense in context. Shotgunning labs because you’re intellectually lazy is bad medicine. The doctor who orders a comprehensive metabolic panel, complete blood count, lipid panel, thyroid function, vitamin levels, and tumor markers for a patient with a headache isn’t being thorough—they’re avoiding the harder work of clinical reasoning.
So restraint becomes sophistication. Scarcity becomes a North Star.
When the Constraint Disappears
The problem is that this mentality persists even after the original constraint vanishes.
Continuous glucose monitors were initially dismissed as “too much data”—why would a diabetic need to know their glucose every five minutes? The quarterly A1C was considered sufficient. Then studies showed that CGMs dramatically improved outcomes. Patients could see patterns in real-time, adjust behavior, prevent dangerous swings. Suddenly “too much data” became standard of care.
Genetic sequencing costs have collapsed by orders of magnitude, yet it remains locked behind specialist gatekeeping. The technology says abundance; the reimbursement model and professional norms say scarcity.
Whole-body MRI screening gets taught as dangerous excess—“you’ll find things that don’t matter and end up doing unnecessary biopsies.” Maybe. Or maybe early detection of treatable cancers saves lives and the incidental findings are manageable with better protocols. We won’t know without trying, but the default posture is resistance.
Most tellingly: patients having access to their own medical data is still treated as potentially dangerous rather than obviously empowering. The paternalism is barely concealed—you might not understand it, you might worry unnecessarily, you might make bad decisions. Better to keep the information with the professionals.
Why It Persists
Training trauma. Medical education rewards doing less and punishes shotgunning tests. This becomes identity, not just technique. After years of being graded on restraint, abundance feels professionally threatening.
Legitimate bad experiences. Every doctor has seen the patient whose incidental finding led to biopsy, led to complication, led to worse outcome than if they’d never known. These stories have weight. The precautionary principle gets overapplied: since abundance can cause harm, we should default to scarcity.
Payment models. Even as technology gets cheaper, reimbursement still treats testing as expensive. The system architecture assumes scarcity when the underlying economics have shifted. Insurance will pay for a comprehensive metabolic panel but not for continuous metabolic monitoring, even though the latter might prevent the expensive emergency that quarterly testing misses.
Status games. Clinical judgment is demonstrated through needing less data to diagnose. If abundance-tools make diagnosis easier, they threaten this status marker. The doctor who needs an AI reading of continuous vitals to catch early sepsis seems less impressive than the one who spots it from experience and intuition. This isn’t always the case, but there is some degree of selective pressure applied towards assessing pre/post-test probabilities.
The deeper issue is that scarcity mentality is load-bearing for quality in the current system. It prevents genuine waste and harm. You can’t just remove it without replacing its function.
The Conceptual Fix
What needs to happen is a clean separation between two different things that are currently conflated:
Abundance of data is often good. Information is cheap to gather, store, and analyze. Continuous monitoring, comprehensive panels, patient-generated health data—these should default to “yes” when the technology allows and the patient wants it.
Abundance of action is often bad. Interventions carry real risk and cost. You shouldn’t do procedures that won’t change management. Selective decision-making is correct. The same holds in business, as any business owner understands: you have a million “brilliant ideas,” but each change comes with immense startup costs and training friction that scale across the entire team and operation. So if you do change anything, you want to be sure its gains will outweigh the temporary loss across your whole team.
The error is applying intervention-logic to information-gathering. They’re different categories requiring different heuristics.
This requires retraining an entire profession to think about data differently than they currently do. Information and intervention are treated as the same type of thing—both require justification, both should be minimized unless proven necessary. That made sense when tests were expensive and invasive. It doesn’t make sense when a sleep mask can log biometrics passively and cheaply.
The Processing Problem
The real reason doctors resist more data isn’t philosophical—it’s practical. They don’t have good tools to process it.
A continuous glucose monitor generates thousands of data points. If the doctor has to eyeball raw traces during a fifteen-minute appointment, abundance is genuinely burdensome. But if the system says “A1C equivalent is 7.2%, nocturnal hypoglycemia pattern on Tuesday nights, likely related to Monday dinner timing,” then abundance becomes useful.
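A minimal sketch of what such an intelligence layer might look like. The GMI formula (an estimated A1C equivalent computed from mean glucose) is the published Bergenstal et al. approximation; the data format, threshold, and function names here are illustrative assumptions, not any vendor’s actual API:

```python
# Sketch of an interpretation layer that condenses raw CGM readings into
# a clinician-ready summary. The GMI formula is the published Bergenstal
# et al. mean-glucose-to-A1C approximation; the tuple format and the
# hypoglycemia threshold are illustrative assumptions.

from statistics import mean

def summarize_cgm(readings, hypo_threshold=70):
    """readings: list of (weekday, hour, glucose_mg_dl) tuples."""
    glucoses = [g for _, _, g in readings]
    # Glucose Management Indicator: estimated A1C equivalent, in percent
    gmi = 3.31 + 0.02392 * mean(glucoses)
    # Flag nights (midnight to 6 a.m.) with glucose below threshold
    hypo_nights = sorted({day for day, hour, g in readings
                          if hour < 6 and g < hypo_threshold})
    return {"gmi_percent": round(gmi, 1), "nocturnal_hypo_days": hypo_nights}

# Toy week of readings: stable days, one Tuesday-night low
readings = [("Mon", 12, 140), ("Tue", 2, 62), ("Tue", 14, 155),
            ("Wed", 3, 110), ("Thu", 12, 150), ("Fri", 2, 118)]
print(summarize_cgm(readings))
# {'gmi_percent': 6.2, 'nocturnal_hypo_days': ['Tue']}
```

The point is not this particular summary but the shape of it: the clinician sees two actionable facts instead of thousands of raw points.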
Healthcare needs intelligence layers between data collection and clinical decision-making. Right now we have abundance at the collection layer (CGMs, wearables, genomics) and extreme scarcity at the interpretation layer (one doctor’s attention during a brief appointment). The gap is obvious.
This is solvable with better software, better models, better interfaces. It’s not a fundamental constraint—it’s an infrastructure problem.
The Risk Reframe
The current calculation is: “What’s the harm of this extra test?” Radiation, false positive cascade, patient anxiety, cost.
The reframe needs to be: “What’s the harm of not having this information when we need it?”
The patient whose cancer would have been caught on a whole-body MRI but wasn’t. The diabetic whose dangerous patterns weren’t visible in quarterly A1C checks. The cardiac event that continuous monitoring would have predicted but standard vitals missed.
CGMs won this argument through evidence, not rhetoric. Studies showed better outcomes. The abundance skeptics were proven wrong by data about data. Early cancer detection and longitudinal health tracking will need to win the same way—demonstrate that they change outcomes, not just generate information.
Follow the Incentives
The current reimbursement model financially rewards doing less. Value-based care was supposed to fix this but mostly just created administrative overhead.
What you need are payment models that reward better outcomes with whatever resources required—abundant testing or minimal testing, but optimizing for the right thing. This probably means working within capitated models or direct-to-consumer where the incentive structure is already different.
This is the unsexy answer: reimbursement reform. But it matters. The culture follows the incentives more than the arguments.
The Gradual Path
This doesn’t get “fixed” system-wide through a single policy change or persuasive essay. What happens instead is gradual erosion through specific victories:
CGMs for diabetes (won)
Liquid biopsies for cancer screening (winning)
Continuous vital monitoring for early deterioration (fighting)
Consumer genomics for disease risk (contested)
Whatever gets built next for sleep, metabolics, longevity
Each success creates precedent. Eventually the mentality shifts, but it’s a decades-long accumulation of proof points.
The culture has internalized scarcity as virtue, and virtues don’t change through argument. They change when the world makes the old virtue obsolete and the new virtue necessary.
For now, the leverage is in picking one specific instance of abundance—one category of information that’s cheap to gather and valuable to have—and proving it matters. Execute well enough that it becomes undeniable. The broader culture will follow, slowly, after enough specific battles are won.
The scarcity trap in medicine is real, but it’s not permanent. It’s just slower to escape than the technological advancement curve suggests it should be.
If medicine is art as much as it is science, then we must recognize that part of the art is in the language we use, and much of the language we use to speak about data and information is reductionist and built on bad-faith assumptions.
My proposition here, of separating abundance of data from abundance of action, is an attempt to give hospital systems a logical framework for becoming pro-data without becoming overly reactive and wasteful. I hope it is taken in earnest and provokes consideration.
Medicine’s default should shift toward data abundance when we can demonstrate that the information improves outcomes and when we have infrastructure to process it meaningfully, but each modality needs to prove itself rather than inheriting blanket skepticism.
Post-mortem of my argument
Rebuttals You Might Make Against My Argument
I underestimate legitimate medical concerns. The “incidentaloma” problem with whole-body MRI screening isn’t just theoretical hand-wringing. Studies have shown that aggressive screening in asymptomatic populations can lead to net harm through the cascade of interventions on clinically insignificant findings. This isn’t just doctors being backwards—it’s evidence-based concern about iatrogenic harm.
The CGM analogy doesn’t fully transfer. CGMs work because diabetes has clear, actionable thresholds and frequent monitoring demonstrably improves outcomes. Whole-body MRI screening doesn’t have the same evidence base yet. The question isn’t “is data good?” but “does this specific data source improve outcomes net of harms?” That’s an empirical question for each modality.
I am missing the false positive problem. When you screen low-prevalence conditions in asymptomatic populations, most positive findings are false positives. This is basic epidemiology. A test with 95% specificity sounds great until you apply it to a million healthy people and generate 50,000 false alarms. The math here is real, not just medical conservatism.
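That arithmetic is worth making explicit. A minimal sketch with hypothetical numbers (the prevalence, sensitivity, and specificity are illustrative, not drawn from any real screening program):

```python
# Positive predictive value for a screening test applied to an
# asymptomatic population. All parameters are illustrative.

def screening_outcomes(population, prevalence, sensitivity, specificity):
    """Return (true_positives, false_positives, ppv) for one screening round."""
    sick = population * prevalence
    healthy = population - sick
    true_positives = sick * sensitivity
    false_positives = healthy * (1 - specificity)
    ppv = true_positives / (true_positives + false_positives)
    return true_positives, false_positives, ppv

# 1,000,000 asymptomatic people, 0.5% prevalence,
# 90% sensitivity, 95% specificity
tp, fp, ppv = screening_outcomes(1_000_000, 0.005, 0.90, 0.95)
print(f"true positives:  {tp:,.0f}")   # 4,500
print(f"false positives: {fp:,.0f}")   # 49,750
print(f"PPV: {ppv:.1%}")               # 8.3%
```

At 0.5% prevalence, even this fairly specific test produces roughly eleven false alarms for every true positive, which is the ratio any screening proposal has to reckon with.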
My Response to Each Rebuttal
On “Legitimate Medical Concerns” and Incidentalomas
The incidentaloma critique assumes the current management protocols for ambiguous findings are fixed and optimal. They’re not. The cascade problem is real, but it’s a protocol design failure, not an inherent property of information abundance.
When CGMs first emerged, physicians could have made identical arguments: “We’ll find glucose fluctuations that don’t matter and patients will panic and over-treat.” But we didn’t conclude “therefore don’t use CGMs.” We developed better interpretation guidelines, better patient education, better decision thresholds. We solved the protocol problem.
The medical establishment treats incidentalomas as an argument against gathering data rather than an argument for developing better response protocols. That’s backwards. If whole-body MRI screening creates management challenges, the answer is better management protocols, not information avoidance.
Moreover, the harm calculus is suspiciously one-sided. Every analysis of incidentaloma risk carefully quantifies the harms of false positives and cascading interventions. But the harms of missed diagnoses due to information scarcity are systematically underweighted because they’re invisible—we never know about the cancer that would have been caught early, the aneurysm that would have been monitored, the condition that would have been preventable.
Absence of evidence gets treated as evidence of absence. “We don’t have RCTs proving whole-body MRI screening improves outcomes” becomes “therefore we should assume it’s harmful” rather than “therefore we should study it properly.” That asymmetry reveals the bias.
On the CGM Analogy “Not Transferring”
This critique proves my point rather than refuting it. Yes, CGMs had to demonstrate outcome improvements through studies. And they did. But what matters is that the medical establishment resisted even doing those studies because of prior assumptions about “too much data.”
The pathway was: technology becomes feasible → medical culture resists → contrarians push for trials → evidence emerges → grudging acceptance. The resistance happened before the evidence phase, not because of evidence. That’s the cultural problem I’m identifying.
Saying “whole-body MRI doesn’t have the same evidence base yet” is technically true but misleading. The question is whether the default posture should be “this is probably harmful until proven otherwise” or “this is worth studying rigorously.” Medicine currently defaults to the former, and that default itself prevents the evidence generation.
Furthermore, the claim that “diabetes has clear actionable thresholds” wasn’t obvious before CGMs. CGMs revealed patterns and thresholds we didn’t know to look for. That’s exactly what exploratory data collection does—it shows you what’s actionable. You can’t know what information will be useful until you have it.
On the False Positive Problem
Yes, I understand basic epidemiology. The false positive problem is real. But here’s what this critique misses:
We already accept false positive burdens all the time when we think the benefit justifies it. Mammography in average-risk women generates plenty of false positives. We do it anyway because the lives saved outweigh the harms. PSA screening generates massive false positive rates—we debated whether it was worth it, not whether false positives exist.
The question isn’t “do false positives exist” but “does the benefit/harm ratio favor testing?” And crucially: we can’t know that ratio without collecting the data to find out.
Moreover, the false positive argument assumes we’re stuck with current sensitivity/specificity profiles and current management protocols. We’re not. Better imaging, better biomarkers, better AI interpretation, better clinical guidelines for managing ambiguous findings—all of these can shift the ratio.
Saying “false positives make screening harmful” treats the entire system as static. That’s exactly the scarcity mentality I’m critiquing: assuming we’re stuck with current constraints rather than engineering around them.
Finally, the false positive critique is selectively applied. Nobody says “we shouldn’t collect vitals every 4 hours in hospitals because most abnormal vitals are false alarms.” We accept the false positives because we think catching the true positives matters. The question is why that same logic doesn’t apply to other abundant data sources.
On “Not Addressing When Data Improves Outcomes”
I actually did address this—perhaps not explicitly enough. The answer is: we find out by collecting the data and studying it rigorously, not by refusing to collect it based on prior assumptions.
The current system says: “Prove this new abundant data source improves outcomes before we adopt it, but we’ll resist the studies that would generate that proof, and we’ll interpret ambiguous evidence in the most conservative possible way.”
That’s a Catch-22 designed to preserve the status quo.
The pathway should be:
Technology makes new data collection feasible
Observational studies explore whether the data reveals useful patterns
RCTs test whether acting on those patterns improves outcomes
If yes, adopt; if no, abandon
But medicine currently short-circuits this at step 2: “We don’t have RCTs proving this works, so we shouldn’t even do the exploratory work to see if there are useful patterns.”
The CGM story proves this. The randomized evidence came after years of observational use by contrarian endocrinologists who thought the data might be useful. If the medical establishment had successfully prevented that exploratory phase, we’d never have gotten the RCT evidence.
On “First Do No Harm” vs. “Try Things and See What Works”
This is framed as if they’re opposed. They’re not. “First do no harm” applies to interventions, not information gathering.
Getting a whole-body MRI is not an intervention. Looking at your own glucose data is not an intervention. Having continuous vitals monitoring is not an intervention. These are information-gathering activities with minimal direct harm.
The harm comes from subsequent actions taken in response to information. That’s precisely why my data/action distinction matters. You can gather abundant data and maintain appropriate restraint in how you respond to it.
“First do no harm” has been weaponized into “first gather no information that might lead someone to take action.” That’s not the same thing. That’s paralysis disguised as prudence.
The Core Issue
The core issue is that the medical establishment has developed sophisticated arguments for why information abundance is dangerous, and those arguments sound like evidence-based caution but function as motivated reasoning to preserve existing practice patterns.
Every new data source faces the same gauntlet: it won’t work, we don’t need it, false positives, incidentalomas, anxiety, we don’t have RCTs, it’s not cost-effective. Then the contrarians prove it works anyway, and the establishment grudgingly accepts it while claiming they were appropriately cautious.
At some point you have to ask: if this pattern repeats with CGMs, pulse oximetry, cardiac telemetry, liquid biopsies, and every other monitoring innovation, maybe the problem isn’t that each individual technology needs to overcome legitimate skepticism. Maybe the problem is that the default posture of skepticism toward information abundance is systematically wrong.
I’m not arguing for reckless adoption of every monitoring technology. I’m arguing that the burden of proof should be symmetrical: show me evidence that abundant data causes net harm, and show me evidence that it causes net benefit. Right now the burden is asymmetric—abundant data must prove itself beneficial while scarcity is assumed safe. That asymmetry is the cultural problem.
References
Continuous Glucose Monitoring (CGM)
Initial Resistance Phase: When the first CGM devices reached the market in 1999-2000, enthusiasm waned even within the scientific community due to unexpected sensor output drift and limited clinical utility over the FDA-approved three-day implantation period (PubMed Central). The FDA explicitly stated that the first CGM was only supplemental to standard home glucose-monitoring devices and should be used occasionally, not for everyday use (ScienceDirect). Early sensors’ utility was limited due to significant drift in sensitivity, and as a result, there was less enthusiasm concerning CGM in the early days of the technology (AJMC).
The Resistance Arguments: The objections followed the predictable pattern. Physicians argued patients didn’t need continuous data when quarterly A1C tests were sufficient. Barriers to use included lack of FDA approval for insulin dosing, cost and variable reimbursement, need for recalibrations, and lack of training for physicians regarding interpretation of CGM results (PubMed Central). Physicians were cited as major barriers to implementation, facing demands on time that were impossible to meet during brief clinical visits, lack of reasonable reimbursement, potential medical-legal liability, and uncertainties associated with the new intervention (PubMed Central).
Contrarian Success: Despite institutional resistance, endocrinologists who believed in the technology pushed forward with clinical use and studies. By 2016—seventeen years after the first device—the accuracy of CGM sensors was so good that the FDA approved continuous glucose readings to replace fingerstick blood sugar testing altogether (HealthCentral). It is now accepted that CGM increases quality of life by allowing informed diabetes management decisions as a result of more optimized glucose control, leading to better health and a reduction in diabetic complications (PubMed Central).
Pulse Oximetry
Initial Resistance Phase: In Japan, the first commercial pulse oximeter was considered a useful research device but not a clinically viable option; only 200 devices were sold (PubMed Central). It will likely surprise younger physicians to know that the principle behind the modern pulse oximeter was demonstrated in 1972, yet the device did not become commercially available until the 1980s. Even in the late 1990s there was still debate regarding the utility of routine pulse oximetry for ED patients (ACP Online).
The Resistance Arguments: Early studies foreshadowed widespread use but illustrated that the problem was one of practicality—the early commercial device, although functional, was difficult to use in clinical practice (CHEST). It took nearly 40 years from early proof-of-concept to widespread adoption.
Establishment Acceptance: It wasn’t until 1988 that Dr. Thomas Neff suggested we should consider oxygen saturation by pulse oximetry as a “fifth vital sign,” a concept that definitely took hold (ACP Online). By the late 1980s, pulse oximetry was considered a standard of care for monitoring patients during anesthesia and joined the ECG as a routine monitor for all critically-ill patients (CHEST).
Cardiac Telemetry
The Overuse Problem (Resistance in Reverse): Cardiac telemetry presents an interesting inversion—the technology was adopted, but the medical establishment has spent decades trying to restrict its use, demonstrating the same pattern of institutional inertia. Studies indicate telemetry monitoring is often overused in intermediate level of care settings, with analysis showing 33% of telemetry days did not meet appropriate indications (PubMed Central).
Cultural Resistance to Evidence-Based Guidelines: AHA telemetry monitoring practice standards had not been fully embedded or adhered to within the system, leading to wide variation in what was considered appropriate telemetry monitoring use. System norms mentioned among interviewees may inhibit appropriate use—for example, the norm that everyone on the cardiac floor receives telemetry monitoring (PubMed Central). Physicians expressed that a concern for clinical deterioration, rather than explicit concern for development of arrhythmia, drove most telemetry use—notably because telemetry does not replace more frequent vital sign checks and may lead to a false sense of security (PubMed Central).
Liquid Biopsy (Currently Fighting the Battle)
Current Resistance Phase: Liquid biopsy perfectly demonstrates this pattern in real time. Adoption of liquid biopsy to exclude patients from targeted therapy has seen much slower clinical adoption, mostly due to concerns about false negatives. At this time, negative liquid biopsy sequencing should be followed up by tissue biopsy sequencing (College of American Pathologists). Integrating liquid biopsies into current clinical workflows requires overcoming logistical challenges, including the need for healthcare professional training on interpretation and limitations. The lack of standardization in liquid biopsy protocols presents a significant barrier to widespread adoption (PubMed Central).
The Familiar Objections: False positives may lead to unnecessary treatments, exposing patients to potential side effects and causing psychological distress. False negatives can result in delayed diagnosis and treatment. Regulatory and ethical considerations play a crucial role, with the evolving regulatory landscape needing to address implications of early detection such as managing incidental findings and the risk of overdiagnosis (PubMed Central).
Evidence of Contrarian Success: Despite resistance, landmark trials like PADA-1 demonstrated improved progression-free survival using liquid biopsy to guide therapy, and the plasmaMATCH study confirmed 96-99% concordance between liquid biopsy and tissue sequencing, supporting broader adoption in clinical practice (Nature). Liquid biopsy-based tests are gaining popularity for early cancer diagnosis, with multi-cancer early detection tests like Galleri, CancerSEEK, and OneTest showing promise for detecting multiple cancers at early, treatable stages (PubMed Central).
The Pattern Is Clear
In each case, the sequence is identical:
Technology becomes feasible (CGM 1999, pulse oximetry 1970s, liquid biopsy 2000s)
Initial enthusiasm from inventors/researchers
Medical establishment resistance citing practical concerns, lack of evidence, false positive risks, cost, and workflow disruption
Contrarians persist with clinical use and studies despite institutional barriers
Evidence accumulates over 10-20 years proving clinical benefit
Grudging acceptance as technology becomes standard of care
Retrospective claims that the delay was appropriate caution rather than institutional inertia
The time lag is remarkably consistent: approximately 15-25 years from proof-of-concept to mainstream acceptance, not because the technology needed that long to mature, but because institutional medicine needed that long to overcome its default skepticism toward information abundance.

"Status games. Clinical judgment is demonstrated through needing less data to diagnose. If abundance-tools make diagnosis easier, they threaten this status marker. The doctor who needs an AI reading of continuous vitals to catch early sepsis seems less impressive than the one who spots it from experience and intuition. This isn’t always the case, but there is some degree of selective pressure applied towards assessing pre/post-test probabilities."
Status games drive a surprising amount of physician behavior. I have always felt this is because merit is difficult to assign in medicine (who is the better clinician?). In the absence of the clearer signals of merit available in other fields (the better software engineer is the one whose code works, is maintainable, and is performant; the better investor is the one with the greater return; the better athlete runs faster), medicine often falls back on pedigree and reputation.