"Status games. Clinical judgment is demonstrated through needing less data to diagnose. If abundance-tools make diagnosis easier, they threaten this status marker. The doctor who needs an AI reading of continuous vitals to catch early sepsis seems less impressive than the one who spots it from experience and intuition. This isn’t always the case, but there is some degree of selective pressure applied towards assessing pre/post-test probabilities."
Status games drive a surprising amount of physician behavior. I have always felt that this is because merit is difficult to assign in medicine (who is the better clinician?). In the absence of the clearer signals of merit available in other fields (the better software engineer is the one whose code works, is maintainable, and is performant; the better investor is the one with the greater return; the better athlete runs faster), medicine often falls back on pedigree and reputation.
I've thought a lot about this too. Surgery is interesting in that the metric for skill and success is fairly clear: the input is probably simply the number of procedures done, and the output is probably individual relative complication rates normalized to the specific procedure.
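To make "complication rates normalized to the specific procedure" concrete, one common form of that normalization is an observed-over-expected (O/E) ratio: compare a surgeon's raw complication count to what the baseline rates for their particular case mix would predict. A minimal sketch, with entirely hypothetical procedure names and baseline rates:

```python
# Procedure-normalized complication rates as an observed/expected (O/E) ratio.
# Baseline per-procedure complication rates are hypothetical illustration values.
BASELINE = {"appendectomy": 0.02, "cabg": 0.10}

def oe_ratio(cases):
    """cases: list of (procedure, had_complication) tuples for one surgeon.

    Returns observed complications divided by the number expected given
    the baseline rate of each procedure performed. Below 1.0 = better
    than expected; above 1.0 = worse.
    """
    observed = sum(1 for _, comp in cases if comp)
    expected = sum(BASELINE[proc] for proc, _ in cases)
    return observed / expected if expected else float("nan")

# A surgeon doing mostly high-risk procedures can have the same raw
# complication count as a peer doing low-risk ones, yet a much better ratio.
surgeon_a = [("cabg", True)] + [("cabg", False)] * 19           # 1 of 20, expected 2.0
surgeon_b = [("appendectomy", True)] + [("appendectomy", False)] * 19  # 1 of 20, expected 0.4
print(oe_ratio(surgeon_a))  # 0.5 — better than expected
print(oe_ratio(surgeon_b))  # 2.5 — worse than expected
```

The point of the normalization is visible in the example: identical raw counts, opposite conclusions once case difficulty is accounted for.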
Medicine is a bit harder, and I think you're spot on about how this creates an opening for pedigree and reputation to fill the gap. Perhaps alternatives can exist, and some do (patient satisfaction reports), but these seem subject to many of the same issues as public reviews (self-selection effects and disproportionate representation).
I was quite happy to learn about organizations like KLAS Research. There's probably a need for a The Infatuation, Michelin, or Zagat for clinicians, as odd as that may sound at first.
A Michelin for clinicians is actually a great idea! People are always trying to find the "best doctor for XYZ," and mostly they ask for recommendations from people they know. Patient reviews on websites tend to reflect the clinician's bedside manner (which is useful, but arguably not the most important thing).
You could imagine an organization that performs deep diligence on clinicians: surveying patients, examining previous cases, and establishing baselines for patients' initial health states. It would require clinical expertise and be very costly, but it could create the best picture of "who is the best doctor for XYZ?"
The problem with scoring clinicians is threefold: 1) how do you account for a false negative in a unique n=1 situation? You can't adjust for everything in the situation. 2) There's a bar that is "good enough" in most of medicine. Much of medicine is also "vibes based" because there is only "good enough," and it depends on what you're optimizing for (cost, survival, satisfaction, etc.). 3) Patient scores are merely one aspect of the grade.
Excellent analysis! The 'meeting patients where they are' idea is truly profound.
Thank you, Roxy. It really is an essential aspect of winning trust.
I think anti-abundance is okay because screening still carries costs. It's not zero (even aside from whatever extra interventions we do).
I wonder how many would want an MRI if they paid for it?