This TV commercial caught my attention a few days ago.
In the TV spot, the narrator describes the sheer number of images a radiologist must sift through to find an abnormality. To help, IBM will teach its Watson “to see.”
Therein lies the problem: Seeing is not enough.
Two years ago, I compared IBM Watson to Sir Arthur Conan Doyle’s Dr. Watson. Although astute and logical, with an excellent fund of knowledge, Dr. Watson remains dependent on Sherlock to make the final leap to a sound deduction.
Mastering detection and interpretation would make Watson a valuable tool for the radiologist. However, radiologists create value in at least three ways, all of which deserve attention if we are to tackle the challenges in radiology.
We start at the end of the imaging workflow, with the highest-profile part of a radiologist’s job description. A trained radiologist’s eyes are necessary to interpret the hundreds of images in a CT or MRI in search of a single abnormality.
However, even within this single category, decision support is not one task. There are at least three distinct decision points that a radiologist (and any supporting software) must address:
- Detection – Is this a real finding?
- Diagnosis – What are the leading differential diagnoses given the constellation of all findings? Is it “almost certainly,” “most likely,” “probable,” or “possible?”
- Recommendation – How should a clinician manage this abnormality? Does it make a difference if this patient would be receiving follow-up scans for a different reason? Does it make a difference if this patient has known metastatic cancer?
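These three decision points could be modeled, very roughly, as the structured output of a decision-support tool. The sketch below is a hypothetical illustration in Python; the class and field names are my own invention, not any real Watson API, and the pulmonary-nodule example is made up for demonstration:

```python
from dataclasses import dataclass

# Certainty labels mirroring the hedging language used in radiology reports.
CERTAINTY_LEVELS = ("almost certainly", "most likely", "probable", "possible")

@dataclass
class Finding:
    """One candidate abnormality flagged on an image."""
    description: str
    is_real: bool  # Detection: is this a real finding, or noise/artifact?

@dataclass
class Interpretation:
    """A radiologist's (or tool's) answer to the three decision points."""
    findings: list         # Detection: the confirmed findings
    differential: list     # Diagnosis: (diagnosis, certainty) pairs, ranked
    recommendation: str    # Recommendation: suggested management

    def __post_init__(self):
        # Enforce the controlled vocabulary for certainty.
        for _, certainty in self.differential:
            if certainty not in CERTAINTY_LEVELS:
                raise ValueError(f"unknown certainty label: {certainty}")

# Hypothetical example: a solitary pulmonary nodule.
report = Interpretation(
    findings=[Finding("8 mm right upper lobe nodule", is_real=True)],
    differential=[("granuloma", "most likely"),
                  ("primary lung cancer", "possible")],
    recommendation="Follow-up CT in 6-12 months",
)
```

The point of the structure is that each decision point is a separate field: software that only fills in `findings` has mastered detection, but has not touched diagnosis or recommendation.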
However, a radiologist becomes involved long before any image has been acquired. Once an imaging order brings a radiologist into a patient’s care, the question becomes: what is the best way to image this indication?
Even as a radiology resident I learned a hard lesson: imaging protocols may appear universally applicable, but the quality of your images is a function of the scanner hardware, your technologist, and your patient. Variation in any of these deserves second and third thoughts about the protocol.
Opportunities exist for decision-support tools to take the helm of protocol modification using clinical information. They could further personalize an examination by reducing the contrast dose, optimizing kVp/mAs, or identifying the appropriate fields of view.
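A minimal sketch of what such rule-based protocol personalization might look like, assuming a baseline CT protocol expressed as a handful of parameters. The specific thresholds below (eGFR cutoffs, weight cutoff) are illustrative placeholders, not clinical guidance:

```python
def adjust_ct_protocol(protocol, patient):
    """Tweak a baseline CT protocol using clinical information.

    Illustrative rules only; thresholds are hypothetical.
    """
    p = dict(protocol)
    # Renal function: reduce or withhold iodinated contrast at low eGFR.
    if patient["egfr"] < 30:
        p["contrast_ml"] = 0
    elif patient["egfr"] < 45:
        p["contrast_ml"] = round(p["contrast_ml"] * 0.75)
    # Habitus: smaller patients can be scanned at lower tube voltage (kVp),
    # reducing radiation dose.
    if patient["weight_kg"] < 60:
        p["kvp"] = min(p["kvp"], 100)
    return p

baseline = {"contrast_ml": 100, "kvp": 120, "mas": 200}
patient = {"egfr": 40, "weight_kg": 55}
print(adjust_ct_protocol(baseline, patient))
# {'contrast_ml': 75, 'kvp': 100, 'mas': 200}
```

Even this toy version shows the appeal: the rules are explicit, auditable, and applied consistently, whereas in practice such adjustments depend on whoever happens to be protocoling that day.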
Finally, before an examination is even obtained, an ordering clinician must determine whether to image in the first place. If the patient is acutely ill, then immediate management supersedes noninvasive diagnostics. If the patient is combating end-stage heart failure, then a 5 mm renal solid lesion may not be worth imaging at all.
In an ideal world, imaging decisions would be made collaboratively by ordering clinicians and radiologists. However, hospitals are busy, and this process only occasionally happens.
A proper decision-support tool would help guide clinicians to make better decisions even before a patient arrives at a radiology practice.
One cannot reduce medicine to mere laboratory values, imaging reports, and biopsy data. Medical decision-making is complex, as few real-life patients match the populations in clinical trials. The answer to a question like “can I start warfarin on Mr. Smith with atrial fibrillation and a CHADS2 score of 4?” seems straightforward, but nowhere does the algorithm consider Mr. Smith’s history of frequent falls or his questionable compliance with prescriptions that have narrow therapeutic windows.
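The CHADS2 score itself illustrates how mechanical the “algorithmic” part is. The scoring rules below are the standard published ones (CHF, Hypertension, Age ≥ 75, and Diabetes each score 1 point; prior Stroke/TIA scores 2); the particular Mr. Smith is a hypothetical patient:

```python
def chads2(chf, hypertension, age, diabetes, prior_stroke_or_tia):
    """Compute the CHADS2 stroke-risk score for atrial fibrillation."""
    score = 0
    score += 1 if chf else 0                  # C: congestive heart failure
    score += 1 if hypertension else 0         # H: hypertension
    score += 1 if age >= 75 else 0            # A: age 75 or older
    score += 1 if diabetes else 0             # D: diabetes mellitus
    score += 2 if prior_stroke_or_tia else 0  # S2: prior stroke or TIA
    return score

# A hypothetical Mr. Smith: CHF, hypertension, and a prior TIA at age 70.
print(chads2(chf=True, hypertension=True, age=70,
             diabetes=False, prior_stroke_or_tia=True))  # 4
```

The function returns 4 and the guideline says anticoagulate, yet nothing in those five inputs captures fall risk or medication compliance. That gap between the score and the patient is exactly the point.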
In medical imaging, the complexity is no less.
Machine learning tools are therefore ~10 years away from consistently making the best decision in any clinical scenario. However, I predict the next advancements in decision support will be in something more useful and achievable: (1) preventing particularly bad imaging decisions, and (2) aiding fast imaging decisions (e.g., in the emergency department) by less trained, non-MD professionals.