Artificial intelligence is the hottest topic in medical informatics. The promise of intelligent automation in medicine's future is equal parts optimism, hype, and fear.
In this post, Mike Hearn grapples with the paradox between supposedly objective, data-driven approaches to AI and the intensely opinionated, deeply political world from which AI draws its data.
The post focuses on broader applications, but a similar problem exists in medicine. If AI is expected to extract insight from original research articles, statistical analyses, and systematic reviews, its "insights" will be marred by the human biases embedded in that literature.
The difference, of course, is that AI may bury such biases inside a machine learning black box. We have a growing body of research on latent human biases, but machine biases are much harder to discover, particularly when they reflect the inherent biases of the data from which the machine draws its conclusions. Our own biases.
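To make that concrete, here is a minimal, purely illustrative sketch (not any real clinical system; the data, variable names, and the 0.8 "under-diagnosis" penalty are all invented for demonstration). A classifier is trained on synthetic historical labels in which one demographic group was diagnosed less often at the same clinical severity. The trained model quietly reproduces that disparity, and nothing in its output flags it:

```python
# Hypothetical illustration: a model trained on biased historical labels
# reproduces the bias, with no warning that it has done so.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic patients: one clinical feature, one demographic flag.
severity = rng.normal(size=n)          # the true clinical signal
group = rng.integers(0, 2, size=n)     # 0 or 1; clinically irrelevant

# Assumed historical bias: at identical severity, group 1 was
# under-diagnosed in the training data (the -0.8 term is invented).
p_diagnosis = 1 / (1 + np.exp(-(severity - 0.8 * group)))
label = rng.random(n) < p_diagnosis

model = LogisticRegression().fit(np.column_stack([severity, group]), label)

# Two "patients" with identical severity, differing only in group:
same_patient = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(same_patient)[:, 1])
# e.g. roughly [0.73, 0.55] -- the historical disparity survives training,
# hidden in a single learned weight rather than in any visible rule.
```

Note that simply deleting the `group` column does not fix this: if other features correlate with group membership, the model can learn the same disparity through those proxies. That is what makes machine bias harder to discover than the human kind.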
AI acts as a mirror. Sometimes we don’t like the face staring back at us.