The most dangerous AI – Mike’s blog

Artificial intelligence is the hottest topic in medical informatics. The promise of intelligent automation in medicine's future is equal parts optimism, hype, and fear.

In this post, Mike Hearn grapples with a paradox: supposedly objective, data-driven approaches to AI draw their training data from an intensely opinionated, ultra-political world.

The post focuses on broader applications, but a similar problem exists in medicine. If AI is expected to extract insight from the text of original research articles, statistical analyses, and systematic reviews, its "insights" will carry the human biases embedded in that literature.

The difference, of course, is that AI may bury such biases inside a machine learning black box. We have a growing body of research on latent human biases, but machine biases are much harder to discover, particularly when they reflect the biases inherent in the data from which the model draws its conclusions. Our own biases.
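To make that concrete, here is a minimal sketch of the mechanism (my own illustration, not from Mike's post, using entirely synthetic data and hypothetical variable names): a model trained on historically biased labels reproduces the bias even when the sensitive attribute itself is withheld, because a correlated proxy feature smuggles it in.

```python
# Illustrative sketch: biased historical labels propagate into a model
# trained without the sensitive attribute. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute (two patient groups), withheld from the model.
group = rng.binomial(1, 0.5, n)

# Clinically relevant signal, identically distributed in both groups.
severity = rng.normal(0, 1, n)

# A proxy feature correlated with group (e.g., referring site).
proxy = group + rng.normal(0, 0.5, n)

# Historical labels: same severity, but group 1 was systematically
# under-diagnosed -- the human bias baked into the training data.
logit = severity - 1.5 * group
label = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Train WITHOUT the group column; only severity and the proxy are used.
X = np.column_stack([severity, proxy])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# Equal underlying severity, unequal predictions: the bias survives.
print("predicted positive rate, group 0:", pred[group == 0].mean())
print("predicted positive rate, group 1:", pred[group == 1].mean())
```

Note what an audit would require here: you can only measure the disparity if you know the sensitive attribute and go looking for it. The fitted model reports coefficients on "severity" and "proxy," and nothing in those numbers announces that one of them is standing in for a human prejudice.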

AI acts as a mirror. Sometimes we don’t like the face staring back at us.

Source: The most dangerous AI – Mike’s blog

Howard Chen
Vice Chair for Artificial Intelligence at Cleveland Clinic Diagnostics Institute
Howard is passionate about making diagnostic tests more accurate, expedient, and affordable through disciplined implementation of advanced technology. He previously served as Chief Informatics Officer for Imaging, where he led teams deploying and unifying radiology applications and AI in a multi-state, multi-hospital environment. Blog opinions are his own and in no way reflect those of the employer.