Monthly Archives: February 2024

The FDA has an Idea, or 10, on Good Machine Learning Practice

Good machine learning practice goes beyond the chip

The U.S. FDA, Health Canada, and the UK’s MHRA have unveiled 10 guiding principles for Good Machine Learning Practice (GMLP) in developing AI/ML medical devices. These principles aim to ensure safety, efficacy, and quality in healthcare innovation. Key focuses include leveraging multi-disciplinary expertise, implementing good software and security practices, ensuring representative clinical study participants and data sets, maintaining independence between training and test data sets, and emphasizing the performance of the human-AI team. These guidelines also highlight the importance of clear user information, robust testing, and ongoing monitoring of deployed models to manage re-training risks and maintain performance.
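One of those principles, keeping training and test data sets independent, often trips up medical AI teams in practice: splitting at the image level can leak a patient's data into both sets. The sketch below illustrates one common way to avoid that, a patient-level split. The function and field names are illustrative assumptions, not from the GMLP document itself.

```python
import random

def patient_level_split(records, test_fraction=0.2, seed=42):
    """Split records by patient ID so no patient appears in both sets.

    Splitting at the record (image) level would let scans from the
    same patient land in both train and test, inflating performance
    estimates -- the kind of leakage the GMLP principle warns against.
    """
    patient_ids = sorted({r["patient_id"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(patient_ids)
    n_test = max(1, int(len(patient_ids) * test_fraction))
    test_ids = set(patient_ids[:n_test])
    train = [r for r in records if r["patient_id"] not in test_ids]
    test = [r for r in records if r["patient_id"] in test_ids]
    return train, test

# Hypothetical data set: 10 patients with 3 scans each.
records = [
    {"patient_id": f"P{i:03d}", "image": f"scan_{i}_{j}.dcm"}
    for i in range(10)
    for j in range(3)
]
train, test = patient_level_split(records)
# No patient ID appears in both sets.
assert not ({r["patient_id"] for r in train} & {r["patient_id"] for r in test})
```

Libraries such as scikit-learn offer equivalent group-aware splitters, but the core idea is the same: the unit of independence is the patient, not the image.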

Read the full GMLP draft on the FDA website.

1-Minute Summary

Here are the ten principles of GMLP.

Continue reading

Artificial Intelligence: Are you a Centaur or a Cyborg?

In the rapidly evolving field of radiology, artificial intelligence (AI) is not just a tool but a collaborator, reshaping the dynamics of diagnosis and patient care. To a first approximation, the answer seemed clear: knowledge workers using AI outperform those who don't.

But the literature offers little detail on what happens after you embrace AI. As with every tool that has ever existed, it really matters how you use it. As it turns out, it also matters how AI becomes part of your work.

To better understand this partnership, a group of Harvard Business School scholars published a study, conducted in spring 2023, on business consultants who did, and who did not, opt to adopt GPT-4 in their daily work. There are several interesting conclusions – one of them delves into the analogy of centaurs versus cyborgs, concepts borrowed from mythology and science fiction that provide a vivid framework for the interaction between human intelligence and AI in radiology.

Continue reading

AI Regulation in Healthcare: CMS and Congressional Scrutiny

AI regulation in healthcare is coming. This article from FierceHealthcare summarizes the growing use of artificial intelligence (AI) in healthcare innovations and the increasing scrutiny from Senate lawmakers regarding AI regulation and healthcare payment. It highlights the concerns of bias in AI systems and the legislative scrutiny to ensure these technologies benefit patient care without discrimination. The discussion also covers lawsuits against major Medicare Advantage insurers for allegedly using AI to deny care, and the Centers for Medicare & Medicaid Services’ (CMS) guidance on AI use in healthcare decisions. Additionally, the need for transparency, accountability, and meaningful human review in AI applications in healthcare is emphasized, alongside calls for federal support to navigate AI’s integration into healthcare practices responsibly.

I urge you to read the full article, which includes links to the first-hand sources, and to supplement it with the short summary below for the busy professional.

1-Minute Summary

  • Federal lawmakers are actively discussing the impact of artificial intelligence (AI) in healthcare, emphasizing the need to protect patients from bias inherent in some big data systems without stifling innovation. These biases can discriminate against patients based on race, gender, sexual orientation, and disability.
  • To ensure the beneficial outcomes of AI while safeguarding patient rights, the Algorithmic Accountability Act was introduced. This act mandates healthcare systems to regularly verify that AI tools are being used as intended and are not perpetuating harmful biases, especially in federal programs like Medicare and Medicaid.
  • Major Medicare Advantage insurers, including Humana and UnitedHealthcare, are under legal scrutiny for allegedly using AI algorithms to deny care, highlighting the challenges of implementing AI in patient care decisions without exacerbating discrimination or introducing new biases.
  • The Centers for Medicare & Medicaid Services (CMS) issued guidelines prohibiting the use of AI or algorithms for making coverage decisions or denying care based solely on algorithmic predictions, emphasizing the necessity for decisions to be based on individual patient circumstances and reviewed by medical professionals.
  • Testimonies during the legislative hearings called for additional clarity on the proper use of AI in healthcare, suggesting the establishment of AI assurance labs for developing standards, and advocating for federal support to help healthcare organizations navigate the use of AI tools through investments in technical assistance, infrastructure, and training.

American College of Radiology

The emphasis on AI transparency as an answer to bias is not new. The American College of Radiology (ACR) recently kicked off its Transparent-AI initiative to advocate for openness and trust in AI. The program invites all manufacturers with FDA-cleared AI tools to participate. By offering detailed insights into an algorithm’s training, performance, and intended use, Transparent-AI not only boosts product credibility but also aids in integrating these innovations into diverse healthcare environments. Behind the scenes, the ACR has also advocated for transparency in AI with various federal agencies and lawmakers.

Disclosure: I am not involved with Transparent-AI but do sit on the ACR Commission on Informatics and chair the ACR Informatics Advisory Council and annual ACR DSI Summit. Register for the 2024 DSI Summit here!

AI+Human Better than Human in Neurodegenerative Imaging

Recent research underscores a leap in neuroimaging accuracy for Alzheimer's disease diagnosis, with AI-assisted radiologists outperforming either AI or human readers alone. This collaborative approach marries the meticulous precision of AI with the nuanced understanding of human experts, potentially setting a new standard in the field. Specifically, the human-AI team demonstrated superior performance in detecting amyloid-related imaging abnormalities (ARIA), a finding crucial to amyloid-β–directed antibody therapy. This synergy enhances diagnostic precision and underscores the potential of AI-enhanced radiological diagnostics to significantly improve patient care.

How will this synergy between AI and human intelligence redefine the future of medical diagnostics? Can this model be the blueprint for addressing other complex diseases? This breakthrough prompts us to envision a healthcare landscape where technology and human expertise converge to offer unparalleled patient care.

The detailed study can be found in JAMA Network Open.