AI regulation in healthcare is coming. This article from FierceHealthcare summarizes the growing use of artificial intelligence (AI) in healthcare and the increasing scrutiny from Senate lawmakers over AI regulation and healthcare payment. It highlights concerns about bias in AI systems and legislative efforts to ensure these technologies benefit patient care without discrimination. The discussion also covers lawsuits against major Medicare Advantage insurers for allegedly using AI to deny care, as well as guidance from the Centers for Medicare & Medicaid Services (CMS) on the use of AI in healthcare decisions. The article also emphasizes the need for transparency, accountability, and meaningful human review in healthcare AI applications, alongside calls for federal support to help organizations navigate AI's integration into healthcare practice responsibly.
I urge you to read the full article, which includes links to the first-hand sources, and offer the short summary below for the busy professional.
1-Minute Summary
- Federal lawmakers are actively discussing the impact of artificial intelligence (AI) in healthcare, emphasizing the need to protect patients from bias inherent in some big data systems without stifling innovation. These biases can discriminate against patients based on race, gender, sexual orientation, and disability.
- To ensure AI delivers beneficial outcomes while safeguarding patient rights, the Algorithmic Accountability Act was introduced. The act requires healthcare systems to regularly verify that AI tools are being used as intended and are not perpetuating harmful biases, especially in federal programs like Medicare and Medicaid.
- Major Medicare Advantage insurers, including Humana and UnitedHealthcare, are under legal scrutiny for allegedly using AI algorithms to deny care, highlighting the challenges of implementing AI in patient care decisions without exacerbating discrimination or introducing new biases.
- The Centers for Medicare & Medicaid Services (CMS) issued guidelines prohibiting the use of AI or algorithms for making coverage decisions or denying care based solely on algorithmic predictions, emphasizing the necessity for decisions to be based on individual patient circumstances and reviewed by medical professionals.
- Testimony during the legislative hearings called for additional clarity on the proper use of AI in healthcare, suggesting the establishment of AI assurance labs to develop standards and advocating for federal support to help healthcare organizations navigate AI tools through investments in technical assistance, infrastructure, and training.
American College of Radiology
The emphasis on AI transparency as an answer to bias is not new. The American College of Radiology (ACR) recently kicked off its Transparent-AI initiative to advocate for openness and trust in AI. The program invites all manufacturers with FDA-cleared AI tools to participate. By offering detailed insights into an algorithm’s training, performance, and intended use, Transparent-AI not only boosts product credibility but also aids in integrating these innovations into diverse healthcare environments. Behind the scenes, the ACR has also advocated for transparency in AI with various federal agencies and lawmakers.
Disclosure: I am not involved with Transparent-AI but do sit on the ACR Commission on Informatics and chair the ACR Informatics Advisory Council and annual ACR DSI Summit. Register for the 2024 DSI Summit here!