The U.S. FDA, Health Canada, and the UK’s MHRA have unveiled 10 guiding principles for Good Machine Learning Practice (GMLP) in developing AI/ML medical devices. These principles aim to ensure safety, efficacy, and quality in healthcare innovation. Key focuses include leveraging multi-disciplinary expertise, implementing good software and security practices, ensuring representative clinical study participants and data sets, maintaining independence between training and test data sets, and emphasizing the performance of the human-AI team. These guidelines also highlight the importance of clear user information, robust testing, and ongoing monitoring of deployed models to manage re-training risks and maintain performance.
In the rapidly evolving field of radiology, artificial intelligence (AI) is not just a tool but a collaborator, reshaping the dynamics of diagnosis and patient care. To a first approximation, the answer seemed clear: knowledge workers using AI outperform those who don’t.
But the literature offers little detail on what happens after you embrace AI. As with every tool that has ever existed, it really matters how you use it. As it turns out, it also matters how AI becomes part of your work.
To better understand this partnership, a group of Harvard Business School scholars published a study on business consultants who had, and who had not, opted to adopt GPT-4 in their daily work in spring 2023. There are several interesting conclusions – one of them delves into the analogy of centaurs versus cyborgs, concepts borrowed from mythology and science fiction that provide a vivid framework for the interaction between human intelligence and AI in radiology.
AI regulation in healthcare is coming. This article from FierceHealthcare summarizes the growing use of artificial intelligence (AI) in healthcare innovations and the increasing scrutiny from Senate lawmakers regarding AI regulation and healthcare payment. It highlights the concerns of bias in AI systems and the legislative scrutiny to ensure these technologies benefit patient care without discrimination. The discussion also covers lawsuits against major Medicare Advantage insurers for allegedly using AI to deny care, and the Centers for Medicare & Medicaid Services’ (CMS) guidance on AI use in healthcare decisions. Additionally, the need for transparency, accountability, and meaningful human review in AI applications in healthcare is emphasized, alongside calls for federal support to navigate AI’s integration into healthcare practices responsibly.
I urge you to read the full article, which includes links to the first-hand sources, and I supplement it with a short summary below for the busy professional.
Federal lawmakers are actively discussing the impact of artificial intelligence (AI) in healthcare, emphasizing the need to protect patients from bias inherent in some big data systems without stifling innovation. These biases can discriminate against patients based on race, gender, sexual orientation, and disability.
To ensure the beneficial outcomes of AI while safeguarding patient rights, the Algorithmic Accountability Act was introduced. This act mandates healthcare systems to regularly verify that AI tools are being used as intended and are not perpetuating harmful biases, especially in federal programs like Medicare and Medicaid.
Major Medicare Advantage insurers, including Humana and UnitedHealthcare, are under legal scrutiny for allegedly using AI algorithms to deny care, highlighting the challenges of implementing AI in patient care decisions without exacerbating discrimination or introducing new biases.
The Centers for Medicare & Medicaid Services (CMS) issued guidelines prohibiting the use of AI or algorithms for making coverage decisions or denying care based solely on algorithmic predictions, emphasizing the necessity for decisions to be based on individual patient circumstances and reviewed by medical professionals.
Testimonies during the legislative hearings called for additional clarity on the proper use of AI in healthcare, suggesting the establishment of AI assurance labs for developing standards, and advocating for federal support to help healthcare organizations navigate the use of AI tools through investments in technical assistance, infrastructure, and training.
American College of Radiology
The emphasis on AI transparency as an answer to bias is not new. The American College of Radiology (ACR) recently kicked off its Transparent-AI initiative to advocate for openness and trust in AI. The program invites all manufacturers with FDA-cleared AI tools to participate. By offering detailed insights into an algorithm’s training, performance, and intended use, Transparent-AI not only boosts product credibility but also aids in integrating these innovations into diverse healthcare environments. Behind the scenes, the ACR has also advocated for transparency in AI with various federal agencies and lawmakers.
Recent research underscores a leap in neuroimaging accuracy for Alzheimer’s disease diagnosis, with AI-assisted radiologists outperforming either AI or humans alone. This collaborative approach marries the meticulous precision of AI with the nuanced understanding of human experts. Specifically, the study demonstrated superior performance in detecting amyloid-related imaging abnormalities (ARIA), crucial for amyloid-β–directed antibody therapy, potentially setting a new standard for their detection. This synergy enhances diagnostic precision and underscores the potential of AI-enhanced radiological diagnostics to significantly improve patient care.
How will this synergy between AI and human intelligence redefine the future of medical diagnostics? Can this model be the blueprint for addressing other complex diseases? This breakthrough prompts us to envision a healthcare landscape where technology and human expertise converge to offer unparalleled patient care.
As part of the effort to explore the NVIDIA Jetson Nano, and as part of its AI Specialist course (after finishing Fundamentals of AI at the NVIDIA Deep Learning Institute), I started building a JetBot.
JetBots are well documented and relatively easy to build provided you have the right parts. There is also a bill of materials to make purchasing simpler.
The chassis was 3D printed according to the full DIY instructions (did not use a kit).
The camera used in this picture actually has a lens from a Raspberry Pi infrared camera I bought years ago. It turns out I could remove that lens and fit it to another camera I bought for this project (IMX290-160FOV). A 70-degree FOV was really just not wide enough to see what is going on; the 160-degree FOV was perfect and seems to help the bot see around itself.
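The difference the wider lens makes can be estimated with a simple pinhole-camera calculation. This is a back-of-envelope sketch; the 0.3 m viewing distance is an assumption for illustration, not a measurement from the bot.

```python
import math

def ground_coverage(fov_deg: float, distance_m: float) -> float:
    """Horizontal width (in meters) visible at a given distance,
    assuming an idealized pinhole camera with the given field of view."""
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)

# Compare the two lenses at an assumed 0.3 m in front of the bot.
for fov in (70, 160):
    print(f"{fov:>3} deg FOV sees {ground_coverage(fov, 0.3):.2f} m wide at 0.3 m")
```

At 0.3 m, the 70-degree lens covers roughly 0.42 m of width, while the 160-degree lens covers over 3 m, which is why the bot can effectively “see around itself” with the wider lens.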
This post is part of a series on learning about Internet of Things. These posts are mainly a learning tool for me – taking notes, jotting down ideas, and tracking progress. This means they might be unrelated to radiology or healthcare. They also might contain works-in-progress or inaccuracies.
The ESP8266 is a Wi-Fi-enabled microcontroller. It is one of the most useful microcontrollers because of its Wi-Fi capability and very low cost, which makes the ESP8266 popular even in commercial products that need Wi-Fi connectivity.
For development purposes, there are also many variants of this chip. After some preliminary research, two breakout boards appear to be the most helpful.
NodeMCU is technically the name of the Lua-compatible firmware for the ESP8266, which later added support for the ESP32 (the more powerful, dual-core sibling of the ESP8266). NodeMCU was created in 2014, when user Hong committed the first file of nodemcu-firmware to GitHub. However, people sometimes use the term to refer to breakout boards that use the ESP8266 and follow this particular schema. These boards come with additional chips that enable USB-to-serial communication and other quality-of-life enhancements that make development easier. The breakout board is also compatible with solderless breadboards, making prototyping much easier.
“Gift funds used in support of the OEA project have a deficit balance of $11.59 million as of August 31, 2016, meaning that MD Anderson spent gift monies it has not yet received from donors.”
“Agreement with PricewaterhouseCoopers (PwC) for “Business Plan for a Flagship Informatics Tool” to lead an assessment of the “capabilities necessary to build the tool” and “incorporate the outcome of the assessment into a business plan that will guide the development” of the tool.”
“The first MD Anderson contract related to development of OEA using Watson technology was signed with IBM in June 2012… The original contract terms were for six months at a fixed fee of $2.4 million. That contract has been extended 12 times, with total fees of $39.2 million. The current extension expired on October 31, 2016.”
Interestingly, if you search in the document for “radiology” or “imaging”:
That said, it is a good cautionary tale for the radiologist-informaticist because the value proposition very closely mirrored what we are hearing in imaging today.
Just swap out the words “treatment,” “clinical-trial,” and “therapy” with “diagnosis” and “imaging”:
Artificial intelligence promises to uproot the practice of medical imaging – traditionally thought to be expensive and highly expert-driven. Radiology industry juggernauts like General Electric, Nuance, and Partners HealthCare are all teaming up with established AI players like Intel and NVIDIA. The innovations have advanced rapidly: recently, AI managed to achieve super-human accuracy in detecting pneumonia on radiography.
What can we learn from other industries that have seen the arrival of large, untamable, data- and AI-powered competitors?
Amazon entering an industry is typically regarded as an extinction-level event. Amazon started with the Internet, then had Big Data, and now has AI – they’ve bet first, bet big, and bet right on all of the major tech trends.
But this story isn’t about Amazon; it’s about everybody else.