In this briefing:
- RSNA launches a new AI journal.
- ACR makes several moves to advance the use of artificial intelligence in the future of medical imaging.
- Researchers publish, in a high-impact journal, the successful use of AI to assess breast density.
- ACR and MICCAI sign agreement to advance AI in medical imaging.
- Geisinger declares successful implementation of algorithm to improve time-to-diagnosis of intracranial hemorrhages 20-fold.
Stay up to speed in 2 minutes. Radiology AI Briefing is a semi-regular series of blog posts featuring hand-picked news stories and summaries on machine learning and data science.
Big Data has become a radiology buzzword (machine learning, AI, and disruptive innovation are also up there).
However, there is a real problem with using the term Big Data – it isn’t just one set of data problems. Big Data is a conglomerate of different data challenges: the volume, heterogeneity, and velocity of data are all important dimensions. Machine learning and the Internet of Things are other layers superimposed on the big data problem.
Sometimes it is helpful to step back and approach data problems with a common framework, a way to think about how and which facets of data science fit in a real-life workflow in the face of an actual problem.
Below is a 6-element framework that helps me think about data-driven informatics problems. The elements are generally in chronological order, but they are not “steps,” because you will frequently find yourself going back and redefining many things. The framework does, however, help you maintain a big-picture outlook. It is also the reason any sufficiently complex data problem requires a team approach. Continue reading
Dr. Keith Dreyer opened the Intersociety Summer Conference (ISC) with a keynote describing data science and giving an overview of how machine learning has evolved over time.
He explains that machines and humans inherently see things differently. Humans are excellent at object classification, recognizing faces, understanding language, driving, and imaging diagnostics. Continue reading
The use of the phrase “Artificial Intelligence” has exploded within the past few years as the theme of dozens of our most popular movies, television shows, magazines, books, and social media. This is despite the difficulty that many experts … Continue reading
Arthur C. Clarke and Stanley Kubrick predicted supercomputers more intelligent than humans. In 2001: A Space Odyssey, HAL states, with typical human immodesty, “The 9000 series is the most reliable computer ever made… We are all, by any practical definition of the words, foolproof and incapable of error.” Forty years later, IBM’s Watson pummeled humans in Jeopardy – a distinctly human game. Continue reading
“Mathematical reasoning may be regarded rather schematically as the exercise of a combination of two facilities, which we may call intuition and ingenuity.” – Alan Turing
Sherlock Holmes is a fictional expert in what he calls the “exact science of detection” (A Study in Scarlet). Although his genius in deductive reasoning and intuition is unparalleled, much of the detective’s success relies upon the calm and composed guidance of his trusty sidekick, Dr. Watson. In most of the canonical stories, Watson acts as the sanity check for Holmes’ storm of ideas and, of course, the meticulous chronicler of their adventures together.
After defeating its human opponents on Jeopardy, IBM’s supercomputer Watson will attempt to learn medicine. Despite its terabytes of storage and raw processing horsepower, Watson’s ability to make medical decisions remains unclear. Can IBM’s Watson truly understand the complex human body and make medical decisions, or will it – like Dr. Watson attempting deduction – prove to be a helpful sounding board that falls short of achieving true intuition?