As part of the effort to explore the NVIDIA Jetson Nano, and as part of its AI Specialist course (after finishing Fundamentals of AI at the NVIDIA Deep Learning Institute), I started building a JetBot.
JetBots are well documented and relatively easy to build, provided you have the right parts. There is also a bill of materials to make purchasing simpler.
The chassis was 3D printed according to the full DIY instructions (I did not use a kit).
The camera lens used in this picture is actually from a Raspberry Pi infrared camera I bought years ago. It turns out I could remove the lens and attach it to another camera I bought for this project (IMX290-160FOV). The 70-degree FOV of the original lens was really just not wide enough to see what is going on; the 160-degree FOV was perfect and seems to help the bot see around itself.
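The difference between the two lenses comes down to basic trigonometry: the horizontal span a camera can see at a given distance is 2·d·tan(FOV/2). Here is a quick sketch of that calculation; the 0.5 m distance is an illustrative figure, not a measurement from the build.

```python
import math

def horizontal_span(distance_m: float, fov_deg: float) -> float:
    """Width of scene visible at distance_m for a lens with the given horizontal FOV."""
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)

# At 0.5 m in front of the bot:
narrow = horizontal_span(0.5, 70)   # ≈ 0.70 m visible
wide = horizontal_span(0.5, 160)    # ≈ 5.67 m visible
```

At close range the 160-degree lens takes in roughly eight times the width of the 70-degree lens, which is why the wider lens makes such a difference for obstacle avoidance.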
This post is part of a series on learning about Internet of Things. These posts are mainly a learning tool for me – taking notes, jotting down ideas, and tracking progress. This means they might be unrelated to radiology or healthcare. They also might contain works-in-progress or inaccuracies.
The ESP8266 is a Wi-Fi-enabled microcontroller, and one of the most useful because of its Wi-Fi capability and very low cost. This makes the ESP8266 popular even in commercial products that need Wi-Fi connectivity.
For development purposes, there are also a lot of variants of this chip. After some preliminary research, there appear to be two breakout boards that are the most helpful.
NodeMCU is technically the name of the Lua-compatible firmware for the ESP8266, which later added support for the ESP32 (the more powerful, dual-core sibling of the ESP8266). NodeMCU was created in 2014, when the user Hong committed the first file of nodemcu-firmware to GitHub. However, people sometimes use the term to refer to breakout boards that use the ESP8266 and follow this particular schema. These boards come with additional chips that enable USB-to-serial conversion and other “quality of life” enhancements that make development easier. The breakout board is also compatible with solderless breadboards, which makes prototyping much simpler.
“Gift funds used in support of the OEA project have a deficit balance of $11.59 million as of August 31, 2016, meaning that MD Anderson spent gift monies it has not yet received from donors.”
“Agreement with PricewaterhouseCoopers (PwC) for “Business Plan for a Flagship Informatics Tool” to lead an assessment of the “capabilities necessary to build the tool” and “incorporate the outcome of the assessment into a business plan that will guide the development” of the tool.”
“The first MD Anderson contract related to development of OEA using Watson technology was signed with IBM in June 2012… The original contract terms were for six months at a fixed fee of $2.4 million. That contract has been extended 12 times, with total fees of $39.2 million. The current extension expired on October 31, 2016.”
Interestingly, if you search the document for “radiology” or “imaging”:
That said, it is a good cautionary tale for the radiologist-informaticist because the value proposition very closely mirrored what we are hearing in imaging today.
Just swap out the words “treatment,” “clinical-trial,” and “therapy” with “diagnosis” and “imaging”:
Artificial intelligence promises to uproot the practice of medical imaging, traditionally thought to be expensive and highly expert-driven. Radiology industry juggernauts like General Electric, Nuance, and Partners HealthCare are all teaming up with established AI players like Intel and NVIDIA. The innovations have advanced rapidly: recently, AI has managed to achieve super-human accuracy in the detection of pneumonia on radiography.
What can we learn from other industries that have seen the arrival of large, untamable, data- and AI-powered competitors?
Amazon entering an industry is typically regarded as an extinction-level event. Amazon started with the Internet, then had Big Data, and now has AI – they’ve bet first, bet big, and bet right on all of the major tech trends.
But this story isn’t about Amazon; it’s about everybody else.
Artificial intelligence is the hottest topic in medical informatics. The promise of intelligent automation in medicine’s future is equal parts optimism, hype, and fear.
In this post, Mike Hearn struggles to reconcile the paradox surrounding the supposedly objective, data-driven approaches to AI and the incredibly opinion-charged, ultra-political world from which AI draws its data.
The post focuses on broader applications, but in medicine, a similar problem exists. If AI is expected to extract insight from the text of original research articles, statistical analyses, and systematic reviews, its “insights” are marred by human biases.
The difference, of course, is that AI may bury such biases into a machine learning black box. We have an increasing body of research on latent human biases, but machine biases are much harder to discover, particularly when they reflect the inherent biases in the data from which the machine draws its conclusions: our own biases.
AI acts as a mirror. Sometimes we don’t like the face staring back at us.
This month’s Harvard Business Review has an article highlighting one of the most fascinating emerging trends in quality improvement: the idea that a single “root cause” exists may be a myth. As healthcare QI/QA moves toward eliminating errors and improving metric-based performance, the increasing obsession with solving quality problems is laudable but sometimes misguided.
This excellent HBR article focuses on reframing. In short, what you say after discovering a complex problem matters. Before saying, “Let’s start making a Pareto chart and collect some data!” try inserting a 30-second pause to ask, “Is this the right problem to be solving?”
Without spoiling the fun of reading the article, try thinking through this issue before reading – You have received multiple complaints about the speed of your building’s elevators. How would you address this problem?
In fact, the very idea that a single root problem exists may be misleading; problems are typically multicausal and can be addressed in many ways.
If you have been paying attention to data science in healthcare, you will have noticed the gradual shift from 2016’s Big Data to 2017’s Machine Learning. Specifically, deep learning techniques attract much of the attention. The FDA recently approved the use of deep learning techniques in cardiac diagnoses. Enlitic promises to automate the process of radiologic diagnosis for medical imaging. And with the advent of wearables, there is an ever-increasing volume of health data that requires “smart” algorithms to parse out the signal from the noise.
“Because of its unique combination of data, science, and machines and the interactions with patients’ lives, innovation in the industry requires much more than cool new gadgets or one-off apps that don’t get to the heart of challenges around cost, quality, and access.”