After the recent presidential election, you are probably either particularly alarmed or especially excited about the outcome. Regardless of your political predilections, it is fair to say that this election turned data science on its head, given that so many forecasters got it so wrong.
In an earlier issue of Harvard Business Review, the venerable magazine shared research from the University of Southern California on forecasting. When forecasting sales, the best estimators use a combination of intuition and logic, with both the logic-heavy and the intuition-heavy forecasters performing less accurately.
In the age of artificial intelligence and big data, it can be sobering to realize that despite the staggering volume of data we are now collecting, ignoring your gut instincts can take a heavy toll on your decision-making abilities.
Source: “What type of forecaster are you?” Harvard Business Review (March): 26.
The first post in this series discussed the heterogeneity of data projects and how a uniform approach can help home in on the solution. It also covered the first two elements: refining the question and identifying the right data. Here we tackle the next two elements.
Plan Your Approach
At this step, we begin to get into the technicalities of data science. This post is not designed to cover each approach in detail, but it will attempt to ask the relevant questions.
How will you process the data that you now possess? In almost all cases, this step will involve data wrangling (also known as data munging or data cleaning). To determine the “clean” form your data must take for proper analysis, it is important to identify the transformations and algorithms your question requires. Continue reading
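As a minimal sketch of what such wrangling can look like, here is a pure-Python example on hypothetical exam records (the field names `accession` and `exam_date` are made up for illustration): normalize inconsistent date formats, drop rows missing a key field, and de-duplicate on accession number.

```python
from datetime import datetime

def wrangle(records):
    """Return cleaned records: complete, de-duplicated, with parsed dates."""
    seen, clean = set(), []
    for row in records:
        if not row.get("accession") or not row.get("exam_date"):
            continue                      # drop incomplete rows
        if row["accession"] in seen:
            continue                      # drop duplicates
        seen.add(row["accession"])
        # accept either of two common date formats
        for fmt in ("%Y-%m-%d", "%m/%d/%Y"):
            try:
                row["exam_date"] = datetime.strptime(row["exam_date"], fmt).date()
                break
            except ValueError:
                continue
        clean.append(row)
    return clean

rows = [
    {"accession": "A1", "exam_date": "2016-11-01"},
    {"accession": "A1", "exam_date": "2016-11-01"},   # duplicate
    {"accession": "A2", "exam_date": "11/02/2016"},   # different date format
    {"accession": "",   "exam_date": "2016-11-03"},   # missing key field
]
print(len(wrangle(rows)))  # 2 records survive
```

In a real project you would likely reach for a library like pandas, but the decisions are the same: which rows to keep, which formats to normalize, and which fields define a duplicate.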
Big Data has become a radiology buzzword (others, such as machine learning, AI, and disruptive innovation, are also up there).
However, there is a real problem with using the term Big Data: it isn’t just one set of data problems. Big Data is a conglomerate of different data challenges; the volume, heterogeneity, and velocity of data are all important dimensions. Machine learning and the internet of things are further layers superimposed on the big data problem.
Sometimes it is helpful to step back and approach data problems with a common framework, a way to think about how and which facets of data science fit in a real-life workflow in the face of an actual problem.
Below is a 6-element framework that helps me think about data-driven informatics problems. The elements are roughly chronological, but they are not “steps,” because you will frequently find yourself going back and redefining things. The framework does, however, help you maintain a big-picture outlook. It is also the reason any sufficiently complex data problem requires a team approach. Continue reading
At the turn of the century, Joel Spolsky came up with the “Joel Test,” which he described as a highly irresponsible, sloppy test to rate the quality of a software team.
More recently, one group came up with its own criteria to rate the quality of a data science team. How do the analysts in your radiology department fare?
- Can new hires get set up in the environment to run analyses on their first day?
- Can data scientists utilize the latest tools/packages without help from IT?
- Can data scientists use on-demand and scalable compute resources without help from IT/dev ops?
- Can data scientists find and reproduce past experiments and results, using the original code, data, parameters, and software versions?
- Does collaboration happen through a system other than email?
- Can predictive models be deployed to production without custom engineering or infrastructure work?
- Is there a single place to search for past research and reusable data sets, code, etc?
- Do your data scientists use the best tools money can buy?
Kaggle is a website that hosts coding competitions related to machine learning, big data, and all things data science.
Newly launched on Kaggle is a healthcare-related competition! A group of health institutions provided a large data set consisting of raw interictal and preictal (up to one hour before seizure onset) EEG tracings from three patients. The goal? Predict which “unknown” EEGs are preictal so healthcare providers can intervene.
Also, with the timely arrival of the Internet of Things (IoT), wearables, and big data, can you imagine the impact of giving patients an accurate 5-minute warning every time a seizure is about to start? Continue reading
If you have taken overnight call, you have quickly developed a sense for the emergency department and the inpatient floors. In my institution, radiologists develop hypotheses on how inpatient orders are placed.
For instance, it sometimes seems as if inpatient radiology exams follow some sort of circadian rhythm. The data appear to confirm it: we see the infamous “x-ray bump” in the early morning, while CT volume increases more gradually but lasts later into the day.
Also, are weekdays and weekends any different? If so, how?
Going on a Quest
With a little coding in Python or R, one can gain a lot of insight into how our referring providers’ lives intertwine with our own. Read the full story in my new post on Radiology Data Quest.
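The basic tally behind such a pattern fits in a few lines of Python. Here is a hedged sketch on made-up order timestamps: count exams by modality and hour of day, which is enough to surface an early-morning bump.

```python
from collections import Counter
from datetime import datetime

# Hypothetical (modality, timestamp) pairs standing in for real order data.
orders = [
    ("XR", "2016-11-07 05:10"), ("XR", "2016-11-07 05:45"),
    ("XR", "2016-11-07 06:05"), ("CT", "2016-11-07 10:30"),
    ("CT", "2016-11-07 14:20"), ("CT", "2016-11-07 21:15"),
]

by_hour = Counter()
for modality, ts in orders:
    hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
    by_hour[(modality, hour)] += 1

# An "x-ray bump" would show up as a cluster of XR counts in the 5-6 am hours.
print(sorted(by_hour.items()))
```

With real order logs you would also split the tally by weekday versus weekend to answer the second question above.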
From the Open Data Network, I stumbled upon the CMS outpatient imaging data organized by state and decided to peek into the dataset and put the data onto a US map for fun. Geek out with Joe and me in this new post on Radiology Data Quest.
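Before anything lands on a map, the table has to be reduced to one value per state. Here is a minimal sketch with made-up rows (the field names `state` and `avg_payment` are assumptions, not the actual CMS column names): aggregate a CMS-style table to a per-state average, the shape a choropleth library expects.

```python
from collections import defaultdict

# Made-up rows standing in for the CMS outpatient imaging table.
rows = [
    {"state": "MA", "avg_payment": 120.0},
    {"state": "MA", "avg_payment": 140.0},
    {"state": "CA", "avg_payment": 100.0},
]

totals, counts = defaultdict(float), defaultdict(int)
for r in rows:
    totals[r["state"]] += r["avg_payment"]
    counts[r["state"]] += 1

# One value per state, ready to feed a choropleth-mapping library.
per_state = {s: totals[s] / counts[s] for s in totals}
print(per_state)  # {'MA': 130.0, 'CA': 100.0}
```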