Monthly Archives: September 2016

8 Ways to Gauge The Quality of Your Data Team

At the turn of the century, Joel Spolsky came up with the "Joel Test" – what he himself described as a "highly irresponsible, sloppy test" to rate the quality of a software team.

In that spirit, a group of data scientists came up with their own criteria to rate the quality of a data science team.  How do the analysts in your radiology department fare?

The “Joel Test” for Data Science

  1. Can new hires get set up in the environment to run analyses on their first day?
  2. Can data scientists utilize the latest tools/packages without help from IT?
  3. Can data scientists use on-demand and scalable compute resources without help from IT/dev ops?
  4. Can data scientists find and reproduce past experiments and results, using the original code, data, parameters, and software versions?
  5. Does collaboration happen through a system other than email?
  6. Can predictive models be deployed to production without custom engineering or infrastructure work?
  7. Is there a single place to search for past research, reusable data sets, code, etc.?
  8. Do your data scientists use the best tools money can buy?
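Like the original Joel Test, the checklist above is scored by simply counting "yes" answers, one point per question. A minimal sketch of that tally (not from the original post; question wording abbreviated for illustration):

```python
# Abbreviated versions of the eight questions above.
QUESTIONS = [
    "New hires set up to run analyses on day one",
    "Latest tools/packages without IT help",
    "On-demand, scalable compute without IT/devops",
    "Past experiments reproducible (code, data, parameters, versions)",
    "Collaboration through a system other than email",
    "Models deploy without custom engineering or infrastructure",
    "Single place to search past research, data sets, code",
    "Best tools money can buy",
]

def joel_score(answers):
    """One point per 'yes' answer, as in Spolsky's original test."""
    if len(answers) != len(QUESTIONS):
        raise ValueError("expected one yes/no answer per question")
    return sum(bool(a) for a in answers)

# Example: a team that answers yes to five of the eight questions.
score = joel_score([True, True, False, True, False, True, True, False])
print(f"{score} out of {len(QUESTIONS)}")  # 5 out of 8
```

Spolsky's rule of thumb for his own twelve-question test was that anything below a near-perfect score signals serious problems; how strictly to grade an eight-question data science version is up to you.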

New Seizure Prediction Competition, and Why It Matters for Radiology

Kaggle is a website that hosts coding competitions in machine learning, big data, and data science generally.

Newly launched on Kaggle is a healthcare-related competition!  A group of health institutions has provided a large data set of raw EEG recordings from three patients, covering both interictal periods and preictal periods (up to one hour before a seizure).  The goal? Predict which "unknown" EEG segments are preictal so healthcare providers can intervene.
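At its core this is a binary classification task: label each EEG segment interictal or preictal. A toy sketch of that shape, using synthetic numbers in place of real EEG features (mean band power and line length are common choices in the literature) and a simple nearest-centroid rule; competitive entries use far richer features and models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D feature vectors standing in for per-segment EEG features:
# interictal segments (label 0) vs. preictal segments (label 1).
interictal = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(50, 2))
preictal = rng.normal(loc=[2.0, 2.2], scale=0.3, size=(50, 2))
X = np.vstack([interictal, preictal])
y = np.array([0] * 50 + [1] * 50)

# Nearest-centroid classifier: assign a segment to the closer class mean.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

accuracy = np.mean([predict(x) == label for x, label in zip(X, y)])
print(f"training accuracy: {accuracy:.2f}")
```

The hard parts the competition is really about – extracting discriminative features from noisy raw EEG, and generalizing across patients and recording sessions – are exactly what this sketch glosses over.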

Also, with the timely arrival of the Internet of Things (IoT), wearables, and big data, can you imagine the impact of giving patients an accurate five-minute warning every time a seizure is about to start?