The Agitator, Innovator, and Orchestrator Model

A well-written framework from the Stanford Social Innovation Review describes three distinct forces behind transforming a practice.

An agitator brings the grievances of specific individuals or groups to the forefront of public awareness. An innovator creates an actionable solution to address these grievances. And an orchestrator coordinates action across groups, organizations, and sectors to scale the proposed solution.

The key observation is that transformation requires all three in harmony.  In medicine, the voices of agitators frequently meet with top-down repression or with silence from leadership.  “This is just the way we’ve always done it,” they might say.

The Stanford article focuses on building a team that spans all three domains in order to bring about social innovation.  In medicine, practices tend to resist change, partly because of the higher stakes but also because of the highly regulated climate of modern health care.  (This is not necessarily good or bad – it just is.)

Although medicine often places more weight on orchestration – the coordination of interdisciplinary care to benefit patient health – it stands to reason that a healthy dose of the other two is also necessary. If you see yourself as an agitator, know that a thorough understanding of stakeholder analysis can help you better differentiate between a simple inconvenience and a genuine opportunity to create value. If you are an innovator, your strength may lie in an intuitive grasp of the connections between disparate organizational units. Know that what seems obvious to you is probably opaque to others. In the end:

Agitation without innovation means complaints without ways forward, and innovation without orchestration means ideas without impact.

The most dangerous AI – Mike’s blog

Artificial intelligence is the hottest topic in medical informatics.  The promises of intelligent automation in medicine’s future are equal parts optimism, hype, and fear.

In this post, Mike Hearn struggles to reconcile the paradox between the supposedly objective, data-driven approaches of AI and the incredibly opinion-charged, ultra-political world from which AI draws its data.

The post focuses on broader applications, but a similar problem exists in medicine. If AI is expected to extract insight from the text of original research articles, statistical analyses, and systematic reviews, its “insights” are marred by the same human biases.

The difference, of course, is that AI may bury such biases inside a machine learning black box.  We have an increasing body of research on latent human biases, but machine biases are much harder to discover, particularly when they reflect the inherent biases of the data from which the machine draws its conclusions.  Our own biases.
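To make the point concrete, here is a minimal, synthetic sketch – assuming numpy and scikit-learn, with entirely invented data and variable names – of how a model trained on biased labels simply learns the bias back:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)        # a clinically irrelevant attribute
    severity = rng.normal(0.0, 1.0, n)   # the real diagnostic signal

    # Suppose historical labels systematically under-called disease in
    # group 1 -- a human bias baked into the "ground truth."
    label = ((severity - 0.7 * group + rng.normal(0, 0.3, n)) > 0).astype(int)

    model = LogisticRegression().fit(np.column_stack([group, severity]), label)
    print(model.coef_)  # a substantial negative weight on the irrelevant attribute

Nothing in the pipeline flags a problem; the bias is simply another pattern in the data.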

AI acts as a mirror. Sometimes we don’t like the face staring back at us.

Source: The most dangerous AI – Mike’s blog

Are You Solving the Right Problems?

This month’s Harvard Business Review has an article highlighting one of the most fascinating emerging trends in quality improvement: the very existence of a “root cause” may be a myth.  As healthcare QI/QA moves toward eliminating errors and improving metric-based performance, the growing obsession with solving quality problems is laudable but sometimes misguided.

This excellent HBR article focuses on reframing.  In short, what you say after discovering a complex problem matters.  Before announcing, “Let’s start making a Pareto chart and collecting some data!” try inserting a 30-second pause: “Is that the right problem we should be solving?”

Without spoiling the fun of reading the article, try thinking through this issue first: you have received multiple complaints about the speed of your building’s elevators.  How would you address this problem?


In fact, the very idea that a single root problem exists may be misleading; problems are typically multicausal and can be addressed in many ways.

Source: Are You Solving the Right Problems?

Propel healthcare data science by solving the boring problems

Data cleaning is boring but critical.
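As a taste of what that looks like in practice, here is a minimal sketch – assuming pandas, with a toy stand-in for a messy EHR export (all column names hypothetical):

    import pandas as pd

    # A tiny stand-in for a messy EHR vitals export; real feeds are worse.
    df = pd.DataFrame({
        "patient_id": [1, 1, 2, 3],
        "timestamp":  ["08:00", "08:00", "09:00", "09:30"],
        "temp":       [98.6, 98.6, 37.1, None],   # mixed Fahrenheit/Celsius
        "heart_rate": [72, 72, None, 88],
    })

    # Drop duplicate rows from repeated interface messages
    df = df.drop_duplicates(subset=["patient_id", "timestamp"])

    # Normalize temperature to Celsius; values above 50 are assumed Fahrenheit
    df["temp_c"] = df["temp"].where(df["temp"] < 50, (df["temp"] - 32) * 5 / 9)

    # Flag, rather than silently impute, missing values
    df["hr_missing"] = df["heart_rate"].isna()
    print(df)

None of this is glamorous, but every downstream model depends on it.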

If you have been paying attention to data science in healthcare, you will have noticed the gradual shift from 2016’s Big Data to 2017’s Machine Learning.  Deep learning techniques in particular attract much of the attention. The FDA recently approved the use of deep learning techniques in cardiac diagnosis.  Enlitic promises to automate radiologic diagnosis for medical imaging.  And with the advent of wearables, there is an ever-increasing volume of health data that requires “smart” algorithms to separate the signal from the noise. Continue reading

William Chen’s answer to What are the top 5 skills needed to become a data scientist? – Quora

A Quora answer/article about data science.

Incidentally, the same 5 skills are also highly relevant to being a physician-informatician, particularly in radiology.  Give it a read.

Source: William Chen’s answer to What are the top 5 skills needed to become a data scientist? – Quora

DICOM Processing and Segmentation in Python – Radiology Data Quest

There is something strangely satisfying about taking things apart and putting them back together.  Building on the popularity of Lego sets in our childhoods, Minecraft brought this sense of wonder to video games.

For those of us life-long tinkerers who happen to be radiologists, I published a DIY on Radiology Data Quest on how to take DICOM apart and manipulate it.  All in Python, no less.


DICOM is a pain in the neck.  It also happens to be very helpful.  As clinical radiologists, we expect post-processing, even taking it for granted. However, the magic that occurs behind the scenes…
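For a flavor of the post, here is a minimal sketch – assuming the pydicom library and a CT slice carrying the standard rescale tags; the filename is purely illustrative:

    import numpy as np
    import pydicom

    # Read one CT slice; the filename here is illustrative only.
    ds = pydicom.dcmread("slice_001.dcm")

    # The raw pixel array stores scaled integers; the header tells us how
    # to map them back to Hounsfield units.
    pixels = ds.pixel_array.astype(np.int16)
    hu = pixels * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

    print(hu.shape, hu.min(), hu.max())

The full post builds from this kind of first step toward segmentation.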

Source: DICOM Processing and Segmentation in Python – Radiology Data Quest

AI is Transforming Healthcare, but Cost, Quality, and Access Are So Much More

“Because of its unique combination of data, science, and machines and the interactions with patients’ lives, innovation in the industry requires much more than cool new gadgets or one-off apps that don’t get to the heart of challenges around cost, quality, and access.”

Source: Doctors, data, and diseases: How AI is transforming health care | VentureBeat | Bots | by Charles Koontz, GE Healthcare

Data Science and Machine Learning Developments in 2016 – Radiology Data Quest

As we welcome 2017, it’s time to sum up the key developments in data science and machine learning from the past year so that we can open our eyes to the new year. Here is what you missed …

Source: Data Science and Machine Learning Developments in 2016 – Radiology Data Quest

Innovating in a large health system

One would think that resource-rich organizations would foster new ideas better than cash-constrained startups.

However, it is remarkably difficult to innovate within a large health system on an ad hoc basis, for the same reasons that it is difficult to innovate in a large corporation.  For one, it’s all too easy to feel like a cog in a large machine.  Fear of failure, a perceived lack of reward, and a paucity of institutional support are other reasons why innovation stagnates in otherwise resource-rich organizations.

But little-fish-big-pond problems are not the only ones that plague innovation.  This phenomenon is well recognized as one of the key reasons why disruptive innovations are notoriously difficult to launch from within a corporation.

If you feel this way, you may be an “intrapreneur.”

Continue reading

Check your assumptions with two types of MVP

In an MVP-based approach, flagship design begins with a paper boat. And a mouse.

The concept of MVP applies to research, quality improvement, or innovation projects.  In this case, MVP is not “most valuable player” (although it could make you one) but “minimum viable product.” In a nutshell, rather than the traditional approach of building only after deliberate planning and careful design, the MVP concept focuses on a just-enough set of features.  At first glance, it may seem counter-intuitive: shouldn’t the greatest success come after long-term project planning and thorough discussion of all possible outcomes?

Initially, MVP was developed for anemic startup companies to get a quick infusion of revenue before developing their flagship offering.  It was soon realized that this process of small-target rapid iteration yields not just faster but also better results.  Gantt charts, project timelines, and table-pounding meetings are still important, but real-life experimentation is a higher priority.

When Innovation and Improvement Collide

MVP is an extension of Plan-Do-Study-Act (PDSA). It makes one assumption: “all assumptions are more likely wrong than right.”  In designing a medical research proposal, you have implicitly made assumptions about some aspect of a disease’s biology. In creating a product, you inevitably make assumptions about customer needs.  In starting a business, you have made assumptions about the market, the expected outcome, or even the fundamental design.

If these assumptions are more likely wrong than right, then the best next step must be to make as few as possible before having the opportunity to test them.  The MVP approach asks innovators to encapsulate as few concepts as possible into a deliverable and then bring that product to a test market to verify those assumptions.  Since outcome metrics can be very difficult or expensive to obtain – just ask the people who run Phase III trials or market research – limiting the variables lets you be sure that the acquired data have only a small number of possible interpretations.

Two Ways of Learning from an MVP

Some approaches to software design embrace the MVP concept, one of the better known being Scrum.  Another product-oriented approach is called pretotyping (not to be confused with prototyping) – “faking” as much as possible with the goal of acquiring data before making heavy financial investments.

The venerable Harvard Business Review has more – there are two types of MVPs.  Your MVP can be validating: an inferior product used to prove a concept.  It can also be invalidating, where the MVP is actually a better product than the one you plan to create.

If your MVP is a worse product than your imagined final version, success validates your idea; failure, on the other hand, doesn’t necessarily invalidate it. If your MVP offers a better experience, then failure invalidates your business model; success doesn’t necessarily validate it.
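The quoted rule is compact enough to restate as a toy decision table – just a sketch of the logic, nothing more:

    # A toy restatement of the HBR rule above: what an MVP outcome does
    # and does not tell you, depending on whether the MVP is worse or
    # better than the planned final product.
    def interpret(mvp_better_than_final: bool, succeeded: bool) -> str:
        if not mvp_better_than_final:    # validating MVP (inferior stand-in)
            return "idea validated" if succeeded else "inconclusive"
        else:                            # invalidating MVP (superior stand-in)
            return "inconclusive" if succeeded else "idea invalidated"

    print(interpret(mvp_better_than_final=False, succeeded=True))   # idea validated
    print(interpret(mvp_better_than_final=True, succeeded=False))   # idea invalidated

Note the asymmetry: each type of MVP can settle the question in only one direction.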

Hypothetically, if your radiology department is contemplating an investment of $3 million over 5 years on a “virtual radiology consultation” technology to improve communication, the rationale for purchase may be that busy radiologists cannot satisfy the high clinician demand for collegial discussions, and live digital discussions would solve that problem.

To test this assumption, you could deploy an invalidating MVP.  For instance, this may take the form of a one-week trial of real, in-person radiology consultations for all questions.

This solution is obviously a huge waste of valuable radiology resources and unsustainable over time.  But its failure would invalidate one key assumption behind the intended purchase.  Even if successful, it may raise important points to resolve: is subspecialist availability necessary?  Does the consultation need to be 24/7, or only during key hours?  The virtual solution might still work, but it would work for reasons other than the ones you assumed, and you would know that at least one of the underlying assumptions needs reassessment.