Humans are lazy. We don’t like to think hard. Proven. In this smart guy’s book.
But sometimes people take brilliant decision science theories too far.
It’s true that very few decisions in this world are truly black-and-white, but most of us find it much easier to decide in black-and-white terms than by “weighing all the pros and cons.” This phenomenon is well studied and is called attribute substitution, a heuristic: we make decisions by subconsciously replacing a hard question with an easier one.
Case in point:
Is the Apple iPhone 5S or Google Nexus 5 a better purchase decision?
Three interesting observations arise from trying to answer this question. (1) It’s easy to have an instinctive answer: which phone do I own, and do I like it? Which ones do my friends have? (2) It’s hard to actually find the right answer, given the nuances of the question. What constitutes a “better decision”? What does “purchase” mean – buying at full price? With a plan? Shouldn’t we consider the older iPhones and other Android phones too? And isn’t the iPhone 6 being announced in, like, 2 weeks?
Our reliance on heuristics points to a simple fact of decision-making: humans don’t like making hard decisions. When we’re asked to make a hard decision (“should I get the iPhone 5S or the Nexus 5?”), we substitute a much easier question without realizing it (“which smartphone do I like better?”).
Then there’s the opposite of heuristics: the analytic purists. The people who write out the pros and cons of every brand of smartphone, read all the published reviews of expert and amateur consumer opinion, and run a Net Present Value analysis for each choice.
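For the curious, here is a minimal sketch of what that purist’s NPV comparison might look like – every price, resale value, and discount rate below is made up purely for illustration:

```python
# Hypothetical "analytic purist" approach: score each phone option by the
# net present value of its cash flows. All numbers are invented.

def npv(cash_flows, discount_rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

# Year-0 purchase price (negative), then an assumed year-3 resale value.
options = {
    "iPhone 5S": [-649, 0, 0, 250],
    "Nexus 5":   [-349, 0, 0, 100],
}

for phone, flows in options.items():
    print(f"{phone}: NPV = {npv(flows, 0.05):.2f}")
```

Of course, the joke is that the hard part isn’t the arithmetic – it’s deciding what counts as a cash flow in the first place.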
In medicine, the stakes are higher, and the difference between Bayesian thinking and heuristics sometimes leads to a one-sided debate about making the right call. These discussions often end with one side urging the other toward a more Bayesian, more “correct” way to make medical decisions, citing evidence-based medicine as a practical application of Bayesian networks – as if those who think with anecdotal experience or “medical gestalt” are missing out on an entire treasure trove of knowledge.
The problem is this – if we are not hardwired to think in Bayesian terms, and staggering volumes of evidence suggest that we are not, then no amount of willpower, calls to action, or persuasion will get us there. What it will do is provide a false sense of security to those who believe they are thinking outside the attribute-substitution paradigm – akin to traditional economists who believed in the all-seeing, ultra-rational, game-theory-touting decision-maker who simply cannot exist.
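To make concrete what “thinking in Bayesian terms” demands of us, here is a small sketch of Bayes’ rule applied to a diagnostic test – the prevalence, sensitivity, and specificity are hypothetical, chosen only to show how counterintuitive the result is:

```python
# Bayes' rule for a diagnostic test. All parameters are hypothetical.

def posterior_given_positive(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' rule."""
    true_pos = prevalence * sensitivity          # P(positive and diseased)
    false_pos = (1 - prevalence) * (1 - specificity)  # P(positive and healthy)
    return true_pos / (true_pos + false_pos)

# A "pretty good" test for a rare disease still yields a modest posterior:
p = posterior_given_positive(prevalence=0.01, sensitivity=0.90, specificity=0.95)
print(f"P(disease | positive) = {p:.1%}")  # about 15%, far below most intuitions
```

This base-rate arithmetic is exactly the computation that decades of research suggest our intuitions skip – which is the point of the paragraph above.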
Perhaps we shouldn’t avoid thinking in heuristics. Perhaps we should simply acknowledge that we cannot escape our predictably irrational thinking. For instance, in pursuing the ultimate evidence-based practice, we sometimes forget that the best medical data are drawn from the best patient cohorts – without comorbidities, with high compliance, with close follow-up. We forget that statistical significance does not always, or even often, translate to clinical significance. Finally, evidence-based medicine nearly always neglects its cost to the patient, the payer, and society – three groups that so often overlap (as in Medicare).
So, the final observation is this: (3) knowing how hard it is to determine the deliberate answer does not make the intuitive answer less convincing, less appropriate, or less correct.
The best solutions probably consist of elements of both. And no, I can’t back that up with data; it’s just a hunch.