Great Storytellers of Our Moral Decisions

Not long ago, I had the opportunity to participate in a medical research project.  In broad terms, the researchers had developed a virtual tool to evaluate doctors’ skills on a particular procedure without performing it on a real patient, and they needed people at various stages of proficiency to test the training program.  As a total novice, I was an ideal subject – I was expected to crash and burn.  In fact, I was so clueless that I had to ask the experimenter to repeat the instructions for the simulation.  Then, through either sheer luck or innate talent (ha), I scored near the top of the chart.

Shortly after the study concluded, I was notified that, after discussing with the co-researchers, the research team had decided to discard my data point because “the instructions were given twice, which gave an unfair advantage over the other participants.”  I wanted to reply, “But if a complete novice can score like this without knowing how to do the actual procedure, doesn’t that say something about the quality of the virtual evaluation?”

More interestingly, if I had scored much lower than the average novice – making the results look even better – would the research team have thrown out my data point all the same?

In movies, novels, and comic books, there exist irredeemable villains and incorruptible paragons, and between those two moral extremes rests the entire real world’s population.  As human beings, our minds compose a self-narrative, a mental autobiography of who we are.  It may come as no surprise that our self-narratives paint a different picture than what may be evident through our actions.

To understand this difference, we must first consider the modern dual-process theory of behavioral psychology – and its not-so-modern analogues.  The overarching premise is that the mind has two components.  We are generally aware of only the rational, thinking mind, but alongside it runs a powerful, ancient thinking machine that is far quicker, more emotional, and focused on the present.

Plato describes the human soul as a charioteer driving a chariot drawn by horses.  The charioteer thinks he is in control of the chariot – until, of course, he is not, and the horses just run.  Jonathan Haidt uses the metaphor of an elephant and its rider in The Happiness Hypothesis.  Nobel laureate Daniel Kahneman’s book Thinking, Fast and Slow refers to them simply as System 1 and System 2, a designation borrowed from a paper by Stanovich and West.

Most of the rider/animal analogies imply that the rational mind is only in control when the experiential mind is unaroused by emotion.  After all, there is not much the rider can do if the elephant is too angry, or too scared, or too hungry to follow commands.

In his book Incognito: The Secret Lives of the Brain, David Eagleman presents the two minds as rival parties in a congress, voting on each decision.  This framework suggests that, when faced with the desire to satisfy an immediate pleasure (say, having ice cream) versus working towards a future benefit (say, staying on a diet), the struggle may be laden with filibusters and repeat negotiations.  Sometimes the struggle is palpable in our simultaneous desire to walk into the ice cream shop and to walk away.  Sometimes one of the two sides wins outright, but sometimes they reach a compromise (say, having a sorbet instead of ice cream).  The space of all possible compromises is the “fudge factor” – the set of decisions where the mind can get a little bit of short-term benefit without feeling too guilty about breaking the rules.

The mind often does not see the compromise as a post-hoc middle ground between minor cheating and minor gratification; instead, it believes the compromise to be an entirely justified choice.  The subheading of one of the chapters in Dan Ariely’s The (Honest) Truth about Dishonesty is “We Are All Storytellers.”  Indeed, storytelling is how we resolve cognitive dissonance – the inconsistency between our actions and our self-narratives.  If my self-narrative includes being a frugal person, and I have just made an impulse purchase of a $1,000 sound system I do not need, I may think, “But hey, I didn’t buy that $2,000 stereo over there, and this one was 20% off – what a great deal.”  The beauty of rationalization is that the justification is sensible, sounds plausible, and looks great in my mental autobiography.

The research team in the medical experiment I participated in consisted of some of the most intelligent, well-intentioned physicians and students in the country.  They also pride themselves on being good researchers, and no one among them would fabricate data – after all, that is not what good researchers do.  Furthermore, these researchers had certainly invested a tremendous amount of time and effort to develop a teaching tool that could improve medical care.  Perhaps this self-narrative allowed them to rationalize the decision to groom the data points to support the conclusion the team fundamentally believed to be true.  The danger of dishonesty is never painted in neon green with flashing lights.  It looms subtly, in ways consistent with our human hard-wiring.

Howard Chen
Vice Chair for Artificial Intelligence at Cleveland Clinic Diagnostics Institute
Howard is passionate about making diagnostic tests more accurate, expedient, and affordable through disciplined implementation of advanced technology. He previously served as Chief Informatics Officer for Imaging, where he led teams deploying and unifying radiology applications and AI in a multi-state, multi-hospital environment. Blog opinions are his own and in no way reflect those of the employer.
