Can Gamification Improve Learning Effectiveness? (Spoilers: I Don’t Know)

This post is part of a series on preparing for the radiology core exam.

No, I don’t actually have the answer to whether gamification can improve learning effectiveness.

But my radiology class might find out firsthand through our QBank Challenge!

The first core exam took place in October 2013, fully replacing the oral boards.  Since then, review question banks have proliferated rapidly.  For instance, RadPrimer boasts that its “more than 7,000 assessment questions will help you prepare for the next step.”

Since it is so hard to start review questions nine months ahead of the actual test, and such a pain to cram 7,000 questions at the last minute, my residency class started a pilot round of the “QBank Challenge” two weeks ago.

The Gamification Design

First, I created a Google Form that looks like this:

Radiology isn’t quite like riding a bicycle, but there you have it.

A classmate and I dreamed up a “gamified” honor system in which each resident submits the number of questions he or she completed, for some friendly competition and bragging rights.

The form is connected to a Google Sheet, which then feeds into a leaderboard:

Someone’s been answering a serious number of questions.

The idea is to track both short-term accomplishments and a cumulative score, collaboratively and competitively.  By completing the goals of each 2-week run you accrue Challenge Points, and with each new run everyone starts again at 0.

This way someone who answered 800 questions in one round won’t have an insurmountable lead.

The global leaderboard looks like this:


Thoughts

Personally, I’ve found that the system works to motivate me (alas, I’m not the one in the lead).  It converted “doing review questions” from something nagging in the back of my mind into an actual activity.  Even though the core exam is nine months away, having a 2-week goal made it concrete and immediately relevant.

This was a pilot run.  There are a few things we can test next time:

  1. Can we convert reviewing textbook or teaching file cases into question-equivalents and count them?   After all, review questions are hardly the only way to prepare for a standardized test.
  2. For the pilot, the goal was 400 questions over 2 weeks.  Perhaps this early in the preparation the goal should be lower?  Ideally it should be somewhat aspirational but still attainable.


We are near the end of the pilot run.  What long-term ramifications this method has remain to be seen.

Howard Chen
Vice Chair for Artificial Intelligence at Cleveland Clinic Diagnostics Institute
Howard is passionate about making diagnostic tests more accurate, expedient, and affordable through disciplined implementation of advanced technology. He previously served as Chief Informatics Officer for Imaging, where he led teams deploying and unifying radiology applications and AI in a multi-state, multi-hospital environment. Blog opinions are his own and in no way reflect those of the employer.
