If you’ve ever wondered, there’s a lot that goes on behind making a Firecracker question. Here’s a glance into some of the details that go unnoticed. You’ll see how serious (and picky) we are with the process.
Design - Phase I
The design phase is a critical part of engineering effective questions. Before we ever author a question, the question-in-embryo passes through a number of design stages. First, we align the idea with our board coverage map (which follows the official USMLE outline and other sources). This ensures proper domain and depth coverage. Second, we create a detailed blueprint for each question. The blueprint captures the goals and specifications for the question in preparation for authoring. A single blueprint can take an hour or more to create. Despite the time and work, our question blueprints offer significant advantages:
- We can create ideal associations among answer choices, vignette components, and overarching concepts that normally might not be obvious when writing a question.
- Question blueprints allow for transparent review of question elements by multiple editors before the question is ever authored.
- We can design a question that will precisely fit assessment goals.
- Because each question is built from specific sets of blueprint elements, Firecracker can more accurately recommend questions according to individual student strengths and weaknesses.
Development - Phase II
Once a blueprint is approved, our 50-member team of medical students and physicians from top schools and hospitals around the country works through multiple stages of development to turn it into a full, board-relevant question. A single question requires about 8–9 hours of work to move from an approved blueprint to a finished board-level question. The process is rigorous and has multiple checkpoints, as shown below:
Big Data Curation - Phase III
Every month Firecracker students answer over 8 million review questions and 150,000 patient cases, forming a vast data ecosystem. Numbers like these keep our questions under constant curation pressure and student scrutiny. We use data from this student-question ecosystem to recommend daily questions, flag questions for improvement, and generate analytics, such as question item response curves. Below is an example of item response curves from two of our questions. The graphs show the probability of successfully answering the question as a function of knowledge mastery. The question on the left is relatively easy and doesn't effectively discriminate mastery, which means we'll pull it from our bank to fix it. The question on the right is more dependent upon knowledge mastery; it is also more difficult.
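For readers curious about the math behind curves like these, here is a minimal sketch of how an item response curve can be modeled. This assumes the common two-parameter logistic (2PL) model from item response theory; Firecracker's actual analytics pipeline and parameter values are not described in this post, so the numbers below are purely illustrative.

```python
import math

def item_response(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: the probability that a
    student with mastery level `theta` answers the item correctly.
    a = discrimination (slope of the curve), b = difficulty (location)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical parameters echoing the two curves described above:
easy_flat = dict(a=0.4, b=-1.5)   # easy item, poor discrimination
hard_steep = dict(a=2.0, b=0.8)   # harder item, strongly mastery-dependent

for theta in (-2.0, 0.0, 2.0):
    p_easy = item_response(theta, **easy_flat)
    p_hard = item_response(theta, **hard_steep)
    print(f"theta={theta:+.1f}  easy/flat={p_easy:.2f}  hard/steep={p_hard:.2f}")
```

A flat curve (small `a`) means strong and weak students answer correctly at similar rates, so the item tells us little about mastery; a steep curve concentrates the jump in success probability around the difficulty `b`, which is what makes an item useful for discriminating mastery.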
Next time you answer a Firecracker question, you can appreciate that you're part of building a world-class learning system where every detail counts. Whether in the design, development, or data phase, each step of the process plays a vital role in Firecracker question creation.