Transforming Medical Education Assessment: The Case for Multisource Progress Assessment (MPA)

By Jerome Rotgans


The pandemic has exposed critical weaknesses in the assessment programs of medical education. Typically, a few high-stakes exams at the end of the year determine student performance and progression. However, with required safe-distancing measures, some of these exams were postponed or even canceled, leaving students in a precarious situation. Relying on a limited number of assessment data points was already problematic during normal operations, but COVID-19 made it clear that these high-stakes assessment programs were not fit for purpose and that changes are needed to future-proof assessment practices in medical education.

To address these limitations, some medical schools have begun a review process to explore whether their current assessment programs can be transformed to combine high-stakes examinations with frequent low-stakes tests that give students a more complete picture of their performance and encourage continuous development and learning. This approach is referred to as “assessment for learning”: it treats assessment as an integral part of learning and is developmental in nature, rather than treating assessment solely as a series of hurdles students must clear.

In medical education, a balance between the two approaches needs to be struck: assessment that helps students improve over time, and high-stakes decision points that ascertain whether students have mastered the required clinical knowledge and skills. This balanced approach is captured by the concept of “programmatic assessment”, which I will refer to as Multisource Progress Assessment (or MPA). I believe this name is more apt because it explicitly refers to the multitude of assessment sources and to the developmental purpose of enhancing learning.

Implementing MPA at this early stage often does not require any changes to the current assessment rules and regulations. Students still have to fulfill all requirements (pass all in-course assessments) to be eligible to attempt the end-of-year examinations. The only change is that students’ overall performance will no longer be determined solely by the end-of-year examinations. Instead, all available test data points will be taken into consideration to form a holistic judgment of each student’s performance.

To implement MPA, an inventory of all low-stakes tests and high-stakes exams needs to be compiled, and the tests and exams need to be categorized according to the competencies they are expected to measure. Different medical schools use different competency frameworks, but these are in essence quite similar. This categorization is useful because it allows test scores to be aggregated for each competency, as the sketch below illustrates.
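To make the aggregation step concrete, here is a minimal sketch of how categorized scores could be rolled up per competency. It is illustrative only: the competency labels, the 0–100 scale, and the simple averaging rule are assumptions, and each school would substitute its own framework and weighting scheme.

```python
from collections import defaultdict
from statistics import mean

# Illustrative records: (student_id, competency, score on a 0-100 scale).
# The labels and scale are assumptions, not a prescribed framework.
results = [
    ("S001", "Medical Knowledge", 72),
    ("S001", "Medical Knowledge", 80),
    ("S001", "Clinical Skills", 65),
    ("S001", "Professionalism", 90),
]

def aggregate_by_competency(records):
    """Average all available scores per (student, competency) pair."""
    scores = defaultdict(list)
    for student, competency, score in records:
        scores[(student, competency)].append(score)
    return {key: mean(values) for key, values in scores.items()}

print(aggregate_by_competency(results))
# {('S001', 'Medical Knowledge'): 76, ('S001', 'Clinical Skills'): 65, ...}
```

In practice, a weighted average (for example, giving high-stakes exams more weight than formative quizzes) may be more appropriate than the plain mean used here.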

With the advancement of visualization software, such as Microsoft Power BI, it is relatively easy to present all available test data in an Assessment Dashboard (see Figure 1 for an example). The dashboard enables staff to track students’ progress and intervene early, and it enables students to track their own progress in near real time.
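As an illustration of how the underlying data can feed such a dashboard, the sketch below exports the aggregated competency scores to a plain CSV file, a format that Power BI and most other visualization tools can use as a data source. The column names and file path are assumptions made for the example.

```python
import csv

# Aggregated (student, competency) -> mean score, e.g. the output of the
# aggregation sketch above. Values here are illustrative.
aggregated = {
    ("S001", "Medical Knowledge"): 76.0,
    ("S001", "Clinical Skills"): 65.0,
    ("S001", "Professionalism"): 90.0,
}

# Write a flat table that a dashboard tool can ingest and visualize.
with open("competency_scores.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["student_id", "competency", "mean_score"])
    for (student, competency), score in aggregated.items():
        writer.writerow([student, competency, round(score, 1)])
```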

In conclusion, MPA is an effective way to address the limitations of high-stakes assessment programs in medical education. It provides a more complete picture of students’ performance and encourages continuous development. Implementing MPA is not as difficult as one might expect and can be done with the help of visualization software. The Assessment Dashboard enables staff and students to track progress and allows for early intervention, which benefits students and staff alike.


Figure 1: Mock-up Assessment Dashboard, including a competency graph, a detailed breakdown of all tests and exams, and qualitative feedback.


It would be great to hear your thoughts on this!