Assessment in Health Professions Education

Specialist Services

What we do...

We provide specialist psychometric services to help you make sense of your assessment data. Deep insight into the psychometric properties of your data is essential for transparency and accountability.

Validity

Establishing validity evidence for your assessments is a requirement of any competency framework. Validity addresses the question of whether your test results adequately measure the knowledge and skills you intended to measure. We have developed qualitative and quantitative techniques to establish validity evidence.

Reliability

Reliability concerns the accuracy and reproducibility of your test results. This is particularly important for high-stakes exams. We have developed new ways to establish reliability evidence using sophisticated analytical methods.
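One widely used index of reliability is Cronbach's alpha, which estimates internal consistency from the item and total-score variances. A minimal sketch in Python (the item scores below are purely illustrative):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / total variance).

    items: list of k lists, each holding one item's scores
    across the same set of examinees.
    """
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per examinee
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative data: 4 dichotomous items scored for 6 examinees
items = [
    [1, 1, 0, 1, 1, 0],
    [1, 1, 1, 1, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [1, 1, 0, 1, 1, 1],
]
alpha = cronbach_alpha(items)
```

In practice, reliability evidence for high-stakes exams draws on more than a single coefficient, but alpha is a common starting point.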

Assessment services

We specialize in state-of-the-art psychometric analyses of assessment data using an array of sophisticated analytical techniques, such as Item Response Theory, Structural Equation Modeling, and simulation models. We summarize and present the results of our analyses in an intuitive visual manner so that all stakeholders (experts and novices) can understand complex data.
Our psychometric services include:

(1) Item analysis of MCQ/SBA/true-false tests using Item Response Theory. This approach allows you to evaluate the validity and reliability of individual items as well as the entire test. We provide item characteristic curves, test characteristic curves, and cut-off determination using the Ebel, Cohen, modified Cohen, and enhanced Cohen methods.
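To illustrate two of the ideas above: an item characteristic curve under the two-parameter logistic (2PL) IRT model gives the probability of a correct response as a function of ability, and Cohen's original proposal sets the pass mark at 60% of the score attained by the candidate at the 95th percentile. A minimal sketch under those assumptions (parameters and scores are illustrative):

```python
import math

def icc_2pl(theta, a, b):
    """2PL item characteristic curve:
    P(theta) = 1 / (1 + exp(-a * (theta - b))),
    where a is discrimination and b is difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def cohen_cutoff(scores, fraction=0.60, percentile=0.95):
    """Cohen-style pass mark: `fraction` of the score attained at
    the given percentile of the cohort (illustrative implementation)."""
    ranked = sorted(scores)
    idx = min(len(ranked) - 1, int(percentile * len(ranked)))
    return fraction * ranked[idx]

# A discriminating item (a = 1.5) of medium difficulty (b = 0):
# an examinee whose ability equals the difficulty has P = 0.5
p = icc_2pl(0.0, a=1.5, b=0.0)

# Illustrative cohort of 10 raw scores
cut = cohen_cutoff([52, 61, 64, 70, 73, 75, 78, 80, 84, 90])
```

The modified and enhanced Cohen variants adjust the fraction and reference percentile using data from reference exams; the structure of the calculation stays the same.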

(2) OSCE analysis using many-facet Rasch analysis to scrutinize the validity and reliability of OSCEs. This approach provides not only important information about the stations and domains the OSCE measures, but also the extent to which examiners were biased in their evaluation of students. Besides the borderline regression method for establishing cut-off scores, we have developed a sophisticated approach, based on station characteristic curves, to determine the cut-off value for passing an OSCE.
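The borderline regression method mentioned above can be sketched as follows: for each station, the checklist scores are regressed on the examiners' global ratings, and the cut score is the fitted checklist score at the "borderline" grade. A minimal Python sketch (the grade coding and data are illustrative):

```python
def borderline_regression(global_ratings, checklist_scores, borderline_grade=2):
    """Ordinary least-squares fit of checklist score on global rating;
    the station cut score is the fitted value at the borderline grade."""
    n = len(global_ratings)
    mx = sum(global_ratings) / n
    my = sum(checklist_scores) / n
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(global_ratings, checklist_scores))
    sxx = sum((x - mx) ** 2 for x in global_ratings)
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept + slope * borderline_grade

# Illustrative data: global ratings coded 1 = fail .. 5 = excellent,
# with 2 taken as the borderline grade
ratings = [1, 2, 2, 3, 3, 4, 4, 5]
scores  = [10, 14, 15, 18, 19, 24, 23, 28]
cut = borderline_regression(ratings, scores)
```

The overall OSCE pass mark is then typically the sum (or mean) of the per-station cut scores.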

(3) Work-based assessments and observational measures of competencies. We have developed a scoring procedure for these measures that is reliable and robust and can be used to evaluate the performance of medical students in an authentic clinical/professional setting. These methods are easy to employ and require minimal time investment from clinicians.

(4) Development of assessment blueprints: A well-structured assessment blueprint is a crucial component of any high-quality assessment program. We provide services to develop detailed blueprints that outline the content and cognitive skills to be assessed, as well as the types of assessment methods to be used.

(5) Item-writing workshops: We offer workshops and training sessions to help educators and assessment writers develop high-quality assessment items that meet the requirements of Item Response Theory. This includes training on how to write items that are clear, concise, and assess the intended cognitive skills.

(6) Assessment validation: We provide services to validate assessments that have already been developed, using statistical techniques such as factor analysis, discriminant analysis, predictive validity analysis and Item Response Theory. We help you ensure that the assessments are measuring what they are intended to measure and are providing accurate information about students’ knowledge and skills.

(7) Standard-setting: Standard-setting is the process of determining the cut-off scores or pass marks for an assessment. We provide services to help educators and assessment developers establish defensible cut-off scores using established techniques such as the Angoff, Bookmark, or Hofstee methods, as well as a new method based on test characteristic curves.
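The Angoff method, for instance, asks each judge to estimate the probability that a minimally competent (borderline) candidate would answer each item correctly; the cut score is then the sum over items of the judge-averaged probabilities. A minimal sketch (the panel ratings below are illustrative):

```python
def angoff_cutoff(judge_ratings):
    """judge_ratings: one list per judge, each holding, per item,
    the estimated probability that a borderline candidate answers
    it correctly. Returns the cut score in raw marks."""
    n_judges = len(judge_ratings)
    # Average the judges' estimates item by item, then sum over items
    per_item = [sum(col) / n_judges for col in zip(*judge_ratings)]
    return sum(per_item)

# Illustrative: a panel of 3 judges rating a 5-item test
ratings = [
    [0.6, 0.7, 0.5, 0.8, 0.4],
    [0.5, 0.8, 0.6, 0.7, 0.5],
    [0.7, 0.6, 0.4, 0.9, 0.6],
]
cut = angoff_cutoff(ratings)  # cut score out of 5 raw marks
```

Bookmark and Hofstee arrive at a cut score by different routes (ordered item booklets and acceptable pass-rate bounds, respectively), but all three aim to make the standard defensible rather than arbitrary.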

(8) Assessment program evaluation: We provide services to evaluate the effectiveness of existing assessment programs in health professions education. This includes conducting surveys or focus groups with students, educators, and other stakeholders to gather feedback about the assessment program and using statistical techniques to analyze the data and provide recommendations for improvement.