Current Assessment in Medical Education: Programmatic Assessment

Authors

  • L. W. T. Schuwirth, Director, Prideaux Centre for Research in Health Professions Education, College of Medicine and Public Health, Flinders University, Sturt Road, Bedford Park, South Australia, 5042
  • C. P. M. van der Vleuten, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands

Keywords

Assessment, Feedback, Programmatic Assessment, Portfolio

Abstract

Programmatic assessment is both a philosophy and a method of assessment. It was developed in medical education in response to the limitations of the dominant testing and measurement approaches, and to better align assessment with changes in how medical competence is conceptualised. It is based on the continual collection of assessment and feedback information in a portfolio, which is periodically discussed with a mentor or coach. Students produce reflections in which they analyse the assessment information, triangulated across methods, to meaningfully, rather than purely numerically, evaluate their strengths and weaknesses in the different competency domains. Decisions about students’ progress are made proportionally: low-stakes decisions are made when only limited assessment information is available, whereas high-stakes decisions are made only on rich information. The quality of the process and the credibility of the outcomes are established through a combination of psychometric methods, qualitative rigour, and organisational due process. The method is still young; preliminary results are promising, but more empirical justification research is needed.

Published

2019-05-29

How to Cite

Schuwirth, L. W. T., & Van der Vleuten, C. P. M. (2019). Current Assessment in Medical Education: Programmatic Assessment. Journal of Applied Testing Technology, 20(S2), 2–10. Retrieved from http://www.jattjournal.net/index.php/atp/article/view/143673

