A Quantitative Evaluation of a Live Remote Proctoring Pilot

Authors

  • T. J. Muckle, Chief Assessment Officer, National Board of Certification and Recertification for Nurse Anesthetists, Chicago, IL 60631
  • Y. Meng, Senior Psychometrician, Board of Pharmacy Specialties
  • S. Johnson, Vice President of Clinical Solutions, Agilum Healthcare Intelligence, Inc.

Keywords

Certification, Live Remote Proctoring, Modality Effects, User Experience

Abstract

The COVID-19 pandemic prompted renewed interest in Live Remote Proctoring (LRP), not only as a means of expanding test availability but as a necessity for maintaining business continuity, and many certification organizations have correspondingly offered LRP as an option for candidates. This article describes a retrospective, observational pilot study evaluating modality effects for three high-stakes certification examination programs administered concurrently under test center and LRP conditions. Also reported are summaries of a post-examination survey assessing the drivers of testing condition selection and candidate satisfaction, particularly with aspects of the LRP experience. Significant differences were observed in the distributions of both test scores and test durations (both higher for test center candidates). Although the explanatory variables are not completely understood, the authors offer insights into the factors influencing these outcomes. While satisfaction with LRP was reasonably positive, LRP candidates reported significant technical issues. The primary drivers for selecting LRP were pandemic-related safety concerns, simple convenience, and lack of test center availability.
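
To make the modality comparison concrete, the minimal Python sketch below illustrates the kind of two-sample contrast the abstract describes: comparing score and duration distributions between test center and LRP groups. The simulated data, effect sizes, and choice of tests (Welch's t and Mann-Whitney U) are illustrative assumptions, not the authors' actual analysis.

    # Illustrative only: simulated scores and durations for two
    # administration conditions; values are assumptions, not study data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Hypothetical groups: test center candidates vs. LRP candidates.
    center_scores = rng.normal(loc=500, scale=50, size=300)
    lrp_scores = rng.normal(loc=490, scale=50, size=150)
    center_minutes = rng.normal(loc=170, scale=25, size=300)
    lrp_minutes = rng.normal(loc=160, scale=25, size=150)

    for label, a, b in [("score", center_scores, lrp_scores),
                        ("duration (min)", center_minutes, lrp_minutes)]:
        # Welch's t-test: mean difference without assuming equal variances.
        t_stat, p_t = stats.ttest_ind(a, b, equal_var=False)
        # Mann-Whitney U: a distribution-free check on the same contrast.
        u_stat, p_u = stats.mannwhitneyu(a, b, alternative="two-sided")
        print(f"{label}: Welch t = {t_stat:.2f} (p = {p_t:.4f}), "
              f"Mann-Whitney U = {u_stat:.0f} (p = {p_u:.4f})")

In practice a nonparametric test is a sensible companion to the t-test here, since test durations in particular tend to be right-skewed rather than normally distributed.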

Published

2022-03-20

How to Cite

Muckle, T. J., Meng, Y., & Johnson, S. (2022). A Quantitative Evaluation of a Live Remote Proctoring Pilot. Journal of Applied Testing Technology, 23, 46–53. Retrieved from https://www.jattjournal.net/index.php/atp/article/view/165806

Issue

Vol. 23 (2022)

Section

Articles

