Examining Panelist Data From a Bilingual Standard Setting Study

Authors

  • E. M. Rodeck, Buros Center for Testing, University of Nebraska-Lincoln
  • T.-Y. Chin, Buros Center for Testing, University of Nebraska-Lincoln
  • S. L. Davis, Buros Center for Testing, University of Nebraska-Lincoln
  • B. S. Plake, Buros Center for Testing, University of Nebraska-Lincoln

Abstract

This study examined the relationships between the evaluations obtained from standard setting panelists and changes in ratings between different rounds of a standard setting study that involved setting standards on different language versions of an exam. We investigated panelists' evaluations to determine if their perceptions of the standard setting were related to adjustments they made in their recommended cut scores across rounds of the process. The standard setting was conducted for a high school mathematics test composed of multiple-choice and constructed-response items. The test was designed for a population of students who speak and receive primary instruction in either English or French. Results indicated that panelists' evaluations of their ratings and their comfort with the process were related to how their ratings changed across sequential rounds of the process. Differences in the degree to which the evaluations influenced the standard setting judgments were observed between the English and French panelists, with the French group reporting increasing comfort across rounds, in contrast to the English group, which reported relatively higher comfort at the beginning of the process. The results illustrate how standard setting evaluation data can provide insight into factors that affect panelists' ratings.

Published

2014-04-09

How to Cite

Rodeck, E. M., Chin, T.-Y., Davis, S. L., & Plake, B. S. (2014). Examining Panelist Data From a Bilingual Standard Setting Study. Journal of Applied Testing Technology, 9(3), 1–19. Retrieved from https://www.jattjournal.net/index.php/atp/article/view/48348

Section

Articles
