Validity Evidence in Accommodations for English Language Learners and Students with Disabilities

Authors

  • Wayne Camara, The College Board

Abstract

The five papers in this special issue of the Journal of Applied Testing Technology address fundamental issues of validity when tests are modified or accommodations are provided to English Language Learners (ELLs) or students with disabilities. Three papers employed differential item functioning (DIF) and factor analysis and found that the underlying constructs measured by the tests do not change across these groups of students. Despite this strong finding, consistent and large score differences persist across groups. Such consistent and large score differentials on cognitive ability tests would ideally be contrasted with findings from alternative measures (e.g., portfolios, performance assessments, and teachers' ratings). Two papers examine current methods used to identify and classify ELLs and students with disabilities, while other papers examine the performance of students with specific disabilities (e.g., deafness, mental retardation). The impact of modifications and accommodations on score comparability is discussed in relation to professional standards and current validity theory.
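
As a concrete illustration of the two kinds of evidence summarized above, the sketch below is a minimal example, not part of the article: it uses simulated data and hypothetical function names to show a Mantel-Haenszel common odds ratio for a single item, comparing accommodated ("focal") and non-accommodated ("reference") examinees matched on total score in the spirit of Dorans and Holland (1993), together with a Cohen's d effect size for the group score difference (Cohen, 1988).

# Minimal sketch (illustrative only; simulated data, hypothetical names).
import numpy as np


def mantel_haenszel_alpha(correct, group, matching_score):
    """Common odds ratio (alpha_MH) for one item across matched score strata.

    correct        : 1 if the studied item was answered correctly, else 0
    group          : 'ref' or 'focal' for each examinee
    matching_score : total test score used to form comparable strata
    """
    num, den = 0.0, 0.0
    for s in np.unique(matching_score):
        stratum = matching_score == s
        ref = stratum & (group == "ref")
        foc = stratum & (group == "focal")
        a = np.sum(correct[ref] == 1)   # reference, correct
        b = np.sum(correct[ref] == 0)   # reference, incorrect
        c = np.sum(correct[foc] == 1)   # focal, correct
        d = np.sum(correct[foc] == 0)   # focal, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    # ETS reports DIF on the delta scale as MH D-DIF = -2.35 * ln(alpha_MH).
    return num / den if den > 0 else np.nan


def cohens_d(x, y):
    """Standardized mean difference using a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 2000
    group = np.where(rng.random(n) < 0.8, "ref", "focal")
    total = rng.integers(0, 21, size=n)            # matching total score
    p_correct = 0.2 + 0.03 * total                 # item functions the same in both groups
    item = (rng.random(n) < p_correct).astype(int)

    alpha = mantel_haenszel_alpha(item, group, total)
    d = cohens_d(total[group == "ref"], total[group == "focal"])
    print(f"MH common odds ratio: {alpha:.2f} (values near 1.0 suggest little DIF)")
    print(f"Cohen's d for total scores: {d:.2f}")

Because the simulated item functions identically in both groups, the odds ratio should fall near 1.0 even when mean score differences exist, mirroring the pattern the abstract describes: stable constructs alongside group score gaps.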


Published

2014-04-09

How to Cite

Camara, W. (2014). Validity Evidence in Accommodations for English Language Learners and Students with Disabilities. Journal of Applied Testing Technology, 10(2), 1–23. Retrieved from http://www.jattjournal.net/index.php/atp/article/view/48357


References

Abedi, J. (2009, this issue). English Language Learners with disabilities: Classification, assessment and accommodation issues. Journal of Applied Testing Technology.

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

Americans with Disabilities Act of 1990, Pub. L. No. 101-336, § 2, 104 Stat. 328 (1990).

Cahalan, C., Mandinach, E.B., & Camara, W.J. (2002). Predictive validity of SAT I: Reasoning Test for test-takers with learning disabilities and extended time accommodations (College Board Research Report 2002-5). New York: The College Board.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Cook, L., Eignor, D., Steinberg, J., Sawaki, Y., & Cline, F. (2009, this issue). Using factor analysis to investigate the impact of accommodations on the scores of students with disabilities on a reading comprehension assessment. Journal of Applied Testing Technology.

Dorans, N.J., & Holland, P.W. (1993). DIF detection and description: Mantel-Haenszel and standardization. In P.W. Holland & H. Wainer (Eds.), Differential item functioning (pp. 35-66). Hillsdale, NJ: Lawrence Erlbaum.

Hunter, J.E., Schmidt, F.L., & Hunter, R. (1979). Differential validity of employment tests by race: A comprehensive review and analysis. Psychological Bulletin, 86, 721-735.

Kane, M.T. (2006). Validation. In R. L. Brennan (Ed.), Educational Measurement (4th ed., pp. 17-64). Westport, CT: American Council on Education and Praeger.

Ketterlin-Geller, L.R. (2008). Testing students with special needs: A model for understanding the interaction between assessment and student characteristics in a universally designed environment. Educational Measurement: Issues and Practice, 27(3), 3-16.

Kindler, A.L. (2002). Survey of the states' limited English proficient students and available educational programs and services: 2000-2001 summary report. Washington, DC: National Clearinghouse for English Language Acquisition & Language Instruction Educational Programs.

Kobrin, J.L., Patterson, B.F., Shaw, E.J., Mattern, K.D. & Barbuti, S.M. (2008). Validity of the SAT for predicting first-year college grade point average (College Board Research Report No. 2008-5). New York: The College Board.

Koretz, D.M., & Hamilton, L.S. (2006). Testing for accountability in K-12. In R. L. Brennan (Ed.), Educational Measurement (4th ed., pp. 531-621). Westport, CT: American Council on Education and Praeger.

Laitusis, C.C., Maneckshana, B., Monfils, L., & Ahlgrim-Delzell, L. (2009, this issue). Differential item functioning comparisons on a performance-based alternative assessment for students with severe cognitive impairments, autism and orthopedic impairments. Journal of Applied Testing Technology.

Mattern, K.D., Patterson, B.F., Shaw, E.J., Kobrin, J.L., & Barbuti, S.M. (2008). Differential validity and prediction of the SAT (College Board Research Report 2008-4). New York: The College Board.

Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational Measurement (3rd ed., pp.13-100). Washington, DC: American Council on Education.

Milewski, G., Johnsen, D., Glazer, N., & Kubota, M. (2005). A survey to evaluate the alignment of the SAT writing and critical reading sections to the curricular and instructional practices (College Board Research Report 2005-1). New York: The College Board.

Moen, R., Liu, K., Thurlow, M., Lekwa, A., Scullin, S., & Hausmann, K. (2009, this issue). Identifying less accurately measured students. Journal of Applied Testing Technology.

National Research Council (1997). Educating one and all: Students with disabilities and standards-based reform. Washington, DC: National Academy Press.

No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).

Rivera, C., Collum, E., Shafer, W.L., & Sia, J.K. (2006). Analysis of state assessment policies regarding the accommodation of English Language Learners. In C. Rivera & E. Collum (Eds.), State Assessment Policy and Practice for English Language Learners (pp. 1-174). Mahwah, NJ: Lawrence Erlbaum Associates.

Scheuneman, J.D., Camara, W.J., Cascallar, A.S., Wendler, C., & Lawrence, I. (2002). Calculator access, use and type in relation to performance on the SAT I: Reasoning test in mathematics. Applied Measurement in Education, 15(1), 95-112.

Sireci, S.G., Scarpati, S., & Li, S. (2005). Test accommodations for students with disabilities: An analysis of the interaction hypothesis. Review of Educational Research, 75(4), 457-490.

Steinberg, J., Cline, F., Ling, G., Cook, L., & Tognatta, N. (2009, this issue). Examining validity and fairness of a state standards-based assessment of English-Language Arts for Deaf and Hard of Hearing Students. Journal of Applied Testing Technology.

Thompson, T. & Way, D. (2007). Investigating CAT Designs to achieve comparability with a paper test. Paper presented at the Applications and Issues Conference of the Graduate Management Admissions Council, Minneapolis, MN.

U.S. Department of Education (2007). Digest of Education Statistics. Retrieved October 19, 2008 from http://nces.ed.gov/programs/digest/d07/.

Vacc, N. A. & Tippins, N. (2002). Documentation. In R.B. Ekstrom and D.K. Smith (Eds.), Assessing individuals with disabilities in educational, employment and counseling settings (pp. 59-70). Washington, DC: American Psychological Association.

Young, J. (2001). Differential validity, differential prediction and college admissions testing: A comprehensive review and analysis (College Board Research Report 2001-6). New York: The College Board.
