How University Entrance Examinations Can Threaten Validity in Higher Education Admissions

Authors

  • Hossein Salarian, University of Tehran

DOI:

https://doi.org/10.61227/iltt.v1i1.177

Keywords:

University entrance exam, High-stakes test, Positive/negative washback, Stakeholders

Abstract

Although university entrance examinations are designed to promote fairness and objectivity in admissions, they can at times compromise the validity of the process by prioritizing test-taking strategies over genuine academic ability or potential. To this end, this study explored the validity, fairness, and predictive value of university entrance examinations among 400 students via a mixed-methods research design. A cross-sectional survey was carried out to collect quantitative data, which were analyzed through statistical software (SPSS), while semi-structured interviews supplied qualitative depth to explore the nuances behind the statistical trends. In addition, ANOVA tests assessed differences across regions. Interview transcriptions were coded in NVivo 8 and analyzed using thematic analysis. Moreover, academic records (GPA and entrance exam scores) were obtained to investigate predictive validity. Quantitative results showed moderate acceptance of the entrance exams' predictive power but significant concerns around fairness, regional equity, and psychological stress. The semi-structured interviews revealed that many students felt the exams forced rote learning and unfairly favored urban, well-resourced applicants. Students reported stress, anxiety, and health impacts stemming from the exam's high-stakes nature. Prior research supports these findings, noting GPA as a stronger long-term predictor of performance. Overall, the study highlights that while entrance exams can motivate students, they risk undermining fairness and mental health. Reform efforts, including multi-dimensional admissions models that combine GPA, interviews, and personal statements, are recommended to achieve greater equity and validity in higher education admissions. The study closes with theoretical and practical implications and suggestions for further research.

References

Alderson, J. C., & Wall, D. (1993). Does washback exist? Applied Linguistics, 14, 115–129.

Alderson, J. C., & Hamp-Lyons, L. (1996). TOEFL preparation courses: A study of washback. Language Testing, 13(3), 280–297.

American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (2014). Standards for educational and psychological testing. American Educational Research Association.

Bachman, L. (1995). Fundamental considerations in language testing. Oxford: Oxford University Press.

Bailey, K. M. (1996). Working for washback: A review of the washback concept in language testing. Language Testing, 13(3), 257-279.

Bailey, K. M. (1999). Washback in language testing (TOEFL Monograph Series Report No. RM-99-04, TOEFL-MS-15). Princeton, NJ: Educational Testing Service.

Bennett, R. E., Kane, M. T., & Bridgeman, B. (2011). Theory of action and validity argument in the context of through-course summative assessment. Paper presented at invitational Research Symposium on Through Course Summative Assessment, Atlanta, GA.

Bird, K. A., Castleman, B. L., Mabel, Z., & Song, Y. (2021). Bringing transparency to predictive analytics: A systematic comparison of predictive modeling methods in higher education. AERA Open, 7, 1–19.

Boud, D., & Bearman, M. (2024). The assessment challenge of social and collaborative learning in higher education. Educational Philosophy and Theory, 56(5), 459–468.

Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2004). The concept of validity. Psychological Review, 111, 1061–1071.

Brennan, R. L. (2006). Perspectives on the evolution and future of educational measurement. In R. L. Brennan (Ed.), Educational measurement (4th ed.). Praeger.

Chapelle, C. A., & Voss, E. (2021). Validity argument in language testing: Case studies of validation research. Cambridge University Press.

Dörnyei, Z. (2007). Research methods in applied linguistics. Oxford: Oxford University Press.

Flake, J. K., & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3, 456–465.

Frederiksen, J., & Collins, A. (1989). A systems approach to educational testing. Educational Researcher, 18(4), 22–32.

Fulcher, G. (2014). Philosophy and language testing. In A. Kunnan (Ed.), The companion to language assessment. John Wiley & Sons.

Gipps, C. V. (1995). Beyond testing: Toward a theory of educational assessment. The Falmer Press.

Haertel, E. H., & Herman, J. L. (2005). A historical perspective on validity arguments for accountability testing. Yearbook of the National Society for the Study of Education, Cambridge University Press.

Haladyna, T. M., Nolen, S. B., & Haas, N. S. (1991). Raising standardized achievement test scores and the origins of test score pollution. Educational Researcher, 20(5), 2–7.

Hughes, A. (1989). Testing for language teachers. Cambridge: Cambridge University Press.

Hymes, D. H. (1972). On communicative competence. In J. B. Pride & J. Holmes (Eds.), Sociolinguistics. Penguin Books.

Kane, M. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50, 1–13.

Kelly, S., Olney, A. M., Donnelly, P., Nystrand, M., & D'Mello, S. K. (2018). Automatically measuring question authenticity in real-world classrooms. Educational Researcher, 47(7), 451–464.

Mackey, A., & Gass, S. M. (2005). Second language research: Methodology and design. Lawrence Erlbaum Associates.

Madaus, G. (1988). The influence of testing on the curriculum. In L. N. Tanner (Ed.), Critical issues in curriculum: Yearbook of the NSSE, Part 1. Chicago, IL: University of Chicago Press.

Maxwell, J. (2004). Qualitative research design: An interactive approach. Sage Publications Inc.

Moses, M. S., & Nanna, M. J. (2007). The testing culture and the persistence of high stakes testing reforms. Education and Culture, 23(1), 55–72.

Schimmack, U. (2021). The validation crisis in psychology. Meta-Psychology, 5, Article 1645. https://doi.org/10.15626/MP.2019.1645

Shohamy, E. (1997). Testing methods, testing consequences: Are they ethical? Language Testing, 14, 34–49.

Thomas, R. M. (2005). High-stakes testing: Coping with collateral damage. Lawrence Erlbaum Associates Publishers.

Toulmin, S. E. (2003). The uses of argument. Cambridge University Press.

Wall, D. (2000). The impact of high-stakes testing on teaching and learning: Can this be predicted? Language Testing, 14(2), 197–221.

Wall, D. (2005). The impact of high-stakes examinations on classroom teaching: A case study using insights from testing and innovation theory. Cambridge: Cambridge University Press.

Wohlin, C., Runeson, P., Höst, M., Ohlsson, M. C., Regnell, B., & Wesslén, A. (2000). Experimentation in software engineering: An introduction. Kluwer Academic Publishers.

Ysseldyke, J., Nelson, J., Christenson, S., Johnson, D., Dennis, A., Triezenberg, H., & Hawes, M. (2004). What we know and need to know about the consequences of high-stakes testing for students with disabilities. Exceptional Children, 71(1), 75–94.

Zheng, Y., Nydick, S., Huang, S., & Zhang, S. (2024). MxML (exploring the relationship between measurement and machine learning): Current state of the field. Educational Measurement: Issues and Practice, 43(1), 19–38.

Published

2025-07-15

How to Cite

Salarian, H. (2025). How University Entrance Examinations Can Threaten Validity in Higher Education Admissions. Innovation in Language Testing and Teaching, 1(1), 21–38. https://doi.org/10.61227/iltt.v1i1.177
