Professor reaffirms bias in standardized testing

New findings from IU Kelley School of Business professor Herman Aguinis suggest hundreds of thousands of college students have been affected by predictions of academic performance, based on standardized test scores, that vary from institution to institution.

Aguinis’ recent study, published in the Journal of Educational Psychology, is drawing attention to how tests like the SAT and GRE function differently at different institutions, according to an IU press release.

He and his study coauthors argued that admission policies, grading approaches and academic support differ from school to school, raising questions about how useful and fair standardized tests can be as predictors of success across gender and ethnic groups.

“Our main implication is that tests do not work in the same way across colleges and universities, and we have found that hundreds of thousands of people’s predicted GPA based on SAT scores were under- or overestimated,” Aguinis said in the release.

Aguinis’ paper “Differential Prediction Generalization in College Admissions Testing,” coauthored with Steven A. Culpepper of the University of Illinois at Urbana-Champaign and Charles A. Pierce of the University of Memphis, comes at a time when many universities and colleges are making the SAT optional for admissions.

“If the prediction is not the same, that means that you can benefit or suffer based only on your ethnicity or gender because your performance is expected to be higher or lower than it will be, which means you’re more or less likely to be offered a scholarship or you’re more or less likely to be offered admission,” Aguinis said in the release.
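The "differential prediction" problem Aguinis describes can be sketched with synthetic data: if one pooled SAT-to-GPA regression is applied to everyone, any group whose true score-to-grades relationship differs from the pooled line will be systematically over- or under-predicted. The numbers below are invented for illustration only; they are not from the study.

```python
# Illustrative sketch of differential prediction with synthetic data.
# All coefficients and sample sizes here are made up, NOT from the study.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two groups whose true SAT-to-GPA relationships differ slightly.
sat_a = rng.normal(1100, 150, n)
sat_b = rng.normal(1050, 150, n)
gpa_a = 0.5 + 0.0025 * sat_a + rng.normal(0, 0.3, n)  # group A
gpa_b = 0.9 + 0.0020 * sat_b + rng.normal(0, 0.3, n)  # group B

# One pooled regression, as an admissions office might use.
sat = np.concatenate([sat_a, sat_b])
gpa = np.concatenate([gpa_a, gpa_b])
slope, intercept = np.polyfit(sat, gpa, 1)

def predicted_gpa(s):
    """Predicted GPA from the single pooled regression line."""
    return intercept + slope * s

# Mean prediction error (actual minus predicted) for each group.
# A positive value means the pooled line under-predicts that group.
err_a = np.mean(gpa_a - predicted_gpa(sat_a))
err_b = np.mean(gpa_b - predicted_gpa(sat_b))
print(f"group A mean error: {err_a:+.3f}")
print(f"group B mean error: {err_b:+.3f}")
```

Because a least-squares fit with an intercept has residuals summing to zero, the two group errors offset each other here: one group's predicted GPA is systematically too low while the other's is too high, which is the "it goes both ways" pattern described in the quotes.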

The most recent study follows a “groundbreaking paper” the three researchers coauthored in 2010 in the Journal of Applied Psychology, according to the release. The 2010 paper “received a great deal of attention” because it found that College Board testing had the potential to be biased and that the board’s methods for detecting such bias were deficient.

Two research scientists from the College Board, which administers the SAT, responded to these findings in 2013 in a paper also published in the Journal of Applied Psychology.

Krista Mattern and Brian F. Patterson, the two College Board researchers, questioned the 2010 paper because its research was based on simulations rather than actual student data, according to the release.

In their report, Mattern and Patterson used data from more than 475,000 students at more than 200 colleges from 2006 to 2008 to examine the relationship between SAT scores and first-year grade-point averages.

In publishing this data, the Journal of Applied Psychology required Mattern and Patterson to release College Board data to the public for the first time, according to the release. Aguinis and his coauthors based their latest research on this data.

“The first thing we did was to do what they did,” Aguinis said in the release. “And we found that our results are exactly like theirs — on average — across 200 colleges.”

Five different reviewers evaluated Aguinis’ latest report, and the researchers were asked to submit nine versions in addition to their original manuscript before publication.

“This paper was scrutinized like no other I have ever seen before,” Aguinis said in the release.

Aguinis said in the release that standardized tests can be biased in any particular context and that a majority of colleges showed contextual differences.

“Hundreds of thousands of students have probably been denied admission or denied scholarships just because of their ethnicity or gender when standardized tests are central in the admissions process,” Aguinis said. “But not against blacks or against women necessarily. It goes both ways.”

The recent study compared more than 200,000 men and more than 200,000 women, as well as nearly 30,000 black students and 300,000 white students, from 176 colleges and universities from 2006 to 2008, according to the release.

The paper is about “predicting performance for all people,” Aguinis said in the release. He said the SAT and similar tests are not irrelevant, but they need to be understood within context to obtain the best measures of students’ future academic success.

“The bias we found sometimes benefits one group and some other times the other,” he said in the release. “You need to understand if the test is predicting performance to the same extent across groups. Otherwise the selection process may be unfair for members of certain groups, and the implications are critical for people’s future.”
