The Indiana Daily Student

Expert outlines flaws in teacher evaluations

An expert on teacher-evaluation methods said during a lecture Thursday that the models are biased and do not account for all relevant factors.

Cassandra Guarino, an associate professor of educational leadership and policy, said value-added models are data-driven, objective methods of teacher evaluation that improve on the status quo. However, she said the models are also biased, error-laden and might be based on faulty tests.

Race to the Top, a federal program through which states compete for millions of dollars in funding by improving education, has triggered the implementation of value-added models. Currently, teachers in Indiana are evaluated by a growth model, which ranks teachers according to the median of their students’ standardized-test scores. The growth model is a version of the value-added model that Guarino said is less desirable.
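
For illustration, here is a minimal sketch of how a median-based growth ranking of this kind could be computed. It is not the official Indiana Growth Model or Guarino’s formulation; the student records, score values and percentile calculation below are hypothetical and greatly simplified.

from statistics import median

# Hypothetical records: (teacher, student's prior-year score, current-year score).
# These numbers are invented for illustration only.
records = [
    ("Teacher A", 480, 520),
    ("Teacher A", 500, 505),
    ("Teacher B", 470, 530),
    ("Teacher B", 510, 515),
    ("Teacher C", 490, 495),
]

# Each student's raw growth from the prior year to the current year.
growths = [(teacher, current - prior) for teacher, prior, current in records]

# Convert raw growth to a percentile rank among all students in the comparison group.
all_growth = sorted(g for _, g in growths)

def growth_percentile(g):
    below = sum(1 for x in all_growth if x < g)
    return 100.0 * below / max(len(all_growth) - 1, 1)

# Rank each teacher by the median growth percentile of his or her students.
by_teacher = {}
for teacher, g in growths:
    by_teacher.setdefault(teacher, []).append(growth_percentile(g))

ranking = sorted(by_teacher.items(), key=lambda item: median(item[1]), reverse=True)
for teacher, percentiles in ranking:
    print(teacher, round(median(percentiles), 1))

A single median of this kind is the sort of one-number summary that Guarino and others at the lecture criticized as one-dimensional.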

“We’re not actually estimating the magnitude of a teacher’s effectiveness,” Guarino said.

She said the growth model doesn’t account for parents’ responses to which teacher a student is assigned, such as when a parent hires a tutor because their child’s teacher is underperforming.

Guarino said the model is one-dimensional: evaluations are a function of students’ test scores alone.

“One number can’t represent all the important skills students learn each year,” education professor Barry Bull said.

Guarino said one area not accounted for in standardized tests is noncognitive skills, the behaviors conducive to success.

Others expressed concern about the bias created by the nonrandom assignment of students to teachers, Guarino said.

Although criticisms were the focus of Guarino’s presentation, the audience responded positively when she asked whether they thought the models were better than the status quo. They agreed when she said value-added evaluation is better than no evaluation.

Joyce Alexander, executive associate dean of the School of Education, said there’s still work to be done.

“We should use real student outcomes,” Alexander said. “I’m not sure we should use only one metric.”
