By Sarah Garland - The Hechinger Report
ABOUT THIS PROJECT
This story, the second in a three-part series examining the new teacher evaluation systems being used in Memphis and Shelby County, is a collaboration between The Commercial Appeal and The Hechinger Report. Hechinger is a nonprofit, nonpartisan education news service based at Teachers College, Columbia University.
To close the achievement gap between poor and affluent students in Tennessee, some students may need to learn at double the rate of their high-performing peers, according to Tennessee Department of Education materials.
But this goal could create a Catch-22 for teachers, who for the first time this year will be measured on whether their students make large gains on standardized tests, as determined by the controversial statistical formula known among researchers as "value-added modeling."
"There's something suspicious about that formula," said Keith Williams, president of the Memphis Education Association, the local teachers union. "You're using something that has some real flaws."
In Tennessee, 45 percent of teachers teach in subjects with standardized tests, and for more than a decade, Tennessee has rated these teachers using their students' progress on the tests. School officials use complex statistics to predict how individual students will perform, based on their past scores. Teachers whose students score higher than predicted are deemed highly effective; teachers whose students fall short of their predicted marks are seen as less so.
Until now, the state did nothing more than report the data to districts. This year, however, student test-score growth will count for 35 percent of a teacher's year-end evaluation. Districts will use the data to decide which teachers deserve tenure and which should be fired. (Another 15 percent of a teacher's score is made up of achievement measures chosen by the district, and 50 percent is based on classroom observations and other measures.)
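The weighting just described is plain arithmetic, and can be sketched as a minimal illustration (the component names and the example scores on the state's 1-5 scale are hypothetical, chosen only to show the math):

```python
# Hypothetical sketch of the evaluation weighting described above:
# 35% test-score growth, 15% district-chosen measures, 50% observations.
WEIGHTS = {
    "test_score_growth": 0.35,   # value-added growth on standardized tests
    "district_measures": 0.15,   # achievement measures chosen by the district
    "observations": 0.50,        # classroom observations and other measures
}

def composite_score(components):
    """Weighted average of the three evaluation components (1-5 scale assumed)."""
    return sum(WEIGHTS[name] * score for name, score in components.items())

# Example: a teacher with middling growth but strong observation ratings.
score = composite_score({
    "test_score_growth": 3.0,
    "district_measures": 4.0,
    "observations": 4.0,
})
print(score)  # 0.35*3 + 0.15*4 + 0.50*4 = 3.65
```

The point of the sketch is simply that test-score growth, at 35 percent, is the single largest component but still less than the combined weight of the other measures.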
The 55 percent of teachers who don't teach in subjects with standardized tests will be rated based on the test-score ratings of other teachers in their schools.
Under the Tennessee 5-point rating system, teachers defined as a 3, or "at expectations," are those whose students make at least a year's worth of growth on state tests. To receive an "above expectations" score of 4 or 5, which new teachers must do for two years to get tenure, a teacher's students must demonstrate more than a year of growth.
Whether to use test-score data in teacher hiring and firing decisions has fueled heated debates nationwide. Until recently, most teachers were evaluated based only on infrequent classroom observations by principals. Now, more than two dozen states are looking to student test scores to supplement observations, spurred on by the Obama administration's Race to the Top federal grant competition, in which Tennessee was a first-round winner.
"Relative to what exists today, 'value-added' does a much better job of predicting how a teacher is going to be in the future," said Dan Goldhaber, director of the Center for Education Data & Research at the University of Washington. But, he added, "some people don't think that test scores are the right way to judge the output of students."
The statistical formulas are highly complex -- the one used in Tennessee is especially complicated -- and, critics say, therefore not transparent. Research has suggested that the calculations are best at identifying the very best and very worst teachers, but are less reliable when it comes to rating teachers in the middle.
Educators and researchers have also debated whether the models should account for poverty and other factors that can make a difference in how students perform. And teachers and advocates like Williams worry about "a ceiling effect," in which teachers with high-achieving students receive low ratings because their students have less room for improvement.
"Research has shown practically no relationship between the entering academic achievement level for a class of students and a teacher's subsequent value-added estimate," Kelli Gauthier, a spokeswoman for the Tennessee Department of Education, said.
William Sanders, a former University of Tennessee researcher who now works for SAS, a private business-intelligence company, developed Tennessee's formula. SAS now administers the state's teacher ratings based on standardized tests, and its formula is considered private intellectual property.
Sanders has countered critics calling for more transparency by arguing that his formula's complexity makes it more accurate than simpler versions. The "layered model," as it is called by researchers, collects between three and five years of test-score data for each student in as many subjects as possible, including reading, math, science and social studies, in order to make predictions about how a student will score on a given test.
It also looks into the "future," says Sanders, recording how students do as they progress to the next grade and giving their previous teachers credit for how they perform.
The equations don't factor in individual student characteristics, like poverty or special-education status, in contrast to formulas in Florida and Washington, D.C. By comparing individual students to themselves over long periods of time, Sanders argues, statistical errors are reduced.
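The actual layered model is proprietary, but the core predicted-versus-actual logic described above can be sketched in a deliberately simplified form (this is an illustration only, not the SAS formula; the function name and example numbers are invented):

```python
from statistics import mean

def value_added_estimate(students):
    """Average residual (actual minus predicted score) across a teacher's students.

    `students` is a list of (predicted, actual) score pairs, where each
    prediction is assumed to come from that student's own prior test
    history -- the self-comparison over time that Sanders describes.
    A positive result suggests students outperformed their predictions;
    a negative result suggests they fell short.
    """
    return mean(actual - predicted for predicted, actual in students)

# Three students: one beat the prediction by 5 points, one fell
# short by 2, one beat it by 5. Mean residual is 8/3, so this
# (hypothetical) teacher would look above expectations.
print(value_added_estimate([(50, 55), (60, 58), (70, 75)]))
```

In the real system, the prediction step itself draws on three to five years of scores in multiple subjects, and the "layered" structure carries credit forward to previous teachers, which is where most of the complexity (and the transparency debate) lies.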
For some Memphis teachers, the biggest concern with the new system is that the majority of teachers don't teach in subjects with standardized tests.
"That's the piece I don't like," said Detra Humble, a science teacher at Manassas High School. "My level of performance is on the backs of other teachers."
School administrators, however, argue that shared scores will lead to more collaboration among teachers.
"The big lift is on" teachers of tested subjects, said Kriner Cash, superintendent of the Memphis City Schools. "What I say is it should not only be on them. It should be on everybody."
© 2012 Scripps Newspaper Group — Online