Teachers Take Issue With LA Times “Evaluations”

In a major front-page story on Sunday, the Los Angeles Times rated teachers based on student test scores it had obtained from the LAUSD:

Seeking to shed light on the problem, The Times obtained seven years of math and English test scores from the Los Angeles Unified School District and used the information to estimate the effectiveness of L.A. teachers – something the district could do but has not.

The Times used a statistical approach known as value-added analysis, which rates teachers based on their students’ progress on standardized tests from year to year. Each student’s performance is compared with his or her own in past years, which largely controls for outside influences often blamed for academic failure: poverty, prior learning and other factors.

Though controversial among teachers and others, the method has been increasingly embraced by education leaders and policymakers across the country, including the Obama administration.
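
To make the mechanics concrete, here is a minimal sketch of what a simplified, single-year value-added calculation looks like: each student’s gain over his or her own prior-year score is averaged by teacher and compared with the average gain across all students. The data, names, and setup below are made up purely for illustration – the Times’ actual model is considerably more involved.

    # Minimal, illustrative value-added sketch (hypothetical data; not the Times' model)
    from collections import defaultdict
    from statistics import mean

    # (student, teacher, prior-year score, current-year score) -- made-up numbers
    records = [
        ("s1", "Teacher A", 310, 345),
        ("s2", "Teacher A", 290, 330),
        ("s3", "Teacher B", 320, 318),
        ("s4", "Teacher B", 300, 305),
    ]

    # Each student's performance is compared with his or her own prior score...
    gains = [(teacher, current - prior) for _, teacher, prior, current in records]

    # ...and a teacher's "value added" is her students' average gain
    # relative to the average gain across all students in the sample.
    overall_gain = mean(g for _, g in gains)

    by_teacher = defaultdict(list)
    for teacher, gain in gains:
        by_teacher[teacher].append(gain)

    for teacher, teacher_gains in sorted(by_teacher.items()):
        print(teacher, round(mean(teacher_gains) - overall_gain, 1))

Even this toy version makes the core assumption visible: whatever a student’s scores do relative to everyone else’s gets credited to, or blamed on, the teacher – which is exactly where the arguments about class composition and measurement error begin.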

Although the Times article later acknowledges the limitations of this method, the paper plowed right ahead anyway, using it – with the names of actual LAUSD teachers – to evaluate teachers in a massively public way:

No one suggests using value-added analysis as the sole measure of a teacher. Many experts recommend that it count for half or less of a teacher’s overall evaluation….

Nevertheless, value-added analysis offers the closest thing available to an objective assessment of teachers. And it might help in resolving the greater mystery of what makes for effective teaching, and whether such skills can be taught.

As most of you know, I was a teacher myself, teaching history and political science at the University of Washington and at Monterey Peninsula College from 2002 to 2009. I love teaching and hope to do more of it someday. I also taught a graduate seminar on pedagogy (the study of teaching), where we extensively examined the literature on student testing and teacher evaluation.

Both my experience as a teacher and my review of the literature on the topic make it extremely clear that relying solely on test scores to evaluate either student learning or teacher effectiveness is a very bad idea, one highly likely to produce misleading results. Testing is very useful, but it is NOT the only way to evaluate a teacher.

That, in turn, is a primary reason you haven’t seen districts like LAUSD publish this information. Districts and teachers alike prefer to conduct more holistic reviews that don’t reduce teaching to test scores. And that’s why UTLA is slamming the LA Times article:

One of the biggest critics is the L.A. teachers union. The head of the union said Sunday he was organizing a “massive boycott” of The Times after the newspaper began publishing a series of articles that uses student test scores to estimate the effectiveness of district teachers.

“You’re leading people in a dangerous direction, making it seem like you can judge the quality of a teacher by … a test,” said A.J. Duffy, president of United Teachers Los Angeles, which has more than 40,000 members.

Why would it be a “dangerous” direction? Because by naming teachers and attaching a flawed rating system to them, the Times gives the public a deeply misleading view of teacher effectiveness. And it can undermine public support for teachers as a result.

The LA Times would have done better not to take the making of education policy for the LAUSD into its own hands. That’s a matter more appropriately handled by parents, teachers, and the school district, working in collaboration with one another. So I share UTLA’s concerns about how this analysis is unfolding.

12 thoughts on “Teachers Take Issue With LA Times “Evaluations””

  1. Here’s a really good blog post that discusses the limitations of value-added assessment and the particular weaknesses of the analysis done by the LA Times.

    http://schoolfinance101.wordpr

    This article and the follow-up are disgraceful, ethically challenged journalism. Since when is a reporter qualified to evaluate teachers?

  2. The Times acknowledged the limitations of the method, but it can still be a valuable piece of information in evaluating a teacher’s effectiveness. Holistic reviews sound nice, but the current teacher reviews done by LAUSD are a joke.

    There is a disconnect between UTLA and the teachers themselves, it appears, based on their reactions.  The highlighted teachers in the article both desired to improve their teaching skills after learning that they scored low.  UTLA called for a boycott of the Times.  I think that says it all.

    The teachers unions in California are just like the GOP when it comes to education reform: they are the party of No.

  3. I don’t think, as the article suggests, that the analysis is itself a measure of a good teacher versus a bad one, or indicative of who should be fired. But I do think it’s a great question generator, and I think it would be valuable if this level of analysis were provided to school administrators. Currently, administrators try to develop this analysis themselves, which of course takes time away from other tasks.

    When you see a result like this:

    With Miguel Aguilar, students consistently have made striking gains on state standardized tests, many of them vaulting from the bottom third of students in Los Angeles schools to well above average, according to a Times analysis. John Smith’s pupils next door have started out slightly ahead of Aguilar’s but by the end of the year have been far behind.

    it’s a good opportunity to ask, Why? Not “Hey, fire that second teacher and all will be swell,” but why is the first teacher apparently getting better results? Is it that the kids are different, or that the first teacher has some great technique that perhaps could be shared? Are the kids really coming out that different – i.e., do teachers in the next grade notice which kids came from which class?

    When I went to school, kids were not randomly assigned to classrooms. Even though demographics might be similar, in fact the kids were different.

    I watch test scores at my daughter’s school, and I see huge inconsistencies. In our case there is one class per grade, so the kids mostly follow the same path. In nearly every class I can see an apparent spike in performance and an apparent regression, sometimes a significant one, over the five years of elementary testing – but it happens at different grades for different classes.

    You also have to beware of the tyranny of small numbers. In a class of 20 students, each student is 5%. Thus, you might see a “huge bump” where in 2nd grade 45% were proficient or better and in 3rd grade 55% were proficient or better – but in reality, that is two students. Maybe they were two students who missed “proficient” by only one or two questions last year. Maybe this year they got lucky. Maybe last year they took the test sick. Maybe this year two new students moved in or out. Or maybe this is a real gain. The numbers themselves won’t tell you.
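
    Here is the same arithmetic as a quick check, with made-up proficiency counts:

        # Made-up counts: a 10-point jump in "percent proficient" in a class of 20
        # comes down to just two students.
        class_size = 20
        proficient_last_year = 9    # 9 of 20 = 45%
        proficient_this_year = 11   # 11 of 20 = 55%
        print(100 * proficient_last_year / class_size)      # 45.0
        print(100 * proficient_this_year / class_size)      # 55.0
        print(proficient_this_year - proficient_last_year)  # the whole "bump" is 2 kids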


  4. teacherm at 8:41 PM August 16, 2010

    I am a nonunionized teacher who works at a charter school. This article is so shallow it’s ridiculous. Yes, test scores are low districtwide; yes, it’s a problem; but to determine the “effectiveness” of a teacher based on one standardized test is completely preposterous! This test covers only math and language arts. There are four other subjects elementary school teachers are still required to teach. In all the state-mandated professional development I have participated in, I have been taught to assess the WHOLE child. I taught one second grader who was able to read at a fourth-grade level and consistently had high math test grades, yet he scored basic on the test. Should I retain him because of this? NO! How can teachers be assessed by just looking at test scores? The WHOLE teacher needs to be assessed. This article doesn’t even come close to describing what an effective teacher is or how one should be evaluated.
