I like the idea of asking the kids:
How useful are the views of public school students about their teachers? Quite useful, according to preliminary results released on Friday from a $45 million research project that is intended to find new ways of distinguishing good teachers from bad.
Statisticians began the effort last year by ranking all the teachers using a statistical method known as value-added modeling, which calculates how much each teacher has helped students learn based on changes in test scores from year to year. Now researchers are looking for correlations between the value-added rankings and other measures of teacher effectiveness.
Research centering on surveys of students’ perceptions has produced some clear early results. Thousands of students have filled out confidential questionnaires about the learning environment that their teachers create. After comparing the students’ ratings with teachers’ value-added scores, researchers have concluded that there is quite a bit of agreement.
In value-added modeling, researchers use students’ scores on state tests administered at the end of third grade, for instance, to predict how they are likely to score on state tests at the end of fourth grade. A student whose third-grade scores were higher than 60 percent of peers statewide is predicted to score higher than 60 percent of fourth graders a year later. If, when actually taking the state tests at the end of fourth grade, the student scores higher than 70 percent of fourth graders, the leap in achievement represents the value the fourth-grade teacher added.
I know all professions are vulnerable to what I think of as "fads," and teaching seems particularly defenseless when presented with magic bullets and over-hyped theories.
Is value-added modeling just another loser in a long line of teacher evaluation theories, or is there anything to it?