Student Evaluations
Yesterday's NY Times Magazine, a special issue devoted to higher education, contained a particularly interesting story of a professor hired to teach a course at Wesleyan, and then let go the following year because her student evaluations weren't good enough. Part of the point of the story is that three quarters of the students actually did think the course highly effective, although reactions were polarized. More broadly, the story points out how colleges and universities have increasingly made retention decisions based on student evaluations.
Here I'll make two main observations. First, as the magazine story itself notes, at research universities (the only sort of institution at which I have taught professionally), teaching still rarely plays a substantial role in retention decisions (although I have seen speculation about the teaching effectiveness of an entry-level candidate with no prior teaching experience used as a criterion in appointments decisions). Truly incompetent teaching may prevent a marginal scholar from receiving tenure, just as stellar teaching can help secure tenure for a marginal scholar, but research universities simply do not value teaching nearly as much as scholarship. (That said, nearly all of my colleagues have tended to care a great deal about their own teaching, including those who were regarded as poor teachers by their students.)
Second, here's a radical idea: How about measuring teaching effectiveness by outcomes? There are obvious problems here, of course. One is the development of good outcome measures. For a law school contracts class, do we look at how students later fared on the contracts questions on the bar? Any test-based measure could have the unfortunate effect of encouraging teaching to the test. Moreover, what about undergraduate courses that have education for its own sake as their goal? How do we measure the "outputs" of a course in music appreciation or meta-ethics? The solution to these problems would have to differ by area, but for law school courses, one easy method would be to administer evaluations a few years after graduation. I have often been told by alumni that they thought my civil procedure course was mystifying, boring, or worse when they took it, but that years later they really appreciated it when thorny issues arose in practice. (Presumably there were also students who liked the course when they took it but then later realized they didn't learn what they needed to. These students tend not to contact me.)
A proposal such as this one would face the difficulty of tracking down former students, as well as the complication of the exam: Evaluations are typically administered before exams so that students don't simply praise the teachers who gave them high grades and criticize those who gave them low ones. By delaying evaluations for years, however, my proposal would permit most former students to get past their grades. And in any event, the current practice itself introduces distortions: In some courses, students don't appreciate how effective the teaching was until they study for the exam at the end of the semester.
In any event, my goal here is to start a conversation on how to improve teaching evaluations. Additional suggestions are welcome.
Posted by Mike Dorf