Making Course Evaluations More Useful

Student teaching evaluations are notoriously flawed. At best, they are unreliable. (They are, on the other hand, reliably sexist.) Educators are understandably cynical about them, not only because of student bias but also because of the arbitrary ways they’re sometimes used. “The less time we spend talking about teaching, the better,” I once heard a senior academic say during a job search; this same professor was rumored to have used poor teaching evaluations to sink the tenure application of someone he disliked. A friend of mine even believes a colleague wrote fake Rate My Professors entries to use against him at a promotion hearing.

Yet student course evaluations are part of life for most college instructors. Is there any way to make them more useful, or at least to mitigate the harm they can do? I’ve found two things moderately helpful.

First, I’ve found it’s a good idea to explain to students ahead of time what evaluations actually are. I recently saw someone, apparently an undergraduate, tweet that she couldn’t wait to fill out course evaluations and “rip my professors apart” (or words to that effect). I remember thinking something similar when I was in college. Undergraduates often imagine that someone in the administration will read their words and act on them—perhaps by firing the instructor.

When I prepare my students to complete their course evaluations, I gently tell them that this isn’t how it usually works. Most of the time, exactly one person reads their evaluations carefully, and that’s me. I explain that I read all their remarks (assuring them that their anonymity is completely protected) looking for things I can do better next time.

I ask my students to keep this in mind and write about specific things that did and didn’t work for them. Were the assignments effective? Was something especially confusing? Did they see a better way to do something? No matter how positive or negative their comments are, I explain, if they’re specific and clear, I can make use of them when considering how to teach the course the next time. And I assure them that I want them to be completely honest.

My impression is that this has worked. Since I started explaining how I use the evaluations, the number of comments that amount to “the course was awful” or “Wilson was awesome” has decreased, while the number of comments reflecting on specifics, such as how well the homework assignments prepared students for the exams, has increased.

Second, it’s useful to write some of your own questions.

Many instructors do this with their own anonymous evaluations, either by passing out paper forms or by providing a link to an electronic survey they designed. One of my current employers, however, had the excellent idea of allowing instructors to include up to twelve questions of their own in the university’s official evaluation forms.

First, I use these optional questions to ask students (on a standard one-to-five scale) how they perceive the course’s effectiveness in various specific content areas. They can “strongly agree,” “agree,” “disagree,” etc., with statements like “The course increased my understanding of connections among different parts of the world” or “The course increased my understanding of different belief systems (religions, philosophies, ethical systems, political views, etc.).” The answers to these questions have helped me identify the weaker points in my coverage.

Then I ask students to reflect on the big picture:

  • “The course helped me see familiar facts and concepts in new ways.”
  • “Taking the course changed my mind about something, and/or challenged me to defend ideas I already had.”
  • “This course helped me understand topics or ideas I have studied in other courses.”

With these prompts, I’m asking students to assess how effective the course was as part of the overall college curriculum and how much they think it contributed to what is sometimes called a liberal education. Obviously, the responses are subjective. But subjective impressions are often a valid way of assessing this dimension of education.

Finally, I include a prompt that I brazenly stole from my old course evaluations at La Salle University: “Hard work was necessary to get a good grade in this course.” Again, the results are subjective. But they provide a useful reference point for anyone wondering whether high or low scores on the rest of the evaluation are due simply to the course’s difficulty level.

I don’t think these methods do away with the problems inherent in student evaluations of teaching. But they do help me document my teaching effectiveness in slightly more convincing ways, and they give me a little more insight into the ways my courses could improve.
