A New Way To Do Course Evaluations

The end-of-course instructor evaluation is dead. One could argue that it never really had meaning or value in the first place. Forcing students to wait until the end of the term to provide their review, when they are simply relieved to have survived fifteen weeks of work, nervous about the final, and wishing they could fast-forward through the coming week to get to the break, has only by luck yielded feedback that reflects how they felt as they progressed through the course. Now that the evaluations have been moved online rather than filled out on Scantron forms in class, most students skip the step altogether or, worse, fill in the virtual bubbles with little regard for what they mean. No matter how much you insist that their feedback is valuable and will help determine which faculty are granted tenure and promotion and which aren’t, I’m not sure most students care. They simply want to be done and move on.

Our course evaluations at Lewis currently consist of fifteen characteristics the students are to rate on a Likert scale, followed by three or four open-ended questions to which they are supposed to provide text responses. The open-ended questions ask the students to identify the strengths and weaknesses of the instructor and the course. They are the only part of the evaluation I ever read. I honestly couldn’t tell you what my numeric scores say, because I don’t care about them. They do nothing to tell me what I do right and what I do wrong as a teacher. They only tell me whether students like me.

Obviously, my goal as a teacher isn’t to get students to like me. If it were, I’d simply give all of them A’s and maybe even buy them pizza and beer every week to celebrate our incredible camaraderie. I consider only the open-ended comments, because those are what can teach me to be a better educator. Unfortunately, I don’t get to see the comments until well after the course ends, by which time their value has greatly diminished, particularly if I’m not teaching that course the following semester.

We need a new kind of course evaluation, an ongoing one that can constantly track the pulse of the class to identify how it is going. We don’t need more scores on a scale of 1 to 5. Instead, we need more comments, delivered in the heat of the semester, when instructors can actually adapt how they’re teaching the course to address students’ feedback and therefore improve their craft.

During the debates for the U.S. presidential race last year, citizens turned to Twitter to voice their impressions of the candidates. Fact-checking happened in real time as the audience watched or listened to the debate on one screen and tweeted about it on another. A computer scientist doing research in data mining even developed a real-time scorecard. He scoured Twitter for tweets about how the candidates were doing as the debate unfolded, collecting an immense number of them. Using algorithms he developed, he identified which tweeted comments were positive or affirmative in tone and which were negative or critical. From this, he computed an instantly updated scorecard for each participant to determine who was winning minute by minute. In other words, he took the pulse of the debate as the participants argued their sides, tracking how national sentiment changed as the candidates made their respective points. That eventually blossomed into the Twitter Political Index, which tracked voter sentiment state by state by categorizing tweets made about each candidate.

Interestingly, we could do the same thing with course and teacher evaluations. Here’s how. Something like a Facebook page or Twitter feed could be set up for each course section. Whenever the feeling struck, a student could post anonymously to the feed. The student might wish to commend the teacher for a good lecture, thank them for answering questions patiently and thoughtfully, criticize them for rushing through something too quickly, or recommend that they review something at the beginning of the next class. It would be the teacher’s responsibility to review the comments on the feed often and adjust their teaching based on what they read. The teacher would not have the ability to delete any of the posts, because that would allow the system to be gamed. Instead, the teacher’s course section evaluation feeds would provide a comprehensive, qualitative, ongoing, and dynamic evaluation of their performance that, taken seriously, will improve their teaching and the course. If the teacher ignores the comments, the tone of the posts will likely worsen, and that unwillingness to react to criticism and student needs will be appropriately highlighted.
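
To make the mechanics concrete, here is a minimal sketch (in Python) of what such an append-only, anonymous feed might look like. The class and method names are hypothetical, invented purely for illustration; the two design points that matter are that posts carry no author information and that there is no way to delete them.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import List

    @dataclass(frozen=True)
    class FeedPost:
        """One anonymous student comment; no author field is stored at all."""
        posted_at: datetime
        text: str

    class CourseFeed:
        """Append-only comment feed for a single course section.

        There is deliberately no delete method, so the instructor cannot
        remove critical posts and game the record.
        """
        def __init__(self, section_id: str) -> None:
            self.section_id = section_id
            self._posts: List[FeedPost] = []

        def post(self, text: str) -> None:
            """Add an anonymous comment to the feed."""
            self._posts.append(FeedPost(datetime.now(), text.strip()))

        def read(self) -> List[FeedPost]:
            """Instructors (and review committees) get a read-only copy."""
            return list(self._posts)

In a real deployment, anonymity and write access would of course have to be enforced by whatever system authenticates students into the course section.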

If a more quantitative measure is still desired, an analysis could be performed similar to what was done during the presidential debates. An algorithm could categorize posted comments as either positive or negative and then provide a simple count as the verdict on whether students approved or disapproved of the instructor’s teaching. This would give the teacher an up-to-the-day class pulse they could use to determine whether they are doing an effective job.
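
As a sketch of how that up-to-the-day tally might work, the following toy example counts each comment as positive, negative, or neutral and reports the running difference. The word lists here are tiny stand-ins I made up for illustration; a real system would rely on a proper sentiment lexicon or model, as the debate scorecard did.

    from typing import List

    # Toy positive/negative tally over the comments posted so far.
    POSITIVE = {"great", "helpful", "clear", "thanks", "patient", "good"}
    NEGATIVE = {"confusing", "rushed", "unclear", "boring", "unfair", "slow"}

    def comment_score(text: str) -> int:
        """Return +1, -1, or 0 for a single comment based on keyword matches."""
        words = [w.strip(".,!?") for w in text.lower().split()]
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        return (pos > neg) - (neg > pos)

    def class_pulse(comments: List[str]) -> int:
        """Positive-minus-negative tally across all comments: the class pulse."""
        return sum(comment_score(c) for c in comments)

    # Example: two positive comments and one negative one give a pulse of +1.
    print(class_pulse(["Great lecture, thanks!",
                       "The last topic felt rushed and confusing.",
                       "Really clear explanation of recursion today."]))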

Then, at the end of the term, the students could be asked just two questions: (1) Did the course achieve its stated learning objectives? (2) Would you recommend this teacher to other students? Ultimately, those yes-no questions are the only ones that should guide tenure and promotion decisions, as you certainly wouldn’t want to reward teachers whom students won’t recommend and who fail to teach to the course’s stated objectives.

I don’t see a downside to this proposal. Instructors would get honest, timely feedback they could use immediately to improve their teaching. Furthermore, the committee that decides tenure and promotion awards would have a rather full picture of how the instructor’s teaching is received by students, whether the instructor responds to student feedback, and, ultimately, whether the instructor is someone from whom students learn and want to learn.

Surely this would be a tremendous improvement over the going-through-the-motions exercise to which we subject students in December and May. We would have more meaningful and helpful evaluations as a result. It’s worth a try.

 

About Ray Klump

Professor and Chair of Mathematics and Computer Science; Director, Master of Science in Information Security, Lewis University. http://online.lewisu.edu/ms-information-security.asp, http://online.lewisu.edu/resource/engineering-technology/articles.asp, http://cs.lewisu.edu. You can find him on Google+.

2 thoughts on “A New Way To Do Course Evaluations”

Comment (September 3, 2013 at 9:29 am):

Really love this idea, Ray. Are you aware of any other schools or universities who have embraced a similar model or at least something closer to this?

Reply (September 14, 2013 at 2:40 pm):

Thanks, Eric. RateMyProfessor is similar, except it’s not well-policed (in terms of who can participate) and not as focused in ways that provide meaningful tips for improving teaching.
