New study says RateMyProfessors provides useful information

When pre-registration results are released this weekend and the unlucky among us scramble to find courses to complete our schedules for next semester, RateMyProfessors will likely see an increase in traffic from the Hilltop. A new study suggests that students will be flocking, it turns out, to a relatively accurate product.

In a November issue of the e-journal Practical Assessment, Research & Evaluation, University of Wisconsin-Eau Claire psychology professor April Bleske-Rechek and her student Amber Fritsch published a study that considers the value and accuracy of the reviews on RateMyProfessors.

Testing the assumption that student ratings are unreliable, the study (PDF here) considered a data set of 366 professors at a large, public university who had between 10 and 86 reviews on the website. Thirty-seven percent of the professors were from the humanities, 28 percent from math and science, 18 percent from social science and 17 percent from pre-professional majors.

The study found that the number of ratings had little effect on the degree of variance of the professor’s overall rating: “Instructors with 10 ratings showed the same degree of consensus in their quality ratings as did instructors with 50 ratings.” In other words, student reviewers on the website quickly reach a consensus about a particular professor. The study also found that variance is even lower for professors who have very high or very low quality ratings. Everyone agrees about the best and worst professors.

For professors with at least ten ratings, the study's "findings suggest that with at least 10 ratings instructors may be able to extract crude judgments — exceptional, adequate, or unacceptable (McKeachie, 1997) — of students' perceptions of their clarity and helpfulness."

The study's results held across disciplines as well. It found that students don't rate math and science professors lower in quality than other professors, and that reviewers reached as much consensus in math and science as in other disciplines. Regardless of sex and discipline, students agreed about which professors were effective and which were not.

Bleske-Rechek and Fritsch ultimately concluded:

We demonstrated strong student consensus about instructor quality, which did not hinge on instructor easiness. Trends in student ratings on RateMyProfessors mirror those found on traditional student evaluations of teaching (Coladarci & Kornfield, 2007; Sanders et al., 2011). In the aggregate, RateMyProfessors is providing useful feedback about instructor quality.

Last year, Bleske-Rechek and Kelsey Michels published a similar study of RateMyProfessors (available here), defending student reviews from charges of bias. While noting the high correlation between easiness and quality ratings, they nevertheless concluded, "Students who post do so for a variety of reasons and not just to complain or exclaim; they are similar academically to students who do not post; and patterns in their ratings suggest that easiness and quality are not synonymous to them."

(h/t: GW Hatchet)
