Critical Approaches to Culture + Communications

A Weblog by Ron Burnett (Founded in 1994 and now celebrating 23 Years!!)

This site began as one of the first academic sites in Canada when the World Wide Web was in its early phase of development. I have maintained it through many iterations since 1994.

University Rankings

Malcolm Gladwell has a terrific article in the most recent New Yorker on college rankings in the United States. Gladwell shows quite conclusively that the rankings are largely determined by the individuals and companies that produce them. The categories used turn out to be far more subjective, and therefore far more subject to the whims and prejudices of the rankers, than previously thought.

Rankings in general have been a destructive means of differentiating among universities and colleges in Canada. The Maclean’s survey (Maclean’s is one of Canada’s few national magazines), which is roughly equivalent to the US News and World Report survey, uses categories ranging from the experience of campus life to the quality of teaching, categories that cannot and should not be summarized in a simple question-and-answer format. Even though both magazines claim that they do a great deal of research, have you ever seen questions and categories that relate directly to curricula within the humanities? How about a question on whether students have the chance to spend some time being creative outside of the demands largely defined by the credit system? The various categories used for the surveys could be described as ‘soft.’ How much money comes in for research? (This inevitably favors the large universities and always does.) Here is a quote that describes one of the categories:

STUDENTS & CLASSES (20 per cent of final score) Maclean’s collects data on the success of the student body at winning national academic awards (weighted 10 per cent) over the previous five years. The list covers 40 fellowship and prize programs, encompassing more than 18,000 individual awards from 2005 through 2009. The count includes such prestigious awards as the Rhodes scholarships and the Fulbright awards, as well as scholarships from professional associations and the three federal granting agencies. Each university’s total of student awards is divided by its number of full-time students, yielding a count of awards relative to each institution’s size.
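The normalization Maclean’s describes, total awards divided by full-time enrolment, can be sketched in a few lines. The school names and figures below are invented purely for illustration, not taken from any survey:

```python
# Maclean's-style normalization: student awards divided by full-time enrolment.
# All names and figures are invented for illustration only.
schools = {
    "Large U": {"awards": 400, "full_time_students": 40000},
    "Small College": {"awards": 30, "full_time_students": 2000},
}

for name, s in schools.items():
    per_student = s["awards"] / s["full_time_students"]
    print(f"{name}: {per_student:.4f} awards per full-time student")
```

Note that in this made-up case the small college actually comes out ahead per capita, which is precisely what such a ratio cannot tell you: it says nothing about the infrastructure a large university deploys to seek out and support award applications in the first place.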

When you go through the award winners, the large universities always garner the most, not only because of their student numbers, but also because they have an infrastructure to support and seek out awards. This would not necessarily be a bad thing were it not for the fact that the “student” category is so important to a school’s overall placing. Or take another category, faculty. Awards and research monies are tallied, and even though these are adjusted for size using FTEs (full-time equivalent students), the large universities inevitably gather in the most money. The problem, as Gladwell so carefully explains, is that comparisons between large and small schools are fraught with problems, not the least of which is that it is difficult to measure the impact of an institution on its faculty, students and community without in-depth research in each community. “The first difficulty with rankings is that it can be surprisingly hard to measure the variable you want to rank—even in cases where that variable seems perfectly objective.” (The New Yorker, Feb. 14 & 21, 2011, p. 70) Without going into much more detail, the weighting of the different categories is ultimately a subjective choice.
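Gladwell’s point about weighting can be made concrete with a toy calculation. Here two hypothetical schools, with invented scores in two invented categories, trade places depending entirely on how the weights are chosen:

```python
# A toy illustration of how subjective weights decide a ranking.
# School names, categories, scores and weights are all invented.
scores = {
    "School A": {"research": 90, "teaching": 60},
    "School B": {"research": 65, "teaching": 85},
}

def rank(weights):
    """Return school names ordered by weighted total score, best first."""
    totals = {name: sum(weights[c] * s[c] for c in weights)
              for name, s in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

print(rank({"research": 0.7, "teaching": 0.3}))  # research-heavy weighting
print(rank({"research": 0.3, "teaching": 0.7}))  # teaching-heavy weighting
```

With a research-heavy weighting School A comes first; shift the weight toward teaching and School B takes the top spot. Neither ordering is wrong; each simply encodes a prior judgment about what matters.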

Ironically, the large institutions all follow and devote themselves to rankings, which have become the holy grail upon which an institution’s status stands or falls. This has also led governments and policymakers to concentrate the majority of their funding in the institutions ranked “the best.” I will let Gladwell have the last word:

There is no right answer to how much weight a ranking system should give to these two competing values (efficacy and selectivity). “It’s a matter of which educational model you value more.” (p. 74)