An article in today's NYT reports that a number of (mostly) liberal arts colleges will no longer participate in U.S. News and World Report's annual college rankings. Furthermore, the consortium of these schools has plans in the works to develop its own ranking system.
At first, I applauded the desire not to be subjected to arbitrary outside judgment. For example, I once taught two sections of Principles of Microeconomics (not at Harvard) with exactly 49 students each. I thought this was a little bizarre, and when I inquired I was told that things were done this way because U.S. News and World Report has a category for the number of classes with fewer than 50 students. (As a side note, I doubt the students would have had a less positive educational experience had there been 98 students in one class. I do believe that there are some subjects and courses in which a smaller class with a discussion format is specifically desirable, but introductory microeconomics is not one of them.) Therefore, at least in some cases, the rankings create inefficiency by inducing nonproductive effort on things like arbitrarily capping class sizes. I would guess that this is not the only area in which the rankings lead to an inefficient use of resources, and I am trying to think of a way to analyze rigorously the degree to which the rankings alter schools' priorities and whether those changes are productive or unproductive.
However, the more I thought about this, the less positive I was toward the decision. First, wouldn't any ranking system designed by schools that, for whatever reason, found it suboptimal to participate in the current system be inherently biased in favor of those schools? U.S. News can at least say that it is an independent entity with little incentive to favor any school or set of schools. I would certainly take the results of any in-house ranking system with a grain of salt. Second, isn't all publicity good publicity? Maybe I am not representative, but if I were using the U.S. News rankings as a guide, I would either fail to realize that an unranked school existed at all or conclude (being on the losing side of asymmetric information) that the school refused to participate because it wouldn't have ranked well anyway. (So much for "no news is good news".) My assumption about an unranked school's quality would likely be worse than what an actual ranking would have indicated. If potential students are like me, then refusing to participate doesn't appear to be optimal, though it may give school administrators some utility from being able to issue a "so there".
Apparently (but not shockingly) I am not representative. Alex Brown, Colin Camerer and Dan Lovallo have done research showing that "cold opening" a movie (i.e., not showing it to critics before it is released) can, for some types of movies, increase box office revenue (compared to a counterfactual of a reviewed movie with similar attributes). Economic theory would suggest that if a movie is not reviewed prior to opening, there must be a reason, since a review generates publicity for the movie if nothing else. Iterated thinking would therefore lead an individual to conclude that any cold-opened movie was a lemon and to stay away. However, this type of iterated thinking doesn't always seem to happen in the real world, and in some cases providing no information can be preferable to providing negative information. I suppose we'll have to see how those liberal arts colleges do in the coming years.
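To make the iterated-thinking argument concrete, here is a minimal sketch of the standard unraveling logic. To be clear, this is my own toy model (uniform qualities, a single disclose-or-stay-silent decision, and a hypothetical share of "naive" consumers), not the model from the Brown, Camerer and Lovallo paper:

```python
# A toy model of information unraveling. Qualities are uniform on [0, 1].
# A product (movie, school) can disclose its quality (get reviewed /
# participate in the rankings) or stay silent. Consumers value a silent
# product at their belief about its average quality, so a product stays
# silent only if that belief exceeds its true quality.
#
# Sophisticated consumers set their belief to the mean quality of the
# products that actually stay silent; iterating that reasoning is the
# "iterated thinking" in the post. A fraction `naive` of consumers
# instead values silent products at the unconditional mean (0.5).

import numpy as np

rng = np.random.default_rng(0)
quality = rng.uniform(0.0, 1.0, size=100_000)  # true qualities


def silent_payoff(belief: float, naive: float) -> float:
    """Perceived quality of a silent product: naive consumers assume the
    unconditional mean (0.5); sophisticated ones use `belief`."""
    return naive * 0.5 + (1.0 - naive) * belief


def equilibrium_threshold(naive: float, iters: int = 50) -> float:
    belief = 0.5  # start: silence is assumed to mean "average"
    for _ in range(iters):
        cutoff = silent_payoff(belief, naive)  # stay silent iff quality < cutoff
        silent = quality[quality < cutoff]
        if silent.size == 0:                   # everyone discloses: full unraveling
            return 0.0
        belief = silent.mean()                 # sophisticates update on who is silent
    return cutoff


for naive in (0.0, 0.25, 0.5):
    t = equilibrium_threshold(naive)
    print(f"naive share {naive:.2f}: products below quality {t:.3f} stay silent")
```

With fully sophisticated consumers the threshold iterates down to zero: silence unravels completely, which is exactly the lemons-style prediction that any cold-opened movie (or unranked school) must be hiding something. But even a modest naive share leaves the bottom of the quality distribution better off staying silent (the fixed points here work out to 0, 0.2 and 1/3), which is consistent with withholding information sometimes being profitable.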
1 comment:
In the case of movies, there are plenty of examples where some movies appear to be "critic-proof." Take, for example, last week's top-grossing movie, The Fantastic Four: Rise of the Silver Surfer. It was panned by critics, but it still managed to take in $58.1M, almost three times the take of the second-highest-grossing movie of the week, Ocean's Thirteen. Will it be able to recoup its $130M budget (which I assume does not include marketing costs)? That is anyone's guess. It does suggest that widely available negative information can be overcome in the short term through marketing or other means. There are other factors in play, such as Ocean's Thirteen being in its second week of release, but they do not explain why people would still spend their money to see what the critics consider a "bad" movie.
If this can hold true for the school rankings, then is there really a downside to participating in the ranking survey? One key assumption I am making is that the schools in question would not have been near or at the top of the respective rankings. In regards to the NYT article, this appears to hold true for the 2007 rankings (Barnard #26, Kenyon #32, Sarah Lawrence #45). My guess right now is that the schools that have decided not to participate really had no chance of moving up significantly in the rankings or of finishing at or near the top (I checked the 2006 rankings and the schools moved only a few spots). Other than the momentary publicity of being mentioned in the NYT article, I do not see any advantage in going with this "we can do it better ourselves" approach. As you noted, self-rankings are generally perceived as biased, and the schools are losing a relatively easy opportunity to market themselves to prospective students.
It will also be interesting to see whether the midrange schools that continue to participate benefit from the departure of their closest competition.