In the past week two more college ratings have been released, both based on the highly questionable assumption that the way to rate colleges is by how much money their alumni make.
The Economist’s ratings are based on the difference between the average salary reported and the salary expected from a regression containing a huge number of predictors (some of which strike me as rather dubious in their data quality).
The Brookings Institution’s ratings are calculated in a similar way, but with a much smaller number of predictors, omitting some of the most important ones.
Both ratings are trying to look for “value added”—a difference between how much the alumni make and how much the same cohort would have been expected to make based on their socioeconomic status, GPA, SAT, and other characteristics.
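The "value added" idea can be sketched in a few lines: regress alumni earnings on student-body characteristics, then rank colleges by the residual (actual earnings minus predicted earnings). A minimal illustration follows; the column names and numbers are invented for the sketch and are not the actual scorecard variables or either publication's model.

```python
import numpy as np

# Toy data: one row per college.
# Columns: average SAT score, average family income ($k).
# These predictors and values are hypothetical illustrations.
X = np.array([[1400.0, 120.0],
              [1100.0,  60.0],
              [1250.0,  90.0],
              [1000.0,  45.0]])
y = np.array([70.0, 45.0, 52.0, 48.0])  # median alumni earnings ($k)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

predicted = A @ coef
value_added = y - predicted  # positive = alumni earn more than expected

ranking = np.argsort(-value_added)  # largest "value added" first
```

The noise problem is visible even in this sketch: with few colleges and many predictors, the residuals are dominated by whatever the model happens to omit, which is presumably why the two rankings disagree so much.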
The data seems to be very noisy and subject to all sorts of weird biases, so that the rankings have little to do with each other, even though both are using the same underlying data set (the Department of Education’s “college scorecard” website). The Economist admits to some serious limitations:
First, the scorecard data suffer from limitations. They only include individuals who applied for federal financial aid, restricting the sample to a highly unrepresentative subset of students that leaves out the children of most well-off parents. And they only track students’ salaries for ten years after they start college, cutting off their trajectory at an age when many eventual high earners are still in graduate school and thus excluded from the sample of incomes. A college that produces hordes of future doctors will have far lower listed earnings in the database than one that generates throngs of, say, financial advisors, even though the two groups’ incomes are likely to converge in their 30s.
Second, although we hope that our numbers do in fact represent the economic value added by each institution, there is no guarantee that this is true. Colleges whose alumni earnings differ vastly from the model’s expectations might be benefiting or suffering from some other characteristic of their students that we neglected to include in our regression: for example, Gallaudet University, which ranks third-to-last, is a college for the deaf (which is why we excluded it from our table in print). It is also possible that highly ranked colleges simply got lucky, and that their future graduates are unlikely to make as much money as the entering class of 2001 did.
Finally, maximising alumni earnings is not the only goal of a college, and probably not even the primary one. In fact, you could easily argue that “underperforming” universities like Yale and Swarthmore are actually making a far greater contribution to American society than overperformers like Washington & Lee, if they tend to channel their supremely talented graduates towards public service rather than Wall Street. For students who want to know which colleges are likely to boost their future salaries by the greatest amount, given their qualifications and preferences regarding career and location, we hope these rankings prove helpful. They should not be used for any other purpose.
I don’t like either rating scheme, since both reduce a college education to a mere income enhancer, but of the two terrible schemes, The Economist’s is slightly less terrible. (Note: UCSC does not come off well in either rating scheme, though I’m not sure why—could it be that we send too many students on to grad school?) In The Economist’s rating, UCB is below UCSC, but in the Brookings rating, UCB is quite high—probably reflecting the number of engineering students at UCB, since the two schemes treat choice of major very differently.