At the beginning of the month, DesignIntelligence released its 2011 America's Best Architecture & Design Schools rankings. Cornell was ranked first again, for the sixth time in seven years. The usual hubbub ensued: critics weighed in, students and alumni tweeted and blogged away, and parents and high school students scrutinized the list like a consumer guide, looking to get the best for their (large sums of) money.
by Ann Lok Lui
Four years ago, I applied to Cornell University's bachelor of architecture program for two reasons. First, because I was eager to live somewhere new, and New York was as far as I could get from California; second, because Cornell was ranked number one. Now, entering my thesis semester, I've gotten my taste of the East Coast (they're right: it's cold). But I still have questions about the rankings that led me to Ithaca.
In 2010, Cornell suffered a string of six student suicides, provoking difficult questions about student wellness and mental health. Had the DesignIntelligence editors noticed what happened at Cornell, I wondered? Had they seen for themselves the tall fences lining the gorges? Did these issues affect the 2011 rankings? And if they didn't, why not?
Since university rankings began 27 years ago with US News & World Report, colleges and students have come to expect and demand objective, empirical measures of a good education. Everyone wants to know: who's the best?
"People always need to be wary in the case of rankings," said Ellen Hazelkorn, the dean of the Graduate Research School at the Dublin Institute of Technology, whose book Rankings and the Reshaping of Higher Education will be published next year. "They measure what the producers think are the most important criteria and they've also weighted them. They may not be your criteria and they might not be important to measure."
University ranking schemes, sometimes called "league tables," rank colleges against one another. Most of these schemes use various "indicators," which range from the size of the faculty, to the age of the university, to the average SAT score of incoming students. Editors weight these indicators in order of importance and tally them into a final, quantitative score, much as a weighted GPA is computed. This lets editors produce a single ranked list of schools while factoring in many variables.
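To make that arithmetic concrete, here is a minimal sketch of how such a composite score might be tallied. Everything in it is hypothetical: the indicator names, the weights, and the schools' scores are invented purely to illustrate the method, not drawn from any real ranking.

```python
# A minimal sketch of the weighted-indicator arithmetic described above.
# All indicator names, weights, and scores below are hypothetical,
# invented to illustrate the method, not taken from any real ranking.

# Weights the (hypothetical) editors assign to each indicator.
WEIGHTS = {
    "faculty_size": 0.3,
    "avg_sat": 0.5,
    "school_age": 0.2,
}

# Each school's indicators, already normalized to a 0-100 scale.
SCHOOLS = {
    "School A": {"faculty_size": 85, "avg_sat": 92, "school_age": 70},
    "School B": {"faculty_size": 95, "avg_sat": 80, "school_age": 90},
}

def composite_score(indicators):
    """Tally the weighted indicators into one number, like a weighted GPA."""
    return sum(WEIGHTS[name] * value for name, value in indicators.items())

# Produce the single ranked list, highest composite score first.
for rank, school in enumerate(
    sorted(SCHOOLS, key=lambda s: composite_score(SCHOOLS[s]), reverse=True),
    start=1,
):
    print(f"{rank}. {school}: {composite_score(SCHOOLS[school]):.1f}")
```

With these made-up numbers, School B edges out School A, 86.5 to 85.5. Bump up the weight on avg_sat, though, and the order flips: exactly the sensitivity described below.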
"The authors of these rankings are imposing a specific definition of quality on the institutions being ranked," wrote Alex Usher and Massimo Savino, from Toronto's Educational Policy Institute, in their paper A World of Difference.
"The fact that there may be other legitimate indicators or combinations of indicators is usually passed over in silence. To the reader, the author's judgment is in effect final."
Editors effectively choose which institutions make the cut through their choice of indicators. Throw out one measurement and a university will suddenly drop; weight another measurement more heavily and a different university will rise to the top of the list.
DesignIntelligence's "Best Architecture & Design Schools" ranking has one indicator.
"The [Best Design Schools] rankings are based solely on the question we pose to invited participants: 'In your experience, which schools are best preparing students for professional practice?'" wrote Jane Gaboury, the Editor and Associate Publisher of DesignIntelligence, in an e-mail.
I was prepared to write an extravagant paragraph explaining which indicators I thought the DesignIntelligence rankings were missing, but it turns out they're missing all of them. DesignIntelligence's methodology rests on a single indicator: single-response survey results from some 220 firms.
"The methodology we use to use in our Best Schools research stems from the project's history," wrote Gaboury in an e-mail, when asked about why only one indicator is used in the DesignIntelligence rankings. According to Gaboury, in the nineties, the rankings were conceived of at a Design Futures Council executive board "think thank," an informal conversation that became the jig that the current survey is modeled from.
In nationwide rankings, it is already hard to believe that a single set of indicators speaks for what all students want. Still, US News & World Report uses 15 indicators (only one of which is a survey), the Times Higher Education Supplement uses 9, the Melbourne Institute 26, and the Wuhan University Center for Science Evaluation a whopping 45. In architecture, a field that attracts diverse students with myriad interests, the idea that one indicator can speak to what any of us want from an education is absurd.
Additionally, the professional-practice survey indicator raises more questions than it answers. Other ranking systems, by contrast, supplement surveys with third-party data and data provided by the universities themselves.
"Survey data is scientific in the sense that it records observations accurately," wrote Usher and Savino, "but [...] critics might reasonably question the value of such observations, as very few employers or opinion-makers are likely to have detailed views on or knowledge of every institution under scrutiny."
What firms did DesignIntelligence survey? Do they survey the same firms from year to year? (In which case, we shouldn't be surprised that the rankings are virtually unchanging.) Did the people who were surveyed have contact with students from more than a few NAAB-accredited schools? Did these people have personal biases that make them unreliable sources for data?
Maybe you think the best architecture school is the one that teaches comprehensive, sustainable green design. Maybe you think it's the school doing ground-breaking research on new computational modeling techniques or 3D-fabrication technology. Or maybe, like me, you think it's important to know how schools fare in terms of mental health. But take your pick: in the top-20 count, it doesn't matter.
What architecture schools would come out on top if you added one or two more variables? Cornell's six student suicides brought to light questions about mental health. According to the National College Health Assessment, 30% of students nationwide in 2010 reported that they had at some point felt "so depressed that it was difficult to function."
These are especially important issues in design programs, where students pull multiple all-nighters, work in a competitive environment, and are tested by design problems that challenge our worldviews. Architecture is a uniquely creative and demanding field, one that taxes any psyche, even a healthy one. While I sometimes thrived in Cornell's work-hard, party-hard environment (I have since found my own success at the "number one" university), there were other times when I was profoundly scared, depressed, and anxious. Many of my friends and classmates have seen both the bright and dark sides of architecture education. Their experiences range from taking a semester off, to getting therapy, to struggling with and being institutionalized for addictions or mental illnesses, to the simple but undeniable daily grind of trying to stay afloat.
Looking at mental health empirically may seem like a contradiction in terms, but if rankings are here to stay (and it looks like they are), it's something that needs to be considered.
"We do collect [mental health] data and compare it to national data," said Greg Eels, director of Counseling and Psychological Services at Cornell , whose program I believe makes great strides in helping struggling students. "So we do assess general well-being, and there are market differences." Mental health data is out there and available; and I believe it is as important, if not more, than what 220 anonymous firms think of an architecture school's preparedness.
In the end, rankings, which at first seem to exist mainly for bragging rights or diploma prestige, are influential in the way schools operate. College, after all, is a for-profit affair. Cornell, specifically, has made efforts to stay on the right side of rankings, for better and for worse: from adding faculty to its sociology program when it didn't fare well in NRC assessments, to manipulating its alumni donor count because of US News & World Report's ranking methodology. Public perception presents an opportunity to make a change.
I wish I could simply encourage high school students and parents to disregard rankings entirely. But they are here to stay: they influence potential employers after we graduate, and with the economy as it is, I wouldn't strike "Cornell" from my diploma for a quarter of a million (tuition) dollars. For me, the DesignIntelligence rankings were right: at Cornell, I found what I wanted as a designer and a student. But it took four and a half years, ridden with anxiety and stress, to get on track with the things I wanted.
Rankings present themselves as empirical data. But the reality is that they are far from objective: the so-called hard science behind them is riddled with problems, and DesignIntelligence's methodology is especially lacking. What I find myself asking is: can a good education even be recorded in numbers, in a list? Are the things that I wanted and the things that I got from Cornell even quantifiable? Of course, I have only gone to one design school. But I believe an education can't be tallied in numbers: the opportunity to work with a certain starchitect who profoundly changed my views on design; discovering the writings of Colin Rowe, whose game-changing essay inspired the title of this article; the taste of College Town Bagels; the mentorship of some professors and the failures of others; the bright sunlight on snow outside the bell tower; the feeling of contentment in the library at 4 a.m.; the tragic loss to suicide of a peer who had gone through first-year studio with me. These are not things you can record in figures and lists.
Rankings can be useful and influential for universities and parents, but as a student, you shouldn't let them mean anything more than what they honestly are. It's easy to conflate a top-20 list with a list of the things we want. Don't forget: what they call the "best" is, at the end of the day, nothing but a single indicator.
Ann Lui is a fifth year student at Cornell University's bachelor of architecture program. She is a former Arts & Entertainment Editor of the Cornell Daily Sun, and has contributed to Metropolis Magazine POV, Architect's Newspaper, and ArchNewsNow. She is currently living in Chicago while taking a semester off.
Ann is an SMArchS candidate in History, Theory and Criticism of Architecture & Art at MIT.