At the beginning of the month, DesignIntelligence released its 2011 America's Best Architecture & Design Schools rankings. Cornell was ranked first again, for the sixth time in seven years. The usual hubbub ensued: critics weighed in, students and alumni tweeted and blogged away, parents and high school students scrutinized the list like a consumer buying guide, looking to get the best for their (large sums of) money.
by Ann Lok Lui
Four years ago, I applied to Cornell University's bachelor of architecture program for two reasons. First, because I was eager to live somewhere new, and New York was as far as I could get from California; second, because Cornell was ranked number one. Now, entering my thesis semester, I've gotten my taste of the East Coast (they're right — it's cold). But I still have questions about the rankings that led me to Ithaca.
In 2010, Cornell suffered a string of six student suicides, provoking difficult questions of student wellness and mental health. Had the DesignIntelligence editors noticed what happened at Cornell, I wondered, had they seen for themselves the tall fences lining the gorges? Did these issues affect the 2011 rankings — or if they didn't, why not?
Since university rankings began 27 years ago with the US News & World Report, colleges and students have come to expect and demand objective, empirical measures of a good education. Everyone wants to know: Who's the best?
"People always need to be wary in the case of rankings," said Ellen Hazelkorn, the dean of the Graduate Research School at the Dublin Institute of Technology, whose book Rankings and the Reshaping of Higher Education will be published next year. "They measure what the producers think are the most important criteria and they've also weighted them. They may not be your criteria and they might not be important to measure."
University ranking schemes, sometimes called "league tables," rank colleges against one another. Most of these schemes use various "indicators," which range from the size of the faculty, to the age of the university, to the average SAT score of incoming students. These indicators are weighted in order of importance by editors and tallied into a final, quantitative score; the method is comparable to a weighted GPA. This lets editors produce a single ordered list of schools while factoring in many variables.
"The authors of these rankings are imposing a specific definition of quality on the institutions being ranked," wrote Alex Usher and Massimo Savino, from Toronto's Educational Policy Institute, in their paper A World of Difference.
"The fact that there may be other legitimate indicators or combinations of indicators is usually passed over in silence. To the reader, the author's judgment is in effect final."
Editors effectively choose which institutions make the cut through their choice of indicators. Throw out one measurement and a university will suddenly drop; weight another more heavily and a different university will rise to the top of the list.
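To make the mechanics concrete, here is a minimal sketch of how such a weighted tally behaves. The schools, indicator values, and weights are entirely made up for illustration and do not reflect any real ranking's data or methodology:

```python
# Purely hypothetical schools and indicator values (normalized 0-1), for illustration only.
schools = {
    "School A": {"faculty_size": 0.9, "avg_sat": 0.7, "research_output": 0.6},
    "School B": {"faculty_size": 0.6, "avg_sat": 0.9, "research_output": 0.8},
}

def rank(schools, weights):
    """Tally each school's weighted score and sort from highest to lowest."""
    scores = {
        name: sum(weights[ind] * values[ind] for ind in weights)
        for name, values in schools.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# The same data produces a different 'number one' under a different weighting.
print(rank(schools, {"faculty_size": 0.6, "avg_sat": 0.2, "research_output": 0.2}))
print(rank(schools, {"faculty_size": 0.1, "avg_sat": 0.4, "research_output": 0.5}))
```

Run with the first set of weights and School A comes out on top; shift the weights toward test scores and research and School B takes its place, even though the underlying data never changed.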
DesignIntelligence's "Best Architecture & Design Schools" ranking has one indicator.
"The [Best Design Schools] rankings are based solely on the question we pose to invited participants: 'In your experience, which schools are best preparing students for professional practice?'" wrote Jane Gaboury, the Editor and Associate Publisher of DesignIntelligence, in an e-mail.
I was prepared to write an extravagant paragraph explaining what indicators I thought the DesignIntelligence rankings were missing — but it turns out, they're missing all of them. DesignIntelligence's ranking methodology reviews one indicator: single-response survey results from some 220 firms.
"The methodology we use to use in our Best Schools research stems from the project's history," wrote Gaboury in an e-mail, when asked about why only one indicator is used in the DesignIntelligence rankings. According to Gaboury, in the nineties, the rankings were conceived of at a Design Futures Council executive board "think thank," an informal conversation that became the jig that the current survey is modeled from.
In nationwide rankings, it is already hard to believe that a single set of indicators speaks for what all students want. However, US News & World Report has 15 indicators (only one of which is a survey), the Times Higher Education Supplement has 9, the Melbourne Institute has 26, and the Wuhan University Center for Science Evaluation has a whopping 45. In architecture, a field that attracts diverse students with myriad interests, the idea that one indicator can speak to what any of us want from an education is absurd.
Additionally, the professional-practice survey indicator raises more questions than it answers. Other ranking systems supplement survey responses with third-party data and data provided by the universities themselves.
"Survey data is scientific in the sense that it records observations accurately," wrote Usher and Savino, "but [...] critics might reasonably question the value of such observations, as very few employers or opinion-makers are likely to have detailed views on or knowledge of every institution under scrutiny."
What firms did DesignIntelligence survey? Do they survey the same firms from year to year? (In which case, we shouldn't be surprised that the rankings are virtually unchanging.) Did the people who were surveyed have contact with students from more than a few NAAB-accredited schools? Did these people have personal biases that make them unreliable sources for data?
Maybe you think that the best architecture school is the one that teaches comprehensive, sustainable green design. Maybe you think it's the school doing ground-breaking research on new computational modeling techniques or 3D-fabrication technology. Or maybe, like me, you think it's important to know how schools fare in terms of mental health. But take your pick: in the top-20 list, it doesn't matter.
What architecture schools would come out on top if you added one or two more variables? Cornell's six student suicides brought to light questions about mental health. According to the National College Health Assessment, 30% of students nationwide in 2010 reported that they had, at least once, felt "so depressed that it was difficult to function."
These are especially important issues in design programs, where students pull multiple all-nighters, work in a competitive environment, and are tested by design problems that challenge our worldviews. Architecture is a uniquely creative and demanding field, which taxes any psyche — even a healthy one. While sometimes I thrived in Cornell's work-hard, party-hard environment — I've since found my own success at the 'number one' university — there were other times when I was profoundly scared, depressed, and anxious. Many of my friends and classmates have seen both the bright and dark sides of architecture education. The dark side ranges from taking a semester off, to getting therapy, to struggling with and being institutionalized for addictions or mental illnesses, to the simple but undeniable daily grind of trying to stay afloat.
Empirically looking at mental health may seem like an oxymoron, but if rankings are here to stay — and it looks like they are — it's something that needs to be considered.
"We do collect [mental health] data and compare it to national data," said Greg Eels, director of Counseling and Psychological Services at Cornell , whose program I believe makes great strides in helping struggling students. "So we do assess general well-being, and there are market differences." Mental health data is out there and available; and I believe it is as important, if not more, than what 220 anonymous firms think of an architecture school's preparedness.
In the end, rankings, which at first seem to be mainly for bragging rights or diploma prestige, are influential in the way schools operate. College, after all, is a for-profit affair. Cornell, specifically, has made efforts to stay on the right side of rankings, for better and for worse — from adding faculty to its sociology program when it didn't fare well in NRC assessments, to manipulating its alumni donor count because of US News & World Report ranking methodology. Public perception presents an opportunity to make a change.
I wish I could simply encourage high school students and parents to disregard rankings entirely. But they are here to stay: they influence potential employers after we graduate, and with the economy as it is, I wouldn't strike "Cornell" from my diploma for a quarter of a million (tuition) dollars. For me, the DesignIntelligence rankings were right: at Cornell, I found what I wanted as a designer and a student. But it took four and a half years, ridden with anxiety and stress, to get on track with the things I want.
Rankings present themselves as empirical data. But the reality is that they are far from objective: the so-called hard science behind them is riddled with problems. DesignIntelligence's methodology is especially lacking. What I find myself asking is: can a good education even be recorded in numbers, in a list? Are the things that I wanted and the things that I got from Cornell even quantifiable? Of course, I have only gone to one design school. But I believe that I can't tally an education in numbers: the opportunity to work with a certain starchitect who profoundly changed my views on design; discovering the writings of Colin Rowe, whose game-changing essay inspired the title of this article; the taste of College Town Bagels; the mentorship of some professors and the failure of others; the bright sunlight on snow outside the bell tower; the feeling of contentment in the library at 4 a.m.; the tragic loss to suicide of a peer who had gone through first-year studio with me. These are not things that you can record in figures and lists.
Rankings can be useful and influential for universities and parents, but as a student, you shouldn't let them mean anything more to you than what they honestly are. It's easy to conflate a top-20 list with a list of the things we want. Don't forget: what they call the 'best' is, at the end of the day, nothing but a single indicator.
Ann Lui is a fifth-year student in Cornell University's bachelor of architecture program. She is a former Arts & Entertainment Editor of the Cornell Daily Sun, and has contributed to Metropolis Magazine POV, Architect's Newspaper, and ArchNewsNow. She is currently living in Chicago while taking a semester off.
11 Comments
Just to clarify, not one of the six students who committed suicide last year was from the College of Architecture, Art and Planning.
This detail was conveniently left out.
so what was this article about again? rankings or suicides/mental health?
regardless of rank or degree field, when you put a lot of talented type-A personality individuals together you are going to get an intense environment. it doesn't matter if you go to South Dakota or Harvard.
If I understand Ann correctly, the issue she has with DI's ranking is the single criterion they told her they use to assess the schools. Yet she admits that they were correct and that she is finding the education she expected, albeit it took her four and a half years to find it. Since no other ranking was mentioned, I must infer that the DI ranking was the only criterion she used to decide where to attend school.
Does anyone else see the irony in that logic?
I can only assume that the DI ranking was the criterion that convinced her to decide on Cornell, since there are numerous schools of architecture on the East Coast that meet her distance requirement. And if she visited all the schools up and down the East Coast during the warm seasons, I could see how a school in Atlanta or Miami could compare to one in Ithaca. But the school year is a much different time, and the weather may be a factor in what makes Cornell so strong; students do not want to be outside.
The intense environment that is Cornell is created by the students and the student body; if the studio culture is too intense, it is up to the students to change that culture for the better.
And Ann, it is not lost on me that you didn't know what you didn't know four and a half years ago. That is why we all go to an architecture program: to learn from others, and it should take the entire program to 'get it'. Best of luck when you return to Cornell, and enjoy and take advantage of living in Chicago while you are there.
The same could be said for many schools...Cornell has nothing at all to do with this.
If you're going to write objectively, you'll have to leave half the personal stories out (they diminish your argument)... as you're not alone... every single person at any university in the world has to deal with the pressures of constant ups and downs.
Cornell ranks on the top of the list because it attracts great professors and students, forces a strong pedagogy and comes from a rich tradition of theory and exploration. This inevitably leads to students graduating with strong perspectives and the ability to clearly express them, hence them being described as well prepared and well rounded.
Revisit your article in a few years, when you have a bit of perspective... if you want to be critical of DI, don't do it by being critical of your university at the same time; they are two separate discussions.
I found it a well-written op-ed piece, with some due diligence in research and references. Were the writer to remove the first-hand narration of Cornell, we would have read a simple by-line to the effect of 'DI has one criterion - rankings compiled from a 220-firm survey'.
If indeed that were true, however, I wonder why the published book looks as thick as it does. Are there 100 pages of advertisements by Kalwall and Bega following 10 pages of ranked institutions?
We love/loathe lists: Georges Perec knew this keenly.
I think we all experience these uncertainties, because architecture and design practices are not about getting a right or wrong answer. I am finishing my BArch at UNAM, which is consistently ranked as Latin America's top university. But here in Mexico, architecture schools in the US and Europe are stereotyped as being far better than schools in third-world countries.
When I had the opportunity to experience first-hand the vision and strategies of architectural learning at the Architectural Association in London, I got the evidence to compare them with those at the public, budget-struggling, thousand-students-a-year School of Architecture at UNAM. And the reality was, they couldn't be less comparable!! The AA, in my opinion, is neither better nor worse than UNAM; they are just different. This difference, with all its complexity, is impossible to reflect in a ranking.
@justavisual, when you say:
"Cornell ranks on the top of the list because it attracts great professors and students, forces a strong pedagogy and comes from a rich tradition of theory and exploration. This inevitably leads to students graduating with strong perspectives and the ability to clearly express them, hence them being described as well prepared and well rounded."
Everyone can say this about their own school! This is all so subjective!!
I think the point of this article is precisely that rankings have nothing scientific or objective about them, especially when it comes to creative professions.
I think this article opens the question of what indicators should be included in the DI rankings. Adequacy of mental health support is a good suggestion. What else?
@dot - incorporating employment data (% employed upon graduation, median and average starting salaries) would be a helpful start towards legitimizing the DI rankings.
Employer satisfaction with graduates is one relevant criterion for distinguishing between schools, but more so for professionals than for prospective students and parents. DI does further break down its rankings by specific aspects of education (design, sustainability, construction administration, etc.), which helps give a better overall profile. When one skims the rankings, the overall range of excellence offered by several schools complicates singling out any one 'best' school.
The author does not really address how both DI's and the select 220 firms' own interests might figure into these rankings. It seems likely that the architects from these firms could have ulterior motives in their choices (pumping up their own alma mater, and thus their own CV, for example). Also, one wonders why these specific firms, out of the many thousands in the world, were picked by DI. I doubt it is a coincidence that the profile of DI seems very similar to that of many of the schools it consistently ranks highest.
While I agree the article is distracted at times, I commend the author for highlighting how an imperfect system is used to rank her own school as the best.
I question whether it's even necessary to rank a "best" school when it comes to architecture. Although we all end up with the same degree, the curricula by which we arrive there can vary wildly from school to school. Without specialized degree programs like the medical field has, where we could compare school to school and actually be comparing like things, I think ranking schools that are collectively quite different is a failed system no matter how many other categories you add into the mix.
All these rankings do is convince parents to convince their children to make misinformed decisions. So what if Cornell is the "best" school of architecture? Does it offer a curriculum you are interested in? What does the school focus on in its design studios? What is the studio culture? How is the faculty? There are many objective questions that SHOULD be asked, and rankings should not factor into the equation, especially ones as one-sided as DI's.
What school you went to is just used as a social signal to potential employers that you had good training, and maybe even a good network. It doesn't really indicate how bright a person is.