For Onur Yüce Gün, the study of computation is the study of the human condition itself. A mission to anchor humans at the center of AI and computational design brought Gün on an educational journey from Middle East Technical University in his native Turkey to the MIT Design and Computation Group, and a professional journey from Kohn Pedersen Fox to his current position as Director of Computational Design at the global footwear brand New Balance.
Whether foot soles or skyscrapers, Gün recognizes the power of blending the hard-won 'generalist specialist' skill of the human designer with the unprecedented power of AI and computation to unlock new scales and resolutions for us to explore, sculpt, and craft. While new waves of generative AI tools such as Midjourney and ChatGPT amplify conversations over job replacement, biases, and a loss of meaning, Gün's position is unwavering: "No matter how advanced the design tools, or how advanced the computational techniques are, the human intention, human perception, and human ideation remain central to the design processes and processes of making."
In July 2023, Archinect’s Niall Patrick Walsh spoke with Gün about his life, work, and reflections on design, computation, and humans, drawing on his engagements across architecture and product design. The discussion, edited slightly for clarity, is published below.
This article is part of the Archinect In-Depth: Artificial Intelligence series.
Niall Patrick Walsh: In your work, one can see confluences of art, design, science, fashion, and computation. How did all of these interests converge on a single career?
Onur Yüce Gün: I can trace some of this back to high school in Turkey where I became interested in organic chemistry and advanced calculus. I was mostly interested in organic chemistry because it had a visual structure underpinned by rule sets. Today, I know this as ‘rule-based design’ though at the time I wasn’t aware of the term. My artistic side also goes back to childhood. My mother was a painter and teacher so I was immersed in painting and reading about art.
After high school, I studied engineering but found that I was more comfortable in the theater society next door, or exploring the architecture department and being mesmerized by the models. I ended up transferring to the architecture school and graduated top of the class, after which I was admitted to MIT. This was in 2004, when parametric design was gaining interest, although Grasshopper hadn’t yet been released. The following years saw me immerse myself in this emerging world at the intersection of design and computation, such as contributing to Robert Woodbury’s book Elements of Parametric Design. At MIT, my thesis revolved around the idea of having 3ds Max render images at the user’s command, using the foundational logic and algorithms of what we now call machine learning to evaluate random images overnight. This idea of computational aesthetics was fairly new at the time, but now we see it popularized through models such as Midjourney.
I studied engineering but found that I was more comfortable in the theater society next door, or exploring the architecture department and being mesmerized by the models. — Onur Yüce Gün
After MIT, I worked with Kohn Pedersen Fox in New York for three and a half years, mostly working on large-scale buildings such as skyscrapers. There was a small but active network of computational designers in New York City at the time which I tapped into, while institutions such as the Pratt Institute and the University of Pennsylvania asked us to carry out some mentorship for students. These were all formative moments of learning through making and teaching for me.
In time, I moved back to Turkey to launch an undergraduate program with the professor who had originally encouraged me to apply to MIT seven years previously. In this program, we sought to introduce computational design ideas to first-year students, which was somewhat of a first in Turkey. When I left KPF and moved back to Turkey, I thought my journey of discovery in computational design would be confined to the limitations of architectural practice and construction. But a new question kept coming back to me: how could I re-assess the position of the active-perceptive human in the realm of computation that is driven by zeros and ones?
To fully confront this question, I felt the need to equip myself with a deeper knowledge of computation, and I returned to the MIT Design and Computation Group to embark on a Ph.D., which is a five-year commitment. This was personally challenging for me. I could have continued on the path I was on, such as finding a long-term position in academia in my current field. But I wanted to come back, isolate myself, and give myself space to think, read, write, and build up a portfolio of work, even if it meant restarting as a student in my mid-thirties.
How could I re-assess the position of the active-perceptive human in the realm of computation that is driven by zeros and ones? — Onur Yüce Gün
The Ph.D. was deeply rewarding, especially towards the end. I knew I wasn’t going back to architectural practice, or at least the genre of architecture I had been engaging in, such as making large-scale buildings, simulations, and post-design rationalization. While I was writing my final dissertation, and somewhat floating in the air, New Balance reached out to me about parametric design, which was surprising at first. I told them that I knew nothing about shoes, and that the parametric design position they were suggesting was something I had done around a decade ago; I would have been too restless in the role they wanted me to perform.
Fortunately, New Balance is an interesting company full of open-minded people, so they asked me what I wanted to do instead. In an environment where parametric design was still new, they wanted to know what could lie beyond it in the world of computational design. I gave them my thoughts on new work methods, generative design, digital manufacturing techniques, and how we could use data to generate new ideas.
To test this, I was given a text file containing underfoot pressure data consisting of 128,000 data points. The task was open-ended. I demonstrated techniques for parsing the data, interpolating it in space, and using it to algorithmically modify lattices. This was back in 2016, prior to the emergence of implicit modeling tools. I made it clear that these were only suggestions and not definitive answers. Everything would need testing for validation. I think that, as much as the technical skills, it was my critical approach that was key in my being asked to help write a job description for myself.
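The workflow Gün describes — scattered pressure samples interpolated into a spatial field, then used to drive lattice geometry — can be illustrated with a minimal sketch. Everything here is hypothetical: the sample counts, inverse-distance weighting, and the thickness bounds are invented for illustration and do not reflect New Balance's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scattered underfoot pressure samples:
# (x, y) positions in mm over a 100x100 mm patch, pressure in kPa,
# with a peak roughly under a "heel" region. All values are invented.
points = rng.uniform(0, 100, size=(500, 2))
pressure = 50 + 40 * np.exp(
    -((points[:, 0] - 30) ** 2 + (points[:, 1] - 50) ** 2) / 500
)

def idw_interpolate(points, values, grid_xy, power=2.0, eps=1e-9):
    """Inverse-distance-weighted interpolation of scattered samples
    onto arbitrary query points (one simple way to 'interpolate in space')."""
    d = np.linalg.norm(grid_xy[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / (d**power + eps)
    return (w @ values) / w.sum(axis=1)

# Evaluate the pressure field on a regular 20x20 grid of lattice nodes.
gx, gy = np.meshgrid(np.linspace(0, 100, 20), np.linspace(0, 100, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])
field = idw_interpolate(points, pressure, grid)

# Map the field to a lattice strut thickness: higher pressure -> thicker strut.
t_min, t_max = 0.8, 2.4  # mm, invented bounds
thickness = t_min + (t_max - t_min) * (field - field.min()) / (field.max() - field.min())
```

The key design idea is the last line: the data never draws geometry directly; it modulates parameters (here, strut thickness) of a lattice a human designer has already defined.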
As powerful as the rational core of computation is, it needs human-centric inputs, human intention, experience, and expertise. — Onur Yüce Gün
Fast forward seven years, and we have a dedicated computational design team that I direct. We built this team from the ground up, always with the mission of using computational techniques in a human-centric way. You cannot do otherwise in an athletic performance context. In a way, my effort during my Ph.D. to bring the human touch back to the computational creative process transformed into something concrete: the design of products that directly impact the way you move.
This is a hint for computational design as a whole. We must be aware that when using technology to find better solutions to problems, the solutions do not always emerge from computational techniques alone. As powerful as the rational core of computation is, it needs human-centric inputs, human intention, experience, and expertise.
This idea of human-centric computation is one I often see in your work. You gave a TED talk where you said: “No matter how advanced the design tools, or how advanced the computational techniques are, the human intention, human perception, and human ideation remain central to the design processes and processes of making.” In the context of generative AI today, there are two trends I want to ask you about with this human-centric view in mind.
One is about job replacement, particularly the relationship between humans and computation in generating design ideas, and which actor is in command of that process. The other is the idea of ‘meaning’ in generative AI. Taking Midjourney as an example, the power to create hundreds or thousands of images in a day can lead to people getting lost in the ‘noise,’ unable to extract meaning. Both of these trends speak to the human condition in AI, which you articulate so well in your talks and your work. Could we unpack both of these points more?
I have a very clear answer for job replacement in the article 'When and how will your work be replaced by AI?' but let’s start with the fear of replacement. In 2018, I gave a presentation at LaGuardia Place in New York on AI and architecture. One of my final slides addressed job replacement by AI—I simply said “Find another job!” One common claim is that new human jobs will emerge as technology evolves, which is true. Whether your job is under threat from AI or not, my answer is still to find another job, or to find a way for your existing job to evolve. The best scenario would be for everybody to not really have a 'job' but to be able to sustain their life while being productive for society.
Let’s not be afraid of replacement. Let’s adapt, learn from the past, and explore how we can use both our own intelligence and artificial versions of intelligence to move towards a better, human-centered future. — Onur Yüce Gün
The world is in constant change but it is worth noting that seismic change does not happen overnight. When someone walks onto a stage and says that their technology is going to change the world forever, I become skeptical. Look at NFTs, cryptocurrencies, 3D printing, or parametric design as examples. It is as if when a new technology emerges, we disregard everything that came before it and assume that entire histories and methods will disappear overnight. On the contrary, the fields of design and production are built over centuries, not years. AI will affect these fields, but only at certain moments and portions of them. Even software cannot overhaul a system overnight. Software does not float in the air; it depends on hardware, manufacturing, production, transportation, connectivity, and so on, all of which need to advance in tandem with one another.
Let’s not be afraid of replacement. Let’s adapt, learn from the past, and explore how we can use both our own intelligence and artificial versions of intelligence to move towards a better, human-centered future. After all, design is an incredibly human act. Design is about taking risks for the betterment of whatever you are trying to do. If there is no risk involved, you are simply replicating whatever came before you. If you take a risk, maybe there is a portion of your work that becomes innovative. Again, innovation is about incremental improvements. This holds true for architecture and construction too.
The other keyword you used earlier was “meaning,” which is a word I use in the same spirit as when I say “value.” These words are also incredibly central to our being. Whatever we do, whatever we design, whatever technique we invent, we keep living. This is often missed. The purpose of living a meaningful life is also perhaps linked with this idea of sharing meaning and value with others, which connects to design again.
Let’s take this latest technology, Midjourney. This technology, and generative AI more broadly, is far more democratized than previous tools. Of course, you need to pay for some versions of Midjourney and ChatGPT, and if you want to run Stable Diffusion locally you will need a GPU, so it isn’t entirely democratized. But what you do not need are years of training on form generation, or to learn how to use software to generate non-standard geometries, and so forth. Now, you type a prompt, and you get a result.
In the same way that not every attribute can be distilled into numbers, not every nuance of meaning can be captured in words. — Onur Yüce Gün
This is where we can introduce the idea of “noise,” as you mentioned. If we can produce 10,000 images in one week, how do you filter through those? Not too long ago, some thought that ‘prompt engineering’ would be the next big job title, but I always felt that within six months, this hype would disappear. I wrote an article to that effect. A prompt is simply chaining words together to produce a pattern; something a machine will, in any case, excel at far more than humans. Language, while a powerful medium, often falls short of fully encapsulating the essence of “meaning.” In the same way that not every attribute can be distilled into numbers, not every nuance of meaning can be captured in words.
To continue on this theme of human-centric computation, you wrote an article in 2021 where you said: “Always remind yourself that the computer screen, numbers, and colors might become deceptive. But regardless, use them to their fullest potential and keep exploring the uncharted realms, so we can produce better-performing products, a more sustainable mindset, and create better values that respond to our human needs overall.”
To me, this idea of “deception” speaks to the dangers that many in this space are aware of regarding bias and prejudice in AI systems; people can read our conversations with Felecia Davis and Morehshin Allahyari for more on that. My interpretation of your article is that you are beginning to chart a course for how we navigate through those biases. You are arguing that data is not reality; it is an interpretation of reality. The model cannot predict or encompass everything. Somehow, the ‘fuzziness’ of the human condition means we are still the best placed to digest and assign meaning to the outcomes of these models, rather than blindly follow them.
Addressing the danger of AI bias, the lesson I take from your article is that we should still embrace AI as a companion, but must also be aware of and seek to mitigate AI ‘deceptions,’ both in the building of AI systems and in our use or interpretation of their outcomes. Is that a fair reading?
I agree, and there are a lot of important points here. This question goes back to the idea of education. When I say education, I am talking about formal education within schools and colleges, but also self-education. I loved my architectural education, and I wouldn’t want to have undertaken any other kind of education. But I also find it amusing, in a way. We go to architecture school and are trained to become the next god. It is as if we are going to go off into the world as the grand orchestrator, telling everybody else about materials, form, shape, light, and so on. Of course, the reality is the opposite. When you become a junior architect, everybody tells you what to do, and your job is to get it done. This discrepancy needs urgent addressing in our education system.
Given the number of people who graduate from architecture schools each year, many of us find ourselves moving away from the crowd to shape our careers, whether that means becoming a specialized traditional architect or finding another outlet to make a difference. There’s an intriguing phenomenon I see in these cohorts. Many architects become generalists, but some of us become ‘generalist specialists.’ I meet many people who are both aware of the wider landscape in which architects operate and also know several subjects in incredible depth.
Many architects become generalists, but some of us become ‘generalist specialists.’ — Onur Yüce Gün
This is a striking skill that only comes from spending large amounts of time relentlessly pushing yourself to understand and learn. There is no other way, no quick wins. An AI system alone is not going to gift this to you because if that is your weapon, it is everybody else’s weapon too. Your weapon is still in books, in history, in honing your skills, learning your technique, and using it until you exhaust it. That said, if you do this and share every step along the way, you are creating noise again. Instead, you should commit to something, spend your time and energy on it, and bring the valuable product or idea back to the table.
When architects today talk about AI, you can immediately tell whether or not they understand how AI works. An effective way to understand how AI works is actually to spend some time understanding how humans work. What is intelligence? What is human intelligence? What is consciousness? How much do we know about these topics? To explore this, you can read about neuroscience, ontology, and creativity. You can read about the philosophers and their views on ‘being’ and ‘living’ and ‘existing’. Finally, then, you can read about the history of computation.
To this point, I shared an article on Medium recently talking about a class I took from Patrick Winston, who led the AI Laboratory at MIT. His class was called 'Human Intelligence Enterprise,' not 'Machine Intelligence Enterprise.' We would read seminal papers starting from Turing’s 1950 paper and, across all of these milestones, we saw how all these great innovators were interested in how to abstract and model human intelligence. The same applies today: we have an artificial model that we think represents how human intelligence works; it is a model, not the thing itself. I talk about this in the article: Why is human thought so hard to replicate? — Information processing model of the mind. We know that a Convolutional Neural Network is indeed a model, not a replication of natural intelligence.
The fact that Midjourney can create incredibly strong images doesn’t mean that it has a mind, or that it is working in the same way that the human mind works. It doesn’t mean that it has consciousness, and it certainly does not mean that it is hallucinating! These are loaded words that imply entire states of the human mind that are hard to explain to ourselves, let alone trying to connect them to modeling and computation.
That said, when we step back and admit the downfalls of our models, do we just give up? Of course not. We should be open and honest about the pitfalls and be aware that what is interesting about these systems today won’t be as interesting tomorrow, given the rate of change. The challenge is to determine what is valuable in today’s new models, and plug it into something that already exists, such as a production technique, a design workflow, or a human environment, and see if we can enhance it even by 5%.
Train yourself, educate yourself, and be hard on yourself. — Onur Yüce Gün
We need to constantly challenge ourselves, and step outside of ourselves. AI can help us become more collaborative, such as by fulfilling the more redundant sides of our work quickly, but we definitely need to do so with a critical view. That comes with education. Train yourself, educate yourself, and be hard on yourself. If you are hard on yourself, it helps you build strength and filter out redundancies to deliver something truly valuable.
One of the great opportunities I see in computation is how it enables us to operate across massive scales. You’ve talked about how, at New Balance, you use AI to make minor adjustments in the lattice work within footwear on a scale that would be near-impossible for the human eye. Without the aid of technology, we struggle to perceive or comprehend forces on the macro scale of, say, trillions of data points, as well as on the incredibly micro level of, say, a DNA lattice structure. AI doesn’t have such evolutionary limitations on scale.
I can see significant architectural implications to this when it comes to the billions, even trillions of data points generated by urban environments every day, down to the development of micromaterials. To me, your work and its deployment of AI in product design exemplifies this. What are your thoughts?
We can put two topics on the table here. One is ‘statistics and patterns,’ and the second is ‘scale and resolution.’ If you think about AI, it is saturated with statistical models, which I find terribly uninteresting. If you read The Black Swan by Nassim Taleb, he’s pretty harsh on statistical models. The subtitle of his book is ‘The Impact of the Highly Improbable,’ the ‘highly improbable’ being the black swan. You don’t often see it, because swans are white, but once you do see it, it is like “Wow!” It makes the chemicals in your body run differently. There is something interesting there.
Today, we dump large data sets into an AI model that in turn searches for and identifies recurring patterns. But does it really have any insight into that pattern? Probably very little, and definitely not in the way we do. Speaking of DNA, if all of us happen to be brought together as a statistical accumulation, we would be just average creatures. The natural life as we know it says otherwise. This is something to be mindful of when we build statistical models.
If all of us happen to be brought together as a statistical accumulation, we would be just average creatures. The natural life as we know it says otherwise. — Onur Yüce Gün
Then there is ‘scale and resolution.’ One of the strongest aspects of computation is the ability to find something in these huge AI models that you wouldn’t have access to alone. For footwear, this is deeply interesting. People ask me: “How do you take your skills from designing 500-meter-tall skyscrapers to designing shoes?” In reality, the modeling techniques are similar, of course with nuances in surface and volumetric modeling. The real difference lies in how design problems are formulated. Now you have something that you are ‘putting on’ instead of something to ‘be in’ or ‘stand next to.’ Footwear always touches you; it has to live with you. That is why we have so many models of shoes; everybody has different needs. When you work on a performance shoe, you really increase that resolution you mentioned.
The small lattice datasets you referred to are an example of going down to a micro-scale to do something that has never been done before. When you manufacture a shoe, there is a foam midsole that has its own pores, tiny pockets of air, distributed across the material. This is the natural result of a chemical reaction. When we introduce lattices to this context, we are bringing structural elements to the conversation for the very first time. Like the foam, the 3D-printed resin has its own mechanical properties. On top of that, you fine-tune the behavior of a midsole in accordance with the “events” that take place under your foot! While doing that, you also deal with the topology of the human body. After all, we are not symmetrical or blocky creatures. The fingers, toes, and the bottom of the foot are all a crazy map of nerves, muscles, etc.
In this process, we learn by improving the resolution. We released shoes with 3D-printed components in 2019, which was super challenging because we combined the structural design I just described with the development of chemicals, resins, a manufacturing system, and an assembly line. So going back to your question, if we wanted AI to help us with these design challenges, we would have had to already possess the statistical information at that resolution, which for us didn’t exist at the time. Could we use AI for simulations of how our new design might behave? Sure. But could we use it to create the 3D-printed custom shoe for us? No, because it is a wicked problem that deals with a very sophisticated product that in turn fits into a huge production mechanism.
Where would all this data come from, and how could it be studied, sorted, and modeled? There are companies now that create synthetic data sets for these kinds of problems. Their goal is to create ‘fake’ AI models, for want of a better word. If you have a dataset of 100 pieces, they might try to expand it to 100,000 pieces to allow you to train your model. An incredibly interesting landscape will emerge from that perspective of scale and resolution. Yet the limits of statistics will potentially remain, as such expansions will be truly synthetic, and natural surprises will hardly emerge.
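The expansion Gün describes — growing 100 real samples into 100,000 synthetic ones — can be sketched with the simplest possible technique: resampling with small jitter. All numbers and the jitter scheme are invented for illustration; real synthetic-data vendors use far more sophisticated generative models. The sketch also makes his caveat concrete: the synthetic set inherits the real data's distribution, so no 'black swan' can emerge from it.

```python
import numpy as np

rng = np.random.default_rng(42)

# A small "real" dataset: 100 measurements of some property (invented values).
real = rng.normal(loc=10.0, scale=2.0, size=100)

def synthesize(samples, n_out, jitter=0.05, rng=rng):
    """Expand a small dataset by bootstrap resampling with multiplicative jitter.
    The output mimics the statistics of the input; it cannot contain behavior
    the original 100 samples never exhibited."""
    picks = rng.choice(samples, size=n_out, replace=True)
    return picks * (1.0 + rng.normal(0.0, jitter, size=n_out))

synthetic = synthesize(real, 100_000)
```

Because every synthetic point is a jittered copy of a real one, the expanded set's mean and spread track the original's — which is exactly why, as Gün notes, "natural surprises will hardly emerge" from it.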
To conclude, what is one final message you would like people to hear about the themes we have talked about?
I will close with three words: design, computation, human. This is a title that I have been using across my lectures, writings, and YouTube channel. I tried to create a name that captured these three words—until I eventually realized those three words would best describe themselves! No AI is needed there!
"Design, computation, human" – these aren't direct answers, they pose questions, puzzles to be unraveled. — Onur Yüce Gün
"Design, computation, human" – these aren't direct answers, they pose questions, puzzles to be unraveled. We can use those three words to find answers, but maybe more importantly to ask meaningful questions. We can use them for our endless journey, to invite ourselves into the process of making and thinking about design, making and thinking about computation, but above those, making and thinking about humans. It’s a long road, but we will slowly get there.
Niall Patrick Walsh is an architect and journalist, living in Belfast, Ireland. He writes feature articles for Archinect and leads the Archinect In-Depth series. He is also a licensed architect in the UK and Ireland, having previously worked at BDP, one of the largest design + ...