It’s not breaking news that Artificial Intelligence has begun to affect the field of architecture in recent years. One of the most influential academic projects to change the field was Stanislas Chaillou’s 2019 Harvard GSD thesis, “AI + Architecture,” which has influenced studios around the world.
In this article, we will take a look at the approach toward AI that Die Angewandte put together for empowering architecture students, the designers of the future.
Kaiho Yu, faculty at Die Angewandte, Vienna’s design school, is highly interested in crowd simulations. Throughout his teaching experience, he has addressed the problem of designing circulation efficiently and of informing design decisions with data that architects can understand and share with clients. Kaiho’s approach mixes design and technology, and his first goal is to provide alternative skills to designers facing today’s architecture industry, during a radical shift of the profession in which technology keeps becoming more determinant.
From March 17 to May 30, 2021, the Urbanism class at Die Angewandte went through a workshop co-taught by Alessio Grancini, a Prototype Engineer and Magic Leap and Unity specialist.
The class merged design and technology through an introduction to the use of Machine Learning and Augmented Reality in the context of an Urban Planning studio that has, in the past, been highly influenced by crowd simulation studies.
“The workshop aimed to empower the designers of the future with tools that are significant in the industry,” says Alessio Grancini. “I was truly excited to see how the students would approach such an overwhelming series of lectures, which would let them reflect on their future and their careers as architects.”
The workshop was subdivided into 3 parts:
An introduction to the technology and setting up the environment that is necessary for using machine learning within the context of Unity, a popular game engine also used in many architecture schools.
An overview of a possible use case of the technology.
The additional usage of Augmented Reality to display the work in a legible way, creating interfaces that allow one to see the students’ work in real life, at full scale.
The full workshop can be accessed for free at this link.
The use case presented to the students was the following:
An agent as an entity external to the crowd (the example I will complete below).
We are deploying around 10-15 Boston Dynamics Spots that need to scan the airport area and inform a central database of how many people are in the airport, and in which specific areas.
Every Spot needs to be trained to find the best route while not disturbing the flow of people in the area.
We have “brute force” coded crowd simulations of people moving around the environment, with a wide range of randomized volumes during certain hours of the day.
How would you proceed in this direction?
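Before reaching for machine learning, it helps to see how simple the reporting half of this use case is. The sketch below is not from the workshop; the grid size, positions, and function names are hypothetical. It divides a floor plan into square zone cells and counts how many simulated people fall into each cell, which is essentially the per-area headcount a Spot would push to the central database.

```python
from collections import Counter

# Hypothetical zone grid: divide the airport floor plan into square cells.
CELL_SIZE = 10.0  # meters per zone cell (assumed value)

def zone_of(position):
    """Map an (x, y) position to a discrete zone-cell index."""
    x, y = position
    return (int(x // CELL_SIZE), int(y // CELL_SIZE))

def occupancy_report(people_positions):
    """Count people per zone cell: the report a Spot would send to the database."""
    return Counter(zone_of(p) for p in people_positions)

# Example: five simulated people on the floor plan.
people = [(3.0, 4.0), (8.0, 2.0), (12.0, 5.0), (14.0, 7.0), (25.0, 1.0)]
report = occupancy_report(people)
print(report)  # counts 2 people in cell (0, 0), 2 in (1, 0), 1 in (2, 0)
```

The learning problem then becomes the other half: routing each Spot so it covers all zones without disturbing the simulated crowd.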
This exercise targeted the following learning areas for the architecture students:
Understanding how to create an efficient crowd simulation
Understanding how to create an efficient AI crowd simulation that coexists with the previous one
We are taught that any automation process can greatly diminish the creative process in our projects. This workshop, however, proved that wrong.
When we think about spatial simulation, what we are trying to achieve is recreating reality in a contained space. Reality has many parameters, which are impossible to track all at once. AI gives us the opportunity to work with more data and to inform the design, while the designer always keeps the final word on what to do.
In this precise use case, we are running “a simulation within the simulation”: a crowd of people inhabiting a space, and another group of entities learning how the crowd inhabits it. We can frame it with questions like:
What’s the preferred route of people inhabiting space?
What’s the most utilized area within the building?
What’s the most crowded entrance?
What circulation patterns does this building impose on people?
And so on.
This coexistence of entities within the space is what makes this tool so powerful. Students took this input and made mind-blowing projects in the blink of an eye.
Anni’s approach inverted the main premise of the workshop, making the space itself “the learner.”
If we look back at agent-simulation-based design methodology over the last decades, studies such as space syntax and agent-based semiology have been investigating, simulating, and predicting spatial occupation patterns. However, the results are restricted to analysis and evaluation, and have yet to make the space itself responsive.
So the project begins with a simple question: ‘What if the space could reconfigure itself based on agent behaviors?’
Instead of applying machine learning to the agents (the crowd), as was explored in the workshop, this project inverted the method and applied machine learning to the walls, giving agency to the walls rather than to the agents themselves. The walls learn their best positions relative to the crowd simulation, resulting in a space that reconfigures itself based on agent simulation.
The machine learning principle is quite straightforward: if a wall is too close to or too far from an agent, it receives a penalty; if a wall is within a good distance of an agent, it is ‘rewarded.’ After training for some time, a wall learns the best position relative to the nearest agent.
We then apply this training principle to all the walls in a project.
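The reward-and-penalty principle described above can be sketched in a few lines. This is a minimal illustration, not the project’s actual code: the distance band limits and the ±1 reward values are assumptions, since the thesis does not publish its exact training parameters.

```python
# Assumed distance band: walls are penalized outside it, rewarded inside it.
TOO_CLOSE = 1.0  # meters: wall crowds the agent -> penalty
TOO_FAR = 6.0    # meters: wall is irrelevant to the agent -> penalty

def wall_reward(wall_pos, agent_pos):
    """Reward a wall for sitting within a 'good' distance of the nearest agent."""
    dx = wall_pos[0] - agent_pos[0]
    dy = wall_pos[1] - agent_pos[1]
    distance = (dx * dx + dy * dy) ** 0.5
    if distance < TOO_CLOSE or distance > TOO_FAR:
        return -1.0  # penalty: too close to, or too far from, the agent
    return 1.0       # reward: within the good distance band

print(wall_reward((0, 0), (0, 3)))    # within the band  -> 1.0
print(wall_reward((0, 0), (0, 0.5)))  # too close        -> -1.0
print(wall_reward((0, 0), (0, 10)))   # too far          -> -1.0
```

In a training loop, each wall repeatedly receives this signal against the nearest agent and gradually settles into a rewarded position.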
Typical developments observed in the training results:
Entry Creation
Area Reduction/Increase
Area Redundancy
Room Creation
New Connection
Here is the full documentation of the project, with 20 trained residential buildings at 3 different resolutions (60 models in total), in detailed illustration:
https://issuu.com/annidai0202/docs/_book_space-co-creation
Architectural & Agent Setup
Why choose residential buildings? Because it is a continuous and stable typology throughout history and across cultural backgrounds, and therefore an ideal study subject for this project. The chosen buildings are spread throughout history and come from different cultural backgrounds.
For each building, 3 different wall resolutions are experimented with; the resolution influences the results significantly, even for the same building.
The majority of the chosen buildings are single-story, which condenses the various agent activities onto one level and makes the different residential buildings more comparable. Spreading the activities across different levels would involve unnecessary considerations, such as the use of lifts and stairs, which are not the focus of this project. For the chosen buildings with more than one floor, agent activities have been simplified, and only the ground floor, where most activity happens, is taken into consideration.
Agent simulations are run separately from the machine learning, based on a reasonable assumption of how people use the space. Depending on the building, three kinds of agent behavior (master, guest, and staff) are simulated; since each building’s scenario is different, the relevant agents are included or excluded for each building. The complexity of the behavior is intentionally developed to use as much of the space as possible, in order to gain the ideal training result. There is randomness in each agent’s behavior, which allows a degree of unpredictable development.
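One way to picture the randomized role-based behavior described above is as a room schedule per role, sampled with some randomness so that no two runs are identical. This is an illustrative sketch under assumed room lists and function names; the actual simulation runs inside a game engine and is far richer.

```python
import random

# Hypothetical room sets per agent role (master, guest, staff).
ROOMS_BY_ROLE = {
    "master": ["bedroom", "kitchen", "living room", "bathroom"],
    "guest": ["living room", "dining room", "bathroom"],
    "staff": ["kitchen", "storage", "laundry"],
}

def daily_route(role, stops=5, rng=random):
    """Generate a randomized sequence of rooms an agent of this role will visit."""
    rooms = ROOMS_BY_ROLE[role]
    return [rng.choice(rooms) for _ in range(stops)]

random.seed(0)  # seed only for reproducibility of this demo
print(daily_route("guest"))  # e.g. a 5-stop route through guest-accessible rooms
```

Because each route is sampled rather than fixed, the walls are trained against a distribution of occupation patterns instead of a single scripted one, which is what allows the unpredictable developments mentioned above.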
Given the understanding of how AI could diminish the creative process of architectural design, and given AI’s particular strength in processing amounts of data far beyond what humans are capable of, what if we architects took that data-processing strength and incorporated it into the design process?
As space reconfiguration is an interactive process between agents and walls, architects can observe how the walls interact with agents in real time, which gives them the opportunity to intervene and co-create with AI.
It often happens that the AI-developed results are not ideal and still need post-adjustment from the architect. So why not streamline this process through integration at the design stage? In this project, architects can adjust each room’s size and wall height while the programme is still running the machine learning model. It is like architects working with and against AI at the same time. When the AI’s suggestion does not satisfy the architect’s tacit knowledge, the architect can intervene and adjust it accordingly; where the architect needs scientific suggestions on where a new connection or less room is needed, the AI can do the job.
So how efficient is the tool? What do the results look like? All 60 of the co-created trained results are documented in the book, each varying because of different levels of human interaction and different wall resolutions. However, there are common observations among all of them.
3 Resolutions
As mentioned before, each building has models at 3 levels of wall resolution. The low resolution follows structural principles, the mid resolution follows room divisions, and the high resolution is a fragmentation of the building. Even though the observations across a building’s 3 resolutions may vary, the high resolution has always proved to be the most interesting, as it creates more variation and unexpected openings, connections, and so on, while the low resolution has more restrictions on how many walls can be moved.
Regulated Plan versus Open Plan
Open-plan buildings turn out to have much more interesting trained results than circulation-regulated buildings. The results for open-plan buildings differ greatly from the original building, while the regulated buildings did not change much from their original designs. There is a clear reason for this distinction: the agents have more space to interact with walls in an open-plan building, which allows reconfiguration in parts, while the agents in a regulated building do not have enough space to interact, so the walls keep receiving penalties during training because they cannot avoid touching an agent.
Functional Space versus Non-Functional Space
Circular space can serve different functions in a residential building, depending on when the building was built and the cultural background it comes from. How would the same spatial geometry, located in each case at the center of a building, generate different results from this programme, which is largely based on how agents use the space? The answer is both obvious and non-obvious at the same time. When the central circular space does not have a function that allows many agents to pass through or stay, the programme considers it unnecessary; when it is heavily used, small openings appear on the circulation boundary, giving more creative solutions for connecting with the surrounding rooms.
This project is a manifestation of an architect’s co-creation with machine learning. It explores the plausibility of employing machine-learning-based architectural design alongside agent simulation as a new-generation design method, and how this could influence the architectural design process. Through a selection of 20 housing precedents as research subjects, which share a continuous typology ranging across architectural history, the project allows interactive reconfiguration between the user and a trained machine learning model, and provides alternative architectural layouts based on co-creation and agent simulation.
To see Anni Dai's full thesis project click here.