Archinect's Lexicon focuses on newly invented (or adopted) vocabulary within the architectural community. For this installment, we're featuring a term that has risen to prominence with the advent of generative AI tools such as ChatGPT and DALL-E.
"Hallucination" in the context of artificial intelligence refers to the phenomenon where an AI model generates or interprets something that isn't grounded in the actual input data it was given. This can happen when AI is asked to create content such as text, images, or speech. The AI may infer patterns or details based on its training but doesn't truly comprehend the information it is handling. It can produce results that seem plausible but might not correspond accurately with the provided input or real-world context.
In the field of building design and architecture, AI hallucinations might be relevant in several ways:
Generative Design: AI tools are often used to generate creative designs based on given constraints or design rules. An AI trained on a dataset of architectural designs might "hallucinate" by creating building designs that incorporate elements not present in the training data, or by combining design elements in new and unexpected ways. While these generated designs might be innovative and inspirational, they could also be impractical or impossible to build.
Image-based Rendering or Reconstruction: AI can be used to render or reconstruct 3D building models from 2D images or sketches. The AI might "hallucinate" by interpreting ambiguous elements in the sketches or images, filling in gaps based on its training data. For example, it might render windows, doors, or other architectural elements where none were actually indicated.
Predictive Analysis: AI can be used to predict future trends in architectural design or the performance of a design under various conditions. The AI might "hallucinate" by predicting outcomes that aren't based on the actual data but instead are artifacts of the patterns it learned during its training. These predictions might seem plausible but could be misleading or inaccurate.
Deepfakes in Architecture: Similar to how AI can be used to create realistic but fabricated images of people (i.e., deepfakes), it can also be used to create "deepfake" architectural imagery. An AI could be trained to generate photorealistic images of buildings that don't exist, incorporating architectural styles or elements it has learned from its training data.
In all these cases, the AI is not intentionally creating falsehoods or deceptions. It is simply trying to fulfill its task based on the data and instructions it has been given. It is up to the human designers and architects to understand the AI's limitations and ensure that its outputs are validated against real-world constraints and requirements.
This article is part of the Archinect In-Depth: Artificial Intelligence series.