In the wake of artificial intelligence’s recently popularized presence in the architectural field and beyond, many are pondering what changes may be on the horizon in the discipline, if any. For Stanislas Chaillou, AI’s dissemination in architecture may refocus the profession’s attention on the importance of semantics as a way to describe and design the built environment. More than a conventional label, Semanticism may give a new name and direction to the practice of architecture, reflecting the conviction that AI-supported design abides by the principle of ‘form follows meanings.’ Below, Chaillou unpacks Semanticism and its agenda, which was first introduced in his recent book AI & Architecture, From Research to Practice (Birkhäuser, 2022).
This article is part of the Archinect In-Depth: Artificial Intelligence series.
From an architect's standpoint, artificial intelligence can prove less straightforward to tackle than other topics. Media chatter, blatant myths, and intrinsic technical complexities are among the barriers that still keep our discipline at a cautious distance from this technology. After all, if AI remains a relatively remote topic in the architectural world, why should practitioners care?
A first answer can be found by turning to one of AI’s many facets: projection. Condensed to its simplest definition, “projecting” refers to the operation of sending a known quantity from one domain to another. Descriptive geometry, in essence, is the art of computing mathematically, or of constructing geometrically, such projections. The invention of perspective is an everlasting testimony to its importance in architecture: perspective views can be constructed using a handful of geometrical rules, allowing architects to project information contained in plans and elevations into three-dimensional representations. With Gaspard Monge, Carl Friedrich Gauss, Pierre Bézier, and many others, successive generations of scientists pushed the envelope of descriptive geometry, providing fields concerned with the study and manipulation of spatial information with an increasingly powerful toolset.
Architecture, and countless adjacent fields, have benefited over time from the gradual progress of projective techniques. The formulation of spatial transformations even became part of the architectural practice itself, a reality that the rise of modern computation keeps reminding us of. Consequently, anyone, from theorists to practitioners, can today appreciate the central importance of projection.
That is also why the steep progress of geometrical projective techniques, driven by AI’s latest developments, matters to architecture. Among the many research projects, NeRF (Neural Radiance Fields) perhaps best expresses this pace of progress. Using a handful of 2D views of a complex object, this AI model can recreate an entire 3D model, inferring the geometry of hidden parts of the object. From 2D to 3D, from partial representation to complete description, NeRF raises the bar of what descriptive geometry has long been hoping to achieve. Many other models, operating on spatial abstractions like graphs and point clouds, contribute to the same overall impression: The progress made by spatial projective techniques using AI is simply breathtaking in its speed and magnitude, while also offering immediate relevance to our work as architects.
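To make the mechanism concrete, here is a minimal sketch of the idea at NeRF’s core, under heavy simplification (no positional encoding, no hierarchical sampling, and no training loop shown): a small network learns to map any 3D point and viewing direction to a color and a density, and the handful of 2D photographs supervise that mapping through rendering.

```python
# A minimal, simplified sketch of a NeRF-style radiance field (assumption:
# positional encoding, ray sampling, and training are all omitted).
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    """Maps a 3D point and a viewing direction to a color and a density."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # outputs (r, g, b, sigma)
        )

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor):
        out = self.mlp(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # color, constrained to [0, 1]
        sigma = torch.relu(out[..., 3:])    # non-negative volume density
        return rgb, sigma

# Rendering a pixel integrates (rgb, sigma) along a camera ray; training
# adjusts the weights until renders match the input photographs, which is
# how the geometry of hidden parts ends up being inferred.
```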
Architecture’s many topics and modes of representation find their place surprisingly well in the results of certain models
However, the very definition of projection encompasses a broader reality that represents an even more promising avenue for architecture. Far from solely achieving geometric projections, certain models today allow mapping domains as diverse as sound, video, and text into entirely different mediums. To take only a few examples: With AIs such as DALL-E, Midjourney, or Stable Diffusion, entire texts can be converted into images (Figure 2), while with models like AudioLM, texts can be translated into musical recordings, and with Make-A-Video, two images of the same scene can be converted into a video clip. More experimental projects using AI bend these projections to even further extremes: The artist Anna Ridler, in Drawing with Sound, maps the tracings of her drawings into musical harmonies; Hannah Davis, in her Symphonologie, converts the emotions conveyed in a collection of texts into a musical composition; Ross Goodwin, in his project world.camera, translates a sequence of short video clips into written novels. All in all, the sheer diversity of projections found today among scientific and creative projects testifies to the maturity of current AI models, allowing designers to perform complex translations between entirely different domains.
Meanwhile, the descriptiveness of these projective techniques has significantly improved to encode higher-level, more structured concepts. The latest generation of models, such as diffusion models and LLMs, can today encode signification-rich abstractions, far richer than the low-level descriptors architecture saw with parametric modeling. Today’s mainstream “text-to-image” models demonstrate how fairly complex text prompts, using references to style, history, artistic movements, and many more high-level concepts, can be transformed into high-fidelity visuals. This gradual improvement of AI’s descriptiveness translates into fairly tangible outcomes for our discipline, as architecture’s many topics and modes of representation find their place surprisingly well in the results of certain models. Figure 2 above illustrates this reality, where each AI-generated image was synthesized from a text prompt as the sole input.
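As a concrete illustration of such a projection, here is a hedged sketch using Hugging Face’s diffusers library (the checkpoint id, prompt, and GPU availability are assumptions; any compatible Stable Diffusion checkpoint would do):

```python
# A minimal text-to-image sketch with diffusers (assumption: a CUDA GPU
# and the "runwayml/stable-diffusion-v1-5" checkpoint, one common choice).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# High-level concepts (style, typology, atmosphere) travel in the prompt.
prompt = ("axonometric drawing of a brick courtyard house, "
          "Brutalist detailing, soft morning light")
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("courtyard_house.png")
```

The entire projection, from signification-rich text to a finished visual, collapses into a single call.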
This inflection point corresponds to a refocusing of design, and more specifically of architecture, on the importance of semantics
The pace of progress in these areas of research is staggering. Running the same prompt through such models demonstrates the speed at which these technologies have begun to emulate concepts dear to our discipline and refine that ability over time. Figure 3 presents a benchmark, spanning from the model GLIDE (December 2021) to Midjourney V5 (March 2023), and demonstrates how the realism of images such as floor plans has improved drastically in just 18 months.
In summary, the multiplication of bridges between mediums, the general increase in descriptiveness of these projections, and the current pace of innovation together contribute to the ongoing turn witnessed by design disciplines at large. At its core, this inflection point corresponds to a refocusing of design, and more specifically of architecture, on the importance of semantics.
A building is a work of art only insofar as it signifies, means, refers, symbolizes in some way. — Nelson Goodman, in How Buildings Mean
As established earlier, AI does not place architecture in an alien environment. Instead, the emergence of AI raises considerations often familiar to our discipline. However, AI’s most promising encounter with architecture arguably occurs within another arena deeply entrenched in our discipline: That of its long-standing tradition of analogies with linguistics. AI today accelerates a transition from grammar into semantics; a movement that, in fact, began much earlier in our field and that we wish now to briefly unpack.
Linguistics studies the various aspects of language. On one hand, this discipline provides the means to pull apart, describe, and analyze sentences. On the other hand, it enables assembling, composing, and generating new texts. “Analyzing” and “generating” are the two inseparable sides of the linguistic coin. Architecture, confronted with the same twin requirements, describing built forms and creating new ones, has naturally embraced many analogies with linguistics’ frameworks.
Over recent decades, architecture has, in fact, borrowed considerably from grammar and its concepts. In linguistics, grammar is concerned with the formation, structure, and assemblage of language. It studies and formulates the rules (also called heuristics) by which words come together to create phrases: their function in the language, rather than their meaning (Figure 4, left).
Grammar’s foundation in systems and rules found an echo in architecture (Figure 4, right) at a time when the architectural discipline sought to formulate, organize, and replicate knowledge at scale. Whether through theories like Shape Grammar and Parametricism, or tools such as BIM and visual programming, the wealth of experiments of the past fifty years has proved how much architecture can learn from grammar, particularly at its intersection with computation.
However, a strict grammatical translation of architecture leaves us with frameworks that do not fully account for many of its aspects. The influence of context (geographical, cultural, aesthetic, sociological, etc.), the plurality of design answers, and the constant evolution of built forms represent some of the most obvious limitations of the grammatical approach.
A strict grammatical translation of architecture leaves us with frameworks that do not fully account for many of its aspects
For these reasons, the use of grammatical concepts is today fading away, making way for new frameworks grounded in semantics. In linguistics, semantics approaches associations, rules, and constructs through the lens of signification: their meaning in the language, rather than their function (Figure 5, left). Like grammar, semantics offers means to both describe and generate sentences within a language. Propositions can be pulled apart to unfold their semantic structure (Figure 5, left) or put together to create semantically valid propositions. The power of semantics as a descriptive framework explains its success over recent decades across countless fields. To take only one of the most famous examples, the Internet is today structured “semantically,” as per the prescriptions of computer scientist Tim Berners-Lee. Web pages are organized using semantic “tags” and “markups,” following the OWL taxonomy, so that their structure eventually reflects the signification of their content.
Like many other disciplines, architecture began to embrace semantics' principles to analyze built forms (Figure 5, right). At its core, this new approach to architecture meant considering forms’ significations as an important driving force of their organization. The 1970s and 1980s witnessed this entirely new discussion unfold, in the footsteps of “semioticians” like Umberto Eco, Charles Jencks, and others. Out of the many contributions of the time, the work of the American philosopher Nelson Goodman perhaps best clarified this direction. Goodman, in his essay How Buildings Mean, aptly articulated to what extent meaning is to be found in architectural forms. If buildings do not 'mean' literally as other art forms might (literature, cinema, music, etc.), argues Goodman, they do “refer [or] symbolize in some way.” They display properties more than they express a discourse. Their meaning is, therefore, both indirect (referencing some style, symbol, cultural or spatial concept, etc.) and latent (more or less expressed, somewhat sunken, or woven into the architecture).
Indirect and latent, yet very much present. So many aspects of our built forms are conditioned by this play of reference to a context, to symbols, to stylistic considerations, or even to other architectural parts. Semantics allows precisely these dimensions to be addressed, after decades dominated by the functionalist (architecture driven by its utility) and formalist (architecture driven solely by its form) agendas. With semantics, the study of architecture seeks to articulate the nature and distribution of references across built forms (Figure 5, right). That effort has led, over the past decades, to the development of new ontologies to describe building data. Schemas like ifcOWL, the Urban System Ontology, and many other semantic abstractions today offer new means to describe the network of significations running through built forms.
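A short sketch can suggest what such an ontology-driven description looks like in practice, assuming Python’s rdflib library and an invented ‘arch’ vocabulary standing in for a published schema like ifcOWL:

```python
# A toy semantic description of a built form (assumption: the "arch"
# namespace is hypothetical; a real project would bind ifcOWL or an
# urban ontology instead).
from rdflib import Graph, Literal, Namespace, RDF

ARCH = Namespace("http://example.org/arch#")  # invented vocabulary
g = Graph()
g.bind("arch", ARCH)

g.add((ARCH.kitchen_01, RDF.type, ARCH.Room))
g.add((ARCH.kitchen_01, ARCH.adjacentTo, ARCH.dining_01))
g.add((ARCH.kitchen_01, ARCH.hasStyle, Literal("Victorian")))

# The graph can then be traversed as a network of significations.
for s, _, o in g.triples((None, ARCH.adjacentTo, None)):
    print(f"{s} is adjacent to {o}")
```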
Instead of formulating rules and guidelines explicitly, as was done before with shape grammar’s procedures or parametric scripts, using AI implies inducing correlations through our machines’ repeated observation of architecture
However, in visual fields such as architecture, the generative side of semantics has historically lagged behind; a gap that is drastically closing today due to recent AI developments. Technically, this technology represents the advent of statistical learning as an alternate, more robust computational method to generate forms. Conceptually, this substitution represents a significant epistemic pivot for architecture. Instead of formulating rules and guidelines explicitly, as was done before with shape grammar’s procedures or parametric scripts, using AI implies inducing correlations through our machines’ repeated observation of architecture. In that sense, AI represents a complete methodological inversion for our discipline. In return, our built world's deep complexity lends itself surprisingly well to AI’s learning process. Clear examples include the results of models such as Stable Diffusion, DALL-E, or Midjourney, or the wealth of architecture-specific research projects (HouseGAN, Graph2Plan, ArchiText, our own research, and so many others).
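The inversion can be compressed into a few lines of toy code (the numbers and the design relation below are invented purely for illustration): one function states a rule explicitly, the other induces the same relation from observed examples.

```python
# Explicit rule vs. induced correlation: a deliberately tiny contrast.
import numpy as np
from sklearn.linear_model import LinearRegression

def parametric_rule(site_width_m: float) -> float:
    """Grammar-style approach: a hand-written heuristic states the answer."""
    return site_width_m / 4.0

# Statistical learning: infer the same relation from observed buildings.
observed_site_widths = np.array([[12.0], [16.0], [20.0], [24.0]])
observed_bay_widths = np.array([3.1, 3.9, 5.2, 5.9])
model = LinearRegression().fit(observed_site_widths, observed_bay_widths)

print(parametric_rule(18.0))        # the rule, applied
print(model.predict([[18.0]])[0])   # the correlation, induced by observation
```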
All of these AI projects point today in the same direction. The generative side of semantics is catching up, completing its framework for architecture’s benefit. As this condition unfolds before us, delineating its specificities matters immensely.
Semantization is inevitable. — Charles Jencks, in Semiology and Architecture (1969)
Today, semantics is set to provide a robust and mature framework, both analytical and generative, to architects and designers. It represents an important inflection point, capping a “semantic momentum” for architecture and design disciplines at large. Semanticism will give a name to this new direction: at its core, the conviction that AI-supported design abides by the principle of ‘form follows meanings.’
As with any theory, however, Semanticism comes with both epistemic gains and limitations, and both should be measured against well-known challenges within our discipline. Starting with the former, we see at least three clear avenues of contribution for Semanticism, each directly addressing a key concern of the architectural agenda: distinction, polysemy, and transposition. The limitations (style, control, and space) are addressed further below.
Semanticism’s first contribution to architecture is to provide a framework, grounded in computation, to weigh out differences among forms
Distinction: “The means of architecture is given by our capabilities to make and sense physical distinctions in space.” This assertion, opening William Mitchell’s seminal book The Logic of Architecture, perfectly captures how much our shared ability to objectify what is dissimilar in the built world matters. Describing and measuring the subtle differences among forms speaks directly to the purpose of the architectural discipline. This is also precisely semantics’ aim in linguistics: To disambiguate meaning in language by untangling significations from one another. By analogy, Semanticism’s first contribution to architecture is to provide a framework, grounded in computation, to weigh out differences among forms.
Over time, many methodologies have been developed to tackle this challenging task. Contributing to that effort, Bill Hillier, the father of Space Syntax, formulated in the 1980s how much statistics could help. For Hillier, a statistical approach to architecture represented a more robust method to qualify the specificities of built forms, despite the apparent randomness of their patterns and the varying degree of expression of their characteristics. Decades later, AI rehabilitates and expands on Hillier’s conviction, bringing statistical learning to our discipline. This technology represents an alternate, more powerful way for architects to compute differences between built forms and, ultimately, to refine or rethink the categories inherited from the past.
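One plausible setup for weighing out differences computationally, assuming a pretrained CLIP encoder from Hugging Face’s transformers library and two hypothetical plan images on disk, embeds each drawing and measures how close they sit:

```python
# A hedged sketch: measuring the distinction between two floor plans
# (assumption: CLIP as the encoder; any pretrained visual model would do,
# and "plan_a.png" / "plan_b.png" are hypothetical files).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

plans = [Image.open("plan_a.png"), Image.open("plan_b.png")]
inputs = processor(images=plans, return_tensors="pt")
with torch.no_grad():
    emb = model.get_image_features(**inputs)

emb = emb / emb.norm(dim=-1, keepdim=True)     # unit-normalize
similarity = (emb[0] @ emb[1]).item()          # cosine similarity
print(f"similarity between the two plans: {similarity:.3f}")
```

Distances in such an embedding space give a statistical, Hillier-like handle on how dissimilar two built forms are, which is one way the inherited categories can be tested.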
In AI’s multimodality, Semanticism sees the possibility for a polysemy of architectural forms
Polysemy: Architecture thrives when the design process is able to acknowledge the sheer diversity of built forms. The very same conditions (the same brief, site, program, etc.) always permit a plurality of design answers. This “one-to-many” mapping finds its place in semantics through the notion of polysemy, the idea that a term can refer to various meanings. Similarly, this principle is materialized within AI’s latest generative models through the concept of multimodality, where one input can be translated into multiple outputs. In AI’s multimodality, Semanticism sees the possibility for a polysemy of architectural forms. When a single semantic abstraction can be translated into entire fields of shapes (exemplified in Figure 7), rather than into a single form, the design process is provided with a wealth of relevant options for the architect’s immediate benefit.
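Reusing the hypothetical diffusion pipeline `pipe` from the earlier sketch, polysemy becomes a matter of holding the abstraction fixed and varying only the random seed; each seed lands on a different, equally valid answer to the same brief:

```python
# One semantic abstraction, a field of forms (assumption: `pipe` is the
# Stable Diffusion pipeline instantiated in the earlier sketch).
import torch

prompt = "plan of a courtyard house on a narrow urban lot"
variants = [
    pipe(prompt, generator=torch.Generator("cuda").manual_seed(seed)).images[0]
    for seed in range(8)  # eight distinct design answers to one brief
]
for i, img in enumerate(variants):
    img.save(f"variant_{i}.png")
```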
Transposition: In architecture, it is a given that contextual factors (stylistic, typological, cultural, historical, sociological, etc.) have an impact on built forms. Similarly, in semantics, words acquire their meaning in context. Emulating this principle, AI’s latest generative models are increasingly able to synthesize shapes while accounting for the influence of contextual notions: The aesthetics of a style, the features of a typology, the atmosphere of a given place, etc. When compared to generative grammars of the past, these models represent a quantum leap forward. Semanticism sees here a tremendous opportunity for architects to play off such transpositions. When it becomes feasible to synthesize forms while having them register the influence of chosen contextual factors, architectural design and research together draw from an entirely new framework. As an example, the images generated in Figure 8 below play with such transposition, applying the notion of a nave within the context of a house, or that of a kitchen within the context of a church. Similarly, at the intersections of known typologies, styles, programs, etc., lie brand new families of forms, ready to be brought to the world.
Today, however, AI’s ability to emulate, merge, and hybridize styles fosters a new form of eclecticism, underpinned by a deep sense of creative freedom
Semanticism, however, is not immune to certain limitations. These remain important to formulate and study, as they complete our shared understanding of the times ahead.
To begin with, Semanticism is not a style. The all-too-famous motto of the Viennese Secession (“To every time its art. To art its freedom.”) expressed how much stylistic concerns characterized architecture’s most emblematic periods. Today, however, AI’s ability to emulate, merge, and hybridize styles fosters a new form of eclecticism, underpinned by a deep sense of creative freedom. In that context, Semanticism stands as a catalyst from which styles can emerge. Turning to a (simplified) definition of the latent space can help unpack this reality.
As AI models learn, they refine a compressed representation of the data they are exposed to. That representation takes the shape of a multi-dimensional space, a “semantic map” of sorts, that designers can then navigate to generate images, texts, etc. On that “map,” architectural concepts such as styles are assigned specific locations. And as designers today roam through such latent spaces, the images they generate either recapture these canonical styles or, more interestingly, reveal hybrid blends found in between known aesthetics. On one side, this steady unveiling of the vast, untapped fields left in between known styles is deeply fascinating for designers. On the other side, however, the guidelines historically provided by stylistic principles tend today to fade away. The design process seems to unshackle itself from the imperative to abide by the rules of a given aesthetic. And although architecture can appreciate this sudden absence of constraints, grasping the almost limitless agency it implies can be overwhelming at times. To set aside the support and security of stylistics certainly leaves creators with many more moving pieces than before and, let’s be clear, with more responsibilities, too.
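A minimal sketch can make this roaming tangible, assuming generic latent vectors (real models expose their latents in model-specific ways): spherical interpolation between two style locations traces exactly those in-between hybrids.

```python
# Roaming a latent space between two "style" locations (assumption: the
# 512-dimensional vectors are random stand-ins for learned latents).
import numpy as np

def slerp(z0: np.ndarray, z1: np.ndarray, t: float) -> np.ndarray:
    """Spherical interpolation, which respects the geometry of latents."""
    u0, u1 = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return z0
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

z_gothic = np.random.randn(512)      # stand-in for one canonical style
z_brutalist = np.random.randn(512)   # stand-in for another
hybrids = [slerp(z_gothic, z_brutalist, t) for t in np.linspace(0.0, 1.0, 10)]
# Decoding each hybrid latent would reveal blends between the two aesthetics.
```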
In tandem, Semanticism trades space for its abstractions. Working with AI requires us to clarify our intent so as to convey it concisely to our machines. The reduction of architecture into clear, yet schematic, descriptions appears as a precondition to designing with this new technology. For this reason, graphs, diagrams, language, etc. can gradually become the mediums by which architects communicate their design intent to their machines. These “semantically rich” abstractions condense architecture so as to express its signification while stripping away other dimensions. If the grammatical turn already introduced a sense of abstraction, by offering procedures, scripts, and schedules to designers, the semantic times reinforce this principle by inviting architects to act on even higher-level abstractions. As a consequence, space, in its more down-to-earth experiential reality, might be less immediately addressed when working with AI than with previous frameworks. In other words, if the design of space remains the aim of Semanticism, under this new framework, the relation to space tends to become more indirect.
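One plausible form of such an abstraction, sketched below with the networkx library (the rooms and areas are invented), is the room-adjacency graph that research models like Graph2Plan accept as design intent:

```python
# Design intent as a semantically rich abstraction: a room-adjacency graph.
import networkx as nx

intent = nx.Graph()
intent.add_nodes_from([
    ("living", {"area_m2": 30}),
    ("kitchen", {"area_m2": 12}),
    ("bedroom", {"area_m2": 14}),
])
intent.add_edges_from([
    ("living", "kitchen"),   # rooms that must touch
    ("living", "bedroom"),
])

# A generative model can synthesize plans honoring this graph; the
# experiential qualities of the space are then addressed downstream.
```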
Semanticism’s reliance on AI, therefore, comes at a price: Leveraging AI as an alternate computational framework implies, at first, loosening our grip over the design process
Finally, Semanticism reconfigures our control over the design process: when using AI, control is relinquished to be regained, loosened to be retightened. The former happens as a result of AI’s very functioning. Neither a “white box” (a fully controllable algorithm) nor a “black box” (an airtight model leaving no control to the end-user), AI stands as a “gray box” in the computational landscape. This expression, introduced by Andrew Witt, aptly describes the balance that AI strikes between control and complexity. As the growing intricacy of the models enables the approximation of ever more challenging problems, the legibility and interpretability of their deeper computations can sometimes fade away. Semanticism’s reliance on AI, therefore, comes at a price: Leveraging AI as an alternate computational framework implies, at first, loosening our grip over the design process. Control, however, can be regained as tools become “grayer.” Models can be adapted and retrained, while software leveraging AI can help clarify these models’ internal complexity.
This movement is, in fact, happening across the board, well beyond the sole realm of architecture. Biology, economics, engineering, and countless other fields today repurpose off-the-shelf AI models to meet the challenges of their respective domains. Architecture is no exception to this rule. The past six years have shown how readily architects can repurpose existing models. Many projects are recrafting the very structure of certain networks. Others are focusing on retraining specific models on domain-specific data. Finally, integrations with existing design software let these AIs permeate the stack used by architects and accelerate designers’ reappropriation of generative methods. The repurposing and integration of AI models is a movement that Semanticism anticipates spreading throughout our disciplines in the coming years; it represents in itself a fascinating creative avenue for architecture.
To conclude this article, let us briefly turn to Semanticism’s agenda. This new program paves many different avenues for architecture and design disciplines at large. In their current form, these threads are intended to be unfolded, pursued, and enriched; together, they offer a baseline for the many directions opened by Semanticism.
Integration between scales: This first avenue speaks to the magnitude and depth of the semanticist’s agenda. Put plainly, Semanticism proposes to permeate every scale, from the morphology of cities all the way to the design of objects. As it spans all levels of the built world, Semanticism is mindful of the distinct semantic fields that correspond to each specific scale: their terminologies, their hierarchies, their articulations, etc. Nested models and chained abstractions will be the semanticist’s means to capture these various levels and model their interdependence.
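What nested abstractions might look like can be hinted at with a toy sketch (the class names and levels below are invented for illustration, not a proposed standard):

```python
# Chained abstractions across scales (assumption: a deliberately naive
# hierarchy; real ontologies are far richer at every level).
from dataclasses import dataclass, field

@dataclass
class Room:
    name: str

@dataclass
class Building:
    typology: str
    rooms: list[Room] = field(default_factory=list)

@dataclass
class Block:
    buildings: list[Building] = field(default_factory=list)

# Each level carries its own semantic field; chaining models over such
# nested structures is one way to capture their interdependence.
block = Block(buildings=[
    Building("row house", rooms=[Room("kitchen"), Room("parlor")]),
])
```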
Trained on vast repositories of historical data, models today invite practitioners to navigate through times, as geologists would through geological layers, using the lens of semantic and visual affiliations
Genealogy of architectural forms: This program anchors architecture back into history, so as to unfold the different threads of its formal evolution. We see here a clear difference with the grammatical approach, which strove to establish “timeless” patterns and absolute frameworks to describe architecture. Forms are instead addressed through the evolution of architectural patterns. The use of AI serves that purpose: Trained on vast repositories of historical data, models today invite practitioners to navigate through time, as geologists would through geological layers, using the lens of semantic and visual affiliations. In such a way, tracing the many genealogies of architectural patterns constitutes one of Semanticism's most promising directions.
Ecology: This direction aspires to balance, and perhaps reconcile, ideas of performance and culture in the built world. In a time when ecology is a key consideration in the planning of our cities, performance, conceived in a broad sense (carbon footprint, optimization of material use, structural efficiency, energy consumption, etc.), does matter. However, the very principle of performance in previous frameworks has, at times, bypassed the imperative for forms to encode cultural factors. Overly optimized schemes have often proven to lack social acceptability. Learning from the errors of the past, Semanticism aspires to weave together efficiency and signification, so as to maintain both high ecological standards and a strong cultural fit. Anchored in AI, Semanticism can blend the power of affordable surrogate simulations with the semantic modeling of cultural factors. Threading the needle between both worlds represents a promising area of investigation for semanticists.
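A sketch of that blend, with invented numbers and a deliberately naive stand-in for the cultural score (a real project would model it semantically), shows the mechanics: a surrogate learns to replace expensive simulations, and candidates are kept only when they satisfy both criteria.

```python
# Surrogate simulation + a (toy) cultural-fit criterion.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Surrogate: predict annual energy use from cheap design features
# (here: glazing ratio, compactness), trained on a few full simulations.
X = np.array([[0.2, 0.8], [0.4, 0.7], [0.6, 0.5], [0.8, 0.4]])
y = np.array([90.0, 110.0, 150.0, 190.0])   # invented kWh/m2/yr results
surrogate = GradientBoostingRegressor().fit(X, y)

candidates = np.random.rand(200, 2)              # sampled design options
energy = surrogate.predict(candidates)           # near-instant evaluations
cultural_fit = 1.0 - np.abs(candidates[:, 0] - 0.35)  # naive stand-in score

# Thread the needle: keep only options that satisfy both worlds.
keep = candidates[(energy < 120.0) & (cultural_fit > 0.8)]
print(f"{len(keep)} candidates pass both criteria")
```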
Anchored in AI, Semanticism can blend the power of affordable surrogate simulations with the semantic modeling of cultural factors
Interdisciplinarity: As AI’s projections connect mediums and domains, Semanticism aspires to bring together the various creative fields. The exchange of themes, styles, and references between these disciplines will be a source of intense cross-pollination in the coming years. However, Semanticism’s interdisciplinarity goes well beyond metaphors of “dialogues” or “conversations” between fields. It seeks, rather, to translate into collective creative work: actual, tangible projects realized among practitioners of entirely different creative backgrounds, using AI. Many academic initiatives are, in this respect, providing brilliant examples of this upcoming porosity.
Stanislas Chaillou is an architect, designer and AI-researcher working today at Rayon. His practice, publications, teaching and exhibits tackle the back-and-forth between geometry, artificial intelligence (AI) and culture. Trained both as an ...
8 Comments
It's not AI though. It's machine learning, technically abductive logic programming. Actual AI wouldn't require specific prompts or external reprogramming to do something. Current 'AI' is only mimicking "cognitive" functions that humans associate with other human minds.
Still impressive though.
You're right in pointing out that current 'AI' largely utilizes machine learning methods, including abductive logic programming, to mimic certain human cognitive functions. However, it's essential to understand that machine learning is a subset of AI and not a separate entity.
Your assertion that 'actual AI' wouldn't require prompts or reprogramming to perform tasks seems to refer to the concept of Artificial General Intelligence (AGI), which is designed to comprehend, learn, and apply its knowledge to any intellectual task, similar to a human being. This form of AI doesn't exist. What we have today are examples of Narrow AI that are designed to perform specific tasks and require particular prompts and data to learn.
Your comment has accurately identified that current AI models mimic certain cognitive functions. However, it's worth mentioning that these models don't 'understand' or 'perceive' in the same way humans do. Their functions are based on mathematical and statistical models, not human consciousness.
So, in essence, while it's true that today's AI relies heavily on specific prompts to perform tasks and largely mimics human cognitive processes, it's also crucial to recognize that this AI is part of the broader AI field, not a deviation from it. The idea of an AI that can act autonomously across any task is a future goal, not a current reality.
Correct.
What we currently have is Artificial Narrow Intelligence that cannot perform a task outside of what it has been programmed to do. Artificial General Intelligence is able to perform tasks that it has not been programmed to do. ANI is a tool that we can use, nothing more. AGI would be actual creativity being done by the program.
Great piece, and thanks.
Technically, this technology represents the advent of statistical learning as an alternate, more robust computational method to generate forms.
It looks like statistical learning is the only way the AI can get outside itself, but it is a blind and mechanical approach that runs the risk of being arbitrary. Ultimately, some larger understanding of culture, of site, is needed to make sense of the data. Neoplatonism may no longer make sense as a philosophy, but it goes a long way in understanding the design of Renaissance buildings, as Wittkower demonstrates in Architectural Principles. And so on.
That line bothered me as well. It's not statistical learning. As you said, it's a blind approach that searches for previous solutions to the task the ANI has been asked to complete.
Novelty and creative problem solving is a big leap…not a small step. I’d be surprised if that could be achieved sooner than 100 years.
My understanding is that based on our current progression it's about 10-20 years away. There is a possibility (about 50%) that true AGI will never be possible though. If that is the case it's speculated that ANI will be very advanced and include so much programming that it will be able to assist in solving/performing nearly anything.
https://medium.com/mlearning-a...