Over the past seven decades, the invention and evolution of computing technology have had a profound and transformative impact on the field of architectural visualization. From early computer-aided design (CAD) software to today's photorealistic 3D rendering, virtual reality, and generative AI tools, the advent of digital architectural visualization has offered the profession the means to produce imagery faster and cheaper than the exclusively analog visualizations of previous centuries.
Moreover, computing has enabled such visuals to be produced at an increasingly intuitive, accessible level; while the entry-level skills required to generate architectural visuals plummet, the photorealistic quality of such visuals soars.
In this latest edition of Archinect In-Depth: Visualization, we explore how advances in computing from sectors beyond architecture, from aviation to art, mechanics to media, were adopted, shaped, and molded by architects throughout the 20th and 21st centuries in the pursuit of visualizing architectural space.
The relationship between computing and architectural visualization began in the 1950s, coinciding with the development of the first commercial computers. Initially, such machines were primarily used for mathematical calculations and geographical mapping, owing perhaps to computing’s heavy funding from the U.S. military. The Whirlwind computer, for example, began development at MIT as part of the U.S. Navy’s Airplane Stability and Control Analyzer project, geared towards creating programmable flight simulation graphics.
The theme of the aerospace sector spurring the development of computer graphics continued with William Fetter, a graphic designer for Boeing who was referenced in a previous Archinect conversation with Molly Wright Steenson. Fetter recognized the potential for computer graphics to aid the visualization of dynamic environments, famously through his creation of the ergonomic human figure 3D model known as ‘Boeing Man.’
“There has been a long-standing need in certain computer graphics applications for human figure simulations, that as descriptions of the human body are both accurate and at the same time adaptable to different user environments,” Fetter recalled in a 1978 interview. After leaving Boeing, Fetter would ultimately become chairman of the Southern Illinois University Design Department, working with the celebrated architect Richard Buckminster Fuller.
Fetter’s intersection with Buckminster Fuller is something of a metaphor for the longstanding relationship between architecture and computation, a relationship that has existed since the earliest days of computing. As Wright Steenson outlined for our Archinect In-Depth: Artificial Intelligence series, Skidmore, Owings, and Merrill became early adopters of computing in the 1950s, while Christopher Alexander made use of the IBM 7094 computer housed jointly by MIT and Harvard. The 1964 Architecture and the Computer conference in Boston drew speeches and contributions from Walter Gropius and Christopher Alexander, while computer mouse inventor Douglas Engelbart spoke of the ‘augmented architect’ to describe a professional who harnesses the power of modern technology to amplify their problem-solving capabilities and design skills.
For the popularized use of computation in architectural visualization, however, the first major breakthrough arguably came in 1963 with the development of Sketchpad by Ivan Sutherland at MIT. Recognized as the first computer-aided design (CAD) program, the system allowed users to draw directly on a screen using a light pen, introducing the concept of interactive graphics to architectural design. Crucially, Sutherland strove to create a system and interface accessible to artists and draughtspeople, not exclusively programmers.
“Ivan Sutherland’s Sketchpad is one of the most influential computer programs ever written by an individual, as recognized in his citation for the Turing award in 1988,” wrote Alan Blackwell and Kerry Rodden in a 2003 foreword to Sutherland’s paper introducing the invention. “After 40 years, ideas introduced in Sketchpad still influence how every computer user thinks about computing. It made fundamental contributions in the area of human-computer interaction, being one of the first graphical user interfaces.”
Despite such early advancements, commercial architectural visualization in the 1950s and 1960s remained predominantly manual, with hand-drawn perspectives and physical models serving as the primary means of representation, as detailed in previous editions of Archinect In-Depth: Visualization. However, the seeds of digital transformation were being planted in research laboratories and academic institutions, where early experiments with computer graphics began to suggest new possibilities for architectural representation.
The 1970s witnessed the emergence of commercial CAD systems, though their adoption was initially limited due to high costs and technical complexity. The launch of packages such as Dassault Systèmes’ CATIA and Autodesk’s AutoCAD, both in 1982, marked a turning point, making digital drawing tools more accessible to architectural practices. Such early CAD systems primarily focused on 2D drafting, essentially digitizing traditional drawing techniques rather than introducing new forms of visualization. Nonetheless, the rapid increase in efficiency heralded by digital drawing, in addition to the rapidly falling price of computer memory and storage, set the stage for computers to integrate into architectural workflows beyond 2D drafting in the ensuing decades.
Despite the emerging promises and commercial accessibility of digital drawing systems, UCL architectural history professor Mario Carpo notes in a 2023 essay for e-flux that the architectural profession at large exhibited little interest in digital adoption through the 1970s and 1980s. Rather, the landscape of digital adoption echoed that of previous decades, where breakthroughs came first as a result of innovations in the automotive, aviation, and product manufacturing fields, only later to be adapted to building design in what Carpo dubs architecture's 'digital turn.'
“Throughout the 1970s and 1980s, while architects looked the other way, computer-aided design and computer-driven manufacturing tools were being quietly, but effectively, adopted by the aircraft and automobile industries,” Carpo notes. “But architects neither cared nor knew that back then — and they would not find out until much later.”
By the early 1990s, however, digital adoption began to hit a critical mass in the architecture profession. Central to the shift was the arrival of software programs that moved beyond 2D drafting to also embrace 3D modeling, today commonly referred to as BIM. With the launch of Sonata in 1986 and Archicad by Graphisoft in 1987, followed by both SketchUp and Revit in 2000, architects could understand and easily manipulate structures digitally in 3D space. Unlike the static, committal nature of the hand-drawn perspective that had dominated architectural visualization for centuries, virtual models could be viewed from any angle or elevation, manipulated by texture and lighting, and animated with life, offering a seemingly unlimited supply of visual images.
“From the end of the 1990s, and to some extent into the present, digital streamlining has been seen as the outward and visible sign of the first digital turn in architecture: The image of a new architecture that until a few years ago, without digital techniques, would have been impossible — or almost impossible — to design and build,” Carpo notes, referring to the adoption of digital technologies in the profession at the turn of the century as a “cat out of the bag” moment.
While advances in digital architectural visualization throughout the twentieth century were dominated by architecture adopting innovations from the aviation, automotive, and adjacent manufacturing sectors, it could be argued that the twenty-first century has thus far seen this source of adoption shift to the media and entertainment sector. Given the nature of the architecture profession’s embrace of computing in previous decades, this is perhaps unsurprising. While the development of digital systems by engineering-based sectors such as aviation was driven by a desire to solve design problems, the adoption of such systems by architects was initially driven more by drafting and representation, with design solutions remaining exclusively in the purview of the human architect.
“[The 1980s] is when architects and designers realized that PCs, lousy thinking machines as they were — in fact, they were not thinking machines at all — could easily be turned into excellent drawing machines,” Carpo writes. “Designers then didn’t even try to use PCs to solve design problems or to find design solutions […] They looked at computer-aided drawings in the way architects look at architectural drawings: Through the lenses of their expertise, discipline, and design theories.”
As Carpo would later note, the turn of the century would see architecture firms also embrace computing as an agent for design ideation, most notably through the parametric design methods practiced by firms such as Zaha Hadid Architects and Gehry Partners. Even today, however, few would disagree with the assertion that the architecture profession as a whole has ceded design representation to computers far more proactively than it has ceded design ideation, even with the advent of generative artificial intelligence.
Through this lens, it seems only natural that the media and entertainment industry would outpace the engineering and manufacturing disciplines as the primary external influence on the field of architectural visualization in the twenty-first century. With the notable exceptions of Twinmotion (2005) and Enscape (2015), which were indeed developed with architecture at the forefront, some of the most prominent tools used in architectural visualization today owe their origins to the media and entertainment industry. Among these are Adobe Systems (1982), targeting photography, video, and animation; Blender (1994) and Autodesk 3ds Max (1996), both targeting animated film; and Unreal Engine (1995), targeting gaming.
The influence of the media and entertainment industry has brought unprecedented sophistication to architectural visualization. Rapid advances in rendering engines and computer processing power have enabled the creation of photorealistic imagery increasingly indistinguishable from photographs. Meanwhile, virtual and augmented reality technologies, driven by the pursuit of new media experiences for consumers, have been adopted by the architectural profession to allow clients and designers alike to walk through and interact with unbuilt space using virtual reality headsets. Elsewhere, the development of real-time rendering engines, particularly those derived from gaming technology such as Unreal Engine and Unity, has transformed the visualization workflow, with architects able to create and modify high-quality visualizations instantaneously.
“Technical innovation allows us to keep doing what we always did, but faster or for cheaper; which is a good enough reason for technical change to be happening at all,” Carpo asserts in the introduction to his e-flux essay. By this standard, the technical changes to architectural visualization brought about by computing, whether they originate from aviation, art, or artificial intelligence, have proved their worth. The advent of computer-aided architectural visualization has offered the profession not only the means to produce imagery faster and cheaper than the exclusively analog visualizations of previous centuries, but also to do so at an increasingly intuitive, accessible level; while the entry-level skills required to generate architectural visuals plummet, the photorealistic quality of such visuals soars.
For the augmented architects of 2024 and beyond, the primary challenge posed by such new-found abilities in representation is no longer ‘Can you draw what you deliver?’ but ‘Can you deliver what you draw?’
Niall Patrick Walsh is an architect and journalist, living in Belfast, Ireland. He writes feature articles for Archinect and leads the Archinect In-Depth series. He is also a licensed architect in the UK and Ireland, having previously worked at BDP, one of the largest design + ...
1 Comment
...and yet clients are still blown away by a good hand drawing.