Neil Leach is a British professor and licensed architect currently based in California. He has worked for NASA developing a 3D printer for the Moon and Mars, and is co-founder of DigitalFUTURES. Having authored over 40 books on architecture and digital design, and taught at some of the world's leading architecture schools, including the AA, Harvard, SCI-Arc, and Columbia, his in-depth understanding of architecture's professional and academic landscapes allows him to speculate on how artificial intelligence will impact the future of design.
For Leach, conversations limited to popular AI tools such as Midjourney and ChatGPT detract from a broader reckoning that the architecture profession must have in the face of ever-more-capable AI models and platforms. This AI-induced reckoning includes, though is not limited to, the supply and demand of architectural labor, liability and insurance, and the future of all pillars of the architectural community, be it practice, academia, or licensure.
In June 2023, one year after the publication of Leach's book Architecture in the Age of Artificial Intelligence: An Introduction to AI for Architects, Archinect's Niall Patrick Walsh spoke with the author and theorist for a wide-ranging discussion on the topic. The conversation, edited slightly for clarity, is published below.
This article is part of the Archinect In-Depth: Artificial Intelligence series.
Niall Patrick Walsh: We last spoke on March 14th, 2023, which was one day after the release of GPT-4, and one day before the release of Midjourney V5. In some ways, we were on the precipice of another leap in the capability of popularized AI tools. I recall earlier versions of Midjourney possessing a somewhat dreamlike, hallucinogenic aura in their imagery, while now, such images are near-indistinguishable from photography. What are your thoughts on how the AI landscape has shifted since we last spoke?
Neil Leach: Let me start by saying that we have now begun to realize how astonishingly capable AI can be. This is precisely because of the launch of ChatGPT and the upgrading of Midjourney, which has now released V5.1 and is about to release V6. The sheer success of ChatGPT as the fastest-growing app in Internet history is evidence of this. People are just blown away by it. And the same goes for Midjourney, which has become so popular among architects. I post my work on Instagram, and many other architects are doing the same. Some of the results are outstanding. It is now possible to generate images in a matter of seconds, often rendered so perfectly that it is difficult to tell them from reality. In fact, in a recent questionnaire posted on Instagram, most respondents thought that one of my AI-generated images was either a photo or a rendering. Of course, those of us already working in the field of AI had always been aware of its promise, but I doubt whether anyone had anticipated quite how well it would work. AI has just been racing ahead, improving at an incredible rate. That is why some of the leading figures in the field are growing concerned and calling for a halt or slowing down of the development of the Large Language Models (LLMs) behind ChatGPT and the generative models behind tools like Midjourney.
If AI is terrifying, it is because it is astonishingly good. — Neil Leach
If AI is terrifying, it is because it is astonishingly good. And it is its very capabilities that will no doubt pose a threat. Many architects, however, whose understanding of AI is limited to Midjourney, believe that AI won’t be a threat to architects. But this is rather perverse. Anybody who really knows about AI, including top people such as Elon Musk and Geoffrey Hinton, tells us that we should be worried about it. For my part, I am more interested in predicting long-term what the future of AI might hold for the profession than in images generated by Midjourney. Nonetheless, although the impact of AI will go well beyond image generation, the sheer quality of these Midjourney-generated images does offer us a tantalizing glimpse of the astonishing potential of AI in other domains. Clearly, AI is both incredible and terrifying.
I’ve spoken to a wide cross-section of the profession on AI over recent weeks and months. I have heard much about how AI is incredible but little about how it is terrifying. What are some of the most significant threats you would want architects to know about with regard to AI?
Leaving AI out of the equation for a moment, we can see that the profession is struggling already. In the UK, for example, the privileged position that architects used to enjoy in charge of the construction site has been undermined. More than 50% of all building contracts involving architects are design-build, where the architect is now working for the developer. So the authority of the architect has already been eroded, and I think AI is going to make things worse. Months ago, I was reading an article claiming that architects don’t need to worry about AI, and I thought: “What do you mean? ‘We don’t have to worry?!’” Of course, we have to worry. Architects really need a wake-up call to be aware of these issues.
If this is indeed the case, we could potentially see 80% of all architects unemployed. — Neil Leach
There are a number of trends that need to be examined in this context. A study conducted back in 2013 evaluated what jobs were susceptible to automation. Architecture fared well here, coming in at 82nd out of approximately 700 jobs. In the researchers' eyes, architecture wasn't at as much risk as other jobs, such as that of telemarketers, which has all but disappeared. However, the flaw in this study is that it compared AI to humans directly. I take a different view. It is not about AI versus humans; it is about humans using AI versus humans not using AI. Wanyu He, the CEO of the AI software developer XKool, recently speculated that one architect using AI would be as capable as five architects not using AI. If this is indeed the case, we could potentially see 80% of all architects unemployed. Obviously, there are particular domains of design where AI will be less effective than in others, so we need to clarify in detail what exactly this 1:5 ratio means, but it remains an urgent issue nonetheless.
I am particularly worried about a repeat of the early 2000s when computation became more prominent in the industry. I recall architects from Foreign Office Architects telling me they couldn’t employ anybody over the age of 40 because they weren’t proficient enough in using the software. Going back further, I recall being a student at Cambridge at a time when computers were banned from the studio. All this did was to produce a generation of unemployable graduates.
This also brings us to the question of supply and demand, articulated so well by Susskind and Susskind in their book, The Future of the Professions. I often hear the argument that AI will perform more menial, administrative tasks in the architecture studio, freeing architects up to engage in more creative work. This assumes, however, that there is an ever-expanding amount of work. In fact, the reality of the construction industry is that there is only a finite number of buildings that need to be produced, and architectural fees are based on a percentage of their construction costs. There is a danger that if architects can perform their work in a more expedited manner, they will begin to undercut each other even more radically than they do today.
Due to competition laws, architectural fees are not protected in any way, and there is nothing to stop a race to the bottom. If the fee-per-project drops, and there is a finite number of projects, then inevitably, there is less money to go around to employ architects. Of course, architects can always diversify. Over the past few years, we have seen architects engaging with other realms, such as the Metaverse, but it is unclear whether these will survive or go the same way as NFTs and cryptocurrency. So the future of the profession is by no means secure.
Once we have AI systems capable of doing this, the idea of the licensed architect may be a thing of the past. — Neil Leach
The important question is ‘When?’ At the moment, we are seeing a version of Moore’s Law play out, where progress in AI is speeding up at an exponential rate. There is a frenzy among big players, such as OpenAI, Google, etc., to keep pace with each other, and this is set to speed up even further in the future. It is, therefore, important for us to ask exactly when this future I spelled out might happen. This goes back to the question of how long AI remains a prosthesis-like extension of human capability, and when it becomes capable enough to perform relatively — if not totally — autonomously. When will we see the point when AI goes from being a useful assistant to something likely to take over architectural jobs? Within our current economic condition, AI tools are very helpful and allow architects to perform more efficiently, but, as I mentioned before, this condition is unlikely to last.
It isn’t just the profession that will be impacted by this, but academia, too. If we will only need a fraction of the number of architects, will we not also only need a fraction of the number of graduates? Moreover, if these tools will do most of the work for architects, will students really need five years to study architecture? And if education is reduced to three years, would there not also be a reduction in the number of architecture professors needed? And what of licensure? XKool has already been asked to use its AI software to evaluate competitions. If you can use AI to evaluate competitions, obviously, you can also use AI to enter competitions — and XKool has already beaten leading architectural firms, such as MVRDV, in competitions. Meanwhile, XKool is currently developing AI software to be released in around three years' time, which will allow a single platform to generate proposals, calculate costs, conform to planning and building code constraints, ensure structural stability, optimize environmental performance, and so on.
Once we have AI systems capable of doing this, the idea of the licensed architect may be a thing of the past. Planning proposals will be based not on the name of the architect but on the data itself. Having said this, we live in a world where things are slow to change, and we often cling to processes that are frankly obsolete, such as stamping passports when going through immigration. So, I’m not saying with certainty that the licensed architect will disappear quite yet, but I can see how we could arrive at that point.
If AI systems are deployed in this way, it raises the obvious question of liability. Part of the reason there is such a crisis of indemnity insurance among architects is how exposed our profession is to being sued for defects or accidents arising from the construction and operation of buildings. If an AI tool is deployed that claims to generate design proposals as you describe above with no human oversight, and commits an error, does liability for such errors and defects switch to the creators of the tool?
Phil Bernstein noted in his book Machine Learning that the End User License Agreement for Autodesk software emphasizes that people use the software entirely at their own risk and that the company assumes no responsibility whatsoever for its fitness for purpose, accuracy, or other outputs. Is a property developer going to be willing to stake millions of dollars on a design proposal generated by an AI tool covered by this style of user agreement, without the involvement of a licensed architect whom, being cynical here, the developer can sue if something goes wrong?
There are two points I would make here. Firstly, developers are already asking architects to use AI, as it guarantees their return on investment and doesn't make the same mistakes that humans do. Secondly, on the topic of insurance premiums, we need to think about self-driving cars. Computer scientist Toby Walsh is the author of a book called Machines That Think, in which he predicts that we will eventually be banned from driving. Taking his argument step-by-step, he predicts that self-driving cars will become increasingly reliable and very convenient, such that we will become increasingly reliant on them. As we do this, our own driving skills will decrease. Eventually, self-driving cars will become more reliable than human drivers, so insurance premiums will increase dramatically for people who opt to drive manually, to the point where they become unaffordable for most people. Eventually — according to Walsh — we will be banned from driving, and nobody will even notice or care.
Now, I think the model of the self-driving car is important for architects because just as a self-driving car doesn’t need a driver, so a fully capable AI AEC tool doesn’t need an architect. Maybe the architect will oversee things, but they will not be performing the kind of roles they have performed in the past. So, personally, I take the view that insurance premiums will encourage the phasing out of human architects, not protect them.
I want to continue the transport metaphor here and move from cars to airplanes. Even in the aviation industry today, airplanes largely have the ability to fly themselves. The human pilot is nonetheless still there to act as a troubleshooter, if nothing else. You could argue that the human is still in charge of that airplane and oversees and validates the actions of the airplane, even if the airplane itself is performing most tasks autonomously. Is there an analogous future here for architects? Or are you suggesting that autonomous AI design tools will be operated directly by a developer, without approaching architects as a third party to deploy the tools on their behalf?
Firstly, computer scientists have realized that ChatGPT is already able to write code and that many of their jobs are at risk, although inevitably, there will always be a small number of coders left behind to oversee things. Perhaps we could see a similar scenario play out in architecture. Secondly, we should also be aware that the role of the architect has been to take the words of a client as an instruction and translate them into a visual proposal. This is uncomfortably close to the role of Midjourney today. Architects should be aware that not only does this make them vulnerable, but also that a developer could indeed skip out the architect entirely.
Be more aggressive in how you position yourself in this new AI-augmented world. — Neil Leach
Having said that, one comment that arose from a discussion I had with Autodesk is that architects need to become more aggressive. In other words, architects could actually take over the role of the developer just as proactively as the developer could take over the role of the architect. Up until now, the developer world has seemed rather inaccessible to architects, in that they don’t know much about how it works. However, even now, we could simply ask ChatGPT to outline the key steps to becoming a developer. All of a sudden, that pathway to becoming a developer seems less daunting.
Whether becoming a developer or taking on some other role, this has been my main piece of advice to architects: Be more aggressive in how you position yourself in this new AI-augmented world. Architects actually have quite a marketable range of skills. They are trained not just to design but also to think three-dimensionally, understand material behaviors, etc. This skill set has traditionally been channeled narrowly into building construction, but it could actually be redeployed elsewhere. For Behnaz Farahi, whom you recently spoke to, one area has been 3D printed fashion, as it has been for Neri Oxman, Philip Beesley, and several others. NASA has also employed many with an architectural background, as has the movie industry. And so, if architects are getting marginalized by developers, why not become developers? Architects need to start thinking in these more ambitious terms because, right now, they are trying to feed themselves off an ever-shrinking slice of the pie.
I want to push this point further, as we are starting to move from diagnosis to treatment. You’ve already mentioned the need for architects to expand the purview and application of their skills; is this your central piece of advice for architects seeking to respond proactively to AI’s growing presence in the profession?
Absolutely. We have already seen how disciplinary silos are breaking down. The most important advice I have for architects right now is that they need to design not just another building but the very future of their profession itself. They need to be realistic and reimagine what it means to be an architect in the broadest possible sense. Today, there is a lot of cocooning happening. Architects are operating in their own bubble, often oblivious to what is happening outside. Architects need to burst that bubble and expose themselves to what is coming down the line. Moreover, if architects do not move with the times in the age of AI, they risk the danger of becoming extinct.
The most important advice I have for architects right now is that they need to design not just another building but the very future of their profession itself. — Neil Leach
Certainly, institutions such as RIBA won’t protect them. Their own survey from 2017 found that only 6% of new homes are designed by architects. All that RIBA, ARB, and others seem to be interested in is defending the title of the architect itself, which is a bit absurd. For example, in a further survey in 2013, the RIBA found that some architects today don’t even market themselves as architects because they feel their work is much broader than that, involving industrial design, product design, consultancy, Metaverse design, etc. If all these institutions do is protect the name of the architect, even though architects don’t find that name very useful anymore, what contribution are they making?
In the lead-up to this conversation, you mentioned that although advances in LLMs such as GPT-4 have prompted warnings from leading tech figures such as Geoffrey Hinton, Elon Musk, and Sam Altman, you have not seen the same urgency emanating from the architectural community. Do you have thoughts on why that is? When we think back to Buckminster Fuller’s famous line, ‘We are called to be architects of the future, not its victims,’ there’s a notion that architecture and the future are entwined with each other. Why do you feel this hasn’t happened in the AI discussion?
First of all, I don’t think architects are futuristic at all. They like to think that they are. However, if you look at architectural education, there are many courses on the history of architecture, but seldom any speculating on the future, apart from those taught by genuinely futuristic thinkers, such as Liam Young. Beyond this, when architects engage with futurism, too often, they see it in terms of form. They ask about what the ‘style’ of the future will be rather than the ‘logic’ of what is going to happen. Back in 2009, I guest edited an issue of AD titled Digital Cities, and many of the contributions were speculating about the architectural style of the future city, such as Patrik Schumacher talking about Parametricism. However, there was one incredible article at the end by Benjamin Bratton titled iPhone City, which talked about how a simple device such as the iPhone was changing the way we negotiate the built environment. To me, this was the way forward in terms of how architects should engage with the future. Most architects get seduced by form. But they should focus instead on informational processes — especially in the age of AI.
If you look at architectural education, there are many courses on the history of architecture, but seldom any speculating on the future. — Neil Leach
For example, does an Uber car look any different from an ordinary car? No, because it is an ordinary car. It just uses different informational processes. Similarly, the city of the future might not look much different from the city of today, but it will operate in a fundamentally different way. I believe that much of the existing building stock will remain in place but be retrofitted with AI-based technologies to make it more sustainable. Likewise, traffic — and other operations in our cities — will be completely controlled by AI.
I want to end on a question you grappled with in your book Architecture in the Age of Artificial Intelligence, which fascinates me; I don’t know that I have fully internalized the question, let alone the answer. Can AI be creative?
That’s a big one. I stopped short of even answering it in the book because I want to write a separate book entirely devoted to the question, given its importance. But briefly, I take a provocative view of creativity. In the book, I gave the example of AlphaGo, an AI defeating a world champion, Lee Sedol, in the highly complex board game Go, in which there are more potential moves than there are atoms in the universe. Nonetheless, AlphaGo made a series of strategically brilliant moves that defied human comprehension. But I believe that in the end, it was just a machine finding the best solution to a problem in the most efficient way possible. I don’t think it was being creative. By extension — and this is where I am really being provocative — I wonder if what we call creativity in human beings might also be something of a myth.
I wonder if what we call creativity in human beings might also be something of a myth. — Neil Leach
Arthur C. Clarke once noted that any sufficiently advanced technology is indistinguishable from magic. Likewise, he noted that any science that we do not fully understand, we call ‘magic.’ In reality, of course, there is no such thing as magic; the magician simply conceals the operations in a trick and fools the audience into thinking that it is magic. I’ve often wondered if we couldn’t say something similar about ‘creativity.’ Maybe human ‘creativity’ is also, in fact, a straightforward ‘scientific’ process — not dissimilar to how AI works. But because we don’t understand it at face value, we call it ‘creativity.’
At any rate — whether or not we can call AI ‘creative’ — it is clear that it can operate at a level that we simply cannot comprehend. Just as a dog has a greater range of hearing and smell than human beings, so AI is exposing the limits of the human mind. In fact, Geoffrey Hinton has observed that perhaps these machines have a different form of intelligence than humans, but it is a superior form of intelligence nonetheless.
Rather than judging ‘AI creativity’ in human terms, we should perhaps judge ‘human creativity’ in AI terms, and recognize its limitations. — Neil Leach
Either way, we must be aware of the astonishing capabilities of AI. In my writings and lectures, I call for a ‘second Copernican revolution.’ Just as Nicolaus Copernicus had recognized that the universe does not revolve around the Earth, so too do we need to recognize that we, too, are not the center of intelligent life. We have now discovered that there is something out there potentially more capable than us. Likewise, rather than judging ‘AI creativity’ in human terms, we should perhaps judge ‘human creativity’ in AI terms, and recognize its limitations. Moreover, whatever capabilities AI has now will become vastly superior in the future.
That is why I find AI both exhilarating and terrifying.
Niall Patrick Walsh is an architect and journalist, living in Belfast, Ireland. He writes feature articles for Archinect and leads the Archinect In-Depth series. He is also a licensed architect in the UK and Ireland, having previously worked at BDP, one of the largest design + ...
20 Comments
As with all current Artificial Narrow Intelligence (ANI), these systems are only capable of solving problems in the field they have been programmed to work on. The programming is all done by human input, and as such, ANI is only presenting possible solutions that it has already seen humans propose. Basically, ANI is simply going through iterations or variations of vast solutions that humans have already come up with. I would argue that this isn't actual creativity.
True AI would be Artificial General Intelligence (AGI), where a program could solve a problem it has not been programmed to work on. I would propose that this is actual creativity.
@chad, that is true enough, but also feels nitpicky. The first EV was built in the 1830s, the first practical EV in the 1880s. Lots of stuff happened in between, history history history, then the Tesla EV appears as if nobody thought of it before. And now it seems as though the future is electric and everyone else is working hard to catch up. Kinda feels like we are going through a sped-up version of that process with AI.
Can't help but agree with Scott Galloway - that AI is not going to take our jobs, it will be someone who understands AI that is going to take our jobs. A recent article on AI at ZHA in the NY Times discusses their use of data to design offices to better match behavior and to ideate layouts and planning of their office tower designs. Dezeen wrote about their use of generative AI to test form-making. Seems like this is the actual importance of AI, more so than the imagery, which, as you point out, is kind of bland because it can easily end up mimicking a design-by-committee blandness.
While AGI is not likely to happen for a while yet, one of the definitions of creativity is to mash up ideas in unexpected ways, and that is something AI is good at. It doesn't care about taboos and has no hang-ups about what is supposed to go with what, so that helps us get out of our own heads and find new opportunities. The creativity is still with the human controller, but that seems to be the point. We still need to go through a lot of effort to get to something good, but the range of what is possible might have broadened somewhat, and the pace of exploration might be sped up.
Not that it is not dangerous or worrying. Personally I am more worried about privacy and AI than whether our creativity is being challenged. Legally speaking we are really bad at protecting the former (and seem to be collectively OK with giving it away in exchange for a dopamine hit from all kinds of apps), but at least we have some rules about the latter.
You may view it as nitpicky but it's not.
The difference between ANI and AGI is HUGE. It's like the difference between driving a Model T and traveling at the speed of light.
I like how ANI allows us to draw parallels between unrelated and unconsidered solutions to design problems. There are limitations to ANI though. Its output is only as good as the solutions already created by humans. Now AGI on the other hand . . . that's an entirely different subject matter.
On a related note, AGI is speculated to be a few decades away. This assumes that progress on this tech continues at its current rate. Those involved in the field say there is a real possibility (about 50%) that AGI is not possible.
https://medium.com/mlearning-a...
nitpicky only that you are dismissing the impact because it is not AGI. It is an argument that comes up a lot. It seems like the coming impact of the current version of AI is already looking pretty big, whether it is AGI or not.
I'm not dismissing the impact at all. I never commented on the programs impact.
I said that I don't view ANI as being creative - ie the program isn't being creative. It's the same with any other tool that we use in the profession. I only speak about ANI because AGI doesn't currently exist.
May I ask if you've used an ANI in your design work?
I will leave this here without comment.
Very fantasy adventure like!
Now detail it. ;)
I can imagine Greek and Roman architects puking after seeing this.
And since when are cartoonish rainbow buildings classical?
I would be curious to see a decision tree on how Midjourney came up with these images. It is utterly agnostic to the word "beautiful," a word impossible to define to anyone's satisfaction, but it might have picked up bald reflections from the Shubow crowd and others that equate beauty with classical inflections. But there is more going on here, a sense of fantasy wholly precious and artificial, which it likely got from scanning prevalent imagery, or imagery it has received. It may have other parameters built in. Ultimately, they are one read of part of the culture, and, yes, this is terrifying. They are hideous. AI will have other takes, however.
It is just odd that we debate the merits of something that obviously makes decisions but can't explain itself in terms we care about.
That's a great point. AI doesn't explain anything in terms we care about because it's not capable of caring, therefore it will be a useful tool but can never substitute for the things we care about most.
Then again, we might start talking about beauty in this language: "This model is designed by adding five MLP layers on top of (frozen) CLIP ViT-L/14 and only the MLP layers are fine-tuned with a lot of images by a regression loss term such as MSE and MAE." From the Aesthetic Predictor, 18x32's link above. We do things like that.
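For readers curious what that model-card jargon actually amounts to, it describes a common transfer-learning pattern: run images through a large frozen pretrained backbone, bolt a small trainable head on top, and fit only the head to human ratings with a regression loss. The sketch below is a hypothetical, self-contained illustration of that pattern only — random features stand in for real CLIP ViT-L/14 embeddings, and a single linear head simplifies the five MLP layers the quote mentions.

```python
import numpy as np

rng = np.random.default_rng(0)

dim_in, dim_emb = 32, 768  # 768 matches CLIP ViT-L/14's embedding width

# Stand-in for the frozen backbone: a fixed random projection that is
# never updated during training (this is what "frozen" means).
W_frozen = rng.normal(size=(dim_in, dim_emb)) / np.sqrt(dim_in)

def frozen_backbone(images):
    # Deterministic "embeddings"; a real predictor would call CLIP here.
    return np.tanh(images @ W_frozen)

# The only trainable parameters: a linear regression head.
W_head = np.zeros((dim_emb, 1))
b_head = np.zeros(1)

def train_head(images, scores, lr=0.1, epochs=200):
    """Fit the head to human aesthetic scores with an MSE loss."""
    global W_head, b_head
    emb = frozen_backbone(images)  # computed once; backbone stays fixed
    for _ in range(epochs):
        pred = emb @ W_head + b_head
        err = pred - scores  # gradient of 0.5 * MSE w.r.t. pred
        W_head -= lr * emb.T @ err / len(scores)
        b_head -= lr * err.mean(axis=0)
    return float(np.mean((emb @ W_head + b_head - scores) ** 2))

# Toy dataset: fake "images" with fake human ratings on a 1-10 scale.
images = rng.normal(size=(256, dim_in))
scores = rng.uniform(1, 10, size=(256, 1))
final_mse = train_head(images, scores)
```

Freezing the backbone means the expensive vision model is evaluated once per image and only a tiny head needs gradient updates, which is why predictors like this can be trained quickly on modest hardware.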
Rather than seeing AI in opposition to architecture, a possible threat, it might be more useful to consider it as a continuation of trends already in place—dissipation of cultural reference, dilution of personal experience, abstraction in our creations removed from actual place and cultural and individual identities, economic reductions and simplifications in construction, especially of labor, the growing influence of developers and what they understand and represent and do not, the desire to appeal to mass taste, the need to get attention—all this in just about everything we do now. We have lost what has become another abused word, authenticity. AI will accelerate all these trends.
Whether or not AI is creative is problematic because the word "creativity" has become diluted almost to the point of being meaningless. "Creation" has devolved from the power of creation myths in world religions to the inspiration of the rare individual genius inspired by experience, skill and training, insight, and cultural depth to how the word is used today. Now almost anything that is odd, "interesting," and attracts attention is called creative, though it may not be more than a juggling around of a few simple elements. It's not hard to imagine, however, an AI that develops a new esthetic, based on how we see creativity today.
Can AI create a Jackson Pollock? Of course it can:
https://neural.love/ai-art-generator/1ed49d02-e698-6ae8-a7c6-91c598b8026f/abstract-jackson-pollock
And of course it cannot. Even if the images could be hooked up to a 3D printer or some machine that slings paint, thus creating a painted canvas, I'm skeptical an image could ever be made that represents all the idiosyncratic gestures in a Pollock. They also won't be Pollocks because Pollock did not paint them. To understand and value a Pollock, you have to understand the movements that preceded, surrealism, cubism, what Pollock accepted and rejected, other trends, the temper of the time, Pollock's unique—and troubled—identity, still more, to all of which an AI is blind. And while AI can recombine superficial marks, it can never comprehend work that is deeply expressive and utterly incomprehensible. There are no good analyses of a Pollock. Still, expression and the inexpressible matter to us, as does being inexpressibly human. AI, on the other hand, is indifferent to those and is numbingly literal.
Can it design a Zen rock garden? Here are feeble attempts:
https://neural.love/ai-art-generator/1edf7c2d-2c0a-6f5e-842a-5f0fd9e1a256/a-tranquil-japanese-zen-garden-with-raked-sand-large-rocks-and-manicured-trees-peaceful-serene
But of course AI can do better. Whatever it comes up with, though, will be artificial and un-Zen, however close it might get to the real thing.
Again I'm skeptical it can come up with the idiosyncrasies. Also, part of what makes a Zen rock garden is that we know it is a Zen garden inspired by Zen, that it exists at a temple that might be hundreds of years old and has been contemplated all that time, that the sand has been raked periodically by Zen monks after years of Zen discipline. Our appreciation is conditioned by experience and understanding, most by practice and prolonged contemplation. We may reject Zen, or cannot make sense of it, or if we accept it we have to maneuver somehow around the nothing that is everything and that cannot be expressed. In all cases, however, we are left with a landscape that is compelling, that takes us outside ourselves—somewhere—that cannot be created in any other way.
ANI is a tool, no different from modeling or drafting software. It's neither good nor bad. Ultimately, it's all about how people use it that will determine how 'successful' it is. AGI is a different matter, however.
"Generative-AI programs may eventually consume material that was created by other machines—with disastrous consequences."
https://www.theatlantic.com/te...
that is a real problem, TD.
Sundar Pichai was saying the same thing recently, pointing out that incentives for continuing creativity from humans will be needed for AI to keep from devolving into pablum. He implied that AI providers might compensate creators the same way YouTube does, but seemed hedgy about the entire thing. And he never got into data-set bias or any of the ethical issues beyond the financial. There are still quite a few more cans with worms in them to be opened.
i'd comment but what's the point? just use chatgpt and ask how wrong you are about everything.
I did. It said something about your mother not understanding binary.