Many moons ago, when I moved to London, my first job was as a “BIM coordinator” at an architecture studio in Clerkenwell. Even though my title was BIM Coordinator, what they really wanted was a Revit drafter, and when I started to bring in ideas I had previously implemented at a studio in Copenhagen, I was met with rejection. My “crazy idea” was the creation of a set of standard families and CAD/BIM components to help them be more efficient in drawing production. The studio had a think tank at the time that had spent a year analysing their residential and educational projects, and most of them had similarities: they included the same typologies of doors, walls, windows, etc. Back then there were not many manufacturer-ready components available, so I proposed to the partners and leadership that I dedicate some of my time to creating them. I was met with: “But we get paid for drawings; if we completely automate that, what are we getting paid for? Also, we cannot admit that most of our design is the same…” Automation and efficiency from BIM software were flatly rejected at the time, and the rumour that BIM and tech were going to replace architects started to spread. For some months, while at the pub, I was questioned about replacing people's jobs and about line weights (I kid you not).
Today, most architectural studios, including the one mentioned above, have benefited from and do practise some version of this setup, with some automation achieved by re-using the most common components. Many who saw their jobs threatened by BIM are not scared anymore, and life continues. But, funnily enough, in the last few months I have started to see the advance of some technologies that made me think: wait a second, can this replace some architectural/design tasks in the future?
The technology I am talking about is undoubtedly AI (I am not going to enter the artificial intelligence vs. augmented intelligence debate in this article, but bear in mind that I lean more towards the second term, and I am not in any way a Luddite), and to be more specific, Stable Diffusion. If you are not familiar with it, Stable Diffusion is a deep learning model that converts text to images. It was initially developed by the CompVis group at LMU Munich and later released in collaboration between Stability AI, CompVis LMU and Runway, with support from other organizations. Even though I started out testing Midjourney, Stable Diffusion is today publicly available, and you can run it on consumer hardware instead of relying on a cloud service. A general piece of advice: when trying any AI engine that is not purely text-to-image and works slightly differently, i.e. by having you upload a picture to then generate many others, make sure you read its privacy policies. Stable Diffusion was trained on pairs of images and text captions, a data set of some 5 billion image-text pairs created by LAION and financed by Stability AI. If you want to know more about its limitations, I recommend going beyond the Wikipedia article to the Stability AI website.
So, going back to early summer: I saw Midjourney and it blew my mind. I got access to their server in the first month and started typing away text that was converted into pictures. I have horrible, recurrent nightmares, the classic one being a tsunami (here my love for surfing and the Atlantic Ocean meet in darkness), so I started using Midjourney to create multiple variations and images. By the end of June I had created many images for fun, but the world of stable diffusion was getting bigger and greater. Since it is, I believe, open source, or at least there are many engines available to researchers and experts alike, many different tools have appeared in the last months, and my “wait a second” moment happened when I saw interiorai.com (http://interiorai.com), a tool created by one of the loudest and most followed voices in this technology, Levelsio. This tool lets you upload any image of an interior and generates various interior design options in many styles, aka “dear Instagram decorator/interior designer, you are out of work from today”. Because it does what many creatives do in the process of creation: it searches and learns from many images, and it gives basic style options. I am not reducing architecture to that, don't misunderstand me, but I do believe that many smaller projects or iterations can be somewhat trimmed down by this tech.
Also, this is just the first stage of stable diffusion, and I believe we will see it evolve in the future; there is, for example, already a paper published on 3D generation with stable diffusion, and in years to come this might become what my old colleagues feared. Maybe? What if that studio's think tank today took all their projects and let the engine learn their styles, in such a way that the next buildings could simply be generated? Is this AI engine a new colleague? Another brain? In an era where housing is in crisis and we accept that industrialized construction, with a very efficient design and setup, somewhat repetitive in its main guidelines and constrained by manufacturing, needs to come into place, will this be another option? Another brain?
And that is why this article has its title. I have always seen technology with excitement and with the idea that it will help me get where I am going, and that I will evolve with it. I accept that a task I do today might be done by a technological tool tomorrow, and I always thought that maybe I wouldn't have to learn how to drive. I don't see this as “I will be out of a job” but more as “another thing that I can learn”. Over the last years I have had many ideas for products based on AI image recognition and OCR, so I am very excited that this AI technology is here and I can learn another thing.
Below is my recurring nightmare, sort of, created by Midjourney from a text prompt.
Useful articles and links:
https://arxiv.org/pdf/2211.09869.pdf