A new AI-powered design tool for architects can render multiple building design options from a single sketch.
The process of designing a building often starts with a simple, rough sketch. With a few quick lines, an architect can create a silhouette and hint at the shapes of windows, doors, and balconies. But currently, turning this sketch into a detailed facade or a three-dimensional rendering is a time-consuming endeavor. And clients often need to see multiple renderings before settling on a final design.
Researchers at SRI are working on a design toolkit that uses artificial intelligence to rapidly create multiple detailed renderings from a single sketch. The toolkit, called AICorb, can help architects generate different options for a facade and layer a chosen style over a three-dimensional model.
"Right now, architects go through these iterative loops, creating the initial design proposal, getting feedback, and going back to the drawing board," said Eric Yeh, a senior computer scientist in the Artificial Intelligence Center at SRI. "We're aiming to increase the speed at which an architect could ideate and create these proposals."
Yeh and his colleagues have been working with architects and researchers at Obayashi Corporation, one of Japan's largest construction companies, which focuses primarily on office and commercial buildings, to build AICorb and incorporate the technology into the design process. The software is intended to integrate with Hypar, a building design platform, to make it easy for architects to adopt.
At the moment, AICorb has two main functions. The first is to use a rough sketch to generate many different looks for a building's exterior. The second is to take a particular version or style chosen by the architect and convert it to a 3D model.
"It was a real challenge to take a conceptual sketch, which isn't going to have straight lines or a lot of detail, and convert it into something that resembles a real building," Yeh said. "The software has to be able to recognize an architect's intent."
The researchers trained a generative model (a type of machine learning model that finds underlying patterns in what it has seen and uses that knowledge to create new, similar data) with a large catalog of architectural images provided by Obayashi. Now, when presented with a drawing, the model can create a variety of possible versions of what that building could look like. An architect can use those suggestions or adjust their inputs to generate new ones. Once they have a style that they like, they can use Hypar's rendering platform to visualize it in three dimensions.
"We have the generative component that gives you different creative perspectives, and then we have the modeler component that recognizes the structural elements in the image and reflects them onto a 3D model," Yeh said. "Ultimately, the idea is that you'll not just get a render, but you will also get a full 3D structure with materials, cost estimates, and other information."
In addition to expanding AICorb's modeling abilities, the researchers are also interested in using the software to design building interiors. At the moment, the software is exclusively focused on exterior facades, but it could be adapted to generate interior layouts. This could be particularly useful for expanding options for buildings with specific layout requirements, such as hospitals.
The researchers have trialed the AICorb software at an architectural school, using it to help students broaden how they think about design, and are hoping to incorporate it into Obayashi's internal design processes. They are also exploring the possibility of releasing the software as a publicly available plugin for the Hypar platform.
"We're working with our partners to make a technology that will be useful to architects in the early stages of design," Yeh said. "AI can't replace human creativity, but it can take away some of the grind work involved in the process."