From Renders to Data Layers: How AI Is Reshaping Architecture’s Visualisation Stack

Roderick Bates, Head of Product Operations at Chaos, on how AI is compressing multi-stage design workflows, where the company drew the line on automation, and why visualisation platforms are quietly becoming architecture's data backbone.


Ten years ago, “real‑time rendering” was something architecture studios rolled out only for big pitches and flagship projects. Today, if your visualisation stack can’t update a scene on the fly while a client asks for a new façade or a flipped floor plan, it’s already behind. What has quietly changed in the background is not just GPU horsepower, but an invisible layer of AI that denoises, upscales, and tweaks scenes fast enough to make high‑fidelity, real‑time visualisation feel mundane.

Chaos, the company behind tools like V‑Ray, Enscape, and the AI‑powered Veras, sits right at that inflection point. Instead of chasing viral prompts, its teams are using AI to shave away the friction between sketch, BIM model, rendered image, and client approval — compressing what used to be multi‑day, multi‑tool workflows into a handful of targeted actions. Material libraries can be generated from simple inputs, resolution can be boosted without re‑rendering, and even the “entourage” in an image can be surgically edited to better match the people a building is actually for, all from within the same ecosystem.

But the more interesting story for enterprises is what happens behind the pixels. As firms wrestle with data governance, IP ownership, and AI usage clauses in contracts, Chaos is also evolving into a collaboration and data layer: streaming 3D twins from BIM, logging comments and mark‑ups in the cloud, and turning approvals into a durable digital record of design decisions rather than a scattered trail of PDFs and screenshots. That shift raises a bigger question: when visualisation platforms become AI‑infused, workflow‑aware, and governance‑conscious, are they still “just” front‑ends — or the emerging backbone of how the built environment gets designed?

To unpack that transition — from wow‑factor renders to an AI‑native infrastructure for practice — AI & Data Insider spoke with Roderick Bates, Head of Product Operations at Chaos, about where AI genuinely removes bottlenecks, where they’ve chosen not to use it, and how visualisation is quietly becoming one of architecture’s most important data surfaces.

Real‑time rendering seems to have gone from “advanced” to “table stakes” in a few years. What role has AI played in making real‑time, high‑fidelity visualisation viable in everyday workflows? 

Given the newfound popularity of online AI image generators like Stable Diffusion, everyone with an internet connection can generate images of buildings in seconds. However, AI image generators have significant limitations. While you may be able to prompt your way to an image of a generic building, an AI image generator won't show your exact design, and it cannot simplify architectural and design workflows, calculate building energy performance, or support a targeted redesign based on client feedback. With advanced platforms like Chaos, you can do all of the above, leveraging the latest AI technology on both a hardware and software level.


When you look across Chaos’s customers, what are the biggest workflow bottlenecks that AI is helping to remove right now?

An architect’s design workflow is made up of sequential layers. Sketches turn to models, models convert to visuals, and visuals change to show design changes, with each phase revisited as designs are iterated and refined. Our research shows that the best way to increase productivity is to reduce the number of steps and the friction in moving between them. By compressing multi-stage workflows into fewer actions, we remove a massive workflow bottleneck for architects and designers.

For example, rather than spending hours building the tens or even hundreds of digital materials required to accurately visualise a project, a designer can forgo the traditional multi-step, multi-software workflow and instead generate and iterate materials directly from image and text-based inputs.

When it comes to presenting and sharing, real-time rendering can produce visuals that look good but may lack the resolution required for large formats or detailed printing. Our tools increase the resolution without moving to a more powerful rendering engine, or even taking the time to re-render at a higher resolution.
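The pipeline shape here can be illustrated with a toy upscaler: take the finished render and produce a larger image without ever re-rendering. The bilinear interpolation below is a stand-in for the learned super-resolution model a production tool would actually use; none of this reflects the Chaos implementation.

```python
def upscale(image, factor):
    """Toy bilinear upscale of a grayscale image (a list of rows of floats).

    A real AI upscaler replaces this interpolation with a learned
    super-resolution model, but the workflow is the same: operate on
    the finished render, never go back to the rendering engine.
    """
    h, w = len(image), len(image[0])
    out_h, out_w = h * factor, w * factor
    result = []
    for y in range(out_h):
        # Map the output pixel back into source-image coordinates.
        sy = min(y / factor, h - 1)
        y0, y1 = int(sy), min(int(sy) + 1, h - 1)
        fy = sy - y0
        row = []
        for x in range(out_w):
            sx = min(x / factor, w - 1)
            x0, x1 = int(sx), min(int(sx) + 1, w - 1)
            fx = sx - x0
            # Blend the four neighbouring source pixels.
            top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
            bot = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        result.append(row)
    return result
```

A 2×2 render passed through `upscale(render, 2)` comes back as a 4×4 image; swapping the interpolation for a neural model changes the quality, not the shape of the step.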

In situations where the image is almost perfect, but the entourage in your render doesn’t align with the intended building occupants, the Chaos Enhancer supports highly targeted AI editing workflows. This tool reads the metadata from the render, automatically recognising the people. The user can then select a person in the render and implement specific edits, changing age, ethnicity, even facial expressions. This saves the time of reconstructing a scene just to edit assets, making AI an effective tool for delivering the outcome users want in a significantly expedited workflow.
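The mechanics of metadata-driven editing can be sketched as follows. The sidecar format, field names, and function below are entirely hypothetical (the actual Enhancer format is not public); the point is that once a render carries structured metadata about its contents, an edit can target one region instead of reconstructing the scene.

```python
import json

# Hypothetical sidecar metadata a renderer might emit alongside an image.
# This is illustrative only, not the Chaos Enhancer's real format.
SIDECAR = """
{
  "image": "lobby_final.png",
  "people": [
    {"id": 1, "bbox": [120, 340, 210, 520], "apparent_age": "30s"},
    {"id": 2, "bbox": [640, 300, 720, 510], "apparent_age": "60s"}
  ]
}
"""

def build_edit_request(sidecar_json, person_id, **changes):
    """Select one detected person from render metadata and attach edits.

    Because the metadata localises each person, only the selected
    region needs regenerating; the rest of the render is untouched.
    """
    meta = json.loads(sidecar_json)
    for person in meta["people"]:
        if person["id"] == person_id:
            return {
                "image": meta["image"],
                "region": person["bbox"],  # only this area is edited
                "edits": changes,
            }
    raise KeyError(f"no person with id {person_id} in metadata")
```

For example, `build_edit_request(SIDECAR, 2, apparent_age="20s", expression="smiling")` yields a request scoped to one bounding box, which is what makes the "surgical" edit cheap relative to a re-render.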

“We’re not chasing AI for the sake of AI. Every tool has to remove real friction without undermining the designer’s intent.”

Chaos recently unveiled its AI roadmap and “Chaos AI Lab”. What problems did you decide AI should solve first – and which problems did you deliberately not touch with AI?

Chaos is deliberately targeting workflows where we know our users are experiencing significant friction, and where the quality of the content generated via AI aligns with our high standards. As a company with a reputation built on over 25 years of experience, we don’t believe in AI for the sake of AI. The tools we’ve developed are solving problems that reflect what we know are the most time-consuming for our users, doing so in a way that accelerates workflows without undermining creative agency.

We are deliberately avoiding AI tools that subvert the designer’s intent, or are long on flash but low on substance. We want all of our AI tools to be class-leading and fully capable of working within product workflows.

There’s a lot of talk about digital twins and generative AI for the built environment. Where does a visualisation platform like Chaos sit in that stack – are you just the “front‑end”, or do you see yourselves as a data platform in your own right?

A digital twin is only useful if it can be understood, and Chaos is already functioning as a digital twin during design. One tool, for example, pulls data from the BIM (Building Information Modelling) application, providing a visually legible and highly accessible view from which to share, interrogate, and collaborate on a design. 


We find our customers are using our tools more and more for collaboration, which we are taking to the next level with our Collaboration Cloud. In this platform, designs can be shared, full 3D models streamed, Pano tours viewed in VR, and design options evaluated. Successful collaboration requires a robust data layer, and commenting, markup, and data tagging, such as for product info, are all part of this integrated collaboration service. This moves Chaos firmly into the data platform space, particularly with the recently released approval workflow, which allows Chaos customers to share content for the explicit purpose of design approval, creating a digital record of design decisions throughout a project.
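The "durable record of design decisions" described above amounts to an append-only event log keyed by asset. The data model below is a minimal illustration of that idea; the class and field names are invented for this sketch and do not describe the Collaboration Cloud's actual schema.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DecisionEvent:
    """One immutable entry in a project's design-decision record."""
    asset: str   # e.g. "facade_option_B.glb" (illustrative name)
    author: str
    kind: str    # "comment", "markup", "approval", or "rejection"
    note: str = ""

@dataclass
class DecisionLog:
    """Append-only log: the durable record replacing scattered PDFs."""
    events: list = field(default_factory=list)

    def record(self, asset, author, kind, note=""):
        # Events are only ever appended, never edited or deleted,
        # so the log preserves the full history of a decision.
        self.events.append(DecisionEvent(asset, author, kind, note))

    def is_approved(self, asset):
        # The most recent approval-or-rejection event for the asset wins.
        for e in reversed(self.events):
            if e.asset == asset and e.kind in ("approval", "rejection"):
                return e.kind == "approval"
        return False

    def history(self, asset):
        return [e for e in self.events if e.asset == asset]
```

The design choice worth noting is the append-only constraint: because nothing is overwritten, `history()` reconstructs how a decision was reached, not just its final state, which is exactly what a scattered trail of PDFs and screenshots cannot do.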

“We are deliberately avoiding AI tools that subvert the designer’s intent, or are long on flash but low on substance.”

Your surveys show that firms are enthusiastic about AI and real‑time visualisation, but also struggling with technology cost and implementation. Where are you seeing adoption stalls, and what unblocks it?

We’ve found in our research that data governance concerns can slow down the adoption of AI-based technologies. Companies are concerned about the security of their data and whether their intellectual property is sufficiently protected. To unblock this adoption stall, we’ve seen strategies emerge on two levels.

The first is to only use AI tools that have responsible AI policies that are clear and explicit. Chaos takes a hard stance in this regard, with an AI policy that is aligned with the best interests of the customer. For example, anonymous rendering and usage data is only collected for Veras and Glyph if you choose to share it. You can disable this during setup or globally via our IT configuration guide. Importantly, Chaos does not claim any ownership of your outputs.

Second, we are starting to see our customers offer potential clients both AI and non-AI contract options. This lets the client choose a framework for technical workflows that meets their legal and ethical standards, with the fee structure varying to reflect the efficiency gains when AI is used, or the losses when the client opts out of AI on their project.

At Chaos, we ensure we are completely transparent with what datasets our AI tools are trained on, security measures taken for data transferred or stored, and how long customer data will remain in our systems. We fully honour the trust our clients place in us with their craft and ensure that we protect intellectual property according to the highest standards.


Anushka Pandit
Anushka is a Principal Correspondent at AI and Data Insider, with a knack for studying what's shaping the world and presenting it compellingly to her audience. She combines a background in Computer Science with expertise in media communications to shape contemporary tech journalism.
