AI in Architectural Design Workflows — State of Play
March 2026


3rd Year Architecture Studio — Here(after).Hope Studio — practical / workflow-focused

Introduction

"AI won't design your house. But it will help you think through 200 versions of it before lunch."

This isn't about whether AI is good or bad. It's about becoming fluent with the tools that are changing how conceptual design actually happens — right now, in practice. The goal is to integrate AI into your existing design thinking, not replace it.

Only ~6% of US architects regularly use AI (AIA 2025 study), but 46% have experimented. The profession is still early — which means you can get ahead of it.


How to stay in control when using AI tools for design
Chris Mewburn's Authorial Agency Model — Generative Praxis

Before you open any AI tool, write down what you are trying to achieve. This is your Intent — your spatial idea, your design brief, your qualitative goal. Then use that intent to guide your Prompt. Then Evaluate what the AI gives you against that original intent. Did it get you closer? If yes, continue. If no, stop and change course.

This three-step cycle keeps you in the author's seat. Without it — if you open a tool, generate something, and accept the first plausible output — you have no criteria to evaluate against. The AI's defaults become your design decisions. The tool is in control, not you.

Intent → Prompt → Evaluate. Remember this.

Intent → Prompt → Evaluate cycle diagram — Chris Mewburn Generative Praxis
Chris Mewburn, 2026
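The cycle is simple enough to sketch as a control loop. In the sketch below, `generate` and `evaluate` are hypothetical stand-ins: in practice, `generate` is a call to an AI tool and `evaluate` is your own judgment against the written intent. Only the loop structure is the point.

```python
# A sketch of the Intent -> Prompt -> Evaluate cycle as a control loop.
# `generate` and `evaluate` are hypothetical stand-ins, not real tool APIs.

def design_cycle(intent, generate, evaluate, max_rounds=5, threshold=0.8):
    """Run prompt -> generate -> evaluate until an output serves the intent,
    or stop after max_rounds and change course."""
    prompt = intent                       # the first prompt comes straight from the intent
    history = []
    for _ in range(max_rounds):
        output = generate(prompt)
        score = evaluate(intent, output)  # judge against the ORIGINAL intent, not the output
        history.append((prompt, output, score))
        if score >= threshold:            # closer to the intent: accept and continue from here
            return output, history
        prompt = intent + "; avoid: " + output  # refine the prompt; you stay the author
    return None, history                  # nothing served the intent: stop and rethink
```

Notice what the loop makes explicit: without a written intent there is nothing to score against, and the first plausible output wins by default.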

The Conceptual Design Workflow Chain

"From napkin sketch to compelling image in under an hour" The Conceptual Design Workflow Chain

The workflow below maps onto how you already work in studio — from first sketch through to presentation. Each stage has a recommended tool and a clear reason for using it.

  • Brainstorm — Miro AI, Claude / ChatGPT
  • Sketch — paper → Vizcom, Midjourney
  • Traditional architectural workflows — drawing, modelling, testing, developing
  • AI render — ComfyUI + ControlNet, Gendo / D5 Render, Midjourney
  • Refine — Photoshop Generative Fill, Magnific upscale
  • Present — Runway / LTX Studio, Miro AI
  • Iterate — each cycle sharpens the design
The AI conceptual design workflow — brainstorm through to presentation

Stage 1: Sketch → AI Render (The First Translation)

RECOMMENDED: Vizcom AI — You already have this. Push yourself to use it harder. Best for the moment when you have a rough hand sketch and want to see it as a space, with light and materials, without modelling anything.

  • Tips: Clean line work matters. Cluttered overlapping sketches confuse the AI. One idea per sketch.
  • Renders in <10 seconds for simple sketches, up to 30 seconds for complex ones.
  • Use the style reference feature to maintain consistency across a series of images.
  • vizcom.com

ALT: Gendo AI — Purpose-built for architects. Upload a sketch or rough 3D export, get photorealistic renders with multiple style options (cinematic, watercolour, hand-sketch, impressionist, or custom). Uses 5–6 specialised models trained on architectural datasets.

ALT: ReRender AI — Native plugins for Revit, SketchUp, and Rhino. 20+ architectural styles. Free tier: 3 renders/day. rerenderai.com

Common mistake: Treating AI renders as final images. They're thinking tools, not presentation tools. The value is in rapid iteration — generate 10 options, learn from what surprises you, feed that back into your design thinking.
What the AI render gives you (atmosphere & mood, material character, light quality, spatial impression, multiple options fast) is not the same as what the design still requires (interpretation of client needs, structural logic, spatial program & brief, circulation paths, environmental response, buildability, and an understanding of wider social, cultural, environmental, political, and economic contexts). AN AI RENDER IS NOT A DESIGN.
An AI render is a sketch of a possibility — not a design

Stage 2: Controlled Image Generation (When You Need More Control)

RECOMMENDED: ComfyUI + ControlNet + Flux — The power-user setup. Open-source, node-based, gives you precise control over what the AI preserves from your input (edges, depth, composition) versus what it invents (materials, lighting, atmosphere).

What is ControlNet? A neural network layer that constrains generation using structural information from your input:

  • Canny edge detection — preserves hard architectural edges and forms
  • Line art mode — better for hand sketches
  • Depth maps — maintains spatial relationships from a rough 3D export

The workflow:

  1. Create a rough sketch or export a low-res 3D screenshot
  2. Run it through a ControlNet preprocessor (Canny or Line Art)
  3. Set ControlNet strength to 0.7–0.8
  4. Write a text prompt describing the style, materials, lighting, and atmosphere you want
  5. Generate. Your spatial layout is preserved; the aesthetic quality is enhanced.
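Step 2's preprocessor can be demystified outside ComfyUI. The sketch below is a crude gradient-magnitude edge detector in plain numpy — a stand-in for real Canny (no smoothing, non-maximum suppression, or hysteresis) — showing the kind of hard-edge map that ControlNet receives and preserves.

```python
import numpy as np

def edge_map(gray: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Crude gradient-magnitude edge detector, a simplified stand-in for the
    Canny preprocessor: hard architectural edges become white, flat areas
    stay black. That edge map is what ControlNet is told to preserve."""
    gy, gx = np.gradient(gray.astype(float))   # per-axis intensity gradients
    magnitude = np.hypot(gx, gy)               # edge strength at each pixel
    if magnitude.max() > 0:
        magnitude /= magnitude.max()           # normalise to 0..1
    return (magnitude > threshold).astype(np.uint8) * 255
```

Run it on a rough sketch scan and only the strong linework survives, which is exactly why clean line work matters: clutter becomes spurious edges that the AI then dutifully preserves.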

Reported efficiency: Concept development from 2 weeks to 3 days for complex scenes.

SIMPLER: Krea AI — Web-based interface with ControlNet built in. Free tier: 100 compute units/day. krea.ai

Common mistake: Setting ControlNet strength too high (1.0) — your output looks like a slightly modified version of your input. Too low (0.3) — the AI ignores your composition entirely. The sweet spot is 0.7–0.8.
ControlNet strength, 0.0 to 1.0: near 0.0 the AI ignores your composition; 0.7–0.8 is the sweet spot (layout preserved, style enhanced); at 1.0 the output barely differs from the input.
ControlNet strength — finding the sweet spot between control and creativity

Stage 3: Image Refinement and Enhancement

RECOMMENDED: Adobe Photoshop Generative Fill — You already have Creative Suite. Now multi-model (Firefly + Gemini 2.5 Flash + Flux Kontext simultaneously). Use it to:

  • Extend AI renders (Generative Expand) — turn a building interior into a wider scene with landscape context
  • Replace specific elements — swap materials, change sky conditions, add or remove vegetation
  • Fill gaps — seamlessly patch areas where the AI output wasn't quite right

The new Reference Image mode lets you feed a style reference, so Generative Fill maintains visual consistency with the rest of your project.

For upscaling:

  • Magnific AI — "hallucination" upscaler. Invents plausible detail (vegetation, texture grain, lighting nuance). Best for making concept images feel more real. magnific.ai
  • Krea AI Architecture Upscaler — trained on architectural software exports from Revit/SketchUp/Blender. Best for enhancing renders from 3D tools. krea.ai/architecture
Common mistake: Over-processing. Running an image through too many AI enhancement steps makes it look uncanny. One pass through Generative Fill + one upscale is usually the right amount.

Stage 4: Animation and Movement (Optional but Powerful)

RECOMMENDED: Runway Gen-4.5 — Physics are now realistic: objects have weight and momentum, and collisions behave properly. Use it to turn a static render into a short walkthrough animation.

The quick workflow:

  1. Generate 2–3 keyframe renders from different angles (using Vizcom/Gendo/ComfyUI)
  2. Feed them into Runway as image-to-video inputs
  3. Get a short animated transition between viewpoints
  4. Result: A 5–10 second walkthrough clip, ready for a presentation, produced in under an hour.

Alternatives:

  • Pika 2.5 — fastest generation times, most user-friendly. Good for quick previews. pika.art
  • Kling 2.6 — unique because it generates synchronised audio with spatial video. kling.ai
Common mistake: Trying to make a full architectural walkthrough from AI video. The technology produces 5–15 second clips with limited camera control. Use it for mood and atmosphere, not for communicating a floor plan.

Deep Dive 1 — AI Inside Your Render Engine

"Your renderer got smarter — here's what it can actually do now" AI Inside Your Render Engine

AI features baked into the rendering workflow. The value: you don't leave your software, you don't export anything, the AI just makes your existing pipeline faster and better.

RECOMMENDED: D5 Render — Best Free Option for Students

D5 has the most complete AI feature set of any render engine right now, and crucially, it's free for students (full Pro features, no watermark, apply with school email).

Three AI features worth knowing:

  1. AI Atmosphere Match — Take a photo of real-world lighting, weather, and mood. D5 analyzes the colour, atmosphere, and time of day, then applies it to your scene.
  2. AI Texture Generation (PBR Material Snap) — Photograph a material (brick, timber, concrete, anything). D5 generates a full PBR material set: albedo, normal map, roughness, height map. Upscales to 4K.
  3. AI Agents — Select an area of landscape in your scene, click once, and D5 populates it with up to 5,000 nature assets (20 types). Manual placement used to take hours. This takes seconds.
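The PBR set that Material Snap produces is less magic than it sounds: a normal map, for instance, is conventionally derived from a height map. The sketch below shows that standard derivation in numpy. It is illustrative only, and not D5's actual implementation.

```python
import numpy as np

def height_to_normal(height: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Derive a tangent-space normal map from a height map, one of the maps
    in a PBR set (alongside albedo and roughness). Standard technique,
    illustrative only; D5's Material Snap uses its own trained models."""
    gy, gx = np.gradient(height.astype(float))
    # Normal = normalised (-dh/dx, -dh/dy, 1); strength exaggerates relief.
    n = np.stack([-gx * strength, -gy * strength, np.ones_like(gx)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # Pack the -1..1 normal vectors into the familiar purple-blue RGB image.
    return ((n + 1.0) * 0.5 * 255).astype(np.uint8)
```

A perfectly flat height map yields the uniform (128, 128, 255)-ish blue you see in untextured normal maps; bumps tilt the normals and shift the colour.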

Live-sync plugins (free) for: SketchUp, Revit, Rhino, Blender, Archicad, 3ds Max, Vectorworks, C4D.

  • AI Atmosphere Match needs a good reference photo — poor lighting sources give poor results
  • AI Agents work best on flat or gentle terrain; steep sites need manual placement
  • Users report 50–80% faster project completion with the AI features

Lumion — Free Pro Student License, AI Upscaler

Lumion 2025/2026 has a powerful AI Upscaler: render at half or quarter resolution, then AI upscales to 8K or 16K. Faster render times without sacrificing final quality. Cloud-based upscaler also available in Lumion Cloud (browser).
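The render-small, upscale-big arithmetic is easy to sketch. The baseline below is plain nearest-neighbour repetition in numpy; an AI upscaler makes the same resolution jump but synthesises plausible detail instead of repeating pixels, which is the entire difference.

```python
import numpy as np

def upscale_nearest(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Nearest-neighbour upscale: the naive baseline an AI upscaler improves
    on. Render at half resolution, upscale 2x, and you reach the same pixel
    count as a full render in a fraction of the render time, but this
    baseline only repeats pixels where the AI version invents detail."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)
```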

The Area Placement Tool populates large outdoor spaces with up to 5,000 nature assets in one click.

Free Lumion Pro Student license for 1 full year (renewable while enrolled, all Pro features).

  • Student license · AI upscaler guide
  • The upscaler enhances resolution, not composition — a badly lit render stays badly lit at 8K
  • 4X upscaling is still in beta; 2X is reliable. Works with Revit, SketchUp, Archicad.

Enscape + Veras (Chaos Ecosystem)

Enscape is a real-time renderer (live walkthrough in your viewport). The AI layer comes from Veras, powered by Nano Banana Pro (a diffusion engine trained on architectural data).

How it works: take your Enscape render or a rough screenshot from your BIM model → upload to Veras → write a text prompt → Veras generates a photorealistic version that respects your geometry. A "geometry slider" controls how far the AI may deviate from your model (higher values keep the output closer to your geometry).

  • Works with: Revit, SketchUp, Rhino, Archicad, Vectorworks
  • chaos.com/enscape · chaos.com/veras
  • Use low geometry-respect values (0.3–0.5) for early conceptual exploration; high values (0.8+) when the model is more resolved
  • Veras is a post-render AI enhancement, not a replacement for the render itself

SketchUp Diffusion — Promising but Unreliable

Built into SketchUp. Two sliders matter: Respect Model Geometry and Prompt Influence.

  • 0.0–0.3: Complete hallucination (AI ignores your model)
  • 0.4–0.6: Vague resemblance (room layouts survive, details don't)
  • 0.7–0.9: Sweet spot (geometry mostly preserved, materials explored)
  • 1.0: Maximum constraint, but still inconsistent

Honest assessment: Convenient because it's right inside SketchUp, but users report real problems — the AI misidentifies stairs as ramps, windows as walls, proportions drift between views. Good for quick early massing explorations, not for anything you'd present. Credit-limited (20–40 renders/month).

Corona Renderer — AI Denoising (For Those Using 3ds Max)

Corona 14 (Nov 2025) added AI denoising that reduces render times by 50–70%. Also has an AI Material Generator (photo → PBR material) and an AI Image Enhancer for people and vegetation. Useful if you're working in 3ds Max or Cinema 4D. blog.chaos.com/corona-14

Twinmotion — Real-time Strength, Limited AI

Best real-time viewport for walkthroughs, but AI features in 2026 are limited to post-render stylisation. No core AI rendering yet. Free Community version available. Worth knowing about for VR walkthroughs, not for AI specifically. twinmotion.com

Quick Comparison

Render Engine | Best AI Feature | Free Student Access? | Works With
D5 Render | Atmosphere Match + AI Textures + AI Agents | Yes (full Pro, free EDU) | SketchUp, Revit, Rhino, Blender
Lumion | 8K/16K AI Upscaler | Yes (Pro Student, 1 year) | Revit, SketchUp, Archicad
Enscape + Veras | Geometry-aware AI render enhancement | EDU license (not free) | Revit, SketchUp, Rhino, Archicad
SketchUp Diffusion | In-app AI render | Included with subscription | SketchUp only
Twinmotion | Real-time VR (minimal AI) | Yes (Community free) | Most CAD formats
Corona | AI denoising (50–70% faster) | Via 3ds Max/C4D license | 3ds Max, Cinema 4D

Bottom line: If you don't already have a render engine, start with D5 Render (free, best AI features, live-syncs with everything). If your school provides Lumion, use the free Pro Student license. If you're in the Chaos ecosystem already, Enscape + Veras is the professional path.


Deep Dive 2 — AI Inside Your Existing Software

"You don't need new tools — your tools got new features" AI Inside Your Existing Software

Rhino + Grasshopper

  1. Raven — Conversational AI built into Grasshopper. Describe what you want in plain language and it generates the Grasshopper definition. raven.build · food4Rhino webinar
  2. Ant — AI copilot for Grasshopper. Select part of your definition, open the command panel, tell Ant what to modify. Works directly on your existing canvas. food4Rhino blog post
  3. ComfyUI for Rhino & Grasshopper — Integrates Stable Diffusion directly into Grasshopper. Input a massing model, generate photorealistic skin options. food4Rhino — Free for personal/student use.
  4. LunchBoxML — Machine learning toolkit inside the classic LunchBox plugin. Regression, neural networks, classifiers. 15 years old, most-downloaded Grasshopper plugin, now with ML. provingground.io/lunchbox

Blender

  • Dream Textures — Open-source. Stable Diffusion running inside Blender for AI texture generation. Free, runs locally. Blender Extensions
  • BlendAI — Chat-based AI assistant inside Blender. Generate textures, automate tasks, get reference images. Free tier: 200 credits. Blender Market

SketchUp

SketchUp Diffusion is the main in-app AI feature here — see the honest assessment under Deep Dive 1 above.

Adobe Photoshop

Photoshop's Generative Fill now runs multiple models simultaneously (Firefly + Gemini + Flux). Key features to know: Generative Fill for replacing or adding elements, Generative Expand for extending renders into wider scenes, and Reference Image mode for maintaining visual consistency across your project.

Miro

  • AI Brainstorming — everyone contributes ideas; AI clusters them by theme. Useful for design workshops, site strategy sessions, and community brief development.
  • Sidekicks — task-specific AI experts integrated into your board. Ask for sustainable design strategies, material research, precedent suggestions while you work. miro.com/ai

Deep Dive 3 — AI for 3D: What Actually Works

"Image-to-3D is real now, but manage your expectations" AI for 3D — What Actually Works

The fastest-moving area. Tools are improving weekly, but they're not yet producing CAD-ready geometry. They ARE useful for rapid massing studies, furniture assets, and conceptual models.

  1. Tripo3D — Text or image to 3D. 2-billion parameter model. Has a Sketch-to-3D feature. Produces quad-based topology (game-engine ready). Best for architectural assets, furniture, landscape elements. tripo3d.ai
  2. Meshy AI — Text/image to 3D with Blender, Unity, and Unreal plugins. Largest user community. Good for detailed environmental and furniture assets. meshy.ai
  3. Spline AI — Browser-based text/image to 3D. Collaborative. Can embed AI-generated 3D directly in web presentations. spline.design/ai-generate
  4. Autodesk Wonder 3D — Brand new (March 2026). Text + image to 3D inside Autodesk Flow Studio. Native Autodesk ecosystem. Autodesk blog
Honest assessment: These tools generate impressive-looking meshes, but the geometry is not architecturally clean. You can't measure off them. You can't produce drawings from them. Use them for visual communication and design exploration, not for documentation.
Common mistake: Spending hours trying to clean up AI-generated 3D geometry to make it work in Rhino or Revit. It's faster to model from scratch using the AI output as a visual reference.
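"Not architecturally clean" can be made concrete. Clean, watertight geometry is edge-manifold: every edge is shared by exactly two faces. AI-generated meshes typically fail that test, which is one reason you can't reliably measure or section them. A quick illustrative check over a face list (a hypothetical helper, not tied to any particular tool):

```python
from collections import Counter

def non_manifold_edges(faces):
    """Count edges NOT shared by exactly two faces. Clean, watertight
    geometry returns 0; AI-generated meshes usually don't, which is why
    they can't serve as documentation geometry."""
    edges = Counter()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]  # consecutive vertex pair
            edges[tuple(sorted((a, b)))] += 1           # undirected edge key
    return sum(1 for count in edges.values() if count != 2)
```

A closed tetrahedron scores 0; a lone open triangle scores 3 (every boundary edge belongs to only one face).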

Deep Dive 4 — AI Textures and Materials

"Never UV-unwrap a brick wall again" AI Textures and Materials

  1. Polycam AI Texture Generator — Text prompts or image uploads → seamlessly tileable textures in multiple resolutions. poly.cam/tools/material-generator
  2. Adobe Substance 3D + Firefly — Text-to-texture that automatically generates PBR materials with proper normal maps, roughness, etc. Multiple tileable variations from a single prompt. Adobe Firefly in Substance 3D

Blender-specific: Dream Textures (free, open-source) generates materials directly inside Blender from text prompts.
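"Seamlessly tileable" has a testable meaning: opposite borders of the texture must match, so adjacent copies join without a visible seam. A minimal numpy check, offered as a rough proxy (true seamlessness is about continuity across the border, but matching borders is the usual first test):

```python
import numpy as np

def is_tileable(texture: np.ndarray, tol: float = 1e-6) -> bool:
    """Rough tileability check: a texture tiles seamlessly when its
    left/right and top/bottom borders match, so repeated copies butt
    together without an obvious seam line."""
    horizontal = np.allclose(texture[:, 0], texture[:, -1], atol=tol)
    vertical = np.allclose(texture[0, :], texture[-1, :], atol=tol)
    return horizontal and vertical
```

This is what AI texture generators guarantee for you; a plain photo of a brick wall almost never passes it.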


Deep Dive 5 — What the Best Practitioners Are Doing

"How the people who are good at this actually work" What the Best Practitioners Are Doing

Ismail Seleit (Foster + Partners / "The Diffusion Architect")

Runs the most structured AI-architecture workflow publicly available. His method:

  1. Quick ideation with Flux Schnell (fast model, loose prompts, for mood and spatial orientation)
  2. Image-to-image refinement with Flux Dev + ControlNet (tighter control)
  3. Custom LoRA models trained on specific architectural aesthetics (style consistency)
  4. Integration with his own Form-Finder.com tool

Teaching: "The Diffusion Architect" workshop series (now v3.0: Flux Era). parametric-architecture.com

Tim Fu (Studio Tim Fu / formerly ZHA)

Bridges computational design and AI image generation. Uses ControlNet to riff on uploaded 3D models, photos, or previous renders. Key insight: the AI output feeds BACK into the parametric model, not just forward into presentation. Wallpaper — How to Use AI in Architecture

Hassan Ragab (PAACADEMY)

15+ years across architecture and computational design. Teaches AI as part of a hybrid creative approach — not a render tool, but a design partner. His "AI Conceptual Architecture 4.0" workshop is the most mature pedagogical framework for this. PAACADEMY

Daniel Bolojan (Florida Atlantic University)

Director of the Creative AI Lab. Building Versur.ai — the first "agentic AI platform" built specifically for architects. Allows orchestration of multiple generative models, reasoning agents, and tools in a visual environment. FAU Creative AI Lab

Zaha Hadid Architects

Moved beyond off-the-shelf tools to proprietary AI systems. Key philosophy: AI sits AFTER parametric design and CAD, not instead of them. They report a 50% productivity increase in mid-stage design. Their "Architecture of Possibility" exhibition (Shenzhen, Dec 2025–April 2026) showcases the methodology. ArchDaily — ZHA AI exhibition

Concept Artists from Games and Film

It's worth looking beyond architecture. Professional concept artists in games and film have been integrating AI into their workflows longer than architects, and their methods are directly transferable. Their approach:

  • AI for rapid ideation and mood boards (50% time reduction in early phases)
  • Photobashing + AI compositing for final images
  • KitBash3D for modular 3D environment assembly
  • Human polish is still ~40% of the final work

Key resource: KitBash3D Cargo — free 3D asset manager with modular architecture kits for Blender and Unreal Engine. kitbash3d.com


Workflow for Your Studio Project

"How to use all of this for your studio project" Workflow for "Here(after).Hope" Specifically

For World-Building and Narrative

You've already done initial world-building. AI can deepen it:

  • Use Claude/ChatGPT to stress-test your 2050 scenarios. "What are the second-order consequences of [your premise]? What would daily life look like? What would the architecture need to accommodate?"
  • Future of Life Institute Worldbuilding Contest — useful reference for how to structure speculative futures convincingly. worldbuild.ai

For Early Concept Exploration

  1. Hand sketch ideas for the community's housing — focus on spatial relationships between the 12–16 people and your binding condition
  2. Vizcom → instant renders to test atmosphere, materiality, spatial quality
  3. Gendo → explore multiple architectural styles (what does this look like as brutalist? As timber? As earth-sheltered? As biomimetic?)
  4. Generate 20–30 variations. Print them. Pin them up. Discuss. Curate. Repeat.

For Design Development

  1. SketchUp/Rhino → rough massing model
  2. ControlNet via ComfyUI or Krea → use your massing as a depth/edge input, generate photorealistic explorations
  3. Photoshop Generative Fill → refine, extend, adjust specific elements
  4. Magnific/Krea upscaler → bring to presentation quality

For Presentation

  1. Runway Gen-4.5 → short atmospheric animations
  2. Polycam/Substance 3D → custom material textures for renders and models
  3. Miro AI → organise and present your design narrative
  • Phase 1 — World-build & sketch: Miro AI (brainstorm), Claude / ChatGPT (scenarios), paper sketch → Vizcom, Gendo (style exploration)
  • Phase 2 — Design development: SketchUp / Rhino (massing), ComfyUI + ControlNet, Photoshop Generative Fill, Magnific (upscale)
  • Phase 3 — Presentation: Runway / LTX (animation), Polycam / Substance 3D, Miro AI (narrative), then crit / present / iterate
Here(after).Hope — a phased AI workflow for your studio project

Common Mistakes to Avoid

"Save yourself six weeks of frustration" The Mistakes Everyone Makes

1. Treating AI images as designs. An AI render is not a design. It's a sketch of a possibility. You still need to understand structure, program, circulation, environmental response, and buildability. The AI doesn't know any of that.
2. Prompting too vaguely. "A house in 2050" gives you generic sci-fi. "A low-rise cooperative dwelling for 14 people in coastal south-east Australia, 2050, constructed from cross-laminated mycelium panels with passive ventilation chimneys, photographed in morning light" gives you something you can work with.
3. Not iterating enough. One prompt, one image, move on — that's not a workflow. The value comes from generating 10–20 variations and interrogating what works and why. The AI is a sparring partner, not an oracle.
4. Using AI to avoid design decisions. If you're generating endless variations without choosing any, the AI is procrastination with extra steps. At some point, you commit.
5. Ignoring the hand-off problem. AI generates images. Your project needs plans, sections, models. The hand-off from AI image to architectural documentation is still largely manual. Don't leave that until the last week.
AI generates renders & visualisations, mood & atmosphere, material options, design possibilities. Architecture requires plans & sections, elevations & details, structural logic, buildable geometry. The hand-off between the two is still manual — plan for this from week one.
The hand-off gap — AI images and architectural documentation are different things
6. Forgetting to credit. Document your process. Note what tools you used, what prompts, what iterations. RMIT now asks students to disclose AI tool use in submissions.

Takeaway

The profession is at an inflection point. These tools will be standard practice within 5 years, but the fundamental skills of architecture — spatial thinking, structural logic, environmental awareness, empathy for inhabitants — remain irreplaceable. AI accelerates exploration. It doesn't replace judgment.

For your Here(after).Hope project specifically: use AI to explore the widest possible range of spatial possibilities for your communities, then apply your own critical judgment to curate, develop, and resolve the design. The AI gives you breadth. You supply the depth.


Quick Reference: Tool Recommendations

Workflow Stage | Primary Tool | Backup / Alternative | Cost
Brainstorming | Miro AI | — | Existing access
World-building | Claude / ChatGPT | — | Existing access
Sketch to render | Vizcom AI | Gendo AI, ReRender AI | Free (student)
Concept image generation | Midjourney | ChatGPT & Gemini | Free tier / paid subscription
Controlled generation | ComfyUI + ControlNet + Flux | Krea AI (simpler) | Free (open source)
AI render enhancement | Enscape + Veras, Vizcom AI, Gendo AI, ChatGPT, Gemini | SketchUp Diffusion | EDU license / subscription
AI rendering (free) | D5 Render | Lumion Pro Student | Free EDU license
Image refinement | Photoshop Generative Fill | — | Existing subscription
Upscaling | Magnific AI | Krea Architecture Upscaler | Subscription / free tier
3D from images | Tripo3D | Meshy AI, Spline | Free tier available
Textures & materials | Polycam AI | Substance 3D + Firefly | Free tier / subscription
AI in Grasshopper | Raven / Ant | ComfyUI for GH | Free on food4Rhino
AI in Blender | Dream Textures | BlendAI | Free (open source)
Animation | Runway Gen-4.5 | Pika 2.5 | Free tier available
Image to video clip | Midjourney | Veo 3 in Gemini (via Flow app) | Free tier / paid subscription
AI film studio | LTX Studio | Kling AI, Runway ML | Free tier / paid subscription