
People Can Build Fast With AI, But Do They Understand What They Ship?

15 November 2025 • 8 min read • By Dana Iti

Tags: Systems Thinking, AI, Future, Engineering, Perspective

The mental model still lives in your head. The tools just help you keep it up to date.

More people are building stuff now, even folks who never touched code before. AI basically handed everyone the keys and no one's blinking twice. You can throw something together in minutes and it works well enough to ship. But do we actually understand what we're putting out there anymore? And in a decade, will anyone even pretend to care?

Probably not. We'll all be too busy building the next thing we don't understand.

Watching my parents build

Both my parents are retired now, but they’ve always been the type to poke at things and figure them out. Mum’s currently extending her smart home with AI-generated Flask scripts on a Raspberry Pi.

I was watching her debug a tiny issue and asked if she wanted to walk through the structure together. Not because she’d missed anything, but because it came from ChatGPT. It’s easy to inherit the model’s assumptions without realising.

When you’ve seen one slow render trigger a long line of re-renders, effects, retries, and extra API calls that lock the whole thing up, you stop trusting anything until you’ve walked through it in your head. AI can toss out solutions fast, but your brain still has to build the map or you end up guessing.
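Here's a tiny sketch of what that looks like in practice, with made-up component and endpoint names. The effect below depends on an object that gets a fresh identity on every render, so each render re-runs the effect, each fetch triggers another render, and round it goes:

```tsx
import { useEffect, useState } from "react";

// Hypothetical panel: one innocent-looking dependency turns a single
// slow render into a loop of renders, effects, and extra API calls.
function OrdersPanel({ userId }: { userId: string }) {
  const [orders, setOrders] = useState<unknown[]>([]);

  // Rebuilt on every render, so its identity changes on every render.
  const filters = { userId, status: "open" };

  useEffect(() => {
    // The effect sees a "new" filters object each time, re-runs,
    // fetches, and setOrders triggers the next render. Repeat forever.
    fetch(`/api/orders?user=${filters.userId}&status=${filters.status}`)
      .then((res) => res.json())
      .then(setOrders);
  }, [filters]); // fix: depend on the stable primitive instead, [userId]

  return <p>{orders.length} open orders</p>;
}
```

The fix is a one-liner once you've got the render model in your head. Without it, all you see is the network tab melting.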

People are admitting this out loud

I read a post on Reddit from a guy who built a full app with AI. It's making good money, but he's terrified to touch the code. He doesn't understand any of it. I'm not laughing at him. This is exactly what this workflow does.

You build first, hope it behaves, and pray nothing breaks. There's still a bit too much blind faith involved. Or maybe I should call it what it is. Hope. Hope that the model guessed right.

We've reached peak absurdity. Building profitable businesses on code you're afraid to look at. It's like winning a race in a car you don't know how to drive. Sure, you crossed the finish line, but good luck parking it.

The gap AI’s creating

I’m not anti-AI. It’ll keep improving. It’s great for scaffolding and getting you moving. But it doesn’t hand you the mental model. And that’s the trap with vibe coding. You can sit there in flow, firing off prompts, shipping features at a stupid pace, and it all feels fine. But without the map, you’re basically flying blind. The code runs, sure, but you’ve got no real sense of how it behaves once the system’s under proper pressure.

Pressure always exposes the areas you didn't think about. The "harmless" refactor that blew up at scale. The race condition that only appears on one device under traffic. These aren't things you catch while clicking around locally. They show up when the system is already wobbling.
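As a concrete example, here's a simplified sketch of that second one, with invented names. Two search responses come back out of order and the stale one wins. On localhost the responses arrive in order, so you'll never reproduce it by clicking around:

```ts
// Last-write-wins race: whichever response lands last overwrites the
// results, even if it came from the older, staler request.
let results: string[] = [];

async function onSearchInput(query: string) {
  const res = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
  results = await res.json(); // a slow old response can clobber a fresh one
}

// One common fix: tag each request and only let the latest one write.
let latestRequest = 0;

async function onSearchInputFixed(query: string) {
  const requestId = ++latestRequest;
  const res = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
  const data = await res.json();
  if (requestId === latestRequest) {
    results = data; // ignore anything that was overtaken in flight
  }
}
```

AbortController is the tidier version of the same idea: cancel the overtaken request instead of ignoring it.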

This isn't about being smart or not. It's about what you can see. And half the time the structure is buried under layers of generated code. Even experienced devs fall into this gap because the model builds faster than anyone can reason.

So now we've got code that writes itself faster than we can understand it. What could possibly go wrong? Don't answer that. The bugs will answer for us.

We’re already seeing the cracks

I’ve been hearing more stories lately of teams getting stuck on bugs for months. Not because the developers aren’t good, but because the system was built faster than anyone could understand it. When the issue finally shows up, no one knows where to look because half the structure came from an agent and the rest came from rushed decisions.

Solo builders run into this a lot too. They can ship an entire app with AI, but when something odd happens deep in the flow, the model can't reason its way back through the assumptions it made. You're left holding a system you didn't fully shape, trying to debug something you never designed. It's not a failure of skill; it's just what happens when the build moves faster than your understanding.

Nothing quite like debugging code you didn't write based on assumptions you didn't make for a system you don't understand. But hey, it shipped fast, and that's what matters, right?

Systems thinking isn’t the whole thing anymore

Systems thinking is still useful, but it's only one lens. The newer conversations lean into something closer to complexity thinking. Real systems don't sit still. They move around, grow odd edges, and behave differently once enough pressure lands on them. You can map everything today and the whole picture will already be different next week.

So the skill now isn't just "see the system". It's "keep updating the picture as it moves".

Which is just a fancy way of saying "try to keep up with the mess you're creating in real time."

AI’s getting better at showing the shape of things

The tools aren’t just generating code anymore. They can sketch out flows, draw diagrams, map dependencies, and explain how data moves. Some agents can even validate behaviour, catch fragile patterns, or point out places where the reasoning looks thin.

They’re not perfect, but they’re narrowing the gap between what gets built and what we can actually understand.

And this is the important bit. The mental model still lives in your head. The tools just help you keep that model up to date while everything else speeds up.

AI can draw you all the diagrams you want, but it can't understand the system for you. That part's still on you. Sorry.

Tools that help you zoom out

You don’t need to whiteboard your whole app to understand it. There are a few AI tools now that help you see the wider structure instead of staring at one file and hoping for the best.

  • Cursor
    Cursor runs proper agent workflows. It reads whole files, explores the project tree, keeps track of the reasoning behind edits, and lets you adjust how much freedom the agent gets. Great for seeing how a change sits inside the whole system.

  • Claude Subagents
    Claude Code lets you create subagents with their own prompts, tools, and context windows. One agent can write code. Another can focus purely on structure, assumptions, weak points, and long-tail effects. Good way to split your thinking without losing the big picture.

  • Claude for high-level diagrams
    Claude can generate flow maps, sequence diagrams, state sketches, and rough architecture layouts straight from your repo. Handy when the system feels too big to hold in your head.

  • Sourcegraph Cody
    Great for tracing behaviour across large codebases and answering “where does this actually go” questions without digging through everything manually.

  • CodeSee maps
    Helpful for visualising relationships when the repo starts feeling larger than your working memory.

  • AI verification and safety checks
    Modern agents can simulate weird timing, broken states, impatient users, slow networks, or risky flows. They’re surprisingly good at catching things early (there’s a rough sketch of what that looks like a few lines down).

These tools don't replace your thinking. They just surface the structure so your thinking stays accurate.
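To give a feel for what "simulate slow networks" actually means, here's a hand-rolled sketch, invented names and all, not any particular tool's API. Inject latency in front of a fetch-shaped function and check the caller times out instead of hanging:

```ts
// A fetch-shaped function we can wrap with failure conditions.
type Fetcher = (url: string) => Promise<string>;

// Add artificial latency, standing in for a congested network.
function withLatency(fetcher: Fetcher, delayMs: number): Fetcher {
  return (url) =>
    new Promise((resolve, reject) =>
      setTimeout(() => fetcher(url).then(resolve, reject), delayMs)
    );
}

// Enforce a time budget: reject if the fetch takes too long.
function withTimeout(fetcher: Fetcher, limitMs: number): Fetcher {
  return (url) =>
    Promise.race([
      fetcher(url),
      new Promise<string>((_, reject) =>
        setTimeout(() => reject(new Error("timed out")), limitMs)
      ),
    ]);
}

// Fake backend that answers instantly; the wrapper supplies the pain.
const fakeApi: Fetcher = async () => "ok";

// 3s of simulated latency against a 1s budget should fail fast,
// not hang. Verify it here, before real traffic verifies it for you.
withTimeout(withLatency(fakeApi, 3000), 1000)("/api/orders")
  .then((body) => console.log("unexpected success:", body))
  .catch((err) => console.log("caught as expected:", err.message));
```

The agents run fancier versions of this, but the idea is the same: make the nasty conditions cheap to reproduce before production does it for you.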

Though let's be honest, half the time we use these tools to avoid thinking about the structure until something breaks. But at least now we have fancy visualisations to stare at while we're debugging.

Why this keeps coming up

AI speeds up the building part. That's fine. But we still need to understand what's going on under the hood. People will ship more and more, but the failures are going to get harder to spot. Things won't break in clean, obvious ways. They'll collapse sideways for reasons no one predicted because no one understood how the pieces were holding each other up.

The future of software development: building faster, understanding less, and being genuinely surprised when things fall apart. What a time to be alive.

If you want to learn more about systems thinking, these are good starters.

  • Thinking in Systems: A Primer – Donella H. Meadows
  • Complexity: A Very Short Introduction – John H. Holland
  • Designing Data-Intensive Applications – Martin Kleppmann
  • Fundamentals of Software Architecture – Mark Richards and Neal Ford

And a few pointers.

  • If the problem feels messy, zoom out. You’re probably too close.
  • Use guardrails. AI still needs direction or it’ll wander off.
  • Fix the thing that actually changes the outcome. There’s always one area that matters more.
  • Systems don’t move in straight lines. They speed up, slow down, and jump around.
  • Focus on what moves the whole thing forward. Everything else is noise.

We're already in a world where anyone can build. The real value now is understanding how the whole thing fits together. If you can see the system, you can build whatever you want. AI just made the gap between building and understanding impossible to ignore.

This isn't doom and gloom. It's just worth paying attention to how the system behaves so we're not guessing later.

Or we could keep building things we don't understand and hoping for the best. That's worked out great so far. Just ask anyone who's ever been texted at 3am because something they shipped six months ago decided to implode for reasons they can't explain.

Ultimately, seeing the system means noticing how you're thinking while you work. That's basically meta-level thinking, or thinking about thinking. But that's a whole other post lol.
