
Designing AI Workflows

21 August 2025 • 4 min read
By Dana Iti • AI
AI · Architecture · System Design · Vercel AI · Next.js

It's easy to fall into the trap of over-engineering once you start chaining prompts, caching responses, and handling retries. Before you know it, you've built a Rube Goldberg machine where five AI calls do what one simpler approach could have done.

But the goal isn't complexity. It's reliability.

When I built Jobby, my AI logic had grown quickly: job description parsing, CV rewriting, tone adjustments, summary generation, and PDF composition. Each step worked fine on its own, but they weren’t designed as a system yet. That’s where the trade-off thinking kicked in.

I started treating each AI call like a function in a pipeline, not a black box. Every prompt has an input, output, validation layer, and cache boundary. If any part fails, I know exactly where to look.
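To make that concrete, here’s a rough sketch of what that shape can look like in TypeScript. The names (`PipelineStep`, `runStep`) are illustrative, not lifted from Jobby’s code.

```ts
// One AI call as a typed pipeline step: explicit input, output, validation, and cache boundary.
interface PipelineStep<In, Out> {
  name: string;
  cacheKey: (input: In) => string;        // cache boundary
  call: (input: In) => Promise<unknown>;  // the raw AI call
  validate: (raw: unknown) => Out;        // validation layer: throws if the output is off-schema
}

// Running a step wraps the call and validation, so a failure names the step that broke.
async function runStep<In, Out>(step: PipelineStep<In, Out>, input: In): Promise<Out> {
  try {
    const raw = await step.call(input);
    return step.validate(raw);
  } catch (err) {
    throw new Error(`Step "${step.name}" failed: ${(err as Error).message}`);
  }
}
```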

Prompt chaining

Instead of one giant prompt that tries to do everything, I now break the work into smaller, predictable steps: one for extracting job details, one for analysing tone, one for rewriting sections. Each prompt produces structured JSON that the next step consumes. It’s more predictable and easier to debug.

There’s a small cost in extra API calls, but the benefit is huge: each piece becomes reusable across different contexts. The same tone analysis logic can now be used in other products or experiments.
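As a hedged sketch of what a two-step chain can look like with the Vercel AI SDK’s generateObject and Zod schemas (the schemas and prompts below are simplified examples, not Jobby’s actual ones):

```ts
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Step 1: extract structured job details from the raw description.
const JobDetails = z.object({
  title: z.string(),
  skills: z.array(z.string()),
  seniority: z.string(),
});

// Step 2: analyse tone, consuming the structured output of step 1.
const ToneAnalysis = z.object({
  formality: z.enum(['casual', 'neutral', 'formal']),
  keywords: z.array(z.string()),
});

export async function analyseJob(description: string) {
  const { object: details } = await generateObject({
    model: openai('gpt-4o-mini'),
    schema: JobDetails,
    prompt: `Extract the job title, required skills, and seniority from:\n\n${description}`,
  });

  const { object: tone } = await generateObject({
    model: openai('gpt-4o-mini'),
    schema: ToneAnalysis,
    prompt: `Suggest the tone to use when applying for this role:\n\n${JSON.stringify(details)}`,
  });

  return { details, tone }; // small, typed steps that can be reused independently
}
```

Because each step returns a typed object, the tone-analysis step can be lifted out and reused elsewhere without dragging the job-parsing prompt along with it.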

Caching

I cache everything that doesn’t need to change on every request. For example, if the same job description is analysed twice, the cached result is returned instantly. In Jobby, I use Supabase for storing recent AI responses, keyed by a hash of the prompt and parameters. It’s simple, predictable, and keeps the API bill under control.
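A minimal sketch of that lookup with supabase-js might look like this; the `ai_cache` table and its columns are assumptions for illustration.

```ts
import { createClient } from '@supabase/supabase-js';
import { createHash } from 'node:crypto';

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

// Key the cache by a hash of the prompt plus its parameters.
function cacheKey(prompt: string, params: Record<string, unknown>) {
  return createHash('sha256').update(prompt + JSON.stringify(params)).digest('hex');
}

export async function cachedCall<T>(
  prompt: string,
  params: Record<string, unknown>,
  call: () => Promise<T>,
): Promise<T> {
  const key = cacheKey(prompt, params);

  // Return the stored response instantly if this exact request has been seen before.
  const { data } = await supabase
    .from('ai_cache')
    .select('response')
    .eq('key', key)
    .maybeSingle();
  if (data) return data.response as T;

  // Otherwise call the model and store the result for next time.
  const response = await call();
  await supabase.from('ai_cache').insert({ key, response });
  return response;
}
```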

The trade-off is that cached data can become stale. But I’d rather risk a slightly outdated tone analysis than waste tokens or slow down the UI. For user-facing tools, speed matters more than perfect freshness.

Validation

Every AI response passes through a schema validator before it's accepted. If it doesn't match the expected format, the system retries or falls back to a simpler default. This avoids brittle chains that collapse when a single model output goes off-script. Because AI loves to surprise you with creative interpretations of "return valid JSON."

It also keeps the codebase clean: no random "if includes('error')" logic scattered around.

I use Zod for this because it pairs cleanly with TypeScript, and I can enforce shape consistency between front-end and back-end. That’s been key to keeping everything predictable.
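A small sketch of that validate-retry-fallback wrapper, with an illustrative helper name and defaults:

```ts
import { z } from 'zod';

// Accept a model response only if it matches the expected schema;
// retry a limited number of times, then fall back to a safe default instead of crashing the chain.
export async function validated<T>(
  schema: z.ZodType<T>,
  call: () => Promise<unknown>,
  fallback: T,
  retries = 1,
): Promise<T> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const raw = await call();
    const result = schema.safeParse(raw);
    if (result.success) return result.data;
  }
  return fallback; // a simpler default beats a brittle chain
}
```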

Keeping it simple

The biggest mistake I see in AI projects is over-engineering: people build complex orchestration layers, queuing systems, or plug-ins before they even know what needs to scale. Like building a five-story parking garage before you own a car.

I've gone the other way. I design like a minimalist: every moving part has to justify itself.

If a chain doesn’t need to exist, I cut it. If caching adds more complexity than it saves, I remove it. If a validation layer stops me from shipping, I simplify it.

It’s not about doing less; it’s about doing enough: just enough structure to stay reliable, not so much that it slows you down.

Why it matters

This mindset came directly from learning to articulate trade-offs. Every decision in an AI workflow has one: accuracy vs latency, freshness vs cost, flexibility vs predictability. Understanding those early keeps your systems small, maintainable, and explainable.

It’s the same principle I apply to design and product work now. Move fast, but make your reasons explicit. That’s what keeps your architecture honest.

Next, I’ll write about how these AI patterns are influencing how I design multi-phase user systems, where workflows, memory, and context start blending into one continuous experience.
