Last July, I sat down with a personal chef in Austin for what was supposed to be a Lindy workshop.
Michelle was spending four hours every single week writing menus by hand. She had ten clients, each with their own list of proteins, carbs, allergies, spice preferences, and weekly modifications. Some wanted rice, others wanted melon salad. Some were paleo. Some were dairy-free, but only for certain things.
She’d already built a custom GPT to handle it. She called it “Menu Maestro.”
And it was a disaster.
Every session, it forgot someone’s profile. Every session, the CSV export failed. She kept having to start over, re-explaining the same preferences, getting halfway decent results before the whole thing broke down.
“He’s missing clients,” she said, pulling up the screen. “Again.”
She thought the tool was broken.
It wasn’t. The design was.
The Most Common AI Mistake I See
When most people build their first AI workflow, they do what feels logical: they give it the full picture and expect it to handle everything.
One GPT. One prompt. Ten tasks.
The problem is that language models aren’t multi-taskers. They’re not like a human assistant who can juggle context across fifteen different things and still remember that Janet is lactose intolerant.
A GPT does one job with high consistency. It does five jobs with mediocre results across all of them.
This is what I call the “one agent, one job” rule. It’s the single most important design principle for building AI workflows that actually work.
Michelle’s Menu Maestro was trying to:
- Remember ten client profiles
- Generate a master menu
- Customize each dish per client
- Flag exceptions and modifications
- Export everything to a Google Sheet
That’s five separate jobs. And every additional job increased the chances of it dropping the ball on the others.
What Good Design Looks Like
Once I understood what was happening, the fix was straightforward.
Instead of one GPT doing everything, we broke it apart:
Step one is understanding the client. A dedicated context file per client — stored as a spreadsheet tab — that the GPT reads at the start. Not a “remember this” prompt buried in a conversation. A structured file it can reference reliably.
Step two is generating the base menu. One prompt, one output: here’s the master set of dishes for this week.
Step three is the customization. For each client, take the base and apply their preferences. One client, one output. No trying to do all ten at once.
That’s it. Three focused steps. Chain them in sequence.
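The chain above can be sketched in plain Python. This is a minimal illustration, not Michelle's actual setup: every name here (load_profile, generate_base_menu, customize_for_client, the "avoid" field) is hypothetical, and the GPT call in step two is stubbed with a hard-coded list so the structure is visible.

```python
# Sketch of the three-step chain. Each function has exactly one job,
# and each step's output is the next step's input.

def load_profile(client_id, profiles):
    # Step 1: read a structured profile, not a "remember this" prompt
    # buried in a conversation.
    return profiles[client_id]

def generate_base_menu(weekly_ingredients):
    # Step 2: one prompt, one output -- the master dish list for the week.
    # Stubbed here; in the real workflow this is a single GPT call.
    return ["Grilled salmon", "Cheese lasagna", "Chicken stir-fry"]

def customize_for_client(base_menu, profile):
    # Step 3: apply ONE client's restrictions. One client, one output.
    return [dish for dish in base_menu
            if not any(a in dish.lower() for a in profile["avoid"])]

profiles = {"janet": {"avoid": ["cheese", "cream"], "diet": "dairy-free"}}

base = generate_base_menu(["salmon", "chicken"])
janet_menu = customize_for_client(base, load_profile("janet", profiles))
# The lasagna is dropped for Janet: ["Grilled salmon", "Chicken stir-fry"]
```

Because each function succeeds on its own, you can test step three against one client's profile without running the whole pipeline, which is exactly what made Menu Maestro impossible to debug.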
“I would cry with happiness,” Michelle said when we mapped it out.
We estimated her four-hour weekly process could drop to about thirty minutes once the workflow was running properly.
Why You Should Start With ChatGPT, Not Lindy
Here’s a counterintuitive thing I tell people who want to build AI automations: don’t start with the automation tool.
Start with ChatGPT.
Since the session was supposed to be a Lindy workshop, Michelle asked the obvious question: shouldn’t we be building this in Lindy?
Not yet.
Lindy is excellent for external integrations: connecting to email, calendar, Slack, Google Sheets, triggering workflows automatically. But it’s not the fastest tool for prototyping the actual logic of what you’re building.
ChatGPT lets you iterate the design in minutes. You paste a prompt, see what it produces, adjust, try again. Once the logic works reliably in chat, porting it to Lindy is easy.
This is what I call “start small, iterate.” The smallest useful version of any workflow isn’t the automated version. It’s the working version.
Automation before validation is how you end up building a complicated system that reliably produces the wrong output.
Prove the logic first. Then automate.
A Framework You Can Use Right Now
Before you build your next AI workflow, answer these three questions for each step:
What is the one output this step needs to produce?
Not “manage the whole process.” One output. “Generate a list of ten menu ideas based on these ingredients and these client restrictions.” One thing.
What is the minimum input it needs to do that job?
Client preferences, weekly ingredients, and nothing else. Don’t feed it the entire history of the business. The more you add, the less reliable it gets.
Can this step succeed without knowing what comes next?
If yes, it’s a good candidate for a standalone step. If it needs to “keep track” of what it’s going to do five steps later, it’s doing too much.
Three questions. If each step can answer them clearly, your workflow will be more reliable, easier to debug, and faster to build.
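One way to force yourself to answer the three questions before building anything is to write the answers down as data. The sketch below is an illustrative convention, not an established tool: StepSpec and its fields are names I'm inventing here, and the "minimal input" check is a rough heuristic.

```python
# Each step must declare: its one output (Q1), its minimum inputs (Q2),
# and whether it can succeed without knowing later steps (Q3).

from dataclasses import dataclass

@dataclass
class StepSpec:
    name: str
    output: str          # Q1: the ONE output this step produces
    inputs: list         # Q2: the minimum inputs it needs
    standalone: bool     # Q3: succeeds without knowing what comes next?

    def is_well_designed(self):
        # One output, a short input list, no dependence on future steps.
        return bool(self.output) and len(self.inputs) <= 3 and self.standalone

steps = [
    StepSpec("base_menu", "master dish list for the week",
             ["weekly ingredients"], standalone=True),
    StepSpec("customize", "one client's customized menu",
             ["base menu", "client profile"], standalone=True),
]

assert all(step.is_well_designed() for step in steps)
```

If a step's input list keeps growing, or you can't name its one output, that step is really two or three steps pretending to be one.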
The Lesson That Applies Beyond AI
What I keep seeing in workshops and client sessions is that most people approach AI with a project management mindset. Define the big outcome, build toward it, see if it works.
But AI workflows are more like cooking.
You don’t throw everything in the pot at once and hope it turns out. You have a sequence. You build flavor in stages. One step informs the next.
Michelle is a personal chef. She would never give someone a raw list of ingredients and say “figure it out.” She designs the week’s menu first, then customizes per client, then writes the shopping list.
The AI workflow for her business just needed to mirror how she already worked.
That’s the principle: design your AI workflow the way you’d design the process itself. Break it into steps. Give each step one job. Then chain them together.
The tool isn’t the problem. The design almost always is.
Want to learn how to build AI workflows that actually work for your business? Thanh runs hands-on AI workshops in Austin — one day, 9am to 5pm, personalized by industry.
