Last November, I was mid-workshop. Twenty or so people in the room, all learning how to build AI agents.
And I decided to do something unrehearsed.
I pulled up Lindy on the big screen, hit the voice button, and said: “Hey, check the sentiment around Red Ash — the restaurant here in Austin. If it’s positive, find a time on my calendar and add it as a walk-in.”
One sentence. No other instructions.
Then I turned to the group and said, “Watch this.”
What Happened Next
Lindy went to Perplexity first. It searched for reviews and sentiment about Red Ash. Came back: overwhelmingly positive.
So it kept going.
It went to Google and looked up Red Ash’s opening hours.
Then it opened my Google Calendar and found a free slot that matched those hours.
Then it added a calendar event. Automatically.
The room went quiet. Not the polite quiet of people being respectful — the actual quiet of people whose brains just caught up to what they witnessed.
I said “I’m going to delete this because I actually don’t feel like going right now,” and a few people laughed. But then someone asked: “How many tools did that use?”
Four. It used four separate tools, chained together, from a single sentence. No additional prompts. No clicks.
Why This Isn’t Just a Party Trick
Here’s what I want to be clear about: that demo wasn’t meant to impress. It was meant to show the difference between using AI as a tool and using AI as an agent.
Most people are still at what I call the AI-Assisted stage. They open ChatGPT, ask a question, get an answer, copy it somewhere. That’s fine. That’s genuinely useful. But it’s basically a faster, smarter Google search.
When you move to the agent level, something different happens. The agent doesn’t just answer — it reasons across multiple sources, makes intermediate decisions, and takes action inside real systems. No babysitting required.
The Red Ash example hit four tools in sequence:
- Perplexity (real-time research for sentiment)
- Google (look up hours — actual business data)
- Google Calendar (find availability)
- Google Calendar again (create the event)
Each step required a decision. “Is the sentiment positive enough to continue?” “Do the hours overlap with Thanh’s free time?” “Which slot makes sense?”
A chatbot can’t do that. A chatbot waits for you to tell it what to do next.
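To make the chaining concrete, here’s a minimal sketch of that four-step flow in Python. The tool functions are stand-ins with canned data, not Lindy’s actual API — the point is the shape: each tool call gates the next, and the agent decides at every step whether to keep going.

```python
from datetime import datetime, timedelta

# --- Hypothetical tool stubs (canned data; the real demo hit live services) ---

def search_sentiment(restaurant):
    """Stand-in for the Perplexity research step; returns a 0..1 score."""
    return 0.9  # "overwhelmingly positive"

def lookup_hours(restaurant):
    """Stand-in for the Google lookup; returns (open, close) datetimes."""
    day = datetime(2024, 11, 15)
    return day.replace(hour=17), day.replace(hour=22)

def calendar_busy():
    """Stand-in for reading Google Calendar; returns busy (start, end) pairs."""
    day = datetime(2024, 11, 15)
    return [(day.replace(hour=17), day.replace(hour=19))]

def create_event(title, when):
    """Stand-in for writing the calendar event."""
    return {"title": title, "when": when}

# --- The chain itself: each step gates the next ---

def book_walk_in(restaurant):
    sentiment = search_sentiment(restaurant)          # tool 1: research
    if sentiment < 0.7:
        return f"Sentiment around {restaurant} isn't positive; stopping."

    opens, closes = lookup_hours(restaurant)          # tool 2: business facts
    busy = calendar_busy()                            # tool 3: availability

    slot = opens                                      # find a free one-hour slot
    while slot + timedelta(hours=1) <= closes:
        if all(not (start <= slot < end) for start, end in busy):
            break
        slot += timedelta(hours=1)
    else:
        return "No free slot overlaps the opening hours."

    create_event(f"Walk-in: {restaurant}", slot)      # tool 4: take action
    return f"Booked {restaurant} for {slot:%I:%M %p}."

print(book_walk_in("Red Ash"))
```

In a real agent, an LLM chooses which tool to call next instead of this hard-coded sequence, but the decision points ("is sentiment high enough?", "does a slot overlap the hours?") are exactly where the agent earns its keep.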
The Cost Question
At the end of that session, someone asked how much all of that cost.
I pulled up the Lindy usage dashboard and checked. The entire session — multiple scheduling tasks, calendar rescheduling, the restaurant lookup, all of it — came to about 15 cents.
Think about that against the alternative: emailing an assistant, waiting 30 minutes to an hour, getting the calendar invite, reviewing it, confirming back. That’s a real-world workflow for a lot of people.
15 cents. Done in seconds.
I don’t say this to be dramatic. I say it because people underestimate the economic argument for agents. Once you see the usage cost on something like this, it changes how you think about what’s worth automating.
The Mental Model Shift
Here’s what I find most useful for people just getting into this: stop thinking about AI as a question-answering machine.
Start thinking about it as a digital teammate that can use tools.
You give it an outcome. It figures out the steps. It uses the right tool for each step. It reports back when it’s done.
That’s the agent model. And when you internalize that model, you start seeing automation opportunities everywhere. Not just “I should ask ChatGPT about this” — but “I could build a workflow that handles this every time it comes up.”
Multi-tool chaining is what makes agents genuinely different: the ability to use Perplexity for research, Google for facts, a calendar for scheduling, an email system for communication, and to chain all of those together from a single instruction.
That’s not a better chatbot. That’s a different category.
Where to Start
If you’re still mostly using AI for one-off questions, that’s a great starting point. But the next level is learning to think in workflows.
Pick one task you do every week that involves at least two steps. Map out what a human would do to complete it. Then ask whether an agent could handle those steps.
If the steps involve information lookup, any kind of scheduling, or sending a message to someone — there’s a good chance an agent can take it off your plate.
The Red Ash demo was unrehearsed because I wanted people to see how natural it is once you’ve built that muscle. One sentence. Four tools. Done.
That’s the goal.
Thanh runs hands-on AI workshops in Austin for entrepreneurs and business owners. If you want to see what agentic workflows look like for your specific situation, check out the next available workshop.
