Last November I was running a Two Hour Workday session on AI agents. We had a group working on their first email drafting agent in Lindy.
One of the participants raised his hand about halfway through. He had built the whole thing. Connected his inbox. Set up the trigger. The agent was working, drafting replies. But every draft was different. Sometimes short and punchy. Sometimes three paragraphs long. No consistency.
He was frustrated. He asked me what was wrong with the prompt.
I looked at it. The prompt was fine. The instructions were clear. But there was one thing missing.
There was no example.
The Missing Piece Nobody Talks About
When you write an agent prompt, you are basically giving someone a job description. You tell them what you want, what to avoid, and how to handle edge cases.
But imagine hiring a new assistant and handing them a job description with no examples of good work. They would read every instruction carefully… and still produce wildly different results, because they are interpreting everything based on their own defaults.
AI agents do the same thing.
Without a reference point, the model fills in the gaps with its own judgment. Sometimes it guesses right. Sometimes it does not. And there is no way to know in advance which one you will get.
I call what fixes this the sample of success — a real example of the output you actually want, added to the bottom of the prompt.
What I Added (And What Happened)
After the participant showed me his prompt, I had him scroll to the bottom and add one email.
Not a hypothetical email. A real reply he had written that he was happy with. Short, direct, around 5 sentences. The kind of reply he would want his agent to write.
He ran the agent again.
The next five drafts? All came back in the 4-6 sentence range. Same tone. Same structure. Totally consistent.
The agent was not smarter. The model did not change. He just gave it something to aim for.
The difference between “write a professional email reply” and “write a professional email reply that looks like this…” is enormous. The second instruction does so much more work, even though it feels like a small addition.
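To make the contrast concrete, here is a rough sketch, shown as Python strings just so the before and after are easy to compare. The prompt wording and the example email are invented for illustration; they are not his actual prompt or reply. The only move that matters is appending a real example to the bottom.

```python
# Sketch only: the prompt wording and the example email below are invented for illustration.
base_prompt = """You are my email assistant. Draft a reply to the incoming email.
Keep the tone professional and friendly. Do not commit me to anything without asking me first."""

# A real reply the user was happy with, pasted in verbatim as the sample of success.
sample_of_success = """
Example output:

Hi Dana,

Thanks for sending the proposal over. The scope looks right, but the timeline is tighter than I'd like.
Could we push the kickoff to the week of the 14th? If that works, send the updated contract
and I'll sign it this week.

Best,
Alex"""

# "Write a professional email reply" vs. "write one that looks like this":
prompt_with_sample = base_prompt + "\n" + sample_of_success
print(prompt_with_sample)
```

In Lindy or any other agent builder, the equivalent is simply pasting that example under your instructions. No code required.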
This Applies to Every Kind of Agent
The sample of success is not just an email thing. It works for any agent where output quality or format matters.
Content agents: If you are using an agent to draft LinkedIn posts or newsletters, add one post or newsletter you have already written and liked. The agent will start mirroring that structure, that length, that tone.
Meeting summary agents: Add an example of a meeting summary in the exact format you want. Two sentences of context, then bullet points? Show it that. Three-paragraph narrative? Show it that. There is a sketch of this one below.
Research agents: If you want a research brief in a specific format, paste in a good one. The agent will use it as a template without you having to describe the template in words.
The pattern is always the same: the clearer your example, the more consistent the output.
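Here is the meeting-summary version as a sketch. The format shown (two sentences of context, then bullets) and every name and number in it are made up; swap in a summary you actually liked.

```python
# Sketch: a meeting-summary agent prompt where the desired format is demonstrated, not described.
# All names, numbers, and dates below are invented placeholders.
summary_prompt = """Summarize the meeting transcript I give you.

Example output:

The growth team met to review Q3 acquisition numbers and pick next quarter's experiments.
Most of the discussion was about the drop in trial-to-paid conversion.

- Trial-to-paid conversion fell from 9% to 7%; Priya audits the onboarding emails by Friday.
- The referral program launch moves to October 1.
- Next check-in: same time next week, 30 minutes."""
print(summary_prompt)
```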
The Two-Part Prompt Formula
At my workshops, I teach something I call the OCE formula for agent prompts: Outcome, Context, Expectations. Most people get the Outcome right. They state what they want. Some people add good Context. But the Expectations piece… that is where the sample of success lives.
Expectations tell the agent: here is exactly what done looks like.
Without that piece, your prompt is incomplete. The agent knows the goal but not the standard.
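If it helps, here is what an OCE-shaped prompt might look like. The wording is mine, not a canonical template, and all the business details are placeholders.

```python
# Sketch of an Outcome / Context / Expectations prompt. All specifics are invented placeholders.
oce_prompt = """Outcome:
Draft a reply to every inbound email that asks about pricing.

Context:
We sell one plan at $49/month, billed annually. Discounts are for nonprofits only.
I write short, plain-language replies with no marketing fluff.

Expectations (sample of success):

Hi Sam,

Pricing is simple: $49/month, billed annually. If you're a nonprofit, reply with your
registration details and I'll apply a 20% discount. Happy to answer anything else.

Best,
Jordan"""
print(oce_prompt)
```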
And honestly? Most AI-generated content I have seen is weak for exactly this reason. Someone gave the model a goal without a standard. So the model filled in the standard itself, and it did not match what the person actually wanted.
The Shortcut When You Are Stuck on the Prompt Itself
Here is something that trips a lot of people up: they know they need a better prompt, but they do not know how to write one.
I have started using what I call meta prompting for this.
Instead of staring at a blank text box trying to craft the perfect prompt, I ask the AI to write it for me. I describe what I want the agent to do, paste in an example output if I have one, and say: “Write me an agent prompt that would produce something like this.”
Claude and ChatGPT are both good at this. What comes back usually includes a clear role, a goal, some context, a set of instructions, and — if you gave it an example — a sample of success already baked in.
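If you would rather do this from code than a chat window, here is a minimal sketch using the Anthropic Python SDK. The model name and the request wording are placeholders I picked for illustration; pasting the same request into the Claude or ChatGPT chat box works just as well.

```python
# Minimal meta-prompting sketch. Assumes the anthropic package is installed and
# ANTHROPIC_API_KEY is set; the model name is a placeholder for whatever you have access to.
import anthropic

client = anthropic.Anthropic()

meta_request = """I want an agent that drafts replies to customer emails in my voice.
Here is a real reply I was happy with:

<paste your example reply here>

Write me an agent prompt that would produce replies like this one. Include a role, a goal,
context, instructions, and keep my example at the bottom as a sample of success."""

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=1000,
    messages=[{"role": "user", "content": meta_request}],
)
print(response.content[0].text)  # the drafted agent prompt, ready to paste into your agent tool
```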
A client of mine was struggling to write a prompt for a complex spreadsheet automation. Instead of coaching him through the whole prompt-writing process, I told him to paste in a screenshot of the spreadsheet, describe what he was trying to accomplish, and ask Claude to write the agent prompt for him. He had a solid prompt in about 3 minutes. He had been stuck for an hour.
The AI can write its own instructions. You just have to know to ask.
Start Here
If you have an agent right now that is producing inconsistent output, here is what to try:
- Pull up the prompt.
- Find one real example of the output you would actually want to use.
- Add it to the bottom of the prompt with a label like “Example output:” or “Sample of success:”.
- Run the agent again on the same input.
If the variance does not drop significantly, the prompt might need work. That is when meta prompting helps — paste the whole prompt into ChatGPT or Claude and ask it to improve it.
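The improvement request itself can be short. A sketch, with wording that is mine rather than any fixed formula:

```python
# Sketch of a prompt-improvement request to paste into ChatGPT or Claude along with your prompt.
# The wording is illustrative; describe your own symptoms instead.
improve_request = """Here is the full prompt for my email-drafting agent:

<paste your whole agent prompt here>

The drafts it produces are inconsistent in length and tone. Rewrite the prompt to fix that.
Keep the example output at the bottom, and tell me what you changed and why."""
print(improve_request)
```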
Inconsistency is not a model problem. It is an instruction problem. And it is usually fixable in about 10 minutes.
I cover this kind of thing in my one-day AI workshops in Austin — hands-on, no slides, real agents built by the end of the day. If that sounds useful, reach out.
