Last October, I was teaching a live masterclass for the Productivity Academy and asked the room a question: “Who has at least 20 tabs open when they’re doing research?”
Every hand went up.
Then I asked: “Who bookmarks articles and never goes back to them?”
Same result.
I’ve been guilty of both. For years, I had research scattered across Evernote, Google Docs, saved browser tabs, and Instapaper. I’d spend two hours “doing research” and walk away with nothing actionable. A pile of links isn’t a decision. A folder full of bookmarks isn’t knowledge.
There’s a name for what I was doing: I call it fake work.
Fake work is anything that feels productive but doesn’t actually move something forward. Collecting information without synthesizing it is one of the purest forms of it. You’re busy, you feel like you’re working, but three hours later you don’t know what you learned or what to do next.
The fix isn’t a productivity app. It’s a process. And once I built this into my workflow, research went from a black hole to one of the fastest things I do.
The 4D Research System
The whole system fits in four steps: Define, Discover, Distill, Deliver.
Step 1: Define
This is the step almost everyone skips. Before opening a single browser tab, answer one question: what outcome am I actually looking for?
Not “research AI tools.” Not “learn more about productivity systems.” Something specific.
When I was shopping for a new to-do app, my Define was this: I need one app that integrates cleanly with Lindy, Zapier, and Make, so my AI agents can add tasks automatically. That’s a specific outcome with a specific constraint. It makes every subsequent step faster because you already know what “good” looks like.
If you can’t answer the Define question in one sentence, you’re not ready to research yet. Write it out first.
Step 2: Discover
This is where the actual finding happens, and honestly, it’s the step most people are already fine at. The thing I changed was using multiple tools simultaneously.
I run the same research question through ChatGPT, Perplexity, and Claude at the same time.
They give different answers. That’s the whole point. Each model has its own training data, its own biases, its own way of prioritizing sources. Running all three is like the old days of checking Google, Yahoo, and Bing, except instead of links, you get synthesized answers from three different perspectives.
Perplexity is my go-to for finding a wide range of sources quickly. ChatGPT is better for going deep on a specific angle. Claude tends to give different takes than either of the others. Together they cover ground I wouldn’t cover alone.
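If you want to run the fan-out yourself rather than paste into three browser tabs, the shape of it is a small script. This is a sketch only: the three query functions below are stand-ins, since real calls would go through each provider’s API with your own keys.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in query functions. In a real setup, each would call the
# provider's API (ChatGPT, Perplexity, Claude); here they just echo
# the question so the fan-out logic is visible.
def ask_chatgpt(question: str) -> str:
    return f"[ChatGPT's take on: {question}]"

def ask_perplexity(question: str) -> str:
    return f"[Perplexity's sources for: {question}]"

def ask_claude(question: str) -> str:
    return f"[Claude's take on: {question}]"

MODELS = {
    "chatgpt": ask_chatgpt,
    "perplexity": ask_perplexity,
    "claude": ask_claude,
}

def discover(question: str) -> dict[str, str]:
    """Run the same research question through all models at once."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, question) for name, fn in MODELS.items()}
        return {name: f.result() for name, f in futures.items()}

answers = discover("Which to-do app integrates with Lindy, Zapier, and Make?")
for name, answer in answers.items():
    print(f"--- {name} ---\n{answer}")
```

The point of running them in parallel isn’t speed so much as discipline: one Define question in, three perspectives out, side by side.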
Step 3: Distill
This is where most people fall apart. They’ve done the discovery, they’ve got a bunch of information… and then it just sits there.
The tool I use for this is NotebookLM. It’s a free Google product and one of the most underrated research tools out there.
I dump everything into a NotebookLM notebook. Links, YouTube videos, PDFs, copy-pasted sections from ChatGPT conversations. Then I query it like a second brain that only knows my sources: not the whole internet, just the material I curated.
On the to-do app research, NotebookLM told me Todoist was mentioned 46 times across my sources. Notion was second at 40. It also generated a comparison report and a mind map. When I still wasn’t sure, I had it create a 9-minute audio summary, put it on my phone, went for a walk, and had my answer before I got back.
The audio overview feature is something I didn’t expect to use this much. Now it’s one of my favorites. You can turn 15+ sources into a mini podcast and catch up on something while you’re moving.
Step 4: Deliver
Research that doesn’t become something is just reading.
The Deliver step is where you turn what you learned into something usable. That could be a decision (which app to buy), a brief (this is what I know and what I recommend), a plan, or a set of action items.
Research always ends with output. If you close your browser and nothing changed, you were collecting, not researching.
Where This Shows Up in Real Work
One of my most-used research automations now runs entirely in the background. Thirty minutes before every meeting, a Lindy agent researches the person I’m meeting (professional background, company context, anything relevant) and sends a brief to Telegram before the call.
I started building this after I caught myself going into meetings underprepared. Not because the information wasn’t out there. Because the research step was friction I kept skipping.
Automating the Discover step for recurring contexts like pre-meeting prep means I’m always at Step 3 when I need it. The research is done. I just have to act on it.
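In my setup, Lindy handles both the search and the Telegram delivery, but the pipeline itself is simple enough to sketch. Everything below is illustrative: search_person and send_telegram are hypothetical stand-ins, not real APIs.

```python
import datetime

# Hypothetical helpers. In the real automation, an agent does the
# search and a Telegram integration does the delivery; these stubs
# just show the shape of the pre-meeting pipeline.
def search_person(name: str, company: str) -> list[str]:
    return [
        f"{name} is head of product at {company}",
        f"{company} raised a Series B last year",
    ]

def send_telegram(message: str) -> None:
    print(f"[telegram] {message}")

def pre_meeting_brief(name: str, company: str,
                      meeting_time: datetime.datetime) -> str:
    """Thirty minutes before the meeting, compile findings into a brief."""
    findings = search_person(name, company)
    brief = (f"Brief for {name} ({meeting_time:%H:%M}):\n"
             + "\n".join(f"- {f}" for f in findings))
    send_telegram(brief)
    return brief
```

The design choice that matters is the trigger: the brief is generated off the calendar event, not off me remembering to ask for it. That’s what removes the friction.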
For staying current on AI news, I use ChatGPT Tasks. You can schedule a prompt to run daily at a set time. Mine runs at 11am and summarizes the latest AI updates in terms of what’s relevant for content creation and where I might want to go deeper. It’s a five-minute daily briefing I never have to remember to do.
The Gap Most People Miss
After teaching research workflows to hundreds of people in my Austin AI workshops, I see the same pattern every time. Steps 2 and 4 are fine. People find stuff and eventually do something with it.
Steps 1 and 3 are where things fall apart.
Step 1 (Define) gets skipped because we’re impatient. We want to start, not plan. But research without a clear outcome is just wandering.
Step 3 (Distill) gets skipped because it feels like extra work. It’s not. It’s the step that turns research into clarity.
Fix those two, and the whole system snaps into place.
