I’ve spent most of my working life exploring a simple question: What does it mean to live in better relationship with time? Not to control it, or conquer it, or squeeze more from it—but to understand it, to move with it, to co-operate rather than compete.
Lately, I’ve realized that my relationship with AI is starting to resemble that same dynamic.
For months, I’ve been sitting with a tension I hadn’t quite named: AI is both deeply convenient and deeply disruptive. It can accelerate work, sharpen research, and expand possibility. It can also flatten nuance, mimic without understanding, and tempt us to hand over parts of ourselves we should never outsource. The parallels to our relationship with time aren’t subtle—they’re striking.
AI asks us the same question time does: What are you giving your attention to? And what are you giving up in the process?
That’s the backdrop for my recent conversation with Faisal Hoque, whose new book Transcend: Unlocking Humanity in the Age of AI takes this tension seriously. He doesn’t frame AI as a saviour or a threat. Instead, he invites a third way: curiosity over panic, agency over surrender, awareness over autopilot. In other words, a mindful relationship with the tools that increasingly shape our days.
Faisal and I spent a good portion of our conversation exploring the question beneath all other AI questions: What does it mean to remain human when machines can think alongside us? Not “think” in the philosophical sense, but think in the way a calculator thinks—faster than us, recursively, and without fatigue.
That’s where the parallel to time becomes clearest for me.
We outsource things to time constantly. We assume “future me” will have more energy, more clarity, more discipline. We load up our calendars and trust that the hours will sort themselves out. Sometimes they do. Sometimes they don’t. The risk is subtle but cumulative: we gradually trade presence for postponement.
AI offers a similar temptation. When we hand over our recall, our drafting, our outlining, our gathering of ideas—or worse, our thinking—we create what Faisal calls “cognitive debt.” It’s the mental version of leaving dishes in the sink. One or two is nothing. A week’s worth is another story.
Tools can genuinely lighten the load. But they can also quietly weaken the muscle we rely on most: attention.
Fear and Fascination Are Close Relatives
One line from our conversation has stayed with me. Faisal noted that fear and fascination sit side by side—twin cousins, as Stephen King once put it. You can’t really have one without brushing up against the other.
AI triggers both in me, often in the same moment. So does time.
Time is finite. AI feels infinite. Time humbles us. AI challenges us. Time asks us to slow down. AI tempts us to speed up. I’m starting to believe the real task isn’t choosing sides—it’s learning to hold both realities without collapsing into extremes.
That means resisting the urge to treat AI as either a miracle or a menace. It’s neither. It’s a tool that inherits our intent. And like time, it asks something of us in return: discernment.
Frameworks Worth Stealing
Faisal’s OPEN and CARE frameworks apply beautifully to time as well as AI.
- OPEN helps you explore opportunity: Outline your purpose before you choose the tool or the tactic.
- CARE helps you assess risk: Imagine the catastrophe before it arrives, not as alarmism but as responsibility.
If we used these with our calendars the same way Faisal uses them with organizations navigating AI transformation, we’d make far fewer commitments we regret and far more choices we can stand behind.
Purpose first.
Risk second.
Action third.
Reflection always.
That’s good AI practice. It’s also good time practice.
The Human Element Doesn’t Get Automated
One place Faisal and I aligned quickly: emotion may not be programmable in the way logic is.
AI can parse data, but it doesn’t feel devotion. It can suggest, but it doesn’t care. It can advise, but it cannot love. Those are distinctly human capacities—and the ones most under threat when we rush toward convenience without pausing for meaning.
This is where my two long-running obsessions—attention and intention—meet the moment. AI can’t decide what matters to you. It can’t teach you how to reflect, and it can’t choose the pace at which your life unfolds. It can only react to what you give it.
Just like time.
And just like with time, the quality of the relationship depends on the quality of your presence.
A Responsible Partnership
I’m not interested in AI as replacement. I’m interested in AI as rhythm—something that smooths the friction around the work I’m already committed to doing.
If I use AI to draft ideas I would never think to write? I’ve crossed a line.
If I use AI to surface research that sharpens my point of view? That’s collaboration.
If I use AI to avoid thinking? I’ve surrendered something essential.
If I use AI to deepen my thinking? I’ve honoured the opportunity.
The same questions apply to time: Are you avoiding your life, or are you engaging it? Are you postponing what matters, or shaping a rhythm that allows you to show up fully? Are you controlling time, or relating to it?
AI can help us become more productive.
Time can help us become more present.
But only if we stay awake.
We’re standing at a strange threshold. AI on one side, time on the other, and us in the middle—holding far more agency than we often admit. Neither force is inherently good or bad. Both can expand us or diminish us depending on how quickly we reach for certainty.
And certainty is always the seduction.
But the real work—the human work—lives in the nuance. In the middle place where we’re not outsourcing our thinking or idolizing our tools, but paying attention to the quality of the relationship we’re building with them. In that small, subtle space between convenience and surrender, between acceleration and presence, between what we can do and what we actually intend to do.
AI won’t make that choice for us. Time won’t either.
Which leaves the question dangling in front of each of us: Will we choose the easy path of certainty, or the harder, richer path of nuance?
That answer, for better or worse, is still entirely human.
