At breakfast in Morocco, prepping for a CEOs’ conference, I was handed the perfect metaphor for agentic work, and the best metaphor for digital transformation I’ve seen in months.
Here’s what happened: I dropped my guard for a moment, left my table to fetch something, and, boom, my eggs were gone. Cleared away by a waiter just doing his job. No ill intent, no laziness, just a routine: keep things tidy, move fast, clear plates left at empty chairs. But the routine clashed with reality. I wasn’t done eating. It’s a mistake that will feel familiar to anyone automating workflows with AI agents or digital workers.
What’s the real problem? That waiter hadn’t done anything wrong. He’d followed his rules as they were given. But because there was no feedback loop, no way for him to know instantly whether I was really done or just momentarily away, the process failed. If you manage people or deploy AI, this scene should ring alarm bells.
A simple lost breakfast illustrates a core truth: agentic routines, human or digital, break when they don’t adapt to human context. AI support tools clearing tickets, sending standard replies, or deciding tasks are “done” on incomplete information are the same failure at a bigger scale.
We tend to write rules for what should happen “if all goes well.” But resilient systems are built for exceptions, surprises, and recovery, not perfection.
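To make the point concrete, here is a minimal sketch of what that feedback loop could look like in code. Everything in it (the `Ticket` type, the `should_close` helper, the grace period) is hypothetical, invented for illustration: an agent that clears "idle" tickets, but only after a grace period and an explicit "are you done?" check, instead of acting the moment a plate looks abandoned.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch: an auto-clearing agent with a feedback loop.
# "Quiet" is not the same as "done" -- the agent must check.

@dataclass
class Ticket:
    id: str
    last_activity: datetime
    confirmed_done: bool = False

GRACE = timedelta(minutes=30)  # illustrative threshold, not a recommendation

def should_close(ticket: Ticket, now: datetime, ask_user) -> bool:
    """Close only when human context says 'done', not merely 'quiet'."""
    if ticket.confirmed_done:
        return True
    if now - ticket.last_activity < GRACE:
        # Momentarily away, not done: leave the eggs on the table.
        return False
    # Feedback loop: ask before acting, and record the answer.
    ticket.confirmed_done = ask_user(ticket.id)
    return ticket.confirmed_done
```

The design choice is the point: the irreversible action (closing, clearing, replying "resolved") sits behind a question, so an exception becomes a conversation rather than a lost breakfast.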
Look to recent headlines. In Australia, Deloitte refunded the government for a $290,000 report exposed as riddled with “AI-generated errors, including references to non-existent academic research papers and a fabricated quote from a federal court judgment.” (Fortune, October 7, 2025)
Where did they go wrong? The absence of robust feedback and after-action review. As Fortune put it: “Deloitte reviewed the 237-page report and 'confirmed some footnotes and references were incorrect,' the department said in a statement Tuesday.” The issue wasn’t intent to deceive; it was processing at scale, without the loop of live feedback, exception handling, and correction that any seasoned manager would insist on.
Here’s what outstanding managers, and now leading-edge technologists, do differently: they build in live feedback, exception handling, and after-action review.
Corporate leaders now face the same test. When feedback and context are missing, AI “hallucinates,” processes become brittle, and value is lost. The Deloitte saga echoes this: “The report was... published... after [a] researcher said he alerted media outlets that the report was 'full of fabricated references.'” Accountability and after-action review saved the day, but not before trust (and margin) took a real hit.
“Learning comes through doing, reflecting, and tweaking. That’s not just 'training'; it’s the loop that turns rules into intelligence, and intelligence into service.”
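That “do, reflect, tweak” loop can itself be sketched in a few lines. This is a hypothetical illustration, not a real tuning algorithm: the agent acts, an after-action review records whether each action was premature (a complaint), and the rule’s grace period is nudged accordingly. The function name and the adjustment factors are assumptions made up for the example.

```python
def tune_grace_minutes(grace: float, complaints: list[bool]) -> float:
    """After-action review: each outcome nudges the rule.

    `complaints[i]` is True when action i turned out to be premature
    (the plate was cleared too soon). The loop turns rules into
    intelligence by adjusting the rule after every cycle.
    """
    for was_premature in complaints:
        if was_premature:
            grace *= 1.5  # routine clashed with reality: wait longer
        else:
            # Routine worked: tighten slowly, but keep a floor.
            grace = max(5.0, grace * 0.95)
    return grace
```

The asymmetry is deliberate: a failure moves the rule a lot, a success moves it a little, so the system recovers quickly from clashes with reality without drifting back into them.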
Get this right, and your agentic workers, AI and human alike, will deliver real value, not lost eggs or headline scandals.