Let’s be honest: the story dominating boardroom conversations this month isn’t about AI gone rogue; it’s about leadership that leaves oversight on autopilot. The Deloitte–Australia incident didn’t just raise eyebrows; it exposed how fragile reputation and budget become when you outsource not only the work, but also the verification and critical thinking.
Mistakes labelled as “AI slop” don’t happen because language models fail in magical new ways. They happen because teams skip the same steps they’d demand of human analysts: research, double-checking, peer review, and a final sense-check. As Fortune reported,
“Deloitte’s member firm in Australia will pay the government a partial refund for a $290,000 report that contained alleged AI-generated errors, including references to non-existent academic research papers and a fabricated quote from a federal court judgment.”
It didn’t end there. The errors came to light only after an external expert flagged them. By then, the revised report had to disclose the use of generative AI and scrub dozens of fabricated citations. All of it was avoidable.
“A revised version was quietly published on Friday after Sydney University researcher of health and welfare law Chris Rudge said he alerted media outlets that the report was 'full of fabricated references.'”
Plenty of leaders now ask: do we pull back on AI, or double down on external expertise? Here’s the risk: blaming the tools means missing the lesson. Every C-suite and board should instead ask:
• Who owns our agentic routines – the combination of human checks and smart automation – from start to finish?
• Are we building skill and accountability inside our teams, or just signing big consulting cheques and hoping for the best?
When knowledge generation is commoditised, quality control becomes the differentiator. “AI slop” is rarely about the tech; it’s the predictable by-product of shortcuts, missing process, and a failure to adapt project management to the AI-enabled era. AI can speed the work, but only robust agentic routines protect against high-profile embarrassment (and refund requests).
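To make “agentic routine” concrete, here is a minimal sketch of what one automated check inside such a routine could look like: a script that tests whether cited DOIs actually resolve against Crossref’s public lookup, and routes everything else to a named human reviewer before sign-off. This is an illustration under assumptions, not Deloitte’s process or any standard: the function names, citation format, and reviewer hand-off are hypothetical; the Crossref endpoint and the Python `requests` library are real.

```python
# Minimal sketch of one "agentic routine": an automated citation check
# that never replaces human sign-off, only routes work to it.
# Assumptions: Python 3.9+ with `requests`; citations carry DOIs where known.
# Function and field names here are illustrative, not a standard.

import requests

CROSSREF = "https://api.crossref.org/works/"  # public DOI lookup

def doi_resolves(doi: str) -> bool:
    """Return True if Crossref knows this DOI, False otherwise."""
    try:
        resp = requests.get(CROSSREF + doi, timeout=10)
        return resp.status_code == 200
    except requests.RequestException:
        return False  # network failure: treat as unverified, never as valid

def triage_citations(citations: list[dict]) -> dict:
    """Split citations into machine-verified and flagged-for-human-review."""
    verified, flagged = [], []
    for c in citations:
        doi = c.get("doi")
        # No DOI, or a DOI that does not resolve: a human must check it.
        (verified if doi and doi_resolves(doi) else flagged).append(c)
    return {"verified": verified, "needs_human_review": flagged}

if __name__ == "__main__":
    report = triage_citations([
        {"title": "Deep learning", "doi": "10.1038/nature14539"},  # real DOI
        {"title": "Possibly fabricated source", "doi": None},
    ])
    # The flagged list goes to a named reviewer before anything is published:
    for c in report["needs_human_review"]:
        print("REVIEW BEFORE SIGN-OFF:", c["title"])
```

The point of the sketch is the division of labour: automation handles the cheap, checkable facts at scale, while every ambiguous case lands on a named human desk with accountability attached, the opposite of leaving oversight on autopilot.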
Before you authorise another AI initiative or external AI contract, build your own playbook for:
• In-house verification and sign-off – don’t let critical review become someone else’s job
• Ongoing upskilling on what generative AI does well – and where human judgement remains essential
• Clear standards for disclosure, review, and escalation when things don’t look right
Boards that get this right won’t just dodge the next headline: they’ll quietly capture years of cost and speed advantage, while rivals scramble to contain self-inflicted reputational damage.