
A Call to All Managers Rolling Out POCs: How You Reduce Failures

Tony Wood

Most AI POCs do not fail because the demo was weak. They fail because nobody gave the organisation the time, ownership, and operating model needed to turn the demo into reality.



If you manage the rollout, this is for you

Not the AI lab.
Not the vendor.
Not the board.

You.

The uncomfortable truth is this: AI POCs rarely fail where people think they fail.

  • Not in the demo
  • Not when the model gives an awkward answer
  • Not even when the pilot group says “interesting”

They fail later, when the organisation has to answer operational questions:

  • Who owns this now?
  • Who has time to make it work?
  • What workflow changes?
  • What stops?
  • What gets measured?
  • Who can pause it?

That is where AI projects die.


The real problem: handover to reality

Most POCs answer:

Can this thing work?

Production requires:

Can this thing become part of how we run the business?

A POC can succeed technically and still fail operationally.

That is why organisations end up with “promising pilots” that never scale.


Managers: slow the room down

Before scaling a POC, ask:

  • What work changes?
  • Which team absorbs the change?
  • What will they stop doing?
  • Who owns the transition?
  • Who has authority to change workflows?
  • What happens if adoption fails?
  • What are we measuring at 30, 90, 180, 365 days?

These are not negative questions. They are success questions.


Three manager duties before scaling

1. Give people time

The most misleading sentence in AI adoption:

“This will save time.”

Early stages consume time:

  • Learning
  • Testing
  • Comparing outputs
  • Changing habits
  • Building trust

Managers must explicitly allocate:

  • Training time
  • Experimentation time
  • Workflow redesign time
  • Feedback time
  • SOP updates
  • Data clean-up

Key question:
Whose calendar is paying for this?


2. Set the RACI before the roadmap

Define:

  • Responsible
  • Accountable
  • Consulted
  • Informed

Before building anything.

Excitement is not ownership.

Minimum RACI coverage:

  • Problem definition
  • Data readiness
  • Workflow redesign
  • Training
  • Risk and compliance
  • Technical delivery
  • Monitoring
  • Benefits tracking
  • Stop or scale decisions

If your RACI ends at go-live, it is incomplete.


3. Treat it as an operating change

Define:

  • What workflow changes
  • What decisions move faster
  • What human judgement remains
  • What data is required
  • What permissions exist
  • What must never happen
  • What proves success
  • What signals risk

If the work is not redesigned, the AI becomes:

  • Another tab
  • Another login
  • Another unused tool

That is not transformation. That is clutter.


Minimum viable setup

1. One-page operating brief

Include:

  • Business problem
  • Workflow impacted
  • Users
  • Baseline and target metrics
  • Business owner
  • Technical owner
  • Adoption manager
  • Time allocated
  • Risks
  • Human checkpoint
  • Pause trigger
  • Review dates

2. Named benefits owner

  • Technical lead → system works
  • Benefits owner → value appears

These are not the same role.


3. Manager capacity plan

Define actual effort:

  • Weekly time commitment
  • Adoption huddles
  • Escalation paths
  • Backfill requirements
  • Feedback loops

If managers are expected to “absorb it”, the plan will fail.


4. Review cadence

  • 30 days: usage and issues
  • 90 days: workflow change
  • 180 days: value
  • 365 days: scale or stop

Measure:

  • Adoption
  • Value
  • Risk
  • Trust


The real reason managers avoid this

It feels heavy.

There is pressure to move fast.
There is excitement.
There is always someone saying:

“Let’s not over-engineer it.”

Fair.

But most failures come from under-owning, not over-engineering.


The fear you need to name

People are not always sure AI is good for them.

  • Fear of replacement
  • Fear of reduced authority
  • Fear of surveillance

If these fears are ignored, honest feedback disappears.

Managers must clarify:

  • What AI is for
  • What it is not for
  • What work it removes
  • What people move toward
  • How performance is judged


A practical 60-minute session

Bring:

  • Business owner
  • Technical lead
  • Manager
  • Risk/compliance
  • Users

Answer:

  1. What workflow changes?
  2. What does success look like?
  3. What stops?
  4. Who owns benefits?
  5. Who supports adoption?
  6. What are failure modes?
  7. What must never happen?
  8. What human checkpoint exists?
  9. What is measured over time?
  10. Who can pause it?

If you cannot answer these, do not scale yet.


How managers reduce failure

Managers make invisible work visible:

  • Time
  • Ownership
  • Risk
  • Workflow change
  • Benefits
  • Stop decisions

They prevent:

  • Technical success being mistaken for operational readiness
  • Good demos bypassing real-world adoption
  • Innovation masking weak accountability

Closing

AI POCs do not need less ambition.
They need better management.

  • Not more dashboards
  • Not louder sponsorship
  • Not generic training

They need:

  • Time
  • Ownership
  • Operating models
  • Accountability

Before you scale, ask:

Have we made it possible for this to work here?
