I am still in this series about how human emotions can relate to agentics. I have been playing with dreaming, surprise, shame, curiosity and distrust, and I keep coming back to the same thing: if agentic systems are going to make work easier day to day, we need metaphors and interfaces that people can feel in their bones, not more dashboards and error codes.
So in this one I want to introduce the idea of pain.
I am not making a medical claim here; I am borrowing a biological pattern and turning it into an engineering pattern. Pain is one of the clearest examples we have of a signalling system that drives action.
Source: National Library of Medicine, MedlinePlus
"Pain is a signal in your nervous system that something may be wrong. It is an unpleasant feeling, such as a prick, tingle, sting, burn, or ache."
That is the whole point of this post. Pain is a signal. It routes attention. It prioritises. It changes behaviour.
In the body, a sharp pain basically says: stop doing that now, it is hurting. A dull ache is different. It is persistent. It changes how you move. You protect the area, sometimes without thinking about it.
In agentic systems, I think we can map that cleanly: sharp pain is an acute signal that something is broken, unsafe, or outside bounds and needs a reaction now, while a dull ache is persistent degradation that does not stop the system but should change how it behaves while it lasts.
This is useful because it creates a shared language between people and systems. Humans already understand what sharp pain and dull ache mean. We do not have to train everyone to speak in severity codes to get the message across.
Sharp pain in agentics means something is broken, something is not working, something is unsafe, or something is outside bounds.
The key thing is not detection. The key thing is what happens next. In a good team, sharp pain triggers incident management. In an agentic system, sharp pain should trigger an incident management agentic, a dedicated capability that can triage, contain, communicate, and coordinate.
Source: Andrew Stribblehill, Google SRE Book
"Effective incident management is key to limiting the disruption caused by an incident and restoring normal business operations as quickly as possible."
So the sharp pain pattern looks like this: detect the problem, emit a pain signal in plain human terms, route it to the incident management agentic, and let that agentic triage, contain, communicate, and coordinate while people stay informed.
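Here is a minimal sketch of that flow in Python. Everything in it is illustrative: the PainSignal shape, the IncidentManagementAgent stub, and the routing function are assumptions, not a finished framework.

```python
from dataclasses import dataclass

@dataclass
class PainSignal:
    pain_type: str  # "sharp" or "dull"
    area: str       # subsystem, workflow or capability that hurts
    summary: str    # plain language description a teammate would write

class IncidentManagementAgent:
    """Hypothetical agentic that owns the response to sharp pain."""

    def handle(self, signal: PainSignal) -> None:
        # Triage: decide how bad it is and what is affected.
        print(f"Triaging sharp pain in {signal.area}: {signal.summary}")
        # Contain: stop the hurting action (pause retries, halt the workflow, ...).
        # Communicate: tell humans in the same plain language the signal carries.
        # Coordinate: pull in the agents or people who own the affected area.

def route(signal: PainSignal, incident_agent: IncidentManagementAgent) -> None:
    # Sharp pain goes straight to incident management; a dull ache goes elsewhere.
    if signal.pain_type == "sharp":
        incident_agent.handle(signal)

route(
    PainSignal("sharp", "order-sync", "Order sync is failing and I have stopped retrying."),
    IncidentManagementAgent(),
)
```

The only point of the stub is the routing: sharp pain always lands with the capability that owns the response, in words a person could act on too.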
You are not trying to replace humans. You are trying to ensure the system reacts predictably and quickly, and keeps people in the loop with words they can act on.
Then there is the dull ache. This is the stuff that does not page you at 3am, but slowly drains time, trust, and attention.
In business systems it is the slow degradation everyone learns to work around rather than fix: recurring low grade failures, workarounds that quietly become permanent, quality that drifts a little further each month.
In the body, a dull ache gives you a limp. You still move, but you protect the area. In an agentic system, the limp is deliberate protective behaviour that stays in place while the ache persists.
Examples of limp behaviours in an agentic system (design hypotheses, not fixed rules):
- reduce load on the aching capability instead of running it at full speed
- add extra checks before acting on anything that touches the sore area
- narrow the scope of what the agent will attempt on its own
- request human confirmation for risky actions until the ache is resolved
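As a sketch of how a limp could be enforced in code, here is one possibility. It assumes a simple in-memory registry of aching areas; the function names and the confirmation step are placeholders, not a recommendation.

```python
active_aches: set[str] = set()  # areas currently reporting a dull ache

def report_ache(area: str) -> None:
    active_aches.add(area)

def run_with_limp(area: str, action, confirm) -> None:
    """Run an action, but protect any area that currently aches."""
    if area in active_aches:
        # The limp: slow down, add a check, ask before doing anything risky.
        if not confirm(f"{area} has an ongoing ache. Proceed anyway?"):
            print(f"Skipping work on {area} while it aches.")
            return
    action()

report_ache("invoice-generation")
run_with_limp(
    "invoice-generation",
    action=lambda: print("Generating invoices"),
    confirm=lambda question: False,  # stand-in for a real human confirmation step
)
```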
This is where observability becomes the nervous system. You want signals that let you interrogate what is happening from the outside.
Source: OpenTelemetry Authors, OpenTelemetry
"Observability lets you understand a system from the outside by letting you ask questions about that system without knowing its inner workings."
Here is the actionable bit. If you want to teach your agentic system to feel pain, you need two layers:
1) Human language so people can understand quickly
2) Machine readable metadata so agents can route and respond consistently
I think the simplest operational definition is:
A pain signal is an internal event emitted by agents, described in human terms, backed by structured routing metadata.
Keep the surface language human and consistent. For example: "Sharp pain in order sync: it is failing and I have stopped retrying", or "Dull ache in invoice generation: it still works, but it keeps needing retries and I am adding extra checks."
The point is not theatrics. The point is shared context at speed.
Under the hood, attach fields that let other agents act without guessing:
- pain_type: sharp, dull
- area: subsystem, workflow, capability
- persistence: new, recurring, continuous
- confidence: low, medium, high (or a numeric score if you already use one)
- ownership: which agent or team owns response
- suggested_next_action: stop, investigate, reduce load, add checks, request human confirmation
- blast_radius_hint: what might be impacted (if known)

You can store this as state, publish it as an event, or both. The important part is that it is consistent and routable.
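As one possible shape, here is that metadata as a typed structure in Python. It is a sketch that mirrors the field list above; the example values are invented and nothing about it is a standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PainMetadata:
    pain_type: str              # "sharp" or "dull"
    area: str                   # subsystem, workflow or capability
    persistence: str            # "new", "recurring" or "continuous"
    confidence: str             # "low", "medium", "high" (or swap in a numeric score)
    ownership: str              # which agent or team owns the response
    suggested_next_action: str  # "stop", "investigate", "reduce load", "add checks",
                                # "request human confirmation"
    blast_radius_hint: Optional[str] = None  # what might be impacted, if known

signal = PainMetadata(
    pain_type="dull",
    area="invoice-generation",
    persistence="recurring",
    confidence="medium",
    ownership="billing-agents",
    suggested_next_action="add checks",
    blast_radius_hint="month-end reporting",
)
```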
This is the small experiment I would run with a team. No big rewrite required.
Pick five common failure modes or risks, for example one of each flavour from earlier: something broken, something not working, something unsafe, something outside bounds, plus one slow degradation everyone has learned to live with.
Be strict: each one is either sharp pain or a dull ache, nothing in between.
Make it sound like a teammate, not a log line. Include what hurts, where it hurts, how long it has been hurting, what the agent is doing about it, and what it needs from a human.
Use a consistent schema across agents. If you do this, you will reduce arguments later.
Pick one dull ache and deliberately enforce protection for a week. Then evaluate whether the limp reduced harm, whether it got in anyone's way, whether people trusted the signal, and whether any of the protection should stay once the ache is fixed.
After a sharp pain incident, decide what becomes a dull ache, and what gets removed entirely. After a dull ache fix, decide what trust is restored and what monitoring stays.
There are a few things I want to go deeper on next. One of them is eventing: if we do use eventing for this, having a shared way to describe event data matters.
Source: CloudEvents Authors, CloudEvents
"CloudEvents is a specification for describing event data in a common way."
Pain is a warning system and a coordination system. When we map it into agentic systems, it becomes a practical design pattern: sharp pain routes to an incident management agentic that triages, contains, communicates, and coordinates; a dull ache triggers a deliberate limp that protects the sore area; and both travel as signals with human language on the surface and routable metadata underneath.
If you try this, start small. Define five signals, route them, test the incident agentic, test the limp. Then tell me what broke, what surprised you, and what words your team naturally used. That is where the good patterns come from.
#Educate #Agentics #Observability #IncidentManagement #SRE #HumanCentredDesign #AIOps #SystemsThinking