When we build agentic systems inside real companies, the technical work is rarely the hardest part. The hard part is language.
Every function develops its own shorthand over time. Legal, finance, operations, development: each group has a way of speaking that is tightly linked to the information it works with every day. That is efficient inside the group, but it becomes a trap the moment we cross boundaries.
And we do not notice it because humans compensate naturally.
In a meeting, if I realise someone is a lawyer, I adjust my words. If I realise someone works in finance, I get more precise about definitions. If I am in a board context, I shift to strategic framing. That adaptation is a social skill we have learned through experience.
Your agents do not have that skill unless you design for it.
Here is the pattern I see repeatedly.
A person asks a question in the way their function expects to ask it. The meaning is obvious to them, because their context fills in the gaps. But to an agent, those gaps look like instructions to invent something plausible.
A simple example: you ask, "for year 21, 22, what was margin?"
To a finance person, that might mean a specific financial year definition that is well understood internally. To someone else, it could mean calendar year. Or it could mean a company year that starts in a different month. Each one changes the answer.
If the agent guesses, you get a confident answer that feels authoritative but is built on an assumption you never agreed to.
That is how hallucination shows up in day to day work.
Source: https://www.tcg.com/blog/ai-hallucinates-when-you-ask-the-wrong-question/
"An LLM hallucinates when it believes that’s what the prompt is asking for. The model defaults to a mode of creative plausibility unless you explicitly instruct it to operate under the constraints of verifiable fact."
This is why I do not treat hallucinations as an abstract AI issue. In cross functional environments, hallucination is often a symptom of misaligned definitions.
Source: https://documentation.suse.com/suse-ai/1.0/html/AI-preventing-hallucinations/index.html
"Vague queries can lead to random or inaccurate answers. Lack of clear context. When the language model lacks context, it can fabricate answers."
So rather than demanding everyone become fluent in every other department's language (which never works), we teach the agent to behave like a good colleague.
A good colleague does three things before answering:
1) checks who is asking, and what lens they are asking from
2) clarifies any term that could mean more than one thing
3) confirms where the answer should come from, and says so if they cannot get to it
That is the behaviour we want to engineer.
If you only take one thing from this post, take this checklist. It is deliberately simple, because simple is easier to operationalise.
First, make the agent ask, or infer safely from the session profile, the lens the user is operating in.
Examples of lenses: legal, finance, operations, engineering, board or executive.
Prompt pattern you can copy:
"Before I answer, can you tell me which lens you are asking from, for example legal, finance, operations, engineering, or board, and who the answer is for?"
If you are capturing role in a user profile, be explicit about privacy. Only store what you need, store it securely, and tell people what is being stored and why.
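To make that concrete, here is a minimal sketch in Python of a session profile that stores only the lens and the audience, nothing personally identifying. The SessionProfile and build_system_prompt names are my own for illustration, not part of any particular framework.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SessionProfile:
    """Hypothetical session profile: store only what the agent needs to pick a lens."""
    role_lens: Optional[str] = None   # e.g. "legal", "finance", "operations", "engineering", "board"
    audience: Optional[str] = None    # who the answer is ultimately for

def build_system_prompt(profile: SessionProfile) -> str:
    """Turn the profile into an instruction the agent can act on."""
    if profile.role_lens is None:
        # No safe inference possible: instruct the agent to ask, not guess.
        return ("Before answering, ask the user which role or lens they are asking from, "
                "for example legal, finance, operations, engineering, or board.")
    return (f"The user is asking from a {profile.role_lens} lens. Use that function's "
            f"definitions, and confirm any term that has more than one meaning across functions.")

# A profile with a known lens versus one where the agent has to ask first.
print(build_system_prompt(SessionProfile(role_lens="finance", audience="board")))
print(build_system_prompt(SessionProfile()))
```

Two fields is deliberately all it holds, which keeps the privacy conversation short.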
Second, pin down definitions. This is where most cross functional errors hide.
Common ambiguous terms: "year" (financial, calendar, or company year), "margin" (which margin, calculated on what basis), and any metric that each function defines slightly differently.
Prompt pattern you can copy:
"Before I answer, can you confirm what you mean by year 21, 22 (financial year, company year, or calendar year), and which definition of margin you want?"
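One way to operationalise this, assuming you can screen questions before they reach the model, is a small glossary of terms your functions define differently; the entries below are illustrative, not a complete list.

```python
# Illustrative glossary: terms that mean different things across functions,
# mapped to the clarification the agent should ask before answering.
AMBIGUOUS_TERMS = {
    "year": "Do you mean financial year, company year, or calendar year?",
    "margin": "Which margin do you mean, and calculated on what basis?",
    "customer": "Active accounts only, or everything ever invoiced?",
}

def clarifications_needed(question: str) -> list[str]:
    """Return the clarification questions triggered by terms in the question."""
    lowered = question.lower()
    return [ask for term, ask in AMBIGUOUS_TERMS.items() if term in lowered]

# The example from earlier in the post triggers two clarifiers.
for ask in clarifications_needed("For year 21, 22, what was margin?"):
    print(ask)
```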
Third, confirm the data boundary. Agents can write fluent text without being grounded in your actual systems. That is where trouble starts.
You want the agent to ask: which system is the source of truth for this answer, whether it actually has access to that system, and what it should do if it does not.
Prompt pattern you can copy:
"Before answering, tell me which system or dataset you would treat as the source of truth for this question. If you cannot access it, say so and ask me how to proceed instead of estimating."
If the agent cannot access the source of truth, it should say so, and ask you how to proceed. That transparency is a feature, not a failure.
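A sketch of that behaviour, assuming the agent knows which systems it can actually reach; the registry and the system names are invented for the example.

```python
# Hypothetical registry of systems the agent can actually query.
ACCESSIBLE_SOURCES = {"finance_warehouse", "crm"}

def answer_or_decline(question: str, source_of_truth: str) -> str:
    """Answer only if the named source of truth is reachable; otherwise say so and ask."""
    if source_of_truth not in ACCESSIBLE_SOURCES:
        # Transparency is the feature: no fluent guess, just an honest gap.
        return (f"I cannot access {source_of_truth}, which is the source of truth for this "
                f"question. How would you like me to proceed?")
    return f"Answering '{question}' from {source_of_truth}."

print(answer_or_decline("What was margin for FY21?", "finance_warehouse"))
print(answer_or_decline("What was margin for FY21?", "legal_contracts_db"))
```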
Fourth, set a clarification minimum. This is a guardrail I like because it is measurable.
For cross functional questions, require the agent to ask a minimum of two clarification questions before giving an answer, unless the question is already fully specified.
My default two are:
1) role lens and audience
2) time period and definition confirmation
You can tune this by function. Legal might need jurisdiction. Finance might need consolidation scope.
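Because the rule is about which clarifiers have been asked, it is easy to check in code. A minimal sketch, assuming you track clarifiers as simple labels; the defaults mirror my two above, and the per-function extras are the examples just mentioned.

```python
# Default clarifiers every cross functional question needs, plus per-function extras.
REQUIRED_CLARIFIERS = {
    "default": {"role_lens_and_audience", "time_period_and_definition"},
    "legal": {"jurisdiction"},
    "finance": {"consolidation_scope"},
}

def missing_clarifiers(lens: str, already_asked: set[str]) -> set[str]:
    """Return the clarifying questions the agent still has to ask before answering."""
    required = REQUIRED_CLARIFIERS["default"] | REQUIRED_CLARIFIERS.get(lens, set())
    return required - already_asked

# A finance question where only the lens has been confirmed so far.
print(missing_clarifiers("finance", {"role_lens_and_audience"}))
# -> {'time_period_and_definition', 'consolidation_scope'} (set order may vary)
```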
Fifth, confirm the audience. This is where you stop an agent delivering a developer style answer to a board question, or a board style answer to a finance reconciliation.
Confirm: who the answer is for, what decision they are trying to make, and how much detail and what framing they need.
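As a sketch, this check can be as small as a lookup that shapes the framing before the agent drafts anything; the audiences and instructions below are illustrative, so substitute the categories your organisation actually recognises.

```python
# Illustrative framing instructions per audience, appended to the agent's prompt
# once the audience has been confirmed.
AUDIENCE_FRAMING = {
    "board": "Lead with the strategic implication; keep figures at headline level.",
    "finance": "Show the reconciliation: definitions, period, and source figures.",
    "engineering": "Include the query or calculation so the result can be reproduced.",
}

def framing_instruction(audience: str) -> str:
    """Return framing guidance, or fall back to asking who the answer is for."""
    return AUDIENCE_FRAMING.get(audience, "Ask who the answer is for before choosing a format.")

print(framing_instruction("board"))
print(framing_instruction("unknown"))
```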
Source: https://documentation.suse.com/suse-ai/1.0/html/AI-preventing-hallucinations/index.html
"The clearer the prompt, the less the LLM relies on assumptions or creativity. A well-defined prompt guides the model toward specific information, reducing the likelihood of hallucinations."
If you want to make this real, do a small test with one recurring cross functional question, ideally one that includes dates, periods, or definitions.
Run two versions:
Version A: Answer immediately
Let the agent respond without clarifiers.
Version B: Context first
Force the agent to ask the two clarifying questions (role lens and audience; time period and definition) before it answers.
Measure three things for a week: how long it takes to get a usable answer, how many rounds of rework or correction each answer needs, and whether the person asking trusts the answer enough to act on it.
You might find Version B takes slightly longer per interaction. The point is whether it reduces rework and increases trust across the team.
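If you want the comparison to be more than a feeling, log one row per interaction. A minimal sketch using only the Python standard library; the column names follow the three measures above, and the file name is arbitrary.

```python
import csv
import os
from datetime import date

# One row per interaction during the test week. Column names follow the three
# measures above; adjust them to whatever you can actually capture.
FIELDS = ["date", "version", "minutes_to_usable_answer", "rework_rounds", "trusted"]

def log_interaction(path: str, version: str, minutes: float, rework: int, trusted: bool) -> None:
    """Append one interaction to the test log, writing the header on first use."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "version": version,                # "A" answers immediately, "B" clarifies first
            "minutes_to_usable_answer": minutes,
            "rework_rounds": rework,           # follow-up corrections the answer needed
            "trusted": trusted,                # was the answer accepted and acted on as-is
        })

log_interaction("context_first_test.csv", version="B", minutes=6.5, rework=0, trusted=True)
```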
Cross functional language is not noise, it is signal. It tells you what people care about, and which definitions they are assuming.
If you do not capture that context, the agent will fill the gaps. Sometimes it will guess correctly. Sometimes it will guess confidently and be wrong.
Design for the pause. Design for the clarification. Design for explicit definitions and data boundaries.
That is how you build agents that behave like good colleagues.
Source: https://www.tcg.com/blog/ai-hallucinates-when-you-ask-the-wrong-question/
"An LLM hallucinates when it believes that’s what the prompt is asking for. The model defaults to a mode of creative plausibility unless you explicitly instruct it to operate under the constraints of verifiable fact."
Robert Buccigrossi, AI Hallucinates When You Ask The Wrong Question, TCG, 2026-01-04
Source: https://documentation.suse.com/suse-ai/1.0/html/AI-preventing-hallucinations/index.html
"Vague queries can lead to random or inaccurate answers. Lack of clear context. When the language model lacks context, it can fabricate answers."
SUSE LLC and contributors, Preventing AI Hallucinations with Effective User Prompts, SUSE AI 1.0, 2025-12-18
Source: https://documentation.suse.com/suse-ai/1.0/html/AI-preventing-hallucinations/index.html
"The clearer the prompt, the less the LLM relies on assumptions or creativity. A well-defined prompt guides the model toward specific information, reducing the likelihood of hallucinations."
SUSE LLC and contributors, Preventing AI Hallucinations with Effective User Prompts, SUSE AI 1.0, 2025-12-18