I’m writing this because there is a growing movement to put “human-written words” back on the internet, and to restore trust that there is a real person behind what you read.
I agree with the instinct for authenticity.
But we also need to consider neurodiversity and accessibility.
For me, the ideas and claims in my blogs are mine, captured in my own words.
I usually record them quickly, often in about ten minutes of speaking.
Then I use AI to tidy and structure those words for speed and clarity.
As a dyslexic person, I can spend an inordinate amount of time writing and making text legible.
Long emails can be cognitively exhausting.
Writing formal policies has been close to torture in the past.
So I want a clear separation between my own words and AI-assisted material: each post will start with my cleaned words, and then, if helpful, a research section will sit underneath so you do not have to look it up yourself.
I treat my blog as a durable history and a personal source of truth.
I review each article until I am happy with it.
This also raises a harder question about authenticity.
Many CEOs already ask marketing teams to write content for them to post.
That is not so different from AI ghostwriting.
If we genuinely want authenticity, perhaps the human author in the marketing team should be named, rather than hiding behind “the team”.
Some open questions remain.
These are not easy questions, but they are worth asking.
Leaders are being asked to pick a side on “human-written” versus “AI-written”, but the more useful question is: what behaviour are we trying to protect?
If the goal is trust, then we need to talk about process, not purity.
One of the clearest arguments I’ve seen is that hiding AI involvement is not neutral. It changes the relationship with the reader.
As Pascal Bornet puts it: "Should Employees Disclose When AI Contributed to Their Work? ... In my opinion, this isn’t just a policy question — it’s a trust question. Because when we hide AI’s involvement, we’re not just concealing a tool — we’re concealing a process."
https://www.linkedin.com/posts/pascalbornet_ai-ethics-leadership-activity-7414938945253294080-dy8O
There is also a commercial layer here.
Some organisations will use “human-only” as a premium badge, whether or not it improves outcomes for the reader.
Chelsea Burns frames it bluntly: "'100% human' is becoming the 'organic' label of the content economy. A shorthand for purity, care, and trustworthiness that commands a premium by signaling what was not involved in production."
As a leader, it’s worth pressure-testing what that label actually means inside your organisation.
Here’s the thing that surprised me, and it matters for comms strategy.
Even when the text is identical, disclosure can change how people judge it.
Donald Farmer summarises research like this: "Across 16 preregistered experiments involving 27,491 participants (conducted between March 2023 and June 2024), the researchers found a consistent 'AI disclosure penalty': when people are told that creative writing was produced by or with the help of an AI, they rate it lower on enjoyment, creativity, quality, and overall appeal than the identical text attributed to a human author."
So we have a genuine leadership tension: honesty about process builds trust, yet disclosing AI involvement can invite exactly that penalty.
That does not mean we should hide AI use.
It means we should treat disclosure as a design problem, not a compliance footnote.
If you want something workable, aim for a clear separation between authorial intent and AI curation, and make that your internal standard for blogs, thought pieces, and leadership posts.
This aligns with the plain ethical case for disclosure.
Reckonsys states it directly: "Transparency is critical when deploying AI-generated content. Users should be informed when content is created or assisted by AI. This includes disclosing the use of AI in articles, art, music, or other creative works. Not doing so could mislead audiences about the nature of the content and the level of human involvement."
https://www.linkedin.com/pulse/ethics-ai-generated-content-authorship-originality-reckonsys-div9c
Some leaders are not worried about efficiency.
They are worried about voice, credibility, and control.
That concern is fair.
Oliver Malcolm captures the direction of travel: "The real question isn’t whether AI belongs in publishing - it already does. The question is whether the industry is willing to evolve its sense of expertise, authorship, and control. Editing has never been about perfection; it’s about judgement. Refusing to engage won’t save publishing - it might just sideline it."
My pragmatic take is to build governance that protects judgement, not to ban tools.
As a leadership checklist: whose ideas are these, who reviewed the output, and is the process disclosed?
If your organisation already accepts ghostwriting, you are already in the authorship business.
AI does not create the ethical problem.
It exposes it.
If we want trust, we should stop pretending the goal is “no tools”.
The goal is an honest process: real intent, genuine review, and clear disclosure.