Should only the author write content for humans?
My words (cleaned)
I’m writing this because there is a growing movement to put “human-written words” back on the internet, and to restore trust that there is a real person behind what you read.

I agree with the instinct for authenticity.
But we also need to consider neurodiversity and accessibility.
For me, the ideas and claims in my blogs are mine, captured in my own words.
I usually record them quickly, often in about ten minutes of speaking.
Then I use AI to tidy and structure those words for speed and clarity.
Because I am dyslexic, writing and making text legible can take me an inordinate amount of time.
Long emails can be cognitively exhausting.
Writing formal policies has been close to torture in the past.
So I want a clear separation between:
- What I wrote and meant (authorial intent, accountable human voice)
- What AI curated (structure, editing, and, if needed, research beneath the line)
Each post will start with my cleaned words.
Then, if helpful, there will be a research section underneath so you do not have to look it up yourself.
I treat my blog as a durable history and a personal source of truth.
I review each article until I am happy with it.
This also raises a harder question about authenticity.
Many CEOs already ask marketing teams to write content for them to post.
That is not so different from AI ghostwriting.
If we genuinely want authenticity, perhaps the human author in the marketing team should be named, rather than hiding behind “the team”.
Some open questions remain:
- What do we want from marketing content?
- Is benign corporate content acceptable, or do we want content that a specific person actually believes?
- What disclosure norms should we adopt?
These are not easy questions, but they are worth asking.
Agentic (AI-created research and content)
Leaders are being asked to pick a side on “human-written” versus “AI-written”, but the more useful question is: what behaviour are we trying to protect?
If the goal is trust, then we need to talk about process, not purity.
One of the clearest arguments I’ve seen is that hiding AI involvement is not neutral. It changes the relationship with the reader.
As Pascal Bornet puts it: "Should Employees Disclose When AI Contributed to Their Work? ... In my opinion, this isn’t just a policy question — it’s a trust question. Because when we hide AI’s involvement, we’re not just concealing a tool — we’re concealing a process."
https://www.linkedin.com/posts/pascalbornet_ai-ethics-leadership-activity-7414938945253294080-dy8O
The “100% Human” Label Is Becoming a Status Signal
There is also a commercial layer here.
Some organisations will use “human-only” as a premium badge, whether or not it improves outcomes for the reader.
Chelsea Burns frames it bluntly: "'100% human' is becoming the 'organic' label of the content economy. A shorthand for purity, care, and trustworthiness that commands a premium by signaling what was not involved in production."
As a leader, it’s worth pressure-testing what that label means inside your organisation:
- Is it a commitment to human accountability?
- Or is it a marketing claim that quietly excludes people who need assistive tools to communicate clearly?
- Does it improve trust, or does it create a new kind of “authenticity theatre”?
Transparency Helps Trust, But It Can Also Trigger Bias
Here’s the thing that surprised me, and it matters for comms strategy.
Even when the text is identical, disclosure can change how people judge it.
Donald Farmer summarises the research like this: "Across 16 preregistered experiments involving 27,491 participants (conducted between March 2023 and June 2024), the researchers found a consistent 'AI disclosure penalty': when people are told that creative writing was produced by or with the help of an AI, they rate it lower on enjoyment, creativity, quality, and overall appeal than the identical text attributed to a human author."
So we have a genuine leadership tension:
- We want transparency.
- But people may punish transparency, even when the work is solid.
That does not mean we should hide AI use.
It means we should treat disclosure as a design problem, not a compliance footnote.
A Practical Disclosure Standard You Can Adopt This Quarter
If you want something workable, aim for clear separation between authorial intent and AI curation.
A simple internal standard for blogs, thought pieces, and leadership posts:
- Intent: What the named author believes, decided, or learned.
- Curation: Editing, structure, summarisation, grammar, readability support.
- Research: Any external references, links, or quotes added “beneath the line”.
- Accountability: A named human signs it off.
This aligns with the plain ethical case for disclosure.
Reckonsys states it directly: "Transparency is critical when deploying AI-generated content. Users should be informed when content is created or assisted by AI. This includes disclosing the use of AI in articles, art, music, or other creative works. Not doing so could mislead audiences about the nature of the content and the level of human involvement."
https://www.linkedin.com/pulse/ethics-ai-generated-content-authorship-originality-reckonsys-div9c
What To Do If You’re Worried About Reputation
Some leaders are not worried about efficiency.
They are worried about voice, credibility, and control.
That concern is fair.
Oliver Malcolm captures the direction of travel: "The real question isn’t whether AI belongs in publishing - it already does. The question is whether the industry is willing to evolve its sense of expertise, authorship, and control. Editing has never been about perfection; it’s about judgement. Refusing to engage won’t save publishing - it might just sideline it."
My pragmatic take is to build governance that protects judgement, not to ban tools.
A leadership checklist you can use:
- Name the accountable author on every piece of leadership content.
- Disclose AI assistance in one plain sentence.
- Keep “My words” separate from “Agentic research” when you publish.
- Treat AI like a junior editor, not a ghostwriter.
- Create red lines:
  - No invented facts.
  - No invented quotes.
  - No "sources" you cannot click.
A Closing Thought For Leaders
If your organisation already accepts ghostwriting, you are already in the authorship business.
AI does not create the ethical problem.
It exposes it.
If we want trust, we should stop pretending the goal is “no tools”.
The goal is:
- Human accountability
- Transparent process
- Respect for accessibility
- Evidence-based decision making about what readers value
Links
- https://www.linkedin.com/pulse/ethics-ai-generated-content-authorship-originality-reckonsys-div9c
- https://www.linkedin.com/pulse/authenticity-premium-why-100-human-new-organic-chelsea-burns-m-s--vkjvc
- https://www.linkedin.com/posts/oliver-malcolm-82a22742_is-our-notoriously-sensitive-ego-not-ai-activity-7420397275937427456-iHzd
- https://www.linkedin.com/posts/pascalbornet_ai-ethics-leadership-activity-7414938945253294080-dy8O
- https://www.linkedin.com/pulse/we-dont-like-ai-written-text-cant-identify-can-science-donald-farmer-aw7hc