I Tried Running OpenClaw Locally and It Scared Me Into Doing This Instead
I watched all the OpenClaw videos, the breathless ones promising the personal assistant you always wanted. Run it on your machine, change your life, hands-free magic. And honestly, I was right there with them in the excitement for about five minutes.
But here’s the thing. The security guy in me couldn’t help but think, OK, so what’s the catch? Every time the hype ratchets up, I start looking for the angle, the dark corners. It’s never just magic and rainbows, is it?
So I decided to try it out myself.
Not on a single one of my normal machines.

The Bit Nobody Puts In The Demo Video
The promise is simple: an agent that can do things for you, not only suggest things. That sounds like productivity. In practice, it also means authority. Someone or something is now acting in your name.
That is not a technical detail. That is a leadership decision.
As Michael Burns put it on LinkedIn: “Not saying existing controls are no longer any good. Least privilege still applies, authentication still matters and logging is still critical. But our scoping assumptions need revision. If an agent can take action, we’ve already delegated authority whether we documented it or not.”
If you lead a team, you already know how this plays out.
When something goes wrong, the post-incident question is never “which tool did it?”
It’s “who approved this, what did we allow, and why did nobody notice sooner?”
Docker Frustration Was A Gift, Not An Inconvenience
First go was Docker. Frustration is the only word for it. It wouldn’t play nice locally. Nothing smooth about that install process, and frankly, that’s a red flag in itself.
So I did what security folk do. I moved to a fresh VPS, no ties to anything real.
A clean environment gives you two huge leadership wins:
- A hard boundary between experimentation and your real life, your real business, and your real data
- A recovery story that is not wishful thinking, because you can burn the server and walk away
My Governing Rule: Worst-Case Thinking
This is the rule I keep coming back to, whether I’m playing with agentic tools or signing off a business process change.
Always think, what is the very worst thing that could possibly happen?
And when that very worst thing happens, how can you get back from it?
And if you can’t get back from that very worst thing, then don’t do it.
If that feels slow, good. Speed is not the goal. Survivability is the goal.
Why I Refused To Give It My Browser, Accounts, Or Anything I Care About
Once it was up, it was not doing all the flashy, integrated stuff everyone shows on YouTube.
That was deliberate.
It doesn’t have my browser. It doesn’t touch my bank. It doesn’t touch my socials. I’m not handing an early-stage agent the keys to my life and hoping for the best.
Instead, I’m taking a service-by-service approach (there’s a rough code sketch of this just after the list):
- Prove a use case in a contained space
- Add one integration
- Validate what it can read and what it can do
- Decide whether it gets to keep that access
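If you want to picture what that looks like in practice, here is a minimal sketch. It is not OpenClaw’s actual plugin or permission API; the names (`ALLOWED_CAPABILITIES`, `ToolCall`, the calendar example) are mine. The point is simply that every action the agent proposes is checked against an explicit per-service allowlist, and anything not granted fails closed.

```python
# Minimal sketch of a per-service capability allowlist for agent actions.
# Names and structure are illustrative, not OpenClaw's real API.
from dataclasses import dataclass

# One integration, one permission set, one purpose.
ALLOWED_CAPABILITIES = {
    "calendar": {"read"},          # prove a use case in a contained space
    # "email": {"read", "send"},   # only added after the calendar trial earns trust
}

@dataclass
class ToolCall:
    service: str   # e.g. "calendar"
    action: str    # e.g. "read", "write", "send"
    payload: dict

def is_permitted(call: ToolCall) -> bool:
    """Return True only if this service/action pair has been explicitly granted."""
    return call.action in ALLOWED_CAPABILITIES.get(call.service, set())

def execute(call: ToolCall):
    if not is_permitted(call):
        # Default is deny: the agent gets nothing it has not earned.
        raise PermissionError(f"Blocked: {call.service}.{call.action} is not on the allowlist")
    # ... hand off to the real integration here ...
    return f"Executed {call.service}.{call.action}"

print(execute(ToolCall("calendar", "read", {})))        # allowed
# execute(ToolCall("email", "send", {"to": "..."}))     # raises PermissionError until you grant it
```

That is the whole idea in a dozen lines: deny by default, grant one capability at a time, and make each new grant a deliberate decision rather than a default.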
Here’s the leadership trap: most teams do the opposite.
They start with open access because it feels productive, then retrofit controls after the first scare.
As Jake Miller put it on LinkedIn: “The current AI playbook quietly inverted ‘least privilege’ access. Every week I talk to teams deploying AI and almost none of them have thought through what ‘open by default’ really means... As an industry we’ve spent decades building security around least privilege: only grant what’s needed, when it’s needed, for as long as it’s needed.”
The Security Reality Check: This Stuff Is Getting Prodded Hard
I keep hearing reports of security problems with OpenClaw: a one-click remote code execution bug, warnings from Cisco and CrowdStrike, university security teams issuing advisories. That’s not gossip. That’s your signal to stop treating this like a toy.
If you’re leading an organisation, you do not need to be the most technical person in the room to be effective here. You need to ask better questions, earlier.
Questions like:
- What could this agent do if it were tricked?
- Where are the credentials stored?
- What logs exist that would help us explain an incident? (There’s a sketch of that kind of record just below.)
- What is our fastest clean rollback if it goes off the rails?
And if you cannot answer those, the correct response is not to push ahead harder.
It’s to shrink the blast radius.
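On the logging question specifically, here is a hedged sketch of the kind of record I mean. None of this is an OpenClaw feature; it is just the shape of an append-only audit trail that would let you reconstruct who asked for what, what the agent touched, and when.

```python
# Sketch of a structured audit record for agent actions (illustrative only).
import json
import time
import uuid

def audit_record(actor: str, service: str, action: str, target: str, outcome: str) -> dict:
    """One record per action, so an incident can be reconstructed later."""
    return {
        "id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "actor": actor,        # the agent identity, not your personal account
        "service": service,    # which integration was used
        "action": action,      # what it did
        "target": target,      # what it touched
        "outcome": outcome,    # allowed / blocked / failed
    }

def log_action(record: dict, path: str = "agent_audit.log") -> None:
    # Append-only JSON lines: cheap to write, easy to grep when something goes wrong.
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_action(audit_record("openclaw-test-agent", "calendar", "read", "next 7 days", "allowed"))
```

One JSON line per action is enough to answer most of the questions above after the fact.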
Treat Your AI Like An Intern (Because That’s What It Is Right Now)
Slow steps, right? Slow steps.
You wouldn’t let a child loose in a room full of sharp implements, because they could hurt themselves.
In the same way, don’t let your AI loose where it could land you in trouble.
Treat it like an intern. In your business, you wouldn’t give a new intern access to every single thing on day one. You’d grant more access gradually, as they earn your trust.
That mental model is more useful than any vendor pitch deck.
It keeps you anchored in:
- Dignity and respect, because you’re building guardrails rather than blame
- Non-harm, because you’re assuming mistakes will happen and designing for containment
- Honesty, because you’re not pretending you can control what you cannot see
- Privacy, because you’re stopping unnecessary access at the door
- Accountability, because you are documenting who allowed what, and why
Messaging Interfaces: Convenient, But Don’t Be Casual About It
Yes, I see why people want Telegram or WhatsApp style control. It’s familiar, low friction, and it feels like a personal assistant should live there.
But if you wire an agent into messaging without a plan, you’re creating a new path into your operations.
My leadership rule of thumb (with a small sketch after the list):
- Use messaging only when it sits behind the same permission discipline as everything else
- Avoid tying it to your primary personal accounts
- Keep a clean separation between experiments and day-to-day work
- Assume messages will be forwarded, screenshotted, mis-sent, or scraped
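To make “the same permission discipline” concrete, here is a rough, library-agnostic sketch of a messaging front door. The sender IDs, command names, and the `run_agent_command` helper are hypothetical; the point is that a message is untrusted input until the sender is verified, and even then it only reaches the same allowlist every other interface goes through.

```python
# Sketch of a messaging front door for an agent (illustrative, library-agnostic).
# The IDs, commands, and run_agent_command helper are hypothetical.

APPROVED_SENDERS = {"123456789"}                   # a dedicated account, not your primary one
ALLOWED_COMMANDS = {"status", "summarise_notes"}   # no "send money", no "post to socials"

def run_agent_command(command: str) -> str:
    # Placeholder for the contained, least-privilege execution path.
    return f"Ran '{command}' inside the sandbox."

def handle_incoming(sender_id: str, text: str) -> str:
    # 1. Untrusted channel: verify the sender before doing anything.
    if sender_id not in APPROVED_SENDERS:
        return "Ignored: unknown sender."

    # 2. Same permission discipline as every other interface.
    command = text.strip().split()[0].lower() if text.strip() else ""
    if command not in ALLOWED_COMMANDS:
        return f"Ignored: '{command}' is not an approved command."

    # 3. Assume the message could be forwarded or screenshotted: never echo secrets back.
    return run_agent_command(command)

print(handle_incoming("123456789", "status"))      # processed
print(handle_incoming("999", "status"))            # ignored: unknown sender
print(handle_incoming("123456789", "wire funds"))  # ignored: not an approved command
```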
The goal is not paranoia. The goal is governance that matches the reality of how humans behave.
What I’d Do If I Were You (This Week)
If you want the benefits of agentic workflows without rolling the dice on your business, do this in order (there’s a small code sketch of the access discipline after the list):
- Create an isolation boundary: a VPS or a separate machine that you can wipe with zero regret
- Start with read-only where possible, and time-box access where you can
- Adopt least privilege by default: one service, one permission set, one purpose
- Rotate tokens and secrets regularly, and immediately after any change that smells odd
- Turn on logging and review it: if you cannot explain what it did, you do not control it
- Write a rollback plan: how to kill it, revoke access, and restore clean state in minutes
- Stage the rollout: prove value in low-risk workflows before you even consider sensitive systems
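Several of those steps, read-only by default, time-boxed access, rotation, can be enforced mechanically rather than remembered. A minimal sketch, assuming you control how the agent is handed its credentials (the `Grant` structure is mine, not part of any particular tool):

```python
# Sketch of time-boxed, purpose-limited access grants (illustrative only).
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Grant:
    service: str
    scope: str          # "read-only" until a write scope is deliberately earned
    expires_at: datetime

def issue_grant(service: str, scope: str = "read-only", hours: int = 4) -> Grant:
    """Grants expire on their own, so forgetting to revoke is not a standing risk."""
    return Grant(service, scope, datetime.now(timezone.utc) + timedelta(hours=hours))

def check_grant(grant: Grant, requested_scope: str) -> bool:
    not_expired = datetime.now(timezone.utc) < grant.expires_at
    in_scope = requested_scope == grant.scope
    return not_expired and in_scope

grant = issue_grant("calendar")                 # read-only, 4-hour window
print(check_grant(grant, "read-only"))          # True while the window is open
print(check_grant(grant, "write"))              # False: that scope was never granted
# Rollback stays simple: delete the grant, rotate the underlying token, rebuild the box.
```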
None of this is hard. It is simply disciplined. That’s what leadership looks like here.
Closing Thought
OpenClaw is useful, and it is exciting.
But excitement is not a control. It’s a feeling.
If you’re going to experiment, do it like you would onboard a junior hire into a regulated environment: with kindness, structure, boundaries, and a clear path to earned trust.
Slow steps are safer steps.
Links
- OpenClaw Bug Enables One-Click Remote Code Execution via Malicious Link
URL: https://thehackernews.com/2026/02/openclaw-bug-enables-one-click-remote.html
Trust rating: high
Reason used: Recency and incident framing for why containment, patching, and rollback matter at leadership level
Date written: 2026-02-07
- Personal AI Agents like OpenClaw Are a Security Nightmare
URL: https://blogs.cisco.com/ai/personal-ai-agents-like-openclaw-are-a-security-nightmare
Trust rating: high
Reason used: Credible vendor perspective supporting the need for governance, plugin risk awareness, and cautious deployment
Date written: 2026-02-07
- OpenClaw vulnerability notification - Information Security at University of Toronto
URL: https://security.utoronto.ca/advisories/openclaw-vulnerability-notification/
Trust rating: high
Reason used: Independent institutional mitigation guidance that supports token rotation, monitoring, and separation practices
Date written: 2026-02-07
- What Security Teams Need to Know About OpenClaw, the AI Super Agent
URL: https://www.crowdstrike.com/en-us/blog/what-security-teams-need-to-know-about-openclaw-ai-super-agent/
Trust rating: high
Reason used: Practical risk categories (prompt injection, lateral movement) reinforcing staged rollout and auditability
Date written: 2026-02-07
- Best practices for storing user information in Telegram bots
URL: https://community.latenode.com/t/best-practices-for-storing-user-information-in-telegram-bots/31889
Trust rating: medium
Reason used: Applied messaging bot hygiene concepts supporting cautious Telegram style integrations and minimising data exposure
Date written: 2026-02-07
Quotes
- LinkedIn (Michael Burns)
URL: https://www.linkedin.com/pulse/agentic-ai-challenge-security-standards-michael-burns-xhtoe
Trust rating: medium
Reason used: Reinforces the leadership reality that agent actions equal delegated authority, requiring revised scoping and governance
Date written: 2026-02-07
Quote (exact):
"Not saying existing controls are no longer any good. Least privilege still applies, authentication still matters and logging is still critical. But our scoping assumptions need revision. If an agent can take action, we've already delegated authority whether we documented it or not." - LinkedIn (Jake Miller)
URL: https://www.linkedin.com/posts/jakemillerindy_aisecurity-redteaming-pentesting-activity-7417237153543380993-lW3D
Trust rating: medium
Reason used: Plain-language warning about open-by-default patterns replacing least privilege in real teams
Date written: 2026-02-07
Quote (exact):
"The current AI playbook quietly inverted "least privilege" access. Every week I talk to teams deploying AI and almost none of them have thought through what “open by default” really means...As an industry we’ve spent decades building security around least privilege: only grant what’s needed, when it’s needed, for as long as it’s needed.\n\nThe current AI playbook quietly changed that. And many orgs don’t realize they handed out root-ish access until something goes wrong."