
OpenClaw Is Powerful. It Is Also Very Much Not a Product You Can Just Install.

Honest take after four iterations and nearly two weeks of work: OpenClaw's power is real, but so is the gap between a one-liner install and running it safely. Here's what decision makers need to know before their team starts.

Eric Fadden, Forged Cortex

OpenClaw has been dominating AI news lately, and the attention is warranted. Peter Steinberger built something genuinely impressive: a locally-running AI agent that plugs into your files, messaging apps, browser, calendar, GitHub, Slack, and even your smart lights if you're into that sort of thing, and hands AI models like Claude or GPT direct, persistent access to all of it. The project is sitting at nearly 250,000 GitHub stars. It installs with a single command.

I've been deep in it for a while now, and I have some thoughts that I think are more useful than another breathless review.

The capability is real, the security risks are real, and "out of the box" is not an installation strategy. Here's why any of that matters to someone making AI decisions for their organization.

What OpenClaw Actually Does

OpenClaw is an open-source personal AI agent that runs locally on your machine, whether that's Mac, Windows, or Linux. It sits between an AI model of your choosing (Claude, GPT, DeepSeek, local models, take your pick) and the rest of your digital life.

In practice, that means direct read/write access to your files, browser control for form-filling and data extraction, and integrations with 50+ services: WhatsApp, Telegram, iMessage, Discord, Slack, Gmail, GitHub, Obsidian, and more. It maintains persistent memory, learns your preferences across conversations, and can run autonomously in the background handling tasks whether you're at your desk or not.

The install command is simple:

    curl -fsSL https://openclaw.ai/install.sh | bash

That's it — you're up and running in minutes. That simplicity is a big part of what makes OpenClaw exciting, and it's also what bites organizations that treat a successful install as the finish line.

The Gap Between "It's Running" and "It's Ready" Is Huge

I'm on my fourth iteration of my OpenClaw assistant. The version I'm running now has nearly two weeks of configuration, testing, tweaking, rewriting, and rebuilding behind it. And that's just this version. The three before it went through the same process, and when I didn't like what I was seeing, I tore them down and started over.

That's not a failure mode; it's what building on this kind of platform requires.

So, what does that work actually look like? Deciding which AI model to connect and with what permissions. Figuring out which integrations to enable and, just as importantly, which ones to leave off. Setting the scope of file system access. Getting familiar with AgentSkills, the plugin system, and actually understanding what each one does before you flip it on. Writing and refining custom instructions, then testing edge cases deliberately, asking "what happens if this goes sideways?" for every integration point. Then observing real behavior and adjusting.
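To make one of those decisions concrete, here's the shape of file-system scoping as a minimal sketch. Everything in it is illustrative — `ALLOWED_ROOTS` and `is_path_allowed` are invented names, not OpenClaw's actual configuration API — but the posture it encodes (deny by default, grant narrow roots explicitly) is the decision you're actually making.

```python
from pathlib import Path

# Hypothetical allowlist: these names are illustrative, not OpenClaw's
# real config schema. The posture is the point: deny by default,
# grant narrow roots explicitly.
ALLOWED_ROOTS = [
    Path.home() / "notes",                 # e.g. an Obsidian vault
    Path.home() / "projects" / "sandbox",  # a scratch area for the agent
]

def is_path_allowed(candidate: str) -> bool:
    """True only if the resolved path sits under an explicitly granted root."""
    resolved = Path(candidate).expanduser().resolve()
    return any(resolved.is_relative_to(root.resolve()) for root in ALLOWED_ROOTS)
```

Note that the check resolves the path first, so `~/notes/../.ssh/id_rsa` is rejected even though it starts under an allowed root. That's exactly the kind of edge case "what happens if this goes sideways?" is meant to surface.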

On the surface, none of it is technically complicated if you know what you're doing. The problem is that most of it isn't visible to someone who doesn't.

Warning

The install is the easy part. What comes after is architecture. Most organizations and their employees aren't treating it that way.

The Security Problem That Isn't Theoretical

You don't have to go hunting for horror stories about AI systems exposing sensitive data or leaving users compromised, and the same is true for OpenClaw. There's a recent, concrete example: the ClawJacked vulnerability. I'll spare you the details, but the end result was full workstation compromise from a browser tab, with no special interaction required.

The patch, version 2026.2.26, was released February 26. But OpenClaw is a self-hosted, self-maintained installation, and nobody pushes updates to you. You have to know the patch exists and go apply it yourself.
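Since nobody pushes updates to you, knowing whether you're patched is your job. A minimal sketch of that check, using the date-based version string from the patch above; the parsing and the `is_vulnerable` helper are my own illustration, not part of OpenClaw's tooling:

```python
PATCHED = "2026.2.26"  # the ClawJacked fix, per the release noted above

def version_tuple(v: str) -> tuple[int, ...]:
    """Parse a dotted date-style version ("2026.2.26") into comparable ints."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed: str, patched: str = PATCHED) -> bool:
    """True if the installed version predates the ClawJacked patch."""
    return version_tuple(installed) < version_tuple(patched)
```

So `is_vulnerable("2026.2.25")` returns True. The point isn't the three lines of code; it's that this check has to exist somewhere in your process, because the project won't run it for you.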

Beyond ClawJacked, there are multiple CVEs in the project's recent history covering remote code execution, command injection, server-side request forgery, authentication bypass, and path traversal. Some are rated high severity.

I'm not bringing this up as an argument against using OpenClaw. Rather, it's an argument for going in with clear eyes about what you're taking on and the risks associated with it. The same depth of integration that makes OpenClaw so powerful — the direct access to your files, messages, and browser — is what makes every misconfiguration matter. There's no shallow end of the pool.

Warning

Think about what it means for an AI system to have read/write access to your files, your Slack, your email, and your browser, running on a network where employees are browsing the open web all day. Then ask yourself whether your IT team knows OpenClaw is installed on any machines in your organization. If you can't answer that, you have a security posture problem that has nothing to do with OpenClaw specifically.
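Answering "is this installed anywhere?" doesn't require anything fancy. Here's a rough fleet-side sketch, under a loud assumption: that an install leaves a per-user dot-directory behind. The `~/.openclaw` name is a guess for illustration — verify what your version actually writes to disk before relying on anything like this.

```python
from pathlib import Path

# ASSUMPTION: an install leaves a marker directory in the user's home.
# The exact name is illustrative; confirm the real on-disk artifacts
# for the version in question.
MARKER_DIRNAME = ".openclaw"

def find_installs(home_root: Path) -> list[Path]:
    """Scan every home directory under home_root for the marker dir."""
    hits = []
    for home in sorted(p for p in home_root.iterdir() if p.is_dir()):
        marker = home / MARKER_DIRNAME
        if marker.is_dir():
            hits.append(marker)
    return hits
```

Pointed at `/home` or `/Users` from an inventory script, anything this returns is an installation your security review hasn't covered.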

Platform vs. Product — Why That Distinction Matters

Most people's experience with AI is through products. They log into ChatGPT, use Claude.ai, subscribe to some SaaS tool with AI features bolted on. Configure a few settings. It works.

OpenClaw is not that kind of product. It's a platform, and the gap between those two categories is enormous in practice. When you buy a product, the vendor has already made thousands of decisions for you. The tradeoffs are baked in. You give up control in exchange for something that mostly works for most people in most configurations.

OpenClaw ships with a baseline, but most of what determines whether your deployment is useful, secure, and well-behaved is yours to design and assemble. That's intentional. This is an open-source project built for people who want control over their AI infrastructure and are willing to do the work to earn it.

It's also worth understanding that as a non-deterministic system, OpenClaw's behavior is never fully predictable. The same configuration doesn't always produce the same outputs. You can get something behaving well in testing and then see something unexpected in production. A configuration change that fixes one behavior can introduce a new problem somewhere else. Iteration isn't a workaround here, it's just how this kind of system works. The organizations getting value out of platforms like this treat configuration as ongoing work, not a one-time setup task.
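Non-determinism changes what "tested" means: one clean run proves very little. A workable pattern is to rerun the same scenario many times and flag anything outside behavior you've explicitly accepted. A toy sketch of that idea, where everything — the `agent` stand-in, the trial count, the allowed set — is illustrative, not OpenClaw's API:

```python
import random

def agent(prompt: str) -> str:
    """Stand-in for a real model call: same input, varying output."""
    return random.choice(["summarize", "summarize", "summarize", "delete_file"])

def regression_check(prompt: str, allowed: set[str], trials: int = 50) -> list[str]:
    """Rerun one scenario many times; return every disallowed behavior seen."""
    return [out for _ in range(trials) if (out := agent(prompt)) not in allowed]

violations = regression_check("tidy my notes", allowed={"summarize"})
# With a real agent, a non-empty list here means the config needs another pass.
```

This is the "iteration isn't a workaround" point in miniature: the check is cheap to run, and you run it again after every configuration change, because a fix in one place can surface a new behavior somewhere else.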

What This Means for Your Organization

For technical teams that understand AI systems and have the bandwidth to build and maintain this kind of infrastructure, OpenClaw is worth serious exploration. The integration depth is real, the flexibility is real, and the local operation is genuinely attractive for anyone thinking about data privacy and control.

What's also real is that employees may already have it. The install is a single terminal command and it's frictionless. Anyone on your team who follows AI news has probably already tried it. If you have AI-curious engineers or power users, it's reasonable to assume OpenClaw is running on some machines in your organization right now. The question isn't whether they installed it. The question is whether those installations have any security hardening behind them. They probably don't. And no one asked IT if they could install it either.

A default installation is not a hardened one. An employee who spun it up on a Tuesday afternoon to see what the fuss was about is not running a secure deployment. Given the ClawJacked vulnerability and the active CVE history, this is worth taking seriously.

For most organizations without deep AI engineering capacity, the right move is to watch this space rather than jump in. OpenClaw is evolving fast. Steinberger has announced he's joining OpenAI and the project will move to an open-source foundation, which could push things toward a more standardized, more accessible experience over time. The trajectory is interesting. But right now, recommending broad adoption to organizations that don't have the architectural expertise to deploy it safely is hard to justify.

"Use at your own risk" is a real warning, not a disclaimer to scroll past.

The Broader Point

OpenClaw illustrates something that runs through almost every AI conversation right now: the gap between what AI can do and what most organizations are equipped to deploy safely is enormous. Open-source doesn't close that gap. A one-liner install doesn't close it. A viral GitHub repo definitely doesn't.

The organizations getting AI right are treating deployment as a discipline. They're asking what the system touches, what the failure modes look like, and whether they have the expertise to configure it responsibly. They're building iteration into the process because that's what the medium demands.

OpenClaw will have its place in AI's history. Whether it eventually becomes something more product-like, or becomes a foundation that others build on, is still an open question. Right now it's a platform built for people who count architecture among their skillsets. If that's where your organization is, it's worth digging into. If it's not, the answer isn't to stay away from AI altogether, it's to find tools that match what your team can actually deploy and maintain well, or to find help from someone who can.


Trying to figure out whether tools like OpenClaw make sense for your team and what it would actually take to get there? Let's talk.
