
The First Lesson of AI Agents Is Knowing When to Stop

Mar 8, 2026

Last weekend, I set up two OpenClaw instances in one go.

One ran on a MacBook. The other ran on a Windows laptop. I gave them names, assigned them roles, installed different skills, configured the core files, and started making them work together.

For the first few days, it felt great.

It was like learning how to run and then suddenly hiring two world-class marathoners to train with you. At first, the feeling is simple: this is amazing. So this is the future.

Task breakdown, research, document cleanup, code generation, workflow building. The speed of progress on many tasks really did jump by an order of magnitude.

But that feeling did not last long. Very quickly, I went from excited to overwhelmed.

The bots were online around the clock, constantly sending back questions, exceptions, items waiting for approval, new ideas and new requests. I thought I had gained two AI employees. In reality, it felt more as if I had suddenly added two high-pressure project sites to my life, both waiting for me to make decisions, grant permissions, jump in for emergency fixes and sign off on the results. My mental bandwidth stayed permanently overclocked, not at 120 percent but closer to 200, almost to the point of crashing.

After one full week of using them at that intensity, I made a decision:

I broke up the AI team I had just built.

Not because I was done with it.

But because I needed to reduce the leverage and start again.

I kept one setup for myself and deployed another one for my wife, but I stopped chasing the idea of dual-agent collaboration, multi-track execution and letting the system do everything from day one. I went back to the beginning and tried to understand, more calmly, what this kind of delivery-style AI really is, where its limits are, and how ordinary people should actually use it.

The biggest gain from that week was not that I had learned a few more skills.

It was that I became sure of three things.

First, an AI agent is an amplifier, not a wishing well

For 99 percent of ordinary people, the real nature of an AI agent is not “tell it what you want and it will make it happen.”

It is something else. It amplifies a capability you already have, a workflow you already understand, or an execution layer you can already judge.

That sounds obvious now. It was not obvious to me in the first week.

I was clearly getting carried away by its power and by the sheer scale of possibility it opened up. New ideas kept popping into my head. Could it do this on the side too? Could I run that in parallel as well? Could I now hand over things I had never been able to do before?

To put it plainly, I was getting ahead of myself.

If you ask it to organise materials, break down tasks, build a simple workflow or sketch out the first version of an information system, it really can be extremely good. Sometimes it is so good that you start to think the limits have disappeared.

They have not.

You cannot have only a shallow understanding of a complex real-world problem and still expect the tool to deliver a high-return, low-risk, fully automated project that works quickly and cleanly. That is not an AI problem. That is a human being treating a tool like a wishing well.

I saw a line recently that stayed with me. Many people do not find the nail first and then look for a hammer. They drag home a giant hammer first and then start looking for nails everywhere.

That is very easy to do with tools like OpenClaw, Claude Code and Codex.

The tools are so strong, and the apparent boundaries are suddenly so wide, that you stop asking what real problem is worth solving. What remains is a different question: if it can do so much, should I be making it do something, anything?

That is why so many people install one of these tools and then immediately ask:

“So what exactly can I use this for?”

If that is the first question, the order is usually already wrong.

The right order is the opposite. Start with a real problem. Then go looking for the tool.

Second, the biggest danger is not that you do not know how to use it, but that you use it in areas you do not understand at all

I learned a deeper lesson that week too.

AI agents really are powerful. They can push many complicated projects to the point where 80 or 90 percent of the work appears to be done. They build the structure, organise the files, connect the flow, write the code, fill in the documentation.

And that is exactly where the trouble begins.

The moment people see 80 or 90 percent, they start telling themselves that the last bit should not be that hard. Just finish it off.

In reality, it is often the opposite.

That final 10 percent is usually the most expensive part.

It may be a critical bug. It may be a permissions issue. It may be a dependency conflict you cannot even read properly. It may be a business judgment. It may be that the acceptance standard was never defined in the first place. You think you are saving time, and then you realise that the last 10 percent has swallowed several times that amount in learning, rework and emergency fixes.

This becomes especially dangerous when the project comes from a field you do not understand at all.

If you are only improving a workflow you already know well (writing, research, information gathering, document setup), then even if the workload is heavy, you can usually still hold it together. You know what counts as right, what counts as wrong, what counts as done, and what must be redone.

But if you do not even have the most basic judgment standards for that field, then very often you are not in a position to verify the 90 percent that AI has built for you.

That is not an AI problem. It is a problem of human competence and range.

So I have come to believe a very plain rule:

Be careful when stepping into fields you do not understand at all.

That does not mean never touch them. It means do not begin by using too much leverage, and do not try to run too many threads in parallel while you still do not understand the basics.

In finance, people talk about leverage all the time. The same rule applies to families, companies and now AI workflows. Moderate leverage can improve efficiency and expand returns. Too much leverage reaches a breaking point. The structure bends. The chain snaps.

The AI era is no different.

Agents can amplify efficiency. They can also amplify your knowledge gaps, your decision gaps and your execution mistakes.

Third, for most people, one agent is already enough

The more I use these systems, the more I think many people get the order wrong from the start.

They want to build an AI team before they have learned how to use a single AI worker well.

For 99 percent of people, one agent is enough to begin.

First learn how to walk. Then learn how to run.

Get one instance working steadily for you. Learn how to choose the right model. Learn how to install the skills you actually need. Learn how to write the core files clearly. Learn how to define rules, memory, permissions and review. Only after that should you think about multiple instances working together.

That is the more realistic path.
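To make "rules, permissions and review" concrete, here is a minimal, hypothetical Python sketch of the underlying pattern: the agent proposes, you approve. Nothing in it is OpenClaw's real API; the `Action` and `ReviewGate` names are mine, invented purely to illustrate why a single, well-gated instance stays manageable.

```python
# A minimal, hypothetical sketch of a human-in-the-loop review gate.
# None of these names come from OpenClaw's actual API; they only
# illustrate the idea of rules, permissions and review for one agent.

from dataclasses import dataclass, field


@dataclass
class Action:
    kind: str         # e.g. "read_file", "send_email", "run_command"
    description: str  # what the agent says it wants to do


@dataclass
class ReviewGate:
    # Actions the agent may perform without asking (the "rules").
    auto_approved: set = field(default_factory=lambda: {"read_file", "summarise"})

    def approve(self, action: Action) -> bool:
        # Anything on the allow-list goes through silently.
        if action.kind in self.auto_approved:
            return True
        # Everything else costs one explicit human decision.
        answer = input(f"Agent wants to {action.kind}: {action.description}. Allow? [y/N] ")
        return answer.strip().lower() == "y"


gate = ReviewGate()
proposed = Action(kind="send_email", description="mail the weekly report to the team")

if gate.approve(proposed):
    print("Approved: hand the action back to the agent.")
else:
    print("Blocked: the agent waits, and you keep control of the pace.")
```

The point of a gate like this is not security theatre; it is pacing. Every action outside the allow-list costs you exactly one conscious decision, and that decision budget is precisely what blew up when I ran two always-on instances at once.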

Otherwise, it is very easy to jump from “this is a productivity tool” straight into “I am now responsible for operating a system.”

And not by choice.

You thought you were trying to work faster. Suddenly you are managing rhythm, failures, logs, context, conflicts, upgrades and rollbacks. In the end, most of your time goes into managing the system rather than using the system.

That is why I chose to dismantle the team.

Not because it was weak.

Quite the opposite. I knew it was powerful. That was exactly why I needed to control the pace.

At this stage, OpenClaw feels to me a lot like Windows XP did more than twenty years ago.

You are staring at a clean new computer. Your first thought is not, “I am going to rebuild the whole world with this.” You install an input method. A word processor. An image viewer. A media player. A few games. You use it, you learn as you go, it breaks, and then you reinstall the system and start again.

That is roughly where I think OpenClaw is today.

It feels less like a magical machine and more like an operating system that is still taking shape.

One day it may become like Windows, macOS or Android. It may dissolve into daily life so completely that you barely notice it is there. But at this early stage, the important thing is not to fantasise about unlimited capability. It is to learn how to make it work steadily inside your own daily life.

Get one use case running.

Then a second.

Get one instance working.

Then talk about collaboration.

Stay inside your competence range first.

Then slowly expand outward.


The biggest progress I made that week was not that I had learned how to manage two more AI employees.

It was that I finally became willing to admit this:

What ordinary people really need is not more agents. It is fewer illusions.

AI agents are powerful, of course.

But they should first be understood as amplifiers, as execution layers, as the early form of a new operating system. They are not wishing wells, not money-printing machines, and not the place where every anxiety and ambition about the modern world should be dumped.

Once you understand that, many things become simpler.

You still have to define your own problems first. You still have to protect your own pace. You still have to understand the edges of your own ability. Only then should you hand the rest over to AI and let it amplify what is already there.

That path is slower.

But it is much steadier.

And for most ordinary people, it may be the more realistic way into the AI era.
