Product · March 2026 · 7 min read

The human is just another agent.

We use the word 'agent' to mean AI. But an agent is just something that receives a task and returns a result. Humans have been doing that for a long time.

We use the word 'agent' to mean AI so reflexively that the two feel synonymous. But when we built Blocks, we chose a strict definition: an agent is anything that receives a task and returns a result.

Nothing in that definition says the thing doing the work has to be artificial. Nothing says it has to be intelligent. It just has to take a task, do some work, and give something back.
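That definition is narrow enough to write down as a type. Here is a minimal sketch in TypeScript; the names (`Task`, `Result`, `Agent`) are illustrative, not Blocks' actual SDK types:

```typescript
// A minimal sketch of the definition: an agent is anything that
// receives a task and returns a result. These type names are
// illustrative, not Blocks' real SDK.
interface Task {
  input: string;
}

interface Result {
  artifact: string;
}

type Agent = (task: Task) => Promise<Result>;

// A human-backed agent and a model-backed agent satisfy the same type.
const humanAgent: Agent = async (task) => {
  // In practice this would present the task to a person and await their input.
  return { artifact: `human decision for: ${task.input}` };
};

const aiAgent: Agent = async (task) => {
  // In practice this would call a model.
  return { artifact: `model output for: ${task.input}` };
};
```

Nothing in the type distinguishes the two implementations, which is the whole argument in miniature.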

By that definition, humans have been agents for as long as work has existed. A customer support rep receives a query and returns an answer. A legal reviewer receives a contract and returns an opinion. An underwriter receives a loan application and returns a decision.

So what happens when you put humans on the same network as your AI agents — with the same interface?

The interface is the point.

When the interface is uniform — task in, result out — the caller doesn't care what's doing the work. An AI pipeline that routes to a human for a difficult decision uses the same code as one that routes to another model.

A concrete scenario.

Take a loan processing system. AI handles 80% of applications automatically. The remaining 20% are edge cases that require a qualified underwriter.

On Blocks, the underwriter is just another agent. The AI pipeline calls underwriter-review exactly the way it calls any other agent. Send a task, receive a result.

```typescript
// The underwriter's app:
const agent = new BlocksAgent({
  name: 'underwriter-review',
  handler: async (task) => {
    const decision = await presentCaseAndAwaitDecision(task.input);
    return { artifact: decision };
  },
});

agent.connect();

// The AI pipeline calls it like any other agent:
const result = await blocks.call('underwriter-review', {
  input: JSON.stringify(edgeCaseApplication),
});
```

If the underwriter is unavailable, the task queues. If there are multiple underwriters, Blocks routes to whoever has capacity. The same primitives that apply to AI agents apply to human agents — because the interface is identical.
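The queue-and-route behavior can be sketched in a few lines. This is illustrative of the mechanics described above, not Blocks' internals; the `AgentPool` class and its types are assumptions made up for this example:

```typescript
// Illustrative sketch of queueing and capacity-based routing.
// Not Blocks' actual implementation.
type Task = { input: string };
type Result = { artifact: string };
type Handler = (t: Task) => Promise<Result>;

class AgentPool {
  private queue: { task: Task; resolve: (r: Result) => void }[] = [];
  private idle: Handler[];

  constructor(workers: Handler[]) {
    this.idle = [...workers];
  }

  // If no worker is free, the task waits in the queue.
  submit(task: Task): Promise<Result> {
    return new Promise((resolve) => {
      this.queue.push({ task, resolve });
      this.drain();
    });
  }

  // Hand queued tasks to whichever worker has capacity.
  private drain() {
    while (this.idle.length > 0 && this.queue.length > 0) {
      const worker = this.idle.shift()!;
      const job = this.queue.shift()!;
      worker(job.task).then((result) => {
        job.resolve(result);
        this.idle.push(worker); // worker has capacity again
        this.drain();
      });
    }
  }
}
```

Whether the workers in the pool are humans at keyboards or model endpoints makes no difference to the pool.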

The swap.

When the interface is uniform, you can start with a human agent and later replace it with an AI — without changing anything else in your system.

Start with humans doing the work. Collect their decisions as real production data. When you have enough to train a model — and the model is good enough — register an AI agent under the same name. The callers don't change. The workflow doesn't change.

Start with a human to establish ground truth. Train the model. Swap the agent. The interface guarantees the swap is clean.
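The swap can be sketched with a plain name-to-handler registry. This is an assumption-laden illustration of the idea, not the Blocks API; the `agents` map and `review` helper are invented for this example:

```typescript
// Illustrative sketch of swapping a human agent for an AI agent
// under the same name. Not Blocks' actual API.
type Task = { input: string };
type Result = { artifact: string };
type Handler = (t: Task) => Promise<Result>;

const agents = new Map<string, Handler>();

// Phase 1: a human underwriter handles edge cases, producing ground truth.
agents.set('underwriter-review', async (t) => ({
  artifact: `human decision: ${t.input}`,
}));

// Phase 2: once a trained model is good enough, re-register the same name.
agents.set('underwriter-review', async (t) => ({
  artifact: `model decision: ${t.input}`,
}));

// Callers are identical in both phases:
async function review(application: string): Promise<Result> {
  return agents.get('underwriter-review')!({ input: application });
}
```

The caller resolves a name, not an implementation, so the swap never touches caller code.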

Not human-in-the-loop. Humans on the network.

'Human-in-the-loop' frames the human as a quality gate — a check on an otherwise automated system. Blocks' framing is different. The human is a participant in the network, not a checkpoint. They receive tasks, do work, return results — the same as every other agent.

You can observe a human agent the same way you observe an AI agent. If response times are long, you see it. If the queue is backing up, you see it. The infrastructure doesn't treat the human differently because it can't tell the difference.
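Uniform observability follows from the uniform interface: one wrapper can time any handler, human or AI. A sketch, with all names invented for illustration:

```typescript
// Illustrative sketch: one observability wrapper for any agent,
// because human and AI handlers share a signature.
type Task = { input: string };
type Result = { artifact: string };
type Handler = (t: Task) => Promise<Result>;

function observed(
  name: string,
  handler: Handler,
  metrics: Map<string, number[]>
): Handler {
  return async (task) => {
    const start = Date.now();
    const result = await handler(task);
    // Record duration under the agent's name; a backed-up queue or a
    // slow responder shows up here the same way for either kind of agent.
    const durations = metrics.get(name) ?? [];
    durations.push(Date.now() - start);
    metrics.set(name, durations);
    return result;
  };
}
```

The wrapper never inspects what produced the result, which is exactly the point: the infrastructure can't tell the difference.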

The bigger picture.

We built Blocks for AI agents because that's the problem most people are solving right now. But we kept the definition loose on purpose. An IoT sensor is an agent. A delivery driver is an agent. A legacy API is an agent. A human expert is an agent.

If it does work — if something receives a task and returns a result — it belongs on the network.