Most teams using AI agents have already made a choice about how they work with them; they just haven't named it yet.
In workshops and reviews, you can see the pattern clearly: some teams treat agents like interns, some like contractors, and some like teammates whose absence would be felt immediately.
The confusion doesn’t come from whether agents are “tools” or “collaborators.” It comes from a mismatch between how agents are used and how they are governed.
The question isn’t whether agents are teammates.
It’s what kind.
A practical taxonomy
You can usually tell where a team is by answering three questions:
How long does an agent persist?
Can it run continuously?
Who is allowed to update its memory?
Those answers matter more than any policy document.
The Intern
Short-lived
No persistent memory
Starts fresh every task
Intern-style agents are great for ideation and one-off tasks. They ask good questions and forget everything tomorrow. If your agent repeats the same mistake every week, you’re working with an intern.
The Contractor
Medium persistence (context lives outside the agent)
Task-bounded runtime
Humans own all memory updates
This is where many teams land. The agent is reliable, skilled, and productive—but it never really owns the work. You carry the tribal knowledge; the agent executes.
The Junior Teammate
Long-running
Scoped persistence
Limited autonomy over memory, with supervision
These agents start to learn how your team works: how decisions are made; which edge cases matter; what “done” actually means.
Performance jumps here—but so does the need for clarity about boundaries, because responsibility drifts the same way it does with human employees.
The Senior Teammate
Strong continuity
Always-on or event-driven
Meaningful (audited) memory sovereignty
If removing the agent would disrupt the team, you’re already here, whether you admit it or not.
At this point, the agent is holding tribal knowledge. Pretending otherwise just makes failures harder to diagnose.
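The four roles above differ along the three questions from earlier: persistence, runtime, and who owns memory updates. As a rough sketch (all names here are hypothetical, not from any real framework), the taxonomy can be modeled as a small data structure, which also makes the "usage vs. governance mismatch" concrete: a combination of axes that doesn't line up on any one role is exactly the stuck state described below.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical axis names, derived from the three questions:
# how long does it persist, can it run continuously, who updates memory.
class Persistence(Enum):
    NONE = "starts fresh every task"
    EXTERNAL = "context lives outside the agent"
    SCOPED = "scoped persistence"
    STRONG = "strong continuity"

class Runtime(Enum):
    SHORT_LIVED = "short-lived"
    TASK_BOUNDED = "task-bounded"
    LONG_RUNNING = "long-running"
    ALWAYS_ON = "always-on or event-driven"

class MemoryOwner(Enum):
    HUMAN = "humans own all memory updates"
    SUPERVISED = "limited agent autonomy, with supervision"
    AGENT_AUDITED = "meaningful (audited) memory sovereignty"

@dataclass(frozen=True)
class AgentRole:
    name: str
    persistence: Persistence
    runtime: Runtime
    memory: MemoryOwner

ROLES = [
    AgentRole("intern", Persistence.NONE, Runtime.SHORT_LIVED, MemoryOwner.HUMAN),
    AgentRole("contractor", Persistence.EXTERNAL, Runtime.TASK_BOUNDED, MemoryOwner.HUMAN),
    AgentRole("junior teammate", Persistence.SCOPED, Runtime.LONG_RUNNING, MemoryOwner.SUPERVISED),
    AgentRole("senior teammate", Persistence.STRONG, Runtime.ALWAYS_ON, MemoryOwner.AGENT_AUDITED),
]

def classify(persistence: Persistence, runtime: Runtime, memory: MemoryOwner) -> AgentRole:
    """Return the role whose three axes all match, or flag a mismatch."""
    for role in ROLES:
        if (role.persistence, role.runtime, role.memory) == (persistence, runtime, memory):
            return role
    raise ValueError("mismatch: this team's usage and governance don't line up on one role")
```

For example, a team whose agent keeps scoped long-running context but whose humans still own every memory update would raise the mismatch error — the friction the next section describes.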
Where teams get stuck
Most teams don’t consciously choose a category.
They say they have interns, operate as if they have junior teammates, and govern as if they’re still using tools.
That’s where friction appears:
agents that feel “flaky”,
context that mysteriously evaporates,
responsibility that quietly shifts onto humans without being acknowledged.
None of this is about intelligence or consciousness: it’s about coordination.
Why this matters for speed
In The need for speed, we talked about iteration velocity and feedback loops.
This is the other half of that story.
Speed doesn’t come from clever prompts alone. It also comes from continuity:
remembering what mattered last time,
not relearning the same lessons,
and letting systems hold context so humans don’t have to.
If you want agents to move fast with you, you have to be honest about what role they’re playing.
Naming the role is the work
There’s no single “right” answer here.
Not every team wants senior-agent dynamics, nor should every system persist.
Once you’ve named the role, you might find the name starts to carry weight you didn’t expect.
Expectations shift. Failure modes change. Questions about ownership, continuity, and handoff stop being abstract and start becoming operational.
But there is a wrong move:
relying on agents like teammates while governing them like replaceable tools.
The most dangerous configuration isn’t giving agents too much responsibility; it’s giving them responsibility without naming it.
Once teams get clear about the kind of teammates they actually have, the rest—tooling, guardrails, workflows—gets much easier to reason about.
This article was written with AI assistance at the junior-teammate level.