Should AI Agents Learn to Work Together Like Humans?

Exploring the differences between AI agent interactions and human collaboration, and whether agents should adopt human-style teamwork patterns.

Anna, Founder & CEO
Illustration: expressive, diverse human characters on one side contrasted with a rigid grid of uniform AI agents on the other.

I’ve been thinking a lot lately about how AI agents interact, and how different that is from how humans collaborate.

In human teams, trust is essential. It’s how we delegate, adapt, recover from mistakes, and move fast without checking each other’s work every five minutes. But when you look at how agentic workflows are designed today, it’s the opposite. Agents don’t trust each other. They assume every step needs to be double-checked, re-summarized, or validated from scratch.

At first, I wondered if my urge to “teach agents to work like a team” was just a bias from working so long with actual people. But the more I explored it, the more I realized there’s something useful in that tension.

Why Agents Don’t Trust Each Other

Right now, most AI agents are built on large language models. These models are powerful, but not perfect. They generate responses based on probabilities, not certainty. So an agent summarizing a document might leave out something important. Another agent interpreting that summary might misunderstand it. And since there’s no persistent memory or shared context between them, the safest bet is to assume the last agent could be wrong.

This is why agent systems often feel repetitive or overly cautious. It’s not inefficiency. It’s error management.
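To make that concrete, here's a hypothetical sketch of the distrustful pattern: every handoff gets re-verified from scratch because nothing is shared between steps. The function names (summarize, verify_summary, interpret) are placeholders for model calls, not any particular framework's API.

```python
def summarize(document: str) -> str:
    # Stand-in for a probabilistic model call; it may drop something important.
    return document[:100]


def verify_summary(document: str, summary: str) -> bool:
    # The next agent has no shared context, so it re-checks the summary against the source.
    return bool(summary) and summary in document


def interpret(summary: str) -> str:
    # A second model call that only proceeds once the summary has been re-validated.
    return f"interpretation of: {summary}"


def cautious_pipeline(document: str) -> str:
    summary = summarize(document)
    if not verify_summary(document, summary):
        summary = summarize(document)  # redo the work rather than trust the first pass
    return interpret(summary)
```

Every step pays a tax: re-reading the source, re-checking the previous output, sometimes redoing the work entirely.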

What Human Teams Do Differently

In contrast, humans build shared context over time. We know who’s good at what. We tolerate a certain amount of fuzziness or ambiguity because we trust each other to course-correct if something goes sideways. There are social mechanisms for repair: clarifying, renegotiating, apologizing. We don’t need to start from scratch every time.

That trust saves energy. It’s what lets us collaborate efficiently, especially when things are complex or fast-moving.

Is Human-Style Collaboration Worth Bringing into Agent Design?

I think the answer is yes, with caveats.

There’s value in giving agents:

  • Persistent identity, so they can build credibility over time
  • Role clarity, so others know when to rely on their output
  • Shared memory, so context doesn’t need to be recreated constantly
  • Some form of repair mechanism, even if it’s just retrying with different context

At the same time, there are things we probably shouldn’t import directly:

  • Social assumptions without structure
  • Emotional signals that don’t map to meaningful outcomes
  • Trust without built-in validation

So maybe the best path is one inspired by human teams, without trying to replicate them.
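Here's a minimal sketch of what those ideas might look like in code, under my own assumptions rather than any existing library: the Agent and SharedMemory classes and the validated_handoff function are illustrative names I made up, and the "model call" is just a stub.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional
import uuid


@dataclass
class SharedMemory:
    """A shared context store, so results don't have to be recreated from scratch."""
    facts: dict = field(default_factory=dict)

    def record(self, key: str, value: str) -> None:
        self.facts[key] = value


@dataclass
class Agent:
    """An agent with a persistent identity and a declared role."""
    role: str                                                         # role clarity
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))  # persistent identity
    track_record: list = field(default_factory=list)                  # credibility built over time

    def produce(self, task: str, memory: SharedMemory) -> str:
        # Stand-in for a real model call; a real agent would also read shared memory here.
        return f"[{self.role}] result for: {task}"

    def credibility(self) -> float:
        # Fraction of past outputs that passed validation.
        return sum(self.track_record) / len(self.track_record) if self.track_record else 0.0


def validated_handoff(producer: Agent, task: str, memory: SharedMemory,
                      validate: Callable[[str], bool], max_retries: int = 2) -> Optional[str]:
    """Trust with built-in validation, plus a simple repair loop (retry with more context)."""
    context = task
    for attempt in range(max_retries + 1):
        output = producer.produce(context, memory)
        ok = validate(output)
        producer.track_record.append(ok)                      # credibility accrues per validated output
        if ok:
            memory.record(f"{producer.role}:{task}", output)  # persist the result to shared memory
            return output
        # Repair: retry with extra context instead of starting the whole pipeline over.
        context = f"{task} (retry {attempt + 1}: previous output failed validation)"
    return None


if __name__ == "__main__":
    memory = SharedMemory()
    summarizer = Agent(role="summarizer")
    result = validated_handoff(summarizer, "summarize the quarterly report", memory,
                               validate=lambda out: bool(out))
    print(result, summarizer.credibility())
```

The point of the sketch is the shape, not the details: identity and role travel with the agent, results land in shared memory instead of being rebuilt, validation is explicit rather than assumed, and repair is a retry with more context rather than a restart.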

Why This Feels Important

I spend a lot of time thinking about inclusion, systems, and the messy realities of being human. As we build AI systems that increasingly interact with people, or make decisions that impact them, it matters how those systems collaborate under the hood.

Collaboration models shape what’s possible. And building agent teams that work with clarity, context, and care could make the whole system more responsive to human needs, including those of people who are often excluded or misunderstood by tech.

Still noodling on all of this. If you’ve been thinking about agent collaboration or are experimenting with new patterns, I’d love to hear how you’re approaching it.
