
March 29, 2026 · Product

Building AI agents people actually trust

The hard part of agent design is not intelligence. It is making behavior legible, bounded, and reliable enough that a team will keep using it.

The interesting part of AI agents is not the demo. It is the second week after launch.

That is when the novelty fades and teams start asking sharper questions:

  • Can I tell what this thing is doing?
  • Will it fail safely?
  • Does it get better with use, or just stranger?
  • When should a human step in?

Those are not model questions. They are product questions.

Reliability is a design problem

Most agent failures look dramatic on the surface, but they usually come from ordinary product mistakes underneath:

  • the scope was too broad
  • the handoff was unclear
  • the success condition was vague
  • the state was hidden

If users cannot see the boundaries, they will not trust the behavior.
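One way to avoid those four failure modes is to make the boundaries part of the task itself rather than something implied by a prompt. A minimal sketch, with hypothetical names, of a task contract that forces scope, success condition, handoff, and visible state to be stated up front:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the four failure sources above, made explicit
# in the contract an agent receives instead of left implicit.
@dataclass
class TaskSpec:
    goal: str                 # narrow scope, stated up front
    success_condition: str    # how the agent (and a human) know it is done
    handoff_to_human: str     # who picks the task up if the agent stops
    visible_state: dict = field(default_factory=dict)  # state surfaced to users

    def is_bounded(self) -> bool:
        # An empty goal or success condition is exactly the vague-scope
        # failure described above.
        return bool(self.goal.strip()) and bool(self.success_condition.strip())

spec = TaskSpec(
    goal="Draft a reply to ticket #123",
    success_condition="Reply saved as draft; no email sent",
    handoff_to_human="Support lead reviews before sending",
)
print(spec.is_bounded())  # → True
```

The point is not the data structure; it is that anything the user cannot read back, the user cannot trust.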

Good agents make themselves easy to reason about

The best agent experiences feel less like magic and more like working with a strong operator.

They show intent. They show progress. They expose uncertainty. They ask for help at the right time. And they leave enough of a trail that a human can recover when something goes sideways.
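That trail does not need to be elaborate. A minimal sketch, with hypothetical names, of an append-only event log covering the four behaviors above: intent, progress, uncertainty, and asking for help:

```python
import json
import time

# Hypothetical sketch: an append-only trail of agent events, so a human
# can see intent, progress, and uncertainty, and recover after a failure.
class AgentTrail:
    def __init__(self):
        self.events = []

    def record(self, kind, detail, confidence=None):
        # kind: "intent" | "progress" | "uncertainty" | "ask_human"
        self.events.append({
            "t": time.time(),
            "kind": kind,
            "detail": detail,
            "confidence": confidence,
        })

    def dump(self):
        # Human-readable trail a teammate can scan after the fact
        return json.dumps(self.events, indent=2)

trail = AgentTrail()
trail.record("intent", "Summarize open support tickets")
trail.record("progress", "Fetched 12 tickets")
trail.record("uncertainty", "Ticket #7 intent unclear", confidence=0.4)
trail.record("ask_human", "Confirm before closing ticket #7")
print(len(trail.events))  # → 4
```

Even this much turns "what did it do?" from a guessing game into a reading exercise.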

Taste matters more than people admit

A lot of agent work is really about sequencing, restraint, and judgment.

When should the system act automatically? When should it wait? What should be editable? What should be logged? What is the smallest useful loop before you expand the surface area?
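The act-or-wait question, at least, can be made explicit instead of implicit. A minimal sketch, with illustrative thresholds that are assumptions rather than recommendations, of a policy gate keyed on confidence and reversibility:

```python
# Hypothetical sketch: "when should the system act automatically,
# when should it wait" encoded as an explicit, inspectable policy.
def next_step(confidence: float, reversible: bool) -> str:
    # Thresholds are illustrative, not recommendations.
    if confidence >= 0.9 and reversible:
        return "act"       # high confidence, easy to undo: proceed
    if confidence >= 0.9:
        return "confirm"   # high confidence but irreversible: ask first
    if confidence >= 0.5:
        return "wait"      # moderate confidence: gather more signal
    return "escalate"      # low confidence: hand off to a human

print(next_step(0.95, True))   # → act
print(next_step(0.95, False))  # → confirm
print(next_step(0.30, True))   # → escalate
```

Writing the policy down has a side benefit: it is something a team can argue about and tune, which a buried heuristic is not.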

Those decisions are where trust is built.

A capable model helps. Clear product taste helps more.