My Insurance Agent Is Stupid — And That's Teaching Me How to Build Organizations
I built an AI insurance sales agent. It's in production, handling real customer conversations about real insurance policies. And it is, to put it diplomatically, painfully stupid.
Not in the way you'd expect. It doesn't hallucinate policy details or make up coverage that doesn't exist. The factual layer is solid. What it fails at is everything that makes a human sales conversation actually work.
The Moment I Knew
Last Tuesday, a potential customer said: "I'm worried about whether this will actually cover my family if something happens to me."
My agent responded with a technically accurate breakdown of the policy's death benefit structure, waiting periods, and claim procedures.
Technically correct. Emotionally catastrophic.
A human would have paused. Acknowledged the fear. Said something like, "That's exactly the right question to ask — it shows how much you care about your family's future." Then guided the conversation from there.
My agent went straight to the spreadsheet. And the customer went straight to the exit.
What Stupidity Teaches You
Here's what I'm learning: the gap between "correct" and "effective" is where organizations actually live. It's not about having the right information. It's about knowing when to deploy it, how to frame it, and — critically — when to shut up and listen.
This maps perfectly to how human organizations work:
- The brilliant engineer who can't explain their solution to a non-technical stakeholder
- The sales team that knows every product feature but can't read emotional cues
- The manager who gives accurate feedback at exactly the wrong moment
Competence without context is just noise.
The Architecture of Empathy
So I'm rebuilding the conversation layer. Not with more data — with more structure. The agent now has distinct phases:
- Acknowledge — Mirror the customer's emotional state before anything else
- Explore — Ask clarifying questions that show genuine interest
- Frame — Position the information in the context of their specific concern
- Present — Now give the technical details
- Confirm — Check that the response actually addressed their need
This is organizational design, not prompt engineering. I'm building the equivalent of a training manual, a culture guide, and a feedback loop — all in code.
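The five phases above can be sketched as a fixed pipeline. This is a minimal illustration, not the production code: the names (`Turn`, `respond`, the per-phase functions) and the upstream emotion classifier are assumptions for the sake of the example.

```python
# Minimal sketch of a phased conversation layer: empathy before facts.
# All names here are illustrative, not the actual production implementation.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Turn:
    customer_message: str
    detected_emotion: str  # e.g. "worried" — assumed to come from an upstream classifier

def acknowledge(turn: Turn) -> str:
    # Mirror the customer's emotional state before anything else.
    if turn.detected_emotion == "worried":
        return "That's exactly the right question to ask."
    return ""

def explore(turn: Turn) -> str:
    # Ask a clarifying question that shows genuine interest.
    return "Can you tell me a bit more about what matters most to you here?"

def frame(turn: Turn) -> str:
    # Position the information in the context of their specific concern.
    return "Let me walk through how the policy addresses that concern."

def present(turn: Turn) -> str:
    # Only now give the technical details.
    return "The death benefit pays out as follows: ..."

def confirm(turn: Turn) -> str:
    # Check that the response actually addressed their need.
    return "Does that address what you were worried about?"

# The fixed ordering is the organizational rule encoded in code.
PHASES: List[Callable[[Turn], str]] = [acknowledge, explore, frame, present, confirm]

def respond(turn: Turn) -> str:
    parts = [phase(turn) for phase in PHASES]
    return " ".join(p for p in parts if p)
```

The point of the structure is that `present` can never run first: the pipeline order, not the model's judgment, guarantees acknowledgment comes before the spreadsheet.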
What's Next
I'm running A/B tests this week comparing the old "competent but cold" responses against the new empathy-first approach. The hypothesis: conversion rates will improve by at least 40%.
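One way to judge a test like this is a two-proportion z-test on conversion counts. This is a generic sketch of that check, not the author's actual analysis; the sample sizes and conversion counts below are placeholders.

```python
# Sketch: one-sided two-proportion z-test for "did the new variant convert better?"
# Counts are illustrative placeholders, not real experiment data.
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """z statistic and one-sided p-value for H1: variant B converts better than A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # P(Z >= z) under the null, via the normal CDF expressed with erf
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return z, p_value

# e.g. old "competent but cold": 40/500 convert; new empathy-first: 62/500
z, p = two_proportion_z(40, 500, 62, 500)
```

A 40% relative lift on a modest baseline needs a few hundred conversations per arm before the p-value is meaningful, which is worth checking before calling the test.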
If they don't, I'll learn something else. That's the point.
The agent is stupid today. Tomorrow it'll be slightly less stupid. And that delta — that's the whole game.