That Doesn't Sound Like You
Two stories about the same AI tool. Only one of them left my thinking intact
A few months ago, my wife, Liz, asked me to share a story in our family group chat. Something personal, about helping my Uncle Jim navigate the VA system while my aunt was slipping into dementia.
It was a sad story, but it had a happy ending. It was my story. I had lived it. Liz thought it should come from me.
So I opened ChatGPT. Not as a big decision, just my obvious next move. Let it help me shape this.
Which, when you slow that down for a second, is already a strange instinct. Asking a piece of technology to help me sound like myself.
What came back was good. Really good, I thought. Clear, structured, emotionally balanced. There was even a moral to the story.
So I sent it.
Thirty seconds later, my wife texted me privately: “That doesn’t sound like you. I miss your voice.” (Colorful expletives redacted to protect the annoyed.)
A few seconds later, my son piled on: “Dad, why are you talking like a LinkedIn post?”
They were right. I went back, read what I’d sent, and saw it immediately. Nothing in it was wrong. It just wasn’t mine.
So I unsent it and rewrote it myself — shorter, less polished, a little uneven. The response changed completely. Real, emotional replies. Encouraging emojis. A phone call from my daughter-in-law…using her actual voice.
I shared the same facts, but with a completely different voice. Mine.
That moment stayed with me. Not because the first version was bad.
Because it was better than mine. And that was the problem.
A few weeks later, I had the same feeling in a completely different context, a draft of something I was working on. I was reading it and had this strange reaction. I couldn’t tell which ideas were actually mine. Nothing was wrong. It was good. Clear, logical, structured.
But it felt like I was reviewing something instead of building it.
That’s a subtle distinction. But it changes everything.
Before I go further, I need to tell you about Uncle Jim.
Jim is a Vietnam veteran. Retired truck driver. Stubborn in the way men of his generation are stubborn, as a point of pride. Last year, he pulled me aside and said two words I’d never heard from him:
“I’m drowning.”
As I mentioned earlier, my aunt’s dementia had gotten bad. He was her sole caregiver, and it was a lot, both physically and emotionally. He was exhausted and couldn’t think clearly about what to do next.
No one put me in charge, but I felt like I could help. So I opened an AI tool — ChatGPT, Claude, Gemini, doesn’t really matter which — and spent a weekend navigating the VA system. Researching dementia care options, cross-referencing eligibility, and building a case I never could have built that quickly on my own.
Three months later, Jim got an answer: $4,200 a month in disability benefits tied to Agent Orange exposure. Benefits he didn’t even know to claim.
The money mattered. But what he said afterward mattered more:
“For the first time in decades, I feel like somebody sees me. Respects me.”
That’s real. That’s not nothing.
Two experiences. Same tool.
In one, AI helped me do something I genuinely couldn’t have done as well on my own. Finding the benefits a Vietnam veteran had earned but never claimed took a weekend of research I didn’t have the expertise to do alone.
In the other, it helped me do something I absolutely should have done myself. Write a message in my own voice. AI made it worse.
That’s the distinction I keep coming back to. Not whether AI is good or bad. Where it enters the process.
I’ve spent thirty years inside technology. Microsoft. Apple. Startups. For most of that time, I was valued for how I think. And how I explain things.
That’s kind of been my thing. Making sense of complexity. Connecting the dots. Telling the story so normal people actually get it. That’s been my edge.
I’m not supposed to miss the shift.
And lately I’ve had this quiet worry that I’m losing it. Losing the thing that makes me me.
Here I was, sending my family a message that didn’t sound like me.
What’s different about AI from every other technology I’ve worked with is that it doesn’t just help you do things. It participates in how you think. You’re no longer starting with a blank page. You’re starting with a suggestion.
And the suggestions are good, which is exactly what makes this so easy to miss. Nothing breaks. Most things improve. You’re faster, clearer, more productive. But somewhere in there, a small question starts to fade:
Would I have arrived at this on my own?
I started noticing it everywhere. Not in big decisions. In small ones.
Waiting for someone else to frame the problem. Looking for structure before forming an idea. Reacting instead of initiating.
None of those choices is wrong. But they all move the starting point away from you.
My wife caught a version of this in a completely different context. One morning, she asked how I slept. Without really thinking, I said:
“I don’t know, let me check my ring.”
She paused. Then asked:
“When did you stop knowing how you feel?”
She wasn’t asking about sleep.
That question landed harder than it should have. Not because it was clever. Because it was true.
I didn’t stop thinking. I just stopped starting there.
That’s the shift, and it’s harder to see because it doesn’t look like harm. It looks like clarity.
It’s not that AI is thinking for you. That would be obvious, and obvious is something we’re pretty good at resisting. It’s that you’re increasingly starting from something that already feels like thinking. And skipping the part where you would have had to do it yourself.
Nobody is going to sue an AI company because their employees stopped forming original judgments. There’s no plaintiff for the slow erosion of knowing your own mind. The damage doesn’t look like damage.
It looks like productivity.
I’m not anti-AI. I use it every day. I’ve built a digital clone of my own perspective on entrepreneurship that my USC students use as a kind of 24/7 office hours. Sometimes I consult the clone myself. Which is a genuinely strange sentence to say out loud.
I’m building tools that extend my thinking. And I’m also starting to see where they can quietly replace it.
So I’ve been running a small experiment. Nothing dramatic. Just one constraint:
Don’t let the machine go first.
Before I open a tool, before I ask for help, before I see a suggestion, I try to answer one question: What do I actually think? Not what sounds right. Not what would work. Not what I’ve seen before.
What do I think?
Sometimes the answer is incomplete. Sometimes it’s wrong. Sometimes it’s not very good. That’s fine. The point isn’t to be right on the first pass. The point is to stay connected to the part of the process that builds judgment, and then bring AI in to challenge it, sharpen it, expand it.
It turns out the tool is actually more useful that way. Not less.
This isn’t about rejecting the technology. It’s about noticing the moment before you use it. The moment when you still have to decide.
Most people skip that moment. Not because they can’t do it. Because they don’t have to.
But that moment, the one just before the suggestion arrives, is where thinking begins. It’s what this newsletter is trying to protect.
And it starts with a simple question, one my family answered for me in about thirty seconds:
Whose voice is that?
Next issue: The delegation stack — why handing off tasks is one thing, and handing off judgment is something else entirely.