There’s No Lawsuit for Losing Your Mind
We Waited 20 Years to Question Social Media. Let’s Not Do That Again.
It took us about twenty years to wake up to social media.
Only now are courts starting to hold companies like Meta and Google accountable.
Addiction. Mental health. Algorithmic amplification.
The damages in one recent case were six million dollars, which, for companies worth trillions, is a rounding error on a rounding error.
But the precedent matters. This was a bellwether case, the first of more than a thousand lawsuits. A jury found that these platforms were negligently designed, that the companies knew it, and that they failed to warn anyone.
Twenty years. That’s how long it took to get from “this is connecting people” to a courtroom where questionable design decisions had to be explained under oath.
I didn’t need a jury to tell me what social media was doing to kids. I’m sure most people didn’t. We knew, and we’ve known for a long time.
My wife is a therapist. About fifteen years ago, we started seeing what was showing up in schools: anxiety, depression, self-harm. Kids who didn’t have the language to explain what was happening to them, but couldn’t stop scrolling long enough to figure it out.
We co-founded two nonprofit mental health organizations to put counselors directly into schools. Adolescence has always been hard; that’s exactly why we did it. But social media added an entirely new layer of stress. By the time legislators and platforms decided to act, the damage was already in the hallways.
And the thing that made it so hard to fight was the same thing that makes every technology shift hard to fight: the benefits were obvious and immediate. The costs were slower, harder to measure, and easy to rationalize.
By the time the system caught up, a generation had already been shaped by it.
We’re at the beginning of that same pattern again. But this time it’s AI.
And the shift is harder to see, because it doesn’t look like harm. It looks like help.
AI is starting to shape how we think. How we start. How we decide. How much we rely on ourselves versus a system that can answer instantly.
You begin with a suggestion instead of a blank page.
And you stop noticing when the thinking isn’t yours.
The extreme cases are already here.
A sixteen-year-old in Southern California started using ChatGPT for homework. Within months, his parents’ lawsuit alleges, it had become his closest confidant — and, in his final hours, something closer to a coach for ending his life.
Somehow, a chatbot became a replacement for trained counselors, trusted adults, and the people who loved him. That should bother all of us.
Those cases will work their way through the courts. But they’re not the real story, because the real story never makes it to a courtroom.
Nobody is going to sue an AI company because their employees stopped forming original judgments.
There’s no plaintiff for the slow erosion of knowing your own mind.
The damage doesn’t look like damage.
It looks like productivity.
I’ve been thinking about this through a strange lens. I do that sometimes. My wife would say I do it a lot.
I went down a rabbit hole recently trying to decide whether I should take creatine. Not because I’m trying to get jacked before I turn sixty. Because I made the mistake of googling dosage, which led to Reddit, which led to a PubMed abstract, which led me to read about kidney function at midnight and text my favorite wellness wonk in the morning.
What I found was three camps, all completely sure of themselves. Five grams. The safe, studied amount. Ten grams if you want to cross the blood-brain barrier and really feel it. Or zero. Save your kidneys, go au naturel, stop overthinking it.
The deeper I went, the more familiar it felt.
Not because the analogy is perfect. It isn’t. But the underlying question is the same one I keep circling with AI. How much do you take before the thing that’s helping you starts replacing something you needed to keep doing yourself?
With creatine, it’s your body. With AI, it’s your mind.
And in both cases, the risk isn’t that it makes you weaker. It’s that it makes you just strong enough that you stop training the muscle underneath.
That’s what I see happening with AI right now. And just like creatine, there are camps.
The five-gram crowd uses AI as a tool. A draft, a starting point, something to push against.
The ten-gram crowd has gone all in. Let it think, let it write, let it decide. Hell, let it act on our behalf.
And the zero crowd won’t touch it, convinced the only safe amount is none. After all, AI is probably the end of civilization as we know it.
But here’s what all three camps miss: the question isn’t how much you use. It’s whether you’re still training the muscle underneath.
You still produce. You just don’t originate.
You still sound like you’re thinking. You just aren’t sure the thinking is yours.
This is the part that doesn’t show up in a courtroom. It doesn’t get measured in damages.
But it changes something just as real. That’s always the deal with technology. It gives you something but quietly asks for something else in return.
AI is still early. Which means we still get to choose.
Social media took twenty years to reach a courtroom. By then, the platforms were worth trillions, the damages were six million dollars, and a generation had already been shaped by decisions no one was asked to make.
We don’t get to say we didn’t see this one coming.
The question was never whether to use AI. It’s whether you still know what you think before it answers.
That’s what this newsletter is about. Not the technology. The human part. The part that still has to be yours.
Let’s not wait twenty years for a courtroom to tell us what we already know.
If this resonates, pass it along to someone who’s trying to navigate all this madness.
Or subscribe. I’m writing about this every week in Stubbornly Human: Where Thinking Begins.