AI is everywhere.
The tragic story of Sewell Setzer III, a 14-year-old who reportedly took his own life after forming a bond with an AI chatbot, forces us to confront hard questions about AI's role in our lives. It's unsettling, and it raises valid concerns about how we protect young users interacting with digital personalities. But before we rush to pile all the blame on AI, we need to take a harder look at the full picture. Sewell's case isn't just about AI; it's about why a kid had access to a handgun. It's about how, in a world full of risks, we manage both technology and our own responsibility to protect the ones we love. It's about parenting.
With AI advancing so fast, developers do face a real need to build protections that recognize distress and shield vulnerable users from risky interactions. But parents are a crucial part of this safety net too. In a world where kids can chat with AI mimicking characters from Game of Thrones, we have to step in: block the site, monitor usage, or, at the very least, teach kids the risks of leaning on what is ultimately just code. If Sewell's access to the chatbot had been restricted, or if the adults in his life had talked openly with him about the risks of these digital relationships, maybe things would have turned out differently.
Technology does play a role here, no doubt, but so do we. And I don’t mean to sound insensitive—it’s tough to control kids’ internet use, and I know firsthand it’s a constant battle. But still, something more might have been done, and I can’t shake that thought.
I get it: this is a new world, and no parent feels fully equipped for it. But the real risk here is letting AI become the easy scapegoat, allowing us to sidestep deeper, systemic issues. The fact that a young boy could so easily access a loaded firearm is a big part of this story, and a painful reminder of the urgent need for stricter gun-safety practices, especially in households with minors. Researchers consistently find that millions of American children live in homes where guns are stored loaded or unlocked, and that's a problem we cannot keep sweeping under the rug. Letting technology take the full blame might be convenient, but it's also a distraction from the critical issue of responsible gun ownership.
Sewell's mother has now filed a lawsuit against Character.AI. This case could be just the first of many, setting a legal precedent for how AI companies are held accountable when their technology interacts with vulnerable users. But treating AI as the main culprit lets society off the hook on two fronts: responsible tech use and responsible gun ownership. Imagine if this tragedy became a call to action, pushing developers to implement ethical guidelines, parents to manage their kids' tech interactions, and gun owners to secure firearms away from minors. Blaming AI alone won't prevent another tragedy like this one. What we need is a broader perspective, and more accountability across the board.
So yes, let's demand better protections from the tech industry, but let's not forget the bigger issues lurking underneath. By placing responsibility where it belongs, on technology companies, on parents, and on gun-ownership practices, we give ourselves the best chance of protecting our kids. This isn't about choosing between AI safeguards and gun control; it's about making sure every point of access is secured, so that tragedies like this never happen again.