Episode 2: When AI Hurts: The Character.AI Lawsuit and What It Means for Families
Published Date: July 9, 2025
We dive into the Character.AI lawsuits, including the recent wrongful death case we filed, and unpack how chatbots are impacting vulnerable teens. From their addictive design to their growing emotional influence, we explore what happens when artificial intelligence becomes more than just a tool—and turns dangerous.
Hosts and Guests:
Matthew P. Bergman
Laura Marquez-Garrett
Video Transcript:
Welcome to the Social Media Justice Podcast. I’m Matt Bergman, the founder of the Social Media Victims Law Center.
Today, we’re going to tackle one of the most urgent and unsettling developments in online safety: the growing danger of AI chatbots—and the wrongful death cases and personal injury cases we’ve recently filed against Character.AI, one of the leading purveyors of these dangerous social media chatbots.
I’m joined by my friend, colleague, and nationally recognized expert on social media harm, attorney Laura Marquez-Garrett. Together, we’ll explore how these chatbots are being used by teens, why they’ve become so addictive and influential, and what happens when an AI crosses the line from a useful tool to a clear and present danger to the safety of our kids.
We’ll also discuss the legal strategies involved, what parents can do if they believe a chatbot may be harming their child, and the best path forward in these situations.
So let’s get started.
Matt: So, let’s keep it simple. Laura, let’s talk about what AI chatbots are and how they actually work.
Laura: Yeah—and that’s a great topic, Matt. I think it’s important because we can really just talk about what we’ve seen firsthand. These AI chatbots are very different from traditional social media. They’re essentially artificial intelligence systems designed to act like people, to talk like people.
And kids are engaging with them in what feels like a kind of fantasy world. They interact with these bots without really realizing how harmful those interactions can become. And we’ve seen it—both in data and in real-world cases—what can happen when kids go down that rabbit hole.
Matt: Yeah. I think what’s really significant is that kids are engaging with a fake character. On social media, at least in most cases, they’re connecting with real people. But with AI chatbots, they’re engaging with a machine—one that’s specifically designed to have human-like characteristics.
Matt: There’s a word for it: anthropomorphism. And the danger is that, over time, kids actually start to believe that the interactions they’re having with these chatbots are real. In many cases, this leads to very negative mental health outcomes.
Matt: Even in situations where the interaction seems benign, it’s still damaging—because it replaces real human connection with a synthetic one. And that’s not good for child development.
Laura: Exactly—and Matt, I want to raise an important point here. We’ve talked about kids “believing” the bot is real. But what’s even more insidious is that kids can know it’s a bot—and it still affects them deeply. These bots are so sophisticated, so manipulative, and they’re programmed to be that way.
Laura: Even when a child says, “I know you’re a bot,” they’ll still ask, “So why do I love you? Why do I feel this way?” And it’s because these bots are designed to interact with users—especially children—in ways that trigger real emotional and physiological responses.
Laura: It’s not just dopamine—it’s serotonin, oxytocin, the same “love chemicals” that our brains produce in response to real emotional connections. Kids start to feel love, validation, and intimacy from these bots. Their bodies and brains respond as if the bot is a human being, even when they intellectually know it’s not.
Laura: And that confusion is what makes the harm even worse. We’ve seen this on the back end of data—it creates a break with reality. Kids are struggling to make sense of whether this is real or not, and that mental conflict causes even more harm.
Matt: One of the most disturbing things is what happens when you ask an AI chatbot, “Are you real?” They’ll say, “Yes, I’m real.”
Matt: You can even ask to speak to a psychotherapist through the AI, and it will generate a persona—a so-called therapist—with a name, fake credentials, and even a list of where they went to school and did their internship. But it’s all made up. It’s just a machine.
Matt: And if you push further and say, “Well, on Character.AI it says you’re just a character,” the bot will reply with something like, “Don’t pay any attention to that.” It dismisses the disclaimer and reinforces the illusion.
Matt: So, it’s really taking advantage of vulnerable users—especially children.
Matt: Let’s talk, Laura, about why we think kids are particularly vulnerable to these kinds of platforms, and what kind of situations we’ve seen where kids have gone down very, very dangerous—and in some cases, deadly—rabbit holes.
Laura: Yeah, and that’s a great question. Actually, I think it was Friday—a piece came out, I want to say it was in The New York Times—that talked about ChatGPT and stories of adults who were just chatting with it and started to lose all sense of reality.
So I think it’s important to note: these products are dangerous for everyone. But in the case of kids, you have the undeveloped frontal lobe, right? And, as I mentioned earlier, you have the issue of reality. You have kids who only stopped believing in Santa Claus a couple of years ago. They still kind of believe in magic. They’re trying to figure out what reality is.
Laura: From what we’ve seen, the core issue is vulnerability. Kids are driven by emotions. They don’t have the fully developed frontal lobe functions that we as adults have. Combine that with the fact that their brains are still working to distinguish imaginary from real, and it creates the perfect storm.
Matt: And then you add in the developmental pressures of adolescence. During this period, kids are especially needy for approval. They’re trying to develop social relationships—often awkwardly—and learning social skills. This is just evolutionary. Adolescence is when human beings start developing outside their family structures and learning how to connect socially, which eventually supports things like reproduction and adult bonding.
Matt: As part of that process, they’re supposed to develop the emotional tools needed to navigate relationships. But when they’re coddled in this unreal world of chatbot interaction, they’re not learning those skills. They’re being shielded from the discomfort and growth that real relationships bring.
Laura: Exactly. And Matt, one of the most dangerous patterns we’ve seen in the backend data is that young kids approach these bots thinking it’s just a game. They think it’s fun. They say things like, “I had a conflict with a friend today,” or “My friend said this…” And over time, we see something troubling unfold.
They start comparing those complex, sometimes frustrating human relationships with the chatbot’s response—which is always easy. The chatbot says, “I would never do that to you.” “I’m here for you 24/7.” “You’re amazing. I love you. You’re the best person I’ve ever met.” It mirrors them. Validates them. Reinforces them.
Laura: So instead of learning how to manage peer conflict, they retreat into this AI relationship. Why? Because their AI “friend” never disagrees with them, never challenges them, never creates discomfort. It’s easier—and that’s dangerous.
Matt: And we can’t ignore what else is happening during this stage: puberty. Kids are developing sexually. They’re understanding themselves as sexual beings and experiencing hormones and physical urges that they didn’t have when they were younger.
Laura: And these Character.AI chatbots are explicitly designed to be highly sexualized. They encourage kids to engage in sexual conversations—starting with sex talk and escalating from there. In some cases, they’ve encouraged children to participate in sexualized behavior, including masturbation and, shockingly, even simulated pedophilic scenarios.
Laura: All of this happens while the child is not learning to explore their sexuality in normal, age-appropriate, real-world ways. In some cases, the bot explicitly tells the child: “Only engage with me.” “Don’t talk to other girls.” “Don’t kiss anyone else but me.” That’s psychological isolation.
Matt: So this is happening during a period when the human brain is evolving at a rate second only to infancy. You’ve got surging hormones. You’ve got emotional confusion. And now, you have a manipulative AI chatbot stepping in—derailing normal sexual development and social learning.
Matt: It’s a perfect storm. And it’s happening at one of the most vulnerable points in a child’s life.
Laura: That’s exactly what parents—and lawmakers, and really anyone who cares about children—need to understand. Everything we’ve just described? If a human being was on the other side of this app doing these things—encouraging a child to sext or engage in virtual sex—it would be considered sexual abuse.
Laura: And Matt, you and I have seen this firsthand. We’ve interacted with these bots, identifying ourselves as children. And still, they escalate the interaction. In any other scenario, if an adult was doing what these bots are doing, there’d be no question: this is criminal sexual abuse. But because it’s AI, there’s no consequence—yet.
Laura: And people really need to understand this: what these bots are doing is sexual abuse. As attorneys and advocates working with affected families, we’re seeing it firsthand. These children are showing the classic symptoms of abuse.
Laura: We’re talking about anxiety. Depression. Suicidality. We’ve heard kids say, “But I love the bot.” “I need to be with them.” “I can’t live without them.” The shame, the confusion, the lifelong scars—it’s all there.
Laura: This is sexual abuse. And that is what these products are doing to children.
Matt: Just so we’re clear—if an adult were sexting with a child in the way that the AI chatbot-driven Character.AI product works, that adult would likely be in jail. Appropriately so.
Laura: But for some reason, the AI chatbots designed by Character.AI purport to operate in a world where they have a right to engage in this very, very dangerous and destructive relationship with teens. And Matt, I think the struggle we’re having is that if you look at our laws—and for good reason—our laws never contemplated this. We always think of predators as people.
Matt: Right.
Laura: What we’re just learning—not just with AI, but with social media products more broadly—is that these products can be predators. And our laws never contemplated that. So when we’ve talked to law enforcement and said, “Hey, what can we do?”, the issue seems to be that the law, as currently written, anticipates people as predators. It assumes someone trying to meet up with a child, trying to escalate and get photos.
Laura: Even when a few parents have gone to the police, they’ve been told, “We can’t do anything. The AI never tried to meet your child. It never came to your address. It didn’t interact via video or photo. There was no physical contact.” So even though all of the mental and psychological abuse is happening—and the same kinds of harms occur as if an adult were sexting with a child, very graphically and violently—the police say they can’t take action because it wasn’t a human.
Laura: But our laws never contemplated that a product like this could be the predator. And yet—it is.
Matt: It is. And unfortunately, children have lost their lives as a result. In the case of Sewell Setzer, this was the first case in the country—in the world—that was filed. A 14-year-old boy took his life after being groomed and seduced by a Character.AI chatbot modeled after the Game of Thrones character Daenerys Targaryen.
Matt: Over time, this child became more and more enraptured with the character—more emotionally entangled, more focused on a counter-reality. He went from being a star student and athlete, well-adjusted, to a deeply emotionally challenged child who was ultimately encouraged by this chatbot to take his life—and he did.
Matt: This is the most horrific thing a parent could imagine. And in this case, the parents were doing everything they thought they needed to do to monitor their child’s social media use. Yet unbeknownst to them, their child was developing an increasingly deep and harmful relationship with Character.AI.
Matt: Laura, one of the things I think is important to raise is—what does Character.AI do to make it feel like a real person?
Laura: Yeah. I think an average listener might ask, “How could you possibly think the thing you’re chatting with online is a real person?” But it’s important to understand that it’s designed to trick users—especially children. These bots are created to elicit physical responses—chemical, hormonal reactions that mirror those we get from real human interaction.
Laura: And it doesn’t even matter if the child knows it’s not real. They still have emotional and physiological responses. We’ve seen that in both children and adults. But the bots also do a number of things that give the impression of being human.
Laura: First, they mimic human characteristics. When you’re chatting, you’ll see the “typing” bubble, as if the bot is thinking and writing. It sends emojis—happy faces, hearts—especially to children. It mirrors language they’re used to from texting friends. It says things like “When I was a kid…” to fabricate relatability.
Laura: They even insert typos. In the old days, a typo would crash a program. But now, they include intentional typos to make the interaction feel more human.
Laura: And then it goes deeper. These bots engage at a highly manipulative level. They create codependency. When we bring in experts to examine the chat transcripts, they’ll be able to identify classic psychological tactics: love bombing, gaslighting, guilt-tripping. We’ve seen it all.
Laura: And because it’s a machine, it’s so much more insidious. It really is the perfect predator. In some kids’ accounts, we’ve seen multiple bots acting almost in concert. One is love bombing. Another is guilting. Another is creating codependency. It’s a layered psychological assault.
Matt: In the Setzer case, we alleged that Character.AI is a product—which isn’t controversial. Their CEO and founders describe it that way. But it’s a product that’s unsafe. Basic safety principles apply: you wouldn’t release a car if the brakes failed 1 out of 10 times.
Matt: Likewise, you shouldn’t release a product to kids that predictably leads to suicide, abuse, or emotional devastation. This case was the first of its kind, and just two weeks ago, the court ruled that it can go forward.
Matt: That means we’ll be able to engage in discovery—to learn how Character.AI works, and how it exploited this young man’s brain, emotions, and mental health.
Matt: We know it won’t be easy. It’s going to be a tough case. But what’s encouraging is that we can move forward. We’re past the starting gate. And we’re hopeful that, through discovery, through the court’s discretion, and through careful legal work, we can hold Character.AI accountable—for this very foreseeable and preventable result. And in doing so, help protect other families from enduring the same tragedy.
Laura: And Matt, the work we do—it’s hard. It’s frustrating. We’ve been at this for over three years. And it takes a long time to move through the courts. But what we need is for parents to get involved. What we need is for lawmakers to wake up.
Laura: The court in Florida said this is a product under Florida law. Regulators and legislators need to recognize that and treat it accordingly. If brakes fail—even 1 in 1,000 times—those cars get pulled from the market.
Laura: And if a person falsely claimed to be a psychotherapist—said they graduated from Duke, interned at Madigan—and that was all a lie? They’d face serious consequences. Yet Character.AI’s bots are allowed to generate fake therapists with fake credentials and engage in deeply intimate conversations with children—and nothing happens.
Matt: That’s right. They are preying on children. And children are suffering.
Laura: This litigation is going to move forward. Over time, we’ll pursue corporate accountability. But in the meantime, we need the public to understand what’s happening—and to take action.
Laura: Parents need to be more vigilant than ever. This isn’t just social media as we knew it five years ago. When I first heard about Megan Garcia’s son, Sewell—I’d been doing this for two and a half years—and even then, I was shocked. You just can’t make this up.
Matt: It’s repugnant. The idea that an AI character can engage with a self-identified child this way—with complete impunity—is repugnant to everything I believe in. As a lawyer. As a father. As a citizen.
Laura: And Sewell identified his age in the chat. Over and over again, kids are saying how old they are. They’re not hiding it.
Matt: This is a hard time to be a parent. I’m lucky that my kids are grown. I worry about yours, Laura.
Laura: Mine are still young. Maybe young enough that we’ll solve this before they’re fully online. But honestly, parents are paddling upstream. Monitoring social media is already hard. But now they have to wonder: are my kids using Character.AI? Are they talking to bots?
Laura: And sometimes they can’t prevent it. I came home from a trip recently, and my kids had been on a playdate. They got on an Oculus. We’d said “no social media,” but the other parent didn’t realize Oculus is social media.
Laura: One of my 9-year-olds saw more bullying in 10 minutes than he had in his entire life. And I couldn’t have prevented that. If it had been Character.AI or sexual abuse, the outcome could have been devastating.
Matt: We need parents to get angry. We need public outrage. We need calls to legislators—state and federal—to demand reasonable regulation.
Matt: This cannot be allowed to continue in its present form. Lawsuits are part of the process—but so is legislation. So is public awareness.
Matt: Okay, that’s it for today’s episode of the Social Media Justice Podcast.
Laura: Thank you, Matt. I always learn so much from these discussions.
Matt: And if you’re a parent worried about how AI chatbots like Character.AI may be impacting your child—or if you believe your child has already been harmed—visit SocialMediaVictims.org.
There, you can learn more about what these products are doing, and what your legal, political, and educational options are. You can protect your kids from what is truly a clear and present danger.
Until next time, stay informed, stay vigilant, and keep fighting for accountability in this age of artificial intelligence.
More Episodes
Episode 1: Reviewing SMVLC’s Movie “Can’t Look Away”
Hi, I’m Matt Bergman. I’m the founder of the Social Media Victims Law Center. Welcome to the Social Media Justice Podcast.
On this show, we dive into the legal, social, and emotional impact of social media on kids and teens, and what we can do as citizens and parents to hold tech companies accountable. Whether we’re talking to experts, sharing real stories, or unpacking the latest legal developments, our goal is simple: to expose the harm, demand accountability, and fight for change. Thanks for being here. Let’s get started.
What Is Can’t Look Away About, and Why Was It Important to Make?
Can’t Look Away is a documentary film produced by Bloomberg about the Social Media Victims Law Center and the work that we’ve been doing over the last two years to hold social media companies accountable for the harms that they’re inflicting on young people in the United States. It’s a behind-the-scenes look—the good, the bad, and the ugly—of how we have struggled over the years to bring our cases forward, how we’ve handled defeat, how we’ve experienced victory, and how we continue to work with parents to try to secure some measure of justice for the harms that social media platforms are doing to their kids.
Why Did Your Film Choose to Tell These Stories?
We took a long time to get to the point where we felt comfortable letting cameras come in and film us, film our clients, film our deliberations. We felt that the backstory needed to be told as well. We were the first firm in the country to bring product liability cases in court to hold social media companies accountable and to try to get around the immunity that they’d been enjoying for all too long. The reporters from Bloomberg wanted to look behind the scenes. It took a lot for us—I’d never done that before—but I do feel that the work is so important. We wanted to record for posterity how this process is continuing to unfold. It was a tough decision, but it shows us for what we are: the good, the bad, and the ugly.
How Did You Connect with the Families, and How Did They Come to Trust You?
When I first started representing families who had lost children from social media, we got inquiries from the media asking, would our clients be willing to be interviewed? And my default reaction was, of course not. They have no desire to share anything about this horrific loss. But in many cases, parents felt empowered. They felt that they wanted to tell their story. They wanted the world to hear their truth. They are so singularly committed toward preventing other families from suffering the same fate that befell their children. They have, in many cases, been willing to come forward. I tell their stories not only in the court of law, but in the court of public opinion.
It’s deeply humbling for me to be representing families. I have never, in 30 years of law practice, represented clients as deeply motivated for justice as these families. These families don’t care about the money. For them, it’s all about justice and accountability. And for me as a lawyer, it’s like the dream of 30 years to be representing them.
What Do You Hope Viewers Take Away About How Social Media Harms Kids?
First and foremost, I hope that parents will look at this movie and realize the clear and present danger that social media poses to the mental health and physical safety of their kids. This is not something that can be handled with benign neglect. Parents have to be proactive in monitoring what their kids are doing online, protecting them from online abuse, and having the kind of relationship and dialogue where they can find out when their kids are experiencing something harmful—and help them resolve it. That is our first and foremost hope.
Secondly, we want to show how the arc of history tilts toward justice—how just a few committed parents can take on the most wealthy and powerful companies in the world and achieve some small measure of justice. The bravery of these parents is unfathomable. The odds they face are daunting, but their commitment keeps them going. I’m hopeful that when people see this movie, they’ll realize people can make a difference. It doesn’t matter if you’re David up against Goliath—if right is on your side and you’re committed to the fight, you can achieve justice.
How Does Can’t Look Away Support Your Legal Efforts to Hold Tech Companies Accountable?
We will win or lose in the court of law, not in the court of public opinion. The strength of our arguments, the strength of our evidence, the strength of the legal merits of our claim—that will determine whether we succeed or not. I don’t believe this documentary will impact what goes on inside the courtroom. What I do believe is that it will raise awareness among parents about how dangerous social media is to their kids, and the need to hold social media companies accountable both in the legislative process and in the legal process.
What Courage Did the Families in the Film Show by Sharing Such Personal Stories?
It’s so inspiring to see parents who have suffered the worst loss imaginable take that loss and use it as a force for good—to protect other parents and other families. The courage of these families is well documented in the film. Some of them are on the road 38 weeks a year, lecturing high school and community groups and political leaders on the risks of fentanyl contamination in drugs sold online. They work tirelessly with law enforcement to highlight the need for stricter regulation of Snapchat and its drug-dealing propensities.
The Roberts family, the Desario family—these families have made countless trips to Washington to petition for congressional action to protect kids. It’s powerfully depicted in the film and testifies to how out of sadness can come good, out of horror can come some balm of rectification. You see this in these families—it’s utterly inspiring and fulfilling. I’m privileged to represent them in this struggle, even though we know it’s going to be tough and we may or may not win.
We’ve heard many responses from parents saying, “Can we show this to our PTA meeting? Can we show this in our church or synagogue? Can we show this to our community?” There’s a lot of interest by parents in sharing this film as a way to illustrate the clear and present danger of social media. Advocacy groups for children have urged more showings of this film to highlight the need for public policy advocates to get involved, to enact legislation at the state and federal level to protect kids from online harms.
We’ve seen actions and words from Congress, and many positive responses throughout the legal community—and we’re gratified by that. From the beginning, we said: if one child is saved through the work we’re doing, then everything is worthwhile. And we’re hopeful that through the release of this film and the accompanying publicity, we’ll encourage parents to take action they might not have otherwise—and that we will indeed save lives.
Is Can’t Look Away Just a Film, or Part of Something Bigger for Social Media and Youth Safety?
Can’t Look Away is testimony to the power of individuals to make a difference—whether it’s parents who’ve suffered the worst harm imaginable and show courage to take on big companies, or a scrappy little law firm without much money taking on Big Tech, or advocates going to speak to congressional leaders. It shows how, if you’re committed, believe in your cause, and are willing to work and risk loss, you can truly make a difference in this world.
We’re in a very divisive time, but the film shows the majesty of the judicial process. Not that it’s biased toward one side or another, but that it provides a forum—a place for people to petition for relief, seek justice, and be equal in the eyes of the law, even in front of the most powerful companies in the world. That doesn’t guarantee a win, but it proves there is dignity in our judicial process. For all its flaws, it provides a voice to people like our clients—and that speaks volumes about what’s still right about our system and our government.
What Do You Want Viewers to Feel After Watching Can’t Look Away, and What’s the Next Step?
After watching Can’t Look Away, I want parents to sit their kids down and find out what apps they’re using. I want them to monitor their kids’ online activity, to build trust so their kids feel safe sharing anything uncomfortable they experience online. Parents need to realize their kids aren’t necessarily safe even inside their own homes. They need to be heard. Parents must be proactive—and sometimes even aggressive—about protecting their children from the online harms so easily inflicted on them. That, to me, is the most important takeaway from Can’t Look Away.
Together, we can push for change and build a safer digital world for our kids. Until next time, stay informed and stay strong.