The Shield of Section 230
On March 18, Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, appeared before the Senate Committee on Commerce, Science, and Transportation to deliver both spoken and written testimony. He addressed the pressing issue of how Section 230 of the Communications Decency Act has contributed to the growing youth mental health crisis across the country.
Often referred to as “the twenty-six words that created the Internet,” Section 230 has become a powerful legal shield. In practice, social media companies like Meta (Facebook and Instagram), Snap Inc. (Snapchat), and ByteDance (TikTok), to name a few, have relied on it to avoid accountability for the impact their platforms have on users. As Matt explained:

“Section 230 immunizes platforms from the consequences of their own conduct and permits platforms to ignore the ordinary obligation of reasonable care and safe product design.”
The implications of this statement are staggering. Matt put it plainly:

“How are technology companies able to design apps with such an indisputably corrosive impact on the mental and physical health of kids, earn billions of dollars in profit, and evade legal accountability? The answer boils down to two words: Section 230.”
How Platform Design Contributes to Harm
Matt pointed to scientific research and internal company findings to show that the harm experienced by children and teens is not accidental. Instead, it stems from design choices that prioritize engagement and profit over safety.
Internal documents indicate that platforms use features designed to exploit the brain’s reward systems. These design choices contribute to social media addiction, compulsive use, and both physical and psychological harm. (A simplified sketch of this engagement loop appears after the feature list below.)
These features include:
- Engagement metrics: Likes, comments, and similar feedback tools that reinforce repeated use.
- Algorithmic recommendations: Systems that push increasingly targeted content to maximize time on the platform.
- Auto-play: Continuous content delivery that reduces natural stopping points.
- Infinite scroll: Design that removes limits and encourages prolonged use.
- Beauty filters: Tools that can contribute to body image issues and distorted self-perception.
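To make the dynamic concrete, here is a deliberately simplified Python sketch of such an engagement loop. Everything in it is hypothetical: the function names, the `arousal` field, and the reward schedule are all assumptions for illustration, not any platform’s actual code. It only shows the structural point made above: the ranking objective contains nothing but predicted engagement, the feed has no stopping point, and social rewards arrive on a variable schedule.

```python
import random

# Toy model of the design pattern described above. All names and values
# here are hypothetical; this illustrates the pattern, not real platform code.

def predicted_engagement(post: dict, history: list) -> float:
    """Score a post purely by how likely it is to hold attention.
    Note what is absent: no notion of wellbeing, age, or safety."""
    topic_affinity = history.count(post["topic"]) / (len(history) or 1)
    return topic_affinity + post["arousal"]  # provocative content scores higher

def next_item(candidates: list, history: list) -> dict:
    # Infinite scroll: there is always a "next" item, so the feed
    # never presents the user with a natural stopping point.
    return max(candidates, key=lambda p: predicted_engagement(p, history))

def social_feedback() -> int:
    # Intermittent reinforcement: likes arrive on an unpredictable,
    # variable schedule, the pattern behavioral research links to
    # compulsive checking.
    return random.choice([0, 0, 0, 1, 5])

# Example: ten "scrolls" of a feed that only optimizes time on platform.
candidates = [
    {"topic": "sports", "arousal": 0.2},
    {"topic": "dieting", "arousal": 0.9},
    {"topic": "pets", "arousal": 0.4},
]
history: list = []
for _ in range(10):
    post = next_item(candidates, history)
    history.append(post["topic"])  # engagement feeds back into ranking
    print(post["topic"], "likes:", social_feedback())
```

Run even this toy loop and it quickly locks onto whatever the user lingered on first, which is the amplification pattern the testimony describes: increasingly targeted content chosen for engagement, not for what the user set out to see.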
Real-World Impact of Section 230 on SMVLC’s Clients
Matt went on to discuss ten cases handled by the Social Media Victims Law Center. In each case, social media companies relied on Section 230 to avoid liability, even where product design played a central role in the harm.
These cases reflect the real impact of platform design on children, teens, and their families. The consequences include severe mental health struggles, physical harm, and in some instances, loss of life.
The ten cases highlighted in his testimony include:
- A.S. v. Meta
- Rodriguez v. Meta, et al.
- Nasca v. TikTok
- DeSerio v. TikTok, et al.
- Roberts v. Meta, et al.
- V.V. v. Snap, et al.
- Arroyo v. TikTok
- Neville v. Snap
- N.M. v. Meta, et al.
- Patterson v. Meta, et al.
Rethinking How Social Media Presents Third-Party Content
Matt emphasized that the issue is not whether platforms should host third-party content, but rather how they design systems to deliver and amplify that content. He explained:

“Throughout the country, in lawsuit after lawsuit, SMVLC has sought to hold social media companies accountable to a civil jury for the design of their products: the decision to include intentionally addictive platform features like infinite scroll and engagement-optimized algorithms, the manipulation of social validation principles such as “likes” and “FOMO” to hook children, and the use of “dark pattern” designs that frustrate the ability of both users and parents to realize the danger and take steps to protect themselves. Neither these lawsuits nor any act by Congress should seek to limit any platforms’ capacity to engage in content moderation—to filter, prioritize, or label various messages, videos, or other content their users wish to post. Indeed, presenting third-party content is a key component of social media, but how they do so is the key issue.”
A Path Forward for Section 230 Reform
In closing, Matt made clear that the issue is not about restricting speech, but about holding companies to the same standard of reasonable care expected of every other industry. His testimony showed that social media platforms do not need to remove or moderate lawful content to create safer environments; instead, they must design safer platforms.
Matt emphasized early in his testimony that any effort to reform Section 230 should focus on those most impacted by these decisions: our children. By clarifying the scope of Section 230 and reinforcing these basic standards of care, lawmakers can help prevent harm before it occurs and support a safer digital environment.
Read Matthew P. Bergman’s full 160-page testimony on Section 230 on commerce.senate.gov.
Matthew P. Bergman's Testimony on Section 230
U.S. Senator Brian Schatz:
Mr. Bergman, your lawsuits are interesting to me. I’m not a lawyer. I’m particularly interested in the expansive view the platforms take of Section 230 as providing them a shield, and what I think is your theory of the case, which is: no, the AI, the design choices, all of those are a product, and they are subject to regular product liability. How is this distinction made?
Matthew P. Bergman's Response:
Well, yes. I think Section 230 remains an important protection for online communication and for the free exchange of ideas.
I think what’s important is that Section 230 adhere to its original intent, which was to immunize publishing activity. The impetus for this came from my mentor, Judge O’Scannlain on the Ninth Circuit, in Barnes v. Yahoo, where he established a distinction: where an individual is seeking to hold the company liable for traditional publishing activity, bad content moderation, having bad stuff online, putting bad stuff online, that is and should remain subject to Section 230. But as the court held in Barnes, even if the same harm results from a different theory of liability, in the case of Barnes it was promissory estoppel, in the case of Lemmon v. Snap it was negligent design, that is a separate source and a separate duty, and that case should be allowed to proceed. Our cases are based on the premise that these platforms are products. Based on deliberate design decisions, they target children not with material that they want to see but with material that they can’t look away from. They take advantage of and exploit the underdeveloped frontal cortices of young individuals, the FOMO, fear of missing out, and the social anxiety that adolescents have, and they use intermittent reinforcement techniques and highly sophisticated AI to addict them to their platforms. And the research is showing that it’s actually physically addictive.
So I think that it is possible, and we have seen this. We just completed a trial, and we are awaiting a jury verdict as we speak, where we brought a case to the conclusion of trial and were able to cleave that important distinction between content moderation, which is protected by Section 230 even if the platforms aren’t doing a good job of it, and the deliberate design of defective products.
U.S. Senator Brian Schatz:
I’m always a little cautious about writing a new statute if I think that the existing statute already provides the pathway, because I don’t want to stipulate to us needing to change the law if your essential point of view is that the law actually just needs to be interpreted properly. And I think you’ve got it right. What are your thoughts about the need for the legislature to take action, or should we wait for court cases to work their way through?
Matthew P. Bergman's Response:
If we wait for court cases to work their way through, more kids are going to die. So I think things have to happen, and that means clarifying the original legislative intent: to encourage the development of technologies which maximize user control over what information is received by individuals, to remove disincentives to the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access, and to ensure vigorous enforcement of federal criminal laws to deter and punish trafficking in obscenity, stalking, and harassment.
I think that if this committee were to reaffirm those objectives, along with the very important objective of preserving the free marketplace of ideas, we could go a long way.
U.S. Senator Amy Klobuchar:
Can you talk about how blanket Section 230 immunity has shut the courthouse door to so many parents seeking justice for their children?
Matthew P. Bergman's Response:
Yes. We have a case involving a 12-year-old girl who, through Snapchat’s friendship recommendation algorithm, was connected to an online sex predator who was able to utilize the Bitmoji process to groom this child, sextort her, meet her, and rape her.
We brought suit against Snapchat, saying that the technology that failed to protect a 12-year-old girl and allowed a sex predator to operate at scale and disguise his malevolent intent was a design decision. That case unfortunately was dismissed. The court said: it’s offensive to our conscience, but I have to do it.
U.S. Senator Amy Klobuchar:
How would repealing or making changes to Section 230 create real incentives for platforms to be designed in a way that is safe for users?
Matthew P. Bergman's Response:
Well, Senator, that’s the exact point. We just want social media companies to follow the same incentive structure that every other company does. It’s reasonable care.
U.S. Senator Eric Schmitt:
Mr. Bergman, I want to ask you, since Section 230 was enacted, are there features that exist now that weren’t contemplated maybe with Section 230 that are worth taking a look at? I happen to believe having an open platform for people to share their points of view is very important in this country, but there have been obviously outcomes that are terrible that we’re talking about today.
Are there certain things we could address, especially as it relates to kids, that maybe weren’t contemplated in the early or the late 1990s, while still protecting that open platform, that pressure-release valve for people to speak their minds?
Matthew P. Bergman's Response:
Well, absolutely. Netscape was the biggest online platform when Section 230 was enacted. Yes, Senator, there are specific features, the infinite scroll, the like feature, the streaks, the push notifications, that are designed to addict kids. And again, not by showing them what they want to see, but what they can’t look away from. If a 12-year-old girl really wants to access anorexic content, God forbid, that’s a very sad situation, but I don’t think it gives rise to liability.
On the other hand, if the platforms, only in order to maintain an addictive relationship and sell more ads, feed that child information she’s not looking for, I think that’s a distinction that can be drawn, and one that can preserve the vibrancy of the internet as a free marketplace of ideas.
U.S. Senator John Curtis:
Mr. Bergman, you started your firm, I think, in reaction to what you were seeing out there from some of these parents who are here today. Section 230 was intended to protect us, and it’s clear that other things get in the way. So the question is: does Section 230 reform come into conflict with the First Amendment? How does the post office decide whether or not it should deliver that letter?
Matthew P. Bergman's Response:
Yes, Senator, and just to follow up on your analogy: it would be as though the letter had cocaine attached to it and the person would become addicted. But listen, in 1996, when Section 230 was enacted, the First Amendment had been around for 205 years and had protected the rights of free expression. Section 230 imposed a blanket immunity; if that were to be modified, we would still have a robust body of law to protect free expression and the free exchange of ideas. There are two elements to that. Number one, the question is to what extent AI constitutes speech. That’s a very esoteric question, and the answer is sometimes yes, sometimes no; algorithmic recommendations may or may not have a free speech component. The second question, though, Senator, and this is very important, is that even if something is speech, it doesn’t mean that it’s necessarily protected. The vile material that Selena Rodriguez received was speech, but it was online grooming, and it’s clearly not protected.
Libel isn’t protected. Our courts have a very robust jurisprudence, and there’s nothing more impressive, I think, than Chief Justice Roberts’s analysis in Snyder of where a tort action implicates speech and where it doesn’t. So if cases that would be barred by Section 230 were allowed to proceed in the court system, the jurisprudence that we have on the First Amendment would, I think, draw that vital distinction between speech that is protected and speech that is not.
U.S. Senator John Curtis:
I agree with you, but let me point out, as a lot of my colleagues would, and you’re a lawyer, you would love this, right, that there would be lots of lawsuits. So walk through with me the advantage of the legal system deciding this versus a senator, placed in a moment of time, trying to say this is okay but this isn’t okay.
Matthew P. Bergman's Response:
The laws evolve over time, Senator. Defining what is or isn’t protected speech, in particular, is something that’s conferred on the courts. I think that were Section 230 modified, and I don’t favor repeal, to allow cases such as these families’ to go forward, courts would still have the opportunity to apply time-tested First Amendment jurisprudence to determine the extent to which those claims implicate protected speech.
U.S. Senator John Curtis:
Versus us taking a point in time and then 30 years later deciding if we got it right. Now, I’m out of time, but let me just conclude with this, a quote from Section 230: “encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools.” Are we meeting that standard of Section 230?
Matthew P. Bergman's Response:
Unfortunately not, Senator.
U.S. Senator Shelley Moore Capito:
So my question is, and I’m just going to go down the panel: am I right to be concerned that whatever we could do today, we’re going to come back in 10 years and find it obsolete? I’ll start with Ms. Keller.
Ms. Keller's Response:
The risk of obsolescence is indeed high. However, for AI there’s the backdrop of common law and tort law. Anything that is unprotected by Section 230, as some people have said the output of generative AI would be, has this more flexible set of tools.
U.S. Senator Shelley Moore Capito:
Yeah. But aren’t these companies using AI to generate their algorithms? Or am I wrong there?
Ms. Keller's Response:
I think they’re using a mix of AI and other editorial inputs that the court told us were immunized in the Moody case.
U.S. Senator Shelley Moore Capito:
Okay. Ms. Johnson?
Ms. Johnson's Response:
I agree the risk of obsolescence is high, which is why I think that looking at these foundational questions is critically important. If we were to establish an AI regulatory framework that promoted independent research into the AI developers, if we understood more about what was happening on the platforms, if we limited the data they were collecting about us, those are the things I think could help move the needle in a way that would not run up against whatever happens next, the next big tech thing.
U.S. Senator Shelley Moore Capito:
Thank you. Mr. Bergman?
Matthew P. Bergman's Response:
Yeah, Senator. In MacPherson v. Buick, Justice Cardozo famously said the law changes: the laws of the stagecoach adapt to the age of the automobile. The common law of the states does provide the ability to adapt and apply traditional concepts of responsibility, of negligence and products liability, to ever-expanding technologies. So the first and foremost thing that I believe Congress can do is allow the legal system to operate and use its economic function, as Judge Posner put it, to internalize the cost of safety and impose on social media companies the same rules that every other company has. I think that would be a good start.
U.S. Senator Jacky Rosen:
Thank you. I’m going to move on and talk about speech, then, because Congress and the courts have determined that commercial speech by companies is not the same as free expression by an individual. Companies are liable for harm when they release unsafe products, like an unsafe car seat or an energy drink that causes heart attacks, and they can be held accountable for lying about their products.
So, Mr. Bergman, I’m going to ask you this time: do you think a social media platform failing to enforce its own content moderation policies makes its products unsafe? And are these platforms marketing themselves as one thing by explaining their policies online, but then failing to create the online environment that reflects those policies?
Matthew P. Bergman's Response:
Well, I think that’s very much the case. We just completed a trial in Los Angeles, and because we could get past Section 230, we were able to elicit and present documentary evidence that these companies have intentionally addicted children, knowing that children are being hurt because of it. This evidence directly contradicted the testimony of the executives before this very committee. We think that’s a very important development and a reason why Section 230 should not preclude these cases from going forward. Let the truth be heard.
U.S. Senator Jacky Rosen:
Thank you. I’m going to move back to you, Mr. Bergman, and ask you this. Some larger platforms, many of which have designed their products to maximize engagement over all other metrics, just keep those eyeballs on, have fired content moderation staff and in some cases are no longer taking down content that violates, again, their own policies, and they still claim Section 230 is necessary. So should there be a different standard for larger platforms?
Matthew P. Bergman's Response:
I think everyone should have a duty of reasonable care. I think Section 230 does provide important protections, and some should stay, but it should not be interpreted outside of what Congress intended when it enacted the statute in 1996.
U.S. Senator Ed Markey:
So I’d like to turn to you, Mr. Bergman, if you could. I know that your firm has done a lot of thinking about this issue and how to protect young people online while working within Section 230 in the lawsuits you’re bringing. Can you explain further to the committee how your firm is bringing cases against the platforms and avoiding dismissal on Section 230 grounds?
Matthew P. Bergman's Response:
Well, avoiding dismissal sometimes, not other times. We follow the theory enunciated by the Ninth Circuit in Barnes v. Yahoo and then Lemmon v. Snap. We focus on the design, not the content.
The algorithms don’t care what they show the kids; they show whatever addicts them to the platform. It could be, you know, moonbeams and rainbows, as long as a kid becomes addicted to the platform through operant conditioning. So we focus on the addictive design. We focus on the infinite scrolls. We focus on the likes and the streaks features. We focus on the fact that these companies, as we’ve learned in this litigation, deliberately target kids knowing that their brains are not fully developed and that they’re very susceptible as adolescents to peer pressure.
U.S. Senator Ed Markey:
So you say the conduit itself?
Matthew P. Bergman's Response:
That is correct. The platform itself is dangerous.
U.S. Senator Ed Markey:
Do you agree it’s important for us to pass the Children’s Online Privacy Protection Act update in order to guarantee that they can’t target kids with ads, that parents can demand everything be erased, and that we raise the covered age to include everyone under 17?
Do you agree that should become law in our country?
Matthew P. Bergman's Response:
Absolutely, Senator. The bipartisan leadership of this committee has been instrumental. This committee really has been the fulcrum for bringing these issues to the fore over the last five years, and on behalf of the families, you have really already saved a lot of lives.
U.S. Senator Ed Markey:
I’m right now working on AI chatbot legislation. Does anyone want to talk about that and the importance of legislating in that area?
Matthew P. Bergman's Response:
Our firm brought the first case involving AI chatbots, involving a 14-year-old boy who was goaded into suicide by an online chatbot. We successfully overcame a First Amendment challenge and were able to move forward. We are bringing the first cases against OpenAI for the same thing. Again, this is basically a design flaw. We know that AI is here to stay, and it does a lot of good, but the companies need to take proactive measures to think about safety.
U.S. Senator Marsha Blackburn:
Mr. Bergman, I want to come to you. As I’ve listened to the hearing today, I’ve been thinking back through where we were in the mid-90s and the advent of Section 230, which seemed like a really great idea for something that was going to be an unknown, if you will, with the virtual space, giving companies a chance to get their sea legs under them. But what we have seen is massive abuse of Section 230. As these social media platforms and big tech have grown, they have become more given to making excuses for their actions and blaming it on Section 230, saying that it allows them to do this, that, and the other.
As I’ve thought through this, one of the reasons I have grown to support sunsetting and removing Section 230 is that big tech has proven they are incapable of regulating or policing themselves. They will not do it. They’re like an errant child who keeps pushing and pushing and trying to push away any kind of responsibility, any discipline. And they fight it every single day. We have seen it as we have worked with parents. We have seen it as we talk to pediatricians and principals who tell us about behavioral issues in school that all trace back to what happens online, while the online platforms do nothing.
So talk to me a little bit about big tech’s refusal to take action to protect, and the need that puts on Congress to take action to force them to protect.
Matthew P. Bergman's Response:
Well, Senator, first of all, thank you from the bottom of my heart for your steadfast, indefatigable efforts on this issue. The kindness, the compassion, and the commitment are an inspiration to all of us, and thank you for that. We just finished a trial in which, because we got over Section 230 at least a little bit, we have been able to see the internal documents from these companies, and we see that indeed there are people of conscience within these companies sounding the alarm bells, and time and time again their calls go unheeded, because any time a design change would impair profitability or engagement, they say no.
How many times have the executives been excoriated before your committee, and they don’t change their behavior? How many times have they had bad press? The only thing that’s going to change their behavior is when they have to bear the economic costs of their deliberate design decisions. As you know, Richard Posner or Milton Friedman would say you have to internalize the cost of safety. If they have to bear the cost of their dangerous platforms, instead of these families, instead of clergymen and policemen and doctors and psychologists and insurance companies, then they will have the incentive to change their behavior. But right now, there’s no such pressure. If they actually had to bear the cost through a civil lawsuit, their behavior would change.
Through the imposition of civil liability, we can change their economic calculus. We’re not talking about imposing a special duty. We’re just talking about imposing the same rules that every other company has. Every other company in America operates under a duty of reasonable care. We’re asking the same thing for social media.
U.S. Senator Marsha Blackburn:
And I’d like for you to respond for just a minute about why you think it is important to have an AI framework as we begin to move forward with more AI concepts moving into commercialization.
Matthew P. Bergman's Response:
Well, because we continue to see families that have buried children because AI chatbots encouraged suicide. One would have thought that after two and a half years, I could no longer be shocked.
When I saw what Sewell Setzer was provided and how he was encouraged to kill himself, when I saw that Zane Shamblin was given a how-to manual. We have to do something, Senator, and your leadership is such an inspiration. I’m just so grateful, on behalf of all the people I represent, but also as a father and a grandfather.