
Two Landmark Verdicts. Now What?

Updated: Mar 26

The Meta and YouTube verdicts mark a real shift in platform accountability. Here's what the conversation that follows needs to get right.



Well, I didn't have two juries holding two digital platforms accountable within two days of each other on my 2026 Bingo card, but here we are.


On Tuesday, a New Mexico jury found Meta liable for knowingly harming children's mental health and concealing what it knew about child sexual exploitation on its platforms, awarding $375 million in damages. On Wednesday, a California jury found Meta and Google liable for the depression and anxiety of a young woman who began using their platforms as a small child, awarding $3 million in compensatory damages with punitive damages still to be determined.


As someone who has spent my career studying young people's relationship with digital platforms, I admit that I hold this moment with something between relief and caution.


These verdicts matter. What we do with them matters just as much — how we frame them, what we demand next, and whether we resist the urge to treat a jury verdict as a solution when it's just a beginning.

The story that gets told about these cases in the weeks ahead will shape policy, shape public understanding, and shape how we support young people.


What the verdicts establish and why it matters


Let's start with what deserves real recognition, because we should celebrate this moment without burying it in caveats. The evidence presented in these trials wasn't abstract. Internal Meta documents revealed executives explicitly targeting tweens, with one memo reading: "If we wanna win big with teens, we must bring them in as tweens." The New Mexico jury found Meta engaged in "unconscionable" trade practices, language that cuts through the usual corporate deflection about user safety being a company priority. What the documents made clear is that exploitative design isn't an accident or a side effect: it is the business model. Engagement is the product. Children are the resource.


For those of us who have argued for years that platform design is not neutral — that the choices companies make about algorithms, notification systems, recommendation engines, and age enforcement are deliberate decisions with foreseeable consequences — these verdicts matter. They establish that courts can hold companies accountable for the architecture they build and the harms that architecture produces.


What the verdicts confirm is something advocates have long argued: harmful design is a choice. And choices carry responsibility.

How we talk about this matters


News coverage has framed these cases as a "social media addiction trial," and I want to be careful with that language. Addictive design is real, and the trial documents proved it was intentional. But addictive design is different from a generation of addicted users. What most young people experience is problematic or habitual use — patterns shaped by deliberate design choices — and research shows that framing their relationship with platforms as addiction rather than as a response to engineered design actually reduces their sense of control and increases self-blame. The win here is holding platforms accountable for the architecture. The goal is to change the design, not to pathologize the users the platform was built to capture.


These cases are also being widely characterized as social media's Big Tobacco moment, and I understand the comparison. Like tobacco companies, Meta knew its products were causing harm, concealed that knowledge, marketed them aggressively to children, and built its business model around that harm anyway. That parallel is real, and the accountability it demands is justified.


But social media isn’t a drug. It can foster connection, creativity, and belonging — if it's built to.

Tobacco is inherently harmful: there is no safe level of use, and abstinence for minors is the right response. But social media isn't a drug. For young people across all kinds of circumstances, it can be a genuine source of belonging, creativity, learning, and community. The answer isn't bans; it's demanding that companies build platforms that protect the young people already using them, which is exactly what these verdicts make possible.


When we lean too hard on the tobacco analogy — or the addiction frame that travels alongside it — we risk returning to something media scholars have spent decades pushing back against: the idea that exposure to media causes harm in a direct and predictable way, that young people are passive recipients of whatever platforms “inject” into them. That model erases context, erases agency, and erases content — the actual substance of what young people encounter and how they make meaning from it.


These cases smartly sidestepped questions of content, and I think that was the right legal strategy. But content matters. Not all use is equivalent. Not all young people experience the same platforms the same way. Holding design accountable doesn't require pretending otherwise.


What these verdicts cannot fix


The conversation that follows these rulings will, I expect, treat them as confirmation that social media is the primary driver of the youth mental health crisis. The research doesn't support that conclusion, and if that narrative takes hold, it will distort the responses we pursue.


Problematic media use is real. Exploitative design amplifies it. The harms documented in these trials are serious and warranted a serious legal response. And this is the part that we tend to drop from the conversation: the young people most vulnerable to those harms are often already navigating conditions that have nothing to do with any platform.

Decades of research consistently show that economic hardship, housing instability, food insecurity, and parental stress are among the strongest predictors of depression and anxiety in young people. Platforms don't create that vulnerability. They find it, and they monetize it.


Understanding this changes what we should be asking. The young woman at the center of the California case started using YouTube at six years old. Before we ask only what the platform failed to do, we should ask why a six-year-old was on it in the first place. And when we ask that honestly, the answers point somewhere structural.

Families don't turn to screens because they're negligent. They turn to screens because the social infrastructure that used to hold children — affordable childcare, community programs, after-school care — has been systematically defunded, cut, or priced out of reach.

Screens fill a real gap, and Meta built its business on exploiting exactly that.


Adolescents face a version of this too. The third spaces where previous generations of teenagers worked out identity and belonging — parks, rec centers, malls, street corners — have either disappeared or become actively hostile to young people without adult supervision. If we don't make room for unsupervised social life in the physical world, we shouldn't be surprised that teens find it online, or that platforms built to capture their time and attention are there waiting when they do.

For many teens, digital spaces aren't a temptation to be resisted. They're filling a real absence: a place to exercise autonomy and learn to navigate the world on their own terms.

None of this changes the case for platform accountability. Harmful design choices amplify vulnerability that already exists, and that amplification is real and serious. But if we allow these verdicts to substitute for investing in families, rebuilding community infrastructure, and funding mental and physical health care for families and children, we'll have traded structural solutions for a satisfying story about corporate villains — and young people will keep suffering in the meantime.


The framework we need


At this week's Common Sense Summit, Baroness Beeban Kidron, the architect of the UK's Age Appropriate Design Code, reframed the debate in a way that stuck with me. She is not, she said, in favor of banning children from social media. That's the wrong approach. What she is in favor of is banning platforms from children (until certain conditions are met). Those conditions aren't complicated: safety and wellbeing by design, by default, and verified before products reach children at all.


That reframe matters. It shifts responsibility from children and families, who have been told to limit, monitor, and self-regulate, back to the companies that built the systems in the first place. And it has a clear real-world precedent.


I'm far from the first to point this out, but it bears repeating: we don't allow any other products that children use to enter the market without rigorous safety testing, regulatory oversight, and market recalls when harm is documented.

Parents trust products not because they've personally audited the supply chain, but because there are industry-wide requirements that companies must meet before putting a product in front of a child.

And critically, intent doesn't exempt a product from that standard. The standard isn't whether businesses market a product to children; it's whether children are likely to be present. We extend no such expectation to digital products, even as we know children use them for hours every day, even as those same internal documents prove companies knew about the harms and built for harmful engagement anyway.


The verdicts this week were possible because individual families fought for years through civil litigation. That is accountability, but it’s not the same as safety. What we need is the equivalent of consumer product safety standards for the digital environment: not just damages awarded after harm is done, but enforceable design requirements before products reach children.


California's Age Appropriate Design Code Act, signed into law in 2022 and struck down by a First Amendment challenge from tech industry lobbyists before it ever took effect, pointed exactly toward what this could look like: mandatory risk assessments for any product likely to be accessed by children, default privacy protections for users under 18, no behavioral profiling of minors, and no engagement-maximizing nudges by default. We know it works.


After the UK's version took effect, more than 90 platforms made architectural changes, not because they wanted to, but because they were required to. That is what design accountability looks like when it works upstream rather than in the rearview mirror.


What I want the conversation to hold


These verdicts open a moment that could genuinely move things forward, and I want to make sure we use it well.


I want it to include real demands for design change: not just damages, but structural requirements that mean the next generation of children doesn't need to sue for protection. I want it to include investment in the families and communities that are the primary protective buffers between children and harm, because platform regulation, however necessary, will not rebuild the third spaces, fund the childcare, or address the economic instability that puts families under pressure in the first place.


We also need to expand the frame. "Social media" is too small a category for the problem we're describing. YouTube's lawyers already tested this, arguing that it is a streaming platform, not a social media site. The jury assigned them just 30% liability, suggesting the argument landed at least partially. Whether or not it holds legally, the definitional fight is just beginning, and platforms will keep using it. The design choices that exploit young users — attention harvesting, behavioral profiling, engagement optimization — show up in gaming platforms, educational technology, and AI companions just as readily as they show up in Instagram.


If we build our policy responses around "social media" as a category, we will protect children from yesterday's harm while the next generation of exploitative design goes unregulated.

The digital environment children inhabit is larger than any one genre of app, and our accountability frameworks need to be too.


And I want it to hold, with genuine care, the young people who are not primarily victims of Instagram, but who are navigating real isolation, real economic stress, and a digital environment that was built to exploit both.

Media literacy education matters here, not as a substitute for structural change or platform accountability, but as part of what we offer young people who deserve to understand the environments they're living in. 

Teens are already asking critical questions about their relationship with platforms. They need us to meet that awareness with honesty about what they're up against — not just restrictions handed down without explanation, and not a cultural conversation that treats them as passive casualties rather than people with their own developing agency.


Platform accountability and the larger context of young people's lives are not in competition; they're inseparable. The language we use to describe these harms, the definitions we rely on, the causes we name and the ones we don't: all of it will shape policy, shape public understanding, and shape whether young people are better supported or just better surveilled.


These verdicts opened a door. What we do on the other side of it is up to us.

 
 