Social Media on Trial: Everything you need to know
Two jury verdicts handed down within 24 hours of each other this week have big tech rattled, and for good reason. At their core lies a consequential legal and ethical debate: when a technology platform is engineered to be addictive, who bears responsibility for the harm it causes?
Snacks are made to be addictive too. So are TV shows. What role do personal decisions play in this? How about parental oversight?
Both verdicts will almost certainly be appealed, but this isn’t a simple issue.
The Los Angeles Verdict
On Wednesday, a Los Angeles Superior Court jury found Meta and Google’s YouTube negligent in the design of their social media platforms, awarding $3 million in compensatory damages and an additional $3 million in punitive damages. The jury found that both companies acted with “malice, oppression or fraud,” assigning Meta 70% of the responsibility and YouTube 30%.
The plaintiff, identified in court documents only by her initials, K.G.M. and referred to as “Kaley” by her legal team, is now 20 years old. But this case goes all the way back to her childhood. She began using YouTube at age 6 and Instagram at age 9. By the time she finished elementary school, she had posted 284 videos online. She testified that she was on social media “all day long” as a child, and that she developed depression, anxiety, body dysmorphia, and suicidal thoughts. These are conditions she and her legal team attribute in significant part to the design of those platforms.
The jury deliberated for more than 40 hours over nine days, an unusually long timeframe and a signal that this was not a straightforward or unanimous decision. In fact, only nine of the twelve jurors needed to agree on each count in the civil trial, and the jury told Judge Carolyn B. Kuhl mid-deliberation that it was struggling to reach consensus on one of the defendants.
This is the first case of more than 2,400 similar lawsuits consolidated in California state court against Google, Meta, TikTok, and Snap. TikTok and Snap settled with the plaintiff before the trial began. Meta and Google fought it to verdict. Another bellwether trial is scheduled for this summer and a separate federal trial involving school districts and parents is also set to begin in the Northern District of California later this year.
“How do you make a child never put down the phone? That’s called the engineering of addiction. They engineered it, they put these features on the phones,” lead plaintiff attorney Mark Lanier said during the trial’s closing arguments. “These are Trojan horses: they look wonderful and great…but you invite them in and they take over.”
A Meta spokesperson responded: “We respectfully disagree with the verdict… Teen mental health is profoundly complex and cannot be linked to a single app,” adding that the company plans to appeal. Google, whose YouTube representatives argued the platform is more like television than social media, also plans to appeal.
The New Mexico Verdict
The day before the Los Angeles decision came in, a separate jury in Santa Fe found Meta liable on all counts in a case brought by New Mexico Attorney General Raúl Torrez. The jury ordered Meta to pay $375 million in civil penalties, accounting for $5,000 per violation, spread across thousands of individual violations. Meta was found liable for breaching the state’s consumer protection law by misleading users about platform safety and enabling child sexual exploitation on Facebook, Instagram, and WhatsApp.
The verdict came at the conclusion of a six-week trial that followed a 2023 undercover investigation in which state prosecutors created a fake profile of a 13-year-old girl and found that the account was immediately inundated with explicit material and solicitations from adults. Multiple individuals were criminally charged as a result.
This is the first time a social media company has been held liable by a jury for harming underage users in a state-led action. The $375 million figure, while substantial, is only a fraction of the roughly $2.1 billion prosecutors sought.
Meta says it disagrees with the verdict and will appeal.
A second phase of the New Mexico trial, to be decided by a judge rather than a jury, begins on May 4. The plaintiffs are seeking structural injunctive relief: real age verification, algorithmic changes, and an independent monitor.
Section 230
Historically, social media companies have sheltered behind Section 230 of the Communications Decency Act, the 1996 law that generally immunizes platforms from liability for content their users post. But in the cases above, the plaintiffs sidestepped the issue of content and focused on design instead.
A platform cannot be sued for a post a user wrote. But it can potentially be sued for building a feed with infinite scroll, for deploying autoplay video, for engineering a notification system designed to interrupt users at calibrated intervals and pull them back in. These are choices made by engineers and product managers, and they are not shielded by Section 230.
But it’s also worth asking: when a platform makes active editorial choices through its algorithm, determining what content users see in their feed independent of what they have chosen to follow or upvoted, does that platform cross the line from neutral content provider to editor?
There is certainly evidence that platforms intervene in these ways. Internal Facebook documents, disclosed by whistleblower Frances Haugen in 2021, revealed that beginning in 2017, Facebook’s algorithm weighted certain emoji reactions, such as the “angry” reaction, at five times the value of a simple “like.” Stronger emotional reactions meant higher engagement, and higher engagement meant more advertising revenue. Negative comments led to more clicks.
Such posts, however, were also more likely to contain misinformation and inflammatory content, according to Facebook’s own data scientists, who flagged the problem in 2019. The company nonetheless kept the policy in place until late 2020.
An active editorial choice.
Similarly, an internal Instagram presentation from 2018 showed that targeting minors was part of the company’s strategy: “If we want to win big with teens, we must bring them in as tweens.”
So what now?
The legal and philosophical ramifications of all of this are not straightforward.
How do you prove that someone’s depression or anxiety stems specifically from Instagram, rather than from life circumstances and an inability to cope? How do you know whether their turning to social media in the first place was a consequence of existing depression or neglect? And how do you determine whether the overall net impact on someone is negative or positive?
As mentioned earlier, snacks are engineered to be addictive (“snackable”). So are television shows, video games, and slot machines. For products like tobacco and alcohol, there are minimum age requirements, restrictions on how they can be marketed to minors, and warnings on packaging. These laws are usually calibrated to the severity of the harm and the vulnerability of the target population. Nicotine, for example, has a physiological impact, creating chemical dependency.
Social media is not currently regulated this way.
There’s also the matter of personal responsibility. For adults, the answer might seem more obvious, but when it comes to minors, things are more complicated. Minors are afforded greater protections precisely because they lack the cognitive development to make fully informed decisions, making them especially vulnerable. But what is their parents’ role in all of this? Are parents not the ultimately responsible party, given that they can control or remove access to such platforms?
Another side issue, not currently being discussed, is whether a platform is complicit in cases of violent threats, harassment, or bullying: specifically, when the abuse is reported to the company, yet the company continues to allow the alleged perpetrators access to the alleged victim and takes no action to protect them.
Legislation…
There is a reasonable argument to be made that suing companies after the fact, absent specific legislation defining what was unlawful, is legally and philosophically awkward.
Equally, there’s a compelling argument that proper legislation is necessary. This means clearly defining minimum safety standards (age verification, algorithmic transparency, default-safe settings for minors’ accounts) and enforcement mechanisms, including penalties. These should be grounded in what is actually within a company’s control to address.
If you found this story useful, please consider supporting my work by sharing, becoming a paid subscriber, or making a one-off donation via Buy Me a Coffee ☕️.