During Meta’s first-quarter 2026 earnings call, Chief Executive Mark Zuckerberg unveiled an unprecedented capital expenditure plan of $125 to $145 billion for the year, all directed toward artificial intelligence infrastructure. The call lasted over an hour, yet not a single analyst inquired about the company’s mounting legal troubles related to child safety. This omission is striking given that Meta faces a wave of addiction lawsuits, youth bans across multiple continents, and a new European Union investigation, all risks that the company’s own CFO described as potentially “material.”
Zuckerberg’s prepared remarks focused on AI models, recommendation engines, and the advertising systems that generate $56 billion in quarterly revenue. He told employees that the 8,000 layoffs announced this month are part of a reallocation from people to infrastructure. But investors seemed content to let the child safety crisis remain a footnote, even as the legal and regulatory threats accelerate.
The addiction verdicts
On March 25, a Los Angeles County Superior Court jury found Meta and Google liable for designing addictive platforms that harmed a young user, awarding $6 million in damages. Meta was found 70 percent responsible. It was the first social media addiction case to reach a verdict, setting a precedent for thousands of similar lawsuits awaiting trial.
In a separate trial in New Mexico, a jury determined that Meta had violated the state’s Unfair Practices Act by concealing its knowledge of child sexual exploitation and the effects of its platforms on children’s mental health. The penalty was $375 million. Massachusetts’ highest court ruled in April that Meta must face a state lawsuit alleging deliberate design features to addict young users. More than 40 state attorneys general have now filed child safety suits against Meta. Bellwether trials are scheduled throughout 2026.
The company’s own chief financial officer, Susan Li, acknowledged in prepared remarks that Meta “continues to see scrutiny on youth-related issues” and that ongoing trials “may ultimately result in a material loss.” The word “material” carries particular weight in securities disclosure: it signals a potential loss large enough that a reasonable investor would consider it significant to the company’s financial statements. For context, the tobacco industry’s master settlement in 1998 cost $206 billion over 25 years. Scaled to Meta’s current financial size, with $56 billion in quarterly revenue, a comparable settlement would rank among the largest corporate liabilities in history. The legal theory being tested in these cases mirrors that of tobacco litigation: the company knew its product was harmful, concealed evidence, and continued to distribute it to minors.
Regulators in multiple jurisdictions are simultaneously investigating platforms for child safety failures under new online safety legislation. This expands the legal surface area beyond American courts. The European Commission, for instance, escalated a probe this week into Meta’s failure to prevent underage users from accessing its platforms, a case that could result in fines of up to 6 percent of global revenue. A US Senate committee backed legislation requiring Meta and other AI companies to prevent minors from using chatbots, extending regulatory scope from social media feeds to AI-powered conversational products.
Global bans and regulatory momentum
While lawsuits address past harm, governments are moving to prevent future exposure. Indonesia became the first Southeast Asian country to ban social media for users under 16, prohibiting Google’s YouTube, ByteDance’s TikTok, and Meta’s Instagram, Facebook, and Threads from hosting minors. Australia enacted a similar ban in December 2025. France approved an under-15 ban in January. Spain announced its own under-16 prohibition. Each ban creates compliance costs, reduces Meta’s addressable market for young users, and adds to the regulatory scrutiny that Li acknowledged on the earnings call.
The bans also create a natural experiment: if social media usage among minors declines in countries with prohibitions, and if measurable improvements in youth mental health follow, the evidence base for the addiction lawsuits in the United States strengthens. This could accelerate legal pressure on Meta and other platforms, making the outcome of these international regulations a critical factor for the company’s long-term liability.
The AI spending rationale
Meta has been cutting hundreds of jobs across Reality Labs, recruiting, and sales as it doubles AI spending. The $125 to $145 billion in 2026 capital expenditure is roughly double what the company spent last year, and nearly all of it goes to data centers, GPUs, custom silicon, and the infrastructure supporting Llama models and Meta’s Superintelligence Labs. The Broadcom chip deal, extended to 2029, commits Meta to a custom silicon program that will cost additional billions.
The investment thesis is that AI will improve Meta’s recommendation models (keeping users engaged longer), improve its advertising models (selling better-targeted ads), and eventually generate new revenue streams from AI products. However, the child safety lawsuits allege that Meta’s existing recommendation models are already too effective at keeping users engaged, and that their effectiveness in retaining young users is precisely what causes the harm. The algorithmic systems that Meta is spending $145 billion to improve are, in the plaintiffs’ legal theory, the same systems that cause addiction. Zuckerberg’s AI vision and Meta’s legal liability are not separate problems—they are the same problem viewed from different angles.
Wall Street was not persuaded by the AI spending plan. Meta’s stock fell the most in six months following the earnings call, with Bank of America analysts describing the company as “still a ‘show-me’ story on AI returns.” The market seems skeptical that these massive investments will yield quick returns, especially with a growing legal overhang.
The silence in the room
Snap will report earnings on May 6. When CEO Evan Spiegel was asked about teen social media bans on the company’s previous earnings call, he dismissed them as having “little effect on the company’s bottom line” and moved on. Zuckerberg did not need to deploy a similar deflection this week because no one asked the question. The analyst community that covers Meta appears to have accepted, at least for now, that AI spending is the story and child safety is a footnote. This may be correct in the short term: the addiction lawsuits have not yet produced a verdict large enough to affect Meta’s financial statements, the international bans have not yet reduced revenue, and the EU probe has not yet resulted in a fine.
But Li’s use of the word “material” in her prepared remarks suggests that Meta’s own legal team believes the risk is real. Prepared remarks are not extemporaneous; they are reviewed by lawyers. “May ultimately result in a material loss” is the language a company uses when its attorneys have told the board that the probability of significant liability is high enough to require disclosure. The question that no investor asked Zuckerberg, and that Zuckerberg chose not to address, is whether the $145 billion AI program will generate returns faster than the child safety lawsuits generate costs. The AI program is the future Meta is building. The lawsuits are about the present Meta already built. The present has a way of arriving before the future does.
Meta’s predicament is a case study in corporate prioritization. By pouring capital into AI, the company is betting that it can outrun its legal problems. But the pace of regulatory action and litigation suggests that the day of reckoning may come sooner than Wall Street expects. As jurisdictions around the world tighten restrictions and more bellwether trials reach verdicts, the cost of ignoring child safety could escalate quickly. For now, investors seem content to let Zuckerberg focus on his AI vision, but the evidence mounts that the question nobody asked will eventually demand an answer.