C4D partners with legal and social media experts on amicus brief in MDL case against Meta

By Yaël Eisenstat and Christopher Murray

A swath of potentially consequential cases for kids’ online safety is making its way through U.S. courts, focusing on how Meta’s design features and choices allegedly harm young users. Dozens of Attorneys General filed suit against Meta in 2023, alleging that the company’s design choices led to harmful behavior among younger users, and that its own statements deceived the public on numerous issues. In what’s called a multi-district litigation (MDL), 33 of those cases have been consolidated in the Northern District of California. Meta has moved to have the cases dismissed, invoking Section 230 immunity for what it claims is third-party speech. Cybersecurity for Democracy (C4D) partnered with the Electronic Privacy Information Center (EPIC), Common Sense Media, and the Tech Justice Law Project on an amicus brief, filed on June 30th in the Ninth Circuit, where the appeal is being heard, explaining why Section 230 should not apply in this case. C4D provided technical analysis of how the various design features work, what their goals are, and how they could be designed differently without affecting third-party speech. Eleven legal scholars signed on in support of the brief.

C4D built on EPIC’s explanation that the purpose of Section 230 was to prevent the “moderator’s dilemma,” which forces internet companies to choose among not hosting user-generated content at all, hosting that content without moderating it, or moderating it in an overly censorious way. We added analysis of how the design features in this case do not, in themselves, lead to the moderator’s dilemma. The core points we hope the courts will consider are:

  • Social media algorithms have evolved in both purpose and design since Section 230 was first written. Judicial interpretation of algorithmic design features, however, continues to rely on cases where the technical analysis was based on earlier algorithmic designs. In some opinions, judges analogize modern recommender systems to message boards, where users received content directly related to what they were seeking.
  • Modern recommender systems, however, are often keyed not to what the user is seeking but to business goals, such as maximizing time on screen, engagement with content, and even novelty.
  • Social media platforms are products, and they each have particular design features. 
  • A key point in assessing modern algorithmic feeds and recommender systems is that they are designed to elicit user behavior that furthers the company’s business goals, rather than merely sorting and delivering information based on its relevance to the user; the sketch after this list illustrates the distinction. We examined company patents as one way to see clearly what behaviors these systems are designed to elicit. Understanding how the algorithms and other social media design tools at issue in this case affect behavior is critical, as it is the behavior change that contributes to the claims in these cases.
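
To make that distinction concrete, the sketch below contrasts two ways of ordering the same third-party posts. This is a minimal, hypothetical illustration in Python: the field names, scores, and weights are our own inventions and do not describe Meta’s actual systems.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    # All fields below are hypothetical 0-to-1 model scores.
    topic_match: float        # how closely the post matches what the user sought
    predicted_dwell: float    # predicted time the user will linger on the post
    predicted_reshare: float  # predicted likelihood the user reshares it
    novelty: float            # how unlike the user's prior feed the post is

def rank_by_relevance(posts: list[Post]) -> list[Post]:
    """Message-board-style ordering: surface what the user was looking for."""
    return sorted(posts, key=lambda p: p.topic_match, reverse=True)

def rank_by_business_objective(posts: list[Post]) -> list[Post]:
    """Engagement-optimized ordering: score posts by the behavior they are
    predicted to elicit (time on screen, reshares, novelty), regardless of
    relevance to the user's intent. The weights are arbitrary stand-ins."""
    def score(p: Post) -> float:
        return 0.5 * p.predicted_dwell + 0.3 * p.predicted_reshare + 0.2 * p.novelty
    return sorted(posts, key=score, reverse=True)
```

Both functions sort the same pool of third-party content; neither removes or edits a post. Only the objective changes: the first serves the user’s expressed interest, while the second optimizes for behavior the business values. It is that design choice, not the underlying speech, that the claims in these cases target.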

We provided technical analysis of how various features in this case work and what alternatives exist that would not result in the moderator’s dilemma. That analysis is not exhaustive of every type of social media design feature, but the same overarching point applies: these features are designed with a business goal in mind.
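
As a rough illustration of what such alternatives look like, consider the hypothetical sketch below. The feature names are our own shorthand for design features of the kind at issue in this litigation (autoplay, infinite scroll, engagement-based ordering), not a description of Meta’s actual systems. The point is that each alternative changes how content is delivered, not which content is hosted.

```python
from dataclasses import dataclass, replace

@dataclass
class FeedDesign:
    # Hypothetical delivery-side settings; none touches any user's post.
    autoplay_next_video: bool = True
    infinite_scroll: bool = True
    quantified_like_counts: bool = True
    default_ordering: str = "engagement"  # vs. "chronological"

def safer_alternative(current: FeedDesign) -> FeedDesign:
    """Return a design with the same hosted content but different delivery
    mechanics. No post is removed, demoted for its message, or edited, so
    no moderation decision is triggered."""
    return replace(
        current,
        autoplay_next_video=False,
        infinite_scroll=False,
        quantified_like_counts=False,
        default_ordering="chronological",
    )
```

Because changes like these leave the pool of user-generated content untouched, they do not force the platform into the moderator’s dilemma that Section 230 was written to resolve.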

As our partners at EPIC have explained, our coalition brief demonstrates that Section 230 does not block the addictive design claims against Meta because these claims do not put Meta into the moderator’s dilemma. The company could choose alternative designs that mitigate the harm to kids without touching user content. The brief assures the court that denying Meta Section 230 protections here won’t break the internet. On the contrary, it would incentivize more pro-social platform design by ensuring greater accountability when companies design their platforms in harmful ways.

*Our summer intern, Maxwell Keiles, contributed to this blog post.