
REPORT: Big Tech’s War on Kids Puts Corporate “Speech” over Kids’ Safety and Has Dangerous, Far-Reaching Implications


Oct 12, 2023

A California Judge’s Decision Threatens Digital Right to Privacy, Extending Speech Rights to Corporate Surveillance

The following report discusses the negative impact of the recent federal court ruling in NetChoice v. Bonta.

Background

After its unanimous passage through the California legislature in 2022 and subsequent signing into law by Governor Newsom, the California Age Appropriate Design Code (AADC) was attacked as unconstitutional by NetChoice, a trade lobby group for Big Tech, which sued Attorney General Bonta, claiming the law violates the First Amendment.

NetChoice, which launched a Big Tech-funded litigation center this year to bankroll its war on kids, filed a motion seeking to stop the AADC from coming into force. On September 18, 2023, Judge Beth Labson Freeman of the U.S. District Court for the Northern District of California found the entirety of the AADC unconstitutional under the First Amendment and blocked it from taking effect.

Why it Matters

In the case, a California federal judge invalidated a kids’ online safety and privacy law, passed by a democratically elected legislature, on the grounds that it violates the First Amendment. In doing so, the judge implied that the First Amendment would prohibit virtually any restriction on how companies design their products or handle commercial data flows, reasoning that would also weaken the ability to regulate digital commercial activity generally.

The force and effect is that social media platforms would have a nearly unfettered right to track and surveil users of all ages. If allowed to stand, the court’s ruling jeopardizes many existing data privacy and consumer protection laws.

This ruling also strips lawmakers of the ability to enact any meaningful tech regulation in the future to:

  • protect kids online
  • regulate how our data is collected, used, or shared, and
  • constrain runaway Artificial Intelligence technology

The arguments advanced by NetChoice, the tech-funded lobby organization that has received significant contributions from Apple, TikTok, Google, Meta, and Snapchat, and accepted by the district court undermine the entire premise of internet regulation and would tie the hands of lawmakers and regulators seeking to rein in tech companies.

Contrary to the court’s decision, freedom of speech is not the freedom to siphon data from unsuspecting internet users and use it against them, nor is it the freedom to design products in a manner that harms consumers, especially children.

Ultimately the freedom of speech cannot constitute corporate freedom to act with impunity. In that respect, the NetChoice v. Bonta decision is not only wrong, but dangerous. It must be appealed. 

The Court’s Faulty Logic 

The Court’s ruling squarely puts the burden for protecting privacy on consumers instead of on tech companies and their platforms, flying in the face of established privacy law as well as common sense. The ruling states, “Users can manage their online privacy by reading privacy policies before engaging with the provider’s services.” Such policies are often difficult to find, overly broad, and even more difficult for laypeople to interpret. Accessing and understanding these policies is hard enough for adults; we certainly should not put the burden on children to wade through complicated language to understand their privacy risks.

The Court states that because parts of the law require covered businesses to identify and disclose to the government potential risks to minors and to develop a plan to mitigate those risks, the law triggers First Amendment scrutiny. In other words, the ruling holds that because the law requires such reporting, it facially requires a business to express its ideas and analysis about likely harm. That reasoning is absurd on its face and would undermine the government’s ability to protect consumers from dangerous products altogether.

Essentially, the ruling interprets the AADC’s application of standard safety measures and testing — commonplace in all other industries — as an infringement of a corporation’s right to free speech.

Even more absurdly, the Court ruled that requiring businesses to affirmatively provide information to users – the same way that food companies are required to provide nutrition labels and mattresses must come with safety information – is akin to “requiring speech,” which is therefore (somehow?) also regulating speech.

The Court does recognize that a regulation that restricts conduct without a “significant expressive element” is not subject to any level of First Amendment scrutiny. Therefore, most of the California AADC should not be subject to First Amendment scrutiny at all. As such, it is perplexing how the Court could find the entire law to be unconstitutional.

But perhaps most astoundingly, the Court sided with NetChoice’s counsel, who argued that the law did not address a “sufficiently concrete harm.” Of course, the harms of tech platforms on children and consumers are well known at this point, including, among many others, infinite scroll, dark patterns, predatory algorithms, data theft, and lost classroom instruction time – and harms as extreme as depression, eating disorders, sexual predation, and teen suicide. The U.S. Surgeon General has warned that social media poses “a profound risk of harm” to adolescents.

The judge’s ruling cited the benefits of the internet as a reason to overturn the law – i.e., that children may learn new information and discover new interests. No one has argued otherwise. However, kids shouldn’t be tracked across every corner of the internet as they search for information and explore their identities, and children should not have to sacrifice privacy for the sake of discovery.

Above all, the Court’s decision renders it virtually impossible to meaningfully regulate not just social media platforms but the broader digital ecosystem, impeding the ability of democratically elected legislatures to regulate data privacy, govern platform design as a matter of consumer protection, mandate transparency, and address emerging Artificial Intelligence technology.

Impacts on Tech Accountability Efforts 

Kids’ online safety

This court decision prioritizes corporations’ First Amendment speech protection over children’s privacy and safety from online surveillance and predatory behavior. The implications are potentially far-reaching, hamstringing any existing or future legislative proposal to protect kids from online harm.

Furthermore, this ruling extends beyond children and may impact the entire tech regulation landscape. The ruling’s First Amendment justification implicates broader consumer safety rules that, like the AADC, may not regulate content or speech at all.

Data privacy

The court swept aside existing data privacy legal frameworks, suggesting that any regulation of data collection or use related to content curation could be unconstitutional. The court rejected mandatory privacy settings and mandatory privacy policies, stating that, despite an abundance of evidence to the contrary, the harms to kids stemming from privacy violations, dark patterns, or targeting of harmful content are unclear.

This decision effectively threatens the viability of existing and future privacy legislation, including efforts to minimize the amount of information tech companies collect and use for targeted advertising and surveillance.

Mandated transparency

The court also found mandatory assessments of how companies’ use of data can harm kids to be unconstitutional. These kinds of assessments are corporate accountability measures common in laws in the United States and around the world. The AADC includes basic data protection and transparency requirements found in many current and proposed privacy laws in the United States, all of which were found unconstitutional by this court.

Safety by design

The court also suggests that using a consumer protection approach to regulate company design decisions is unconstitutional, as decisions about how to design a product or feature lie within a company’s discretion and therefore constitute protected corporate speech under the First Amendment.

Just as regulation provides a safety framework to ensure that our physical spaces are built using non-toxic materials and designed to accommodate a range of abilities, our digital spaces need safety frameworks too.

Artificial intelligence

If computer code and the way companies make design decisions are protected under the First Amendment as corporate speech, necessary legislation to address AI will be contested. Under this court’s decision, the following efforts critical to regulate AI would be severely threatened:

  • Establishment of comprehensive privacy frameworks and privacy-by-default
  • Mandating of safety-by-design approaches
  • Requirement of age-appropriate designs for youth
  • Mandating of watermarking for AI-generated outputs
  • Requirement of disclosures of interactions with AI systems
  • Requirement of algorithmic impact assessments

With regard to children specifically, in September 2023, the Attorneys General (AGs) of 54 US states and territories called on Congress to investigate what can be done regarding AI-generated child abuse content. The AGs stated that AI is making it easier to create deep fake images of children, thereby leading to greater possibilities of exploitation. This decision would stunt the ability to address the real harms already surfacing from generative AI, for children and for all of us.

The NetChoice v. Bonta decision has far-reaching consequences limiting any legislative attempts to regulate user privacy and data collection – the foundations upon which today’s largest platforms and the newest AI technologies are built.

The results of upholding such a decision would be detrimental, forcing citizens across the U.S. to live in a system where technology products are not subject to any consumer protection oversight.

What’s Next 

We fully expect California’s Attorney General will appeal the district court’s decision to the Ninth Circuit and mount a full-throated fight against this extreme judicial overreach, but as we’ve made clear: there’s a lot more at stake here.

Anyone who cares about fighting for an internet that is safe and secure should not allow this ruling to stand. The status quo does not work for millions of Americans online, and we cannot let the courts permanently tip the scales in favor of Google, Apple, Meta, Amazon, and TikTok. No company should have the unfettered right to surveil its users and violate their privacy – and have those predatory actions be protected by the First Amendment.

We believe that we cannot allow differences in our approach to stymie the entire movement to hold Big Tech accountable. We call on all digital rights organizations, privacy rights groups, and civil society organizations to fight against this ruling – in whatever manner makes sense for their organizations.

We cannot sit idly by while judicial overreach handcuffs the ability of any legislature – federal or state – to hold Big Tech companies accountable.
