Meta is currently fighting for AI amnesty, blocking state laws that protect children and teens online.
WASHINGTON, DC – Today, The Tech Oversight Project responded to newly unsealed documents in court cases against Meta that prove a years-long cover-up of child sex trafficking and social media addiction on its platforms. Documents also show that the company deliberately lied to the Senate Judiciary Committee about its ability to determine whether its platforms were linked to increased anxiety and depression.
“Mark Zuckerberg has blood on his hands: he has known for over a decade that pedophiles and sex traffickers were targeting children on his platforms, and instead of fixing the problem, what he did was worse than nothing: he killed safety features, buried internal research, and then lied about it to Congress. Now, Zuckerberg is using Meta’s trillions of dollars to lobby Washington on AI amnesty and to defeat lawmakers who stand up to them,” said Sacha Haworth, Executive Director of The Tech Oversight Project. “I don’t know how many times Congress needs to hear this, but Americans are demanding forceful action against Big Tech companies like Meta, and Congress needs to stop caving to this industry’s demands and treat them like the criminals they are.”
Key Findings:
Meta knowingly allowed sex trafficking of kids and teens to flourish on its platforms:
- Meta knowingly allowed sex trafficking on its platform, and had a 17-strike policy for accounts known to engage in trafficking.
- “You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended…by any measure across the industry, [it was] a very, very high strike threshold,” said Instagram’s former Head of Safety and Well-being Vaishnavi Jayakumar.
- According to Jayakumar’s testimony, Instagram’s “zero tolerance” policy for child sexual abuse material was a policy in name only. The platform did not offer users a simple way to report child sexual abuse content, an issue Jayakumar raised multiple times after she joined the company in 2020.
Meta knowingly worsened young people’s wellbeing on a massive scale:
- According to internal documents, Meta designed a “deactivation study,” which found that users who stopped using Facebook and Instagram for a week showed lower rates of anxiety, depression, and loneliness. Meta halted the study and did not publicly disclose the results – citing harmful media coverage as the reason for canning the study.
- An unnamed Meta employee said of the decision: “If the results are bad and we don’t publish and they leak is it going to look like tobacco companies doing research and knowing cigs were bad and then keeping that info to themselves?”
Meta lied to Congress about what it knew about harms to kids:
- In written responses to the Senate Judiciary Committee in 2020, Meta lied about its ability to determine whether increased use of its platforms among teenage girls correlated with increased signs of depression and anxiety.
- Meta was aware that millions of adult strangers were contacting minors on its sites; that its products exacerbated mental health issues in teens; and that content related to eating disorders, suicide, and child sexual abuse was frequently detected, yet rarely removed.
- Meta failed to disclose these harms to the public or to Congress, and refused to implement safety fixes that could have protected young users.
- An internal 2022 audit allegedly found that Instagram’s Accounts You May Follow feature recommended 1.4 million potentially inappropriate adults to teenage users in a single day.
- Meta only began rolling out privacy-by-default features in 2024, seven years after identifying dangers to minors.
Meta purposefully designed addictive platforms that exploited youth psychology for profit:
- In 2018, company researchers surveyed 20,000 Facebook users in the U.S. and found that 58% had some level of social media addiction—55% mild, and 3.1% severe.
- “Because our product exploits weaknesses in the human psychology to promote product engagement and time spent,” one internal Meta researcher wrote, Meta needed to “alert people to the effect that the product has on their brain.”
- Even after determining internally that teen safety was a massive problem on its platforms, Meta CEO Mark Zuckerberg directed that “teen time spent be our top goal of 2017.”
- Internal documents as recently as 2024 said, “Acquiring new teen users is mission critical to the success of Instagram.”
- As early as 2017, Meta identified that its products were addictive to children, but those safety concerns were swept under the rug so that the company could pursue aggressive growth and engagement strategies.
- Brian Boland, Meta’s former Vice President of Partnerships, said, “My feeling then and my feeling now is that they don’t meaningfully care about user safety. It’s not something that they spend a lot of time on. It’s not something they think about. And I really think they don’t care.”
- By 2020, the growth team had determined that a private-by-default setting would result in a loss of 1.5 million monthly active teens a year on Instagram, which became the underlying reason for not protecting minors.
- Meta used location data to push notifications to students during school hours in what it called “school blasts.”
- As one employee allegedly put it: “One of the things we need to optimize for is sneaking a look at your phone under your desk in the middle of Chemistry :)”.
- Federal law requires companies to install safeguards for users under 13, and the company broke the law by pursuing aggressive “growth” strategies for hooking “tweens” and children aged 5-10 on its products.
- Internal research cited in the brief suggested there were 4 million users under 13 on Instagram in 2015; by 2018, the plaintiffs claim, Meta knew that roughly 40% of children aged 9 to 12 said they used Instagram daily.
- Internal chats within the company stated, “Oh good, we’re going after <13 year olds now? Zuck has been talking about that for a while…targeting 11 year olds feels like tobacco companies a couple decades ago (and today). Like we’re seriously saying ‘we have to hook them young’ here.”
- Although Meta developed AI tools to monitor its platforms for harmful content, the company didn’t automatically delete that content even when it determined with “100% confidence” that it violated Meta’s policies against child sexual-abuse material or eating-disorder content.
Meta willfully exacerbated youth self-harm:
- According to plaintiffs, Meta’s AI classifiers did not automatically delete posts that glorified self-harm unless the classifiers were 94% certain the posts violated platform policy.
- In a 2021 internal company survey cited by plaintiffs, more than 8% of respondents aged 13 to 15 reported having seen someone harm themselves, or threaten to do so, on Instagram during the past week.