Companies like YouTube, Instagram, and TikTok designed their products to profit off children who get addicted to their platforms
WASHINGTON, DC – Today, The Tech Oversight Project issued a statement after news reports that TikTok had released a so-called “time limit” feature, a ploy to make parents feel safe without actually making the product safe. TikTok’s attempt to deceive parents and lawmakers is another example of a tech platform feigning safety for children and teens in order to avoid oversight.
“TikTok is just the latest in a long line of tech platforms that attempt to deceive users, parents, and lawmakers while continuing to harvest and profit from children’s private data. Companies like YouTube, Instagram, and TikTok centered their business models on getting kids addicted to their platforms and increasing their screen time to sell them ads. By design, tech platforms do not care about the well-being of children and teens, and if you ask any teacher or health care professional in the country, they will tell you that we are in the middle of a mental health crisis of Big Tech’s making,” said Kyle Morse, Deputy Executive Director of the Tech Oversight Project. “Platforms like TikTok, YouTube, and Instagram will continue to violate children’s privacy to turn a profit – even if that means serving them content that encourages suicide and eating disorders.”
Below is a summary of Google, Meta, and TikTok’s long history of designing their platforms to get children and teens addicted and how they have contributed to the mental health crisis in our country. LINK HERE.
Mental Health:
Facebook
- Facebook knew that Instagram was detrimental to young people’s mental health, particularly teen girls, but said the opposite in public. Reports have found that Facebook and Instagram are intentionally designed to be addictive and lawmakers have called on Facebook to be more transparent about its mental health effects on teenagers.
- Internal Facebook research found that “thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse.” The research also found that Instagram made body image issues worse for one in three teen girls, and that Facebook knew teens blamed Instagram, unprompted, for increases in anxiety and depression.
- Wall Street Journal: “The features that Instagram identifies as most harmful to teens appear to be at the platform’s core.” Facebook researchers concluded some of the problems Instagram created with teen mental health were specific to Instagram and not found in social media more broadly.
- Despite acknowledging these problems, Facebook sought to emulate TikTok in order to expand its base of young users, rolling out Instagram Reels and a “discovery engine” on Facebook.
YouTube
- Google’s YouTube has failed to protect young children from disturbing or inappropriate content, and in 2019 it was fined a record $170 million for violating a 1998 law meant to protect young children’s privacy online. Like Facebook, YouTube redesigned its algorithm in a way that made it an “addiction engine.”
- In 2015, two months after YouTube Kids launched, consumer advocates complained to the FTC about disturbing content geared towards children on the platform.
- In 2018, parents and medical experts reported that people were manipulating content from well-known children’s franchises and inserting inappropriate or disturbing content on YouTube, which can have adverse effects on developing brains.
- CNBC: “Mental health experts warn that YouTube is a growing source of anxiety and inappropriate sexual behavior among kids under the age of 13.” A child psychotherapist said she has seen a rise in cases of children suffering from anxiety triggered by videos they’ve watched on YouTube; the children exhibited loss of appetite, sleeplessness, crying fits, and fear.
- In 2021, a House Oversight and Reform subcommittee sent a letter to YouTube CEO Susan Wojcicki seeking information on YouTube Kids and accusing the platform of not doing enough to protect children from potentially harmful content. The subcommittee said a high volume of children’s videos on YouTube smuggled in hidden marketing and advertising with product placements by “children’s influencers,” and that YouTube did not appear to be trying to prevent “such problematic marketing.”
TikTok
- Researchers found that TikTok was intentionally designed to be addictive and that children and teenagers are particularly vulnerable to TikTok’s short-form content because their prefrontal cortexes, which direct decision-making and impulse control, are still developing. As a result, experts have become concerned that it could add to the mental health crisis among young people.
- A non-profit study found that TikTok may surface potentially harmful content related to suicide and eating disorders within minutes of a user creating an account.
- In January 2023, Seattle Public Schools sued TikTok, Facebook, and YouTube, alleging the platforms exploited children and contributed to the youth mental health crisis. That lawsuit followed a December 2022 class-action suit in which 1,200 American families sued those three companies, alleging the companies knew they were negatively affecting children’s mental health.
Algorithms that Harm Historically Marginalized Communities:
Facebook
- Facebook has long been accused of using AI that protects hate speech and suppresses content created by users from historically marginalized groups. Its content rules reportedly recognized only broad groups of people, like “white men,” and would not flag hate speech aimed at a protected group combined with a characteristic that isn’t protected, like “female drivers” or “Black children.” For example, it allowed a Republican congressman’s post about hunting down and killing “radicalized” Muslims to remain up but took down a Boston poet’s post that said white people were racist.
- ProPublica: Facebook’s “hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities…in so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.”
- Facebook also reportedly ignored internal research it conducted on racial bias in its content moderation program. Internal Facebook research found that a new set of proposed rules meant to crack down on bullying made it 50 percent more likely that Black users’ accounts, compared with those of white users, would be automatically disabled by the moderation system. Researchers were reportedly told not to share their findings or conduct further research.
- September 2021: Facebook had to apologize after a flaw in its AI software led to a video featuring Black men being labeled as “primates.”
- In 2021, Facebook parent company Meta said it would finally look into whether its platforms treated users differently based on their race. A Facebook civil rights audit found that the company put free speech ahead of other values, which undermined its efforts to curb hate speech and voter suppression. The audit also found that Facebook refused to take down posts by then-President Donald Trump that “clearly violated” the company’s policies on hate speech, violent speech, and voter suppression, that it exempted politicians from third-party fact-checking, and that it was “far too reluctant to adopt strong rules to limit [voting] misinformation and voter suppression.”
YouTube
- YouTube has been accused of unfairly targeting users from historically marginalized groups while allowing top creators to violate content moderation rules. Eleven current and former YouTube content moderators reportedly said YouTube gave more lenient punishments to top video creators who violated rules banning demeaning speech, bullying, and other graphic content.
- In August 2019, a group of LGBTQ+ creators sued YouTube, alleging that it suppressed their content, restricted their ability to sell advertising, and culled their subscribers. The creators alleged that YouTube’s software algorithms and human reviewers singled out and removed content featuring words common in the LGBTQ+ community, like “gay,” “lesbian,” or “bisexual.”
- In June 2020, a group of Black creators sued YouTube, alleging that the platform had systematically removed their content without explanation. Washington Post: “The suit is the latest allegation that YouTube’s software, which can automatically remove videos suspected of violating the company’s policies, discriminates against certain groups, such as LGBT people.”
- In December 2020, YouTube announced it would review its content moderation system after years of denying that its algorithms unfairly target users from historically marginalized groups.
TikTok
- TikTok admitted that at one point it had intentionally suppressed content from historically marginalized people under the guise of attempting to prevent cyberbullying. Internal documents reportedly showed TikTok instructing moderators to suppress posts created by “users deemed too ugly, poor or disabled for the platform.”
- A Black TikTok creator reportedly tried to post in support of Black Lives Matter, but content containing the word “Black” was immediately flagged as “inappropriate content.” The creator then reportedly tested the algorithm with white supremacist and neo-Nazi language, and the app did not give him the same “inappropriate content” message.
- Several Jewish TikTok creators have also reportedly said that their content has been regularly removed from the platform for allegedly violating community guidelines.