
The Tech Oversight Project Applauds NY-12 Democrat Alex Bores For Becoming AI Super PAC’s First Target


Nov 17, 2025

Attacks from $100M+ AI Super PAC Bolster Bores’ Reputation as AI Safety Champion

WASHINGTON, DC – Today, The Tech Oversight Project responded to news that Leading the Future, a $100-million Big Tech super PAC funded by AI executives, will target and attack NYS Assemblymember Alex Bores in his bid to succeed Congressman Jerry Nadler in New York’s 12th District. Bores is the lead sponsor of the RAISE Act, which would require AI companies to develop safety plans protecting the public from AI’s most deadly harms, including dangerous AI chatbots that are already claiming young people’s lives.

“If Big Tech billionaires are attacking you by name, you’re probably doing the right thing,” said Sacha Haworth, Executive Director of The Tech Oversight Project. “Assemblymember Bores understands that economic growth and our obligation to protect humanity, including our most vulnerable, are not a zero-sum game. These attacks from the AI industry are a badge of honor and proof that New Yorkers can trust Bores to put the public’s interest ahead of Big Tech billionaires.”

Background:

Leading the Future, which has the personal backing of Marc Andreessen of a16z and Greg Brockman of OpenAI, launched in September with the explicit goal of attacking Democrats who pursue tech accountability and AI safety legislation. The super PAC aims to replicate the crypto industry’s Fairshake model, which spent hundreds of millions of dollars on negative ads attacking Democrats, mimicking conservative organizations’ most potent attacks while hiding its toxic political goals from the public. Leading the Future is even led by Josh Vlasto, formerly of Fairshake.

About the RAISE Act:

The RAISE Act would require developers of frontier AI models to release a safety plan describing how they would reduce the risk of critical harms. Models expected to be subject to enforcement include ChatGPT, developed by OpenAI; Claude by Anthropic; LLaMA by Meta; Gemini by Google; and Copilot by Microsoft.

Frontier models are defined in the bill as those with a training compute cost of more than $100 million that exceed a certain level of compute power. Critical harm is defined as at least 100 deaths or $1 billion in damages, including through cyberattacks or the creation of a biological or chemical weapon. The bill also requires the AI behemoths to publish their safety protocols and share them with the state Division of Homeland Security and Emergency Services. They would be required to withhold any models that present such risks and to report safety incidents, including theft, misappropriation, or dangerous autonomous behavior, to the state attorney general within 72 hours. Tech companies that violate the law would face penalties imposed by the state attorney general of up to $10 million for a first violation and $30 million for subsequent violations.

The RAISE Act is supported by The Tech Oversight Project, Encode AI, and “godfathers of AI” Geoffrey Hinton and Yoshua Bengio.
