Silicon Valley leaders, including White House AI & Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon, caused a stir online this week with their comments about groups promoting AI safety. In separate instances, they alleged that certain AI safety advocates are not as virtuous as they appear, and are acting either in their own interest or on behalf of billionaire puppet masters behind the scenes.
AI safety groups that spoke with TechCrunch say the allegations from Sacks and OpenAI are Silicon Valley’s latest attempt to intimidate its critics, but certainly not its first. In 2024, some venture capital firms spread rumors that a California AI safety bill, SB 1047, would send startup founders to jail. The Brookings Institution labeled the rumor one of many “misrepresentations” about the bill, but Governor Gavin Newsom ultimately vetoed it anyway.
Whether or not Sacks and OpenAI intended to intimidate critics, their actions have sufficiently scared several AI safety advocates. Many nonprofit leaders TechCrunch contacted in the last week asked to speak on the condition of anonymity to spare their groups from retaliation.
The controversy underscores Silicon Valley’s growing tension between building AI responsibly and building it to be a massive consumer product, a theme my colleagues Kirsten Korosec, Anthony Ha, and I unpack on this week’s Equity podcast. We also dive into a new AI safety law passed in California to regulate chatbots, and OpenAI’s approach to erotica in ChatGPT.
On Tuesday, Sacks wrote a post on X alleging that Anthropic, which has raised concerns about AI’s potential to contribute to unemployment, cyberattacks, and catastrophic harms to society, is simply fearmongering to get laws passed that would benefit itself and drown smaller startups in paperwork. Anthropic was the only major AI lab to endorse California’s Senate Bill 53 (SB 53), a bill that sets safety reporting requirements for large AI companies and was signed into law last month.
Sacks was responding to a viral essay from Anthropic co-founder Jack Clark about his fears regarding AI. Clark had delivered the essay as a speech at the Curve AI safety conference in Berkeley weeks earlier. Sitting in the audience, I felt it was certainly a genuine account of a technologist’s reservations about his products, but Sacks didn’t see it that way.
Sacks said Anthropic is running a “sophisticated regulatory capture strategy,” though it’s worth noting that a truly sophisticated strategy probably wouldn’t involve making an enemy of the federal government. In a follow-up post on X, Sacks noted that Anthropic has positioned “itself consistently as a foe of the Trump administration.”
Also this week, OpenAI’s chief strategy officer, Jason Kwon, wrote a post on X explaining why the company is sending subpoenas to AI safety nonprofits, such as Encode, a nonprofit that advocates for responsible AI policy. (A subpoena is a legal order demanding documents or testimony.) Kwon said that after Elon Musk sued OpenAI over concerns that the ChatGPT maker has veered away from its nonprofit mission, OpenAI found it suspicious that several organizations also raised objections to its restructuring. Encode filed an amicus brief in support of Musk’s lawsuit, and other nonprofits spoke out publicly against OpenAI’s restructuring.
“This raised transparency questions about who was funding them and whether there was any coordination,” said Kwon.
NBC News reported this week that OpenAI sent broad subpoenas to Encode and six other nonprofits that criticized the company, asking for their communications related to two of OpenAI’s biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.
One prominent AI safety leader told TechCrunch that there’s a growing split between OpenAI’s government affairs team and its research organization. While OpenAI’s safety researchers frequently publish reports disclosing the risks of AI systems, OpenAI’s policy unit lobbied against SB 53, saying it would rather have uniform rules at the federal level.
OpenAI’s head of mission alignment, Joshua Achiam, spoke out about his company sending subpoenas to nonprofits in a post on X this week.
“At what is possibly a risk to my whole career I will say: this doesn’t seem great,” said Achiam.
Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a Musk-led conspiracy. However, he argues this isn’t the case, and that much of the AI safety community is quite critical of xAI’s safety practices, or lack thereof.
“On OpenAI’s part, this is meant to silence critics, to intimidate them, and to dissuade other nonprofits from doing the same,” said Steinhauser. “For Sacks, I think he’s concerned that [the AI safety] movement is growing and people want to hold these companies accountable.”
Sriram Krishnan, the White House’s senior policy advisor for AI and a former a16z general partner, chimed in on the conversation this week with a social media post of his own, calling AI safety advocates out of touch. He urged AI safety organizations to talk to “people in the real world using, selling, adopting AI in their homes and organizations.”
A recent Pew study found that roughly half of Americans are more concerned than excited about AI, but it’s unclear what exactly worries them. Another recent study went into more detail, finding that American voters care more about job losses and deepfakes than the catastrophic risks that the AI safety movement is largely focused on.
Addressing these safety concerns could come at the expense of the AI industry’s rapid growth, a trade-off that worries many in Silicon Valley. With AI investment propping up much of America’s economy, the fear of overregulation is understandable.
But after years of unregulated AI progress, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley’s attempts to fight back against safety-focused groups may be a sign that they’re working.