Anthropic Quietly Removes Biden-Era AI Safety Pledge From Its Website
American artificial intelligence company Anthropic has quietly removed the AI safety and security commitments it made during the Biden administration, a move widely read as a response to the new Trump administration. The change was first reported in a post by AI watchdog group The Midas Project and has sparked discussion about how corporate policy shifts with a change in administration.

Major US AI Firm Withdraws from AI Safety Commitments
Major US artificial intelligence (AI) firm Anthropic has quietly removed the voluntary AI safety commitments it made last year, AI watchdog group The Midas Project reported yesterday. Anthropic removed the "White House's Voluntary Commitments for Safe, Secure, and Trustworthy AI," introduced during US President Joe Biden's term, "seemingly without a trace" from its "Transparency Hub" webpage, The Midas Project noted. Other changes to the page, it added, were minor.
Why It Matters
In July 2023, several AI companies, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, agreed to comply with these commitments. The agreement was widely celebrated and raised hopes of building an AI ecosystem that would champion transparency and safety while also bringing the much-touted benefits of AI into people's daily lives.
As part of this commitment, Anthropic stated that it would:
- Share insights on managing AI risks across both industry and government sectors.
- Conduct in-depth research into AI bias and discrimination to ensure fairer algorithms.
The Midas Project pointed out in its tweet: “nothing in the commitments suggested that the promise was (1) time-bound or (2) contingent on the party affiliation of the sitting president.”
Changing Landscape of AI Regulations
AI regulation in the US has undergone major changes since US President Donald Trump began his second term. In one of his first moves after assuming office, he signed several executive orders (EOs), many of which repealed actions taken under Biden. One of them was a 2023 directive that outlined measures for ensuring AI safety and security, citizen privacy, equity, consumer and worker protections, and the promotion of innovation.
The EO repealing that directive was among several others viewed as regressive steps in the American political landscape. Some believe these EOs will face legal challenges, since they are subject to judicial review and may be blocked if they violate the Constitution of the United States, but the same has not been said of the one affecting AI safety commitments.
This could be because AI regulation remains largely unsettled at the policy level and is itself a point of divergence. Just last month, both the US and the UK refused to sign the Paris AI Action Summit Joint Statement on "safe" AI. The UK said it had concerns about how national security plays out under the statement's provisions, while the US said it opposed excessive regulation and prioritised innovation over safety in the AI domain.