Anthropic, the AI company behind the Claude chatbot that was founded with a focus on safe technology, appears to be walking back its safety commitments in order to keep the company competitive.
The company said on Tuesday it had changed its responsible scaling policy, a set of self-imposed guidelines aimed at preventing the development of AI that could potentially be dangerous and cause situations like large-scale cyberattacks.
While the updated guidelines say Anthropic would still require a "strong argument that catastrophic risk is contained" when developing AI, it now says it will only delay development "until and unless we no longer believe we have a significant lead" — meaning it would keep developing if it doesn't believe it has a lead over its competitors.
The company said it has taken this step because concerns about the safety of AI in the U.S. have taken a back seat to its economic potential.
"Despite rapid advances in AI capabilities over the past three years, government action on AI safety has moved slowly," the company said in a blog post.
"The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level."
The change in Anthropic’s safety guidelines comes as the Pentagon threatens to pull its contracts with the company unless its technology is allowed to be used for all legal military purposes — though Anthropic says the guideline change is unrelated.
The AI company has historically sold itself as putting safety first.
Anthropic was founded in 2021 by former employees of OpenAI who were concerned that company was putting development ahead of safety. CEO Dario Amodei has also voiced fears about the negative potential of AI, including mass human catastrophe, and maintained in a December interview with Fortune that safety continued to be the "highest-level focus" for Anthropic.
The blog post noted the company’s safety practices were always intended to be updated, and that this new iteration improves the company’s "transparency and accountability" with new commitments to regularly publish reports and safety goals.
But Heidy Khlaaf, chief AI scientist at independent research group the AI Now Institute, says despite Anthropic’s safety-first reputation, it has always fallen short when it comes to its attempts to prevent human harm.
From its first safety policy, Khlaaf says Anthropic has focused too much on the possibility of catastrophic events down the road, rather than accounting for the harm that could come from current AI technology, like run-of-the-mill errors with chatbots.
The Claude chatbot has in the past been misused in fraud schemes and attempts to create malware, and was recently used to steal Mexican government data, according to cybersecurity researchers.
She says the company is now dropping the "veneer of safety" it’s previously used to market itself because it's become clear that’s not in its best interest.
"This is a strategic announcement to show that they're open for business," Khlaaf said.
The announcement comes at a time of intense competition between top AI companies like Anthropic, OpenAI and Google, which have competing chatbots and are all striking deals to integrate their technologies with businesses and government departments.
U.S. President Donald Trump's administration has also signalled it's all-in on AI development, and has threatened to withhold funding from states that enact laws that hold back U.S. dominance in the industry.
Teresa Scassa, Canada Research Chair in Information Law and Policy at the University of Ottawa, says that no-rules attitude from the U.S. government makes it hard for companies to prioritize safety, "because if they do that, then they are going to be left in the dust."
That puts Canada in a tough place too, she says, because regulation here could set homegrown AI development back compared to the U.S., or encourage Canadian companies to move south of the border where there would be fewer restrictions on their tech.
"And I think the sense is that we can't afford that in Canada right now. So you can see how it's having that kind of knock-on effect on AI regulation here," Scassa said.
She says that since Canada’s Artificial Intelligence and Data Act died in 2025, the Canadian government, much like the U.S., hasn't tried to impose any broad AI regulation.
The change in Anthropic’s safety guidelines also coincides with mounting pressure from the Pentagon.
Anthropic struck a deal with the U.S. Department of Defense worth up to $200 million US in July, allowing the government to use its technology for military purposes, but within the company’s usage guidelines — the set of rules Anthropic has for how clients can and can’t use its products, including the chatbot Claude.
Those guidelines bar anyone, including the U.S. government, from using Anthropic’s AI tools for a range of things, including to design or develop weapons.
But according to reports, U.S. Defense Secretary Pete Hegseth issued CEO Amodei an ultimatum in a meeting on Tuesday — giving the company until Friday to allow the military to use its AI tools for all legal military purposes, or risk losing its government contracts.
In its back-and-forth with the government, Anthropic said it will not allow its technology to be used in autonomous weapons systems — those that allow AI alone to fire at targets — or in mass surveillance systems.
But Pentagon officials told media the dispute doesn’t involve AI's potential uses in autonomous weaponry and mass surveillance, and insist the government has always followed the law.
Anthropic says the update of its responsible scaling policy and demands by the Department of Defense are unrelated. Hegseth’s issues are with the company’s usage policy, rather than the scaling policy, according to Anthropic.