‘Won’t do business with them again’: Trump orders US agencies to stop use of Anthropic AI – Tribune India


The Trump administration on Friday ordered all US agencies to stop using Anthropic’s artificial intelligence technology and imposed other major penalties, capping an unusually public clash between the government and the company over AI safety.

President Donald Trump, Defence Secretary Pete Hegseth and other officials took to social media to chastise Anthropic for failing to allow the military unrestricted use of its AI technology by a Friday deadline. They accused the company of endangering national security after CEO Dario Amodei refused to back down over concerns that its products could be used in ways that would violate its safeguards.

“We don’t need it, we don’t want it, and will not do business with them again!” Trump said on social media.

Hegseth also deemed the company a “supply chain risk,” a designation typically stamped on foreign adversaries that could derail the company’s critical partnerships with other businesses.
The company said that “designating Anthropic as a supply chain risk would be an unprecedented action – one historically reserved for US adversaries, never before publicly applied to an American company.”

Anthropic said the “designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.”
Anthropic had said it sought narrow assurances from the Pentagon that its AI chatbot Claude would not be used for mass surveillance of Americans or in fully autonomous weapons. The Pentagon said it was not interested in such uses and would only deploy the technology in legal ways, but it also insisted on access without any limitations.
The government’s effort to assert dominance over the internal decision-making of the company comes amid a wider clash over AI’s role in national security and concerns about how increasingly capable machines could be used in high-stakes situations involving lethal force, sensitive information or government surveillance.
Trump said Anthropic made a mistake trying to strong-arm the Pentagon. He wrote on Truth Social that most agencies must immediately stop using Anthropic’s AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms.
“The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!” he wrote in all caps.
After months of private talks exploded into public debate this week, Anthropic said Thursday that the government’s new contract language would allow “safeguards to be disregarded at will.” Amodei said his company “cannot in good conscience accede” to the demands.
Anthropic can afford to lose the contract. But the government’s actions posed broader risks at the peak of the company’s meteoric rise from a little-known computer science research lab in San Francisco to one of the world’s most valuable startups.
For hours before the president’s decision, top Trump appointees from the Pentagon and the State Department took to social media to criticize Anthropic, but their complaints contained contradictions.
Top Pentagon spokesman Sean Parnell said on social media Thursday that Anthropic’s unwillingness to go along with the military’s demands was “jeopardizing critical military operations and potentially putting our warfighters at risk.” Hegseth said Friday that the Pentagon “must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.”
Trump’s social media post also warned that the company had “better get their act together, and be helpful” during a six-month phase-out period or there would be “major civil and criminal consequences to follow.” Hegseth’s decision to designate Anthropic a supply chain risk, however, relies on an administrative tool designed for companies owned by US adversaries, to prevent them from selling products that are harmful to American interests.
Virginia Sen. Mark Warner, the top Democrat on the Senate Intelligence Committee, noted that this dynamic, “combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations.” Anthropic didn’t immediately reply to a request for comment on the Trump administration’s actions.
The dispute stunned AI developers in Silicon Valley, where venture capitalists, prominent AI scientists and a large number of workers from Anthropic’s top rivals – OpenAI and Google – voiced support for Amodei’s stand in open letters and other forums.
The move is likely to benefit Elon Musk’s competing chatbot, Grok, which the Pentagon plans to give access to classified military networks, and could serve as a warning to two other competitors, Google and OpenAI, that have still-evolving contracts to supply their AI tools to the military.
Musk sided with Trump’s administration, saying on his social media platform X that “Anthropic hates Western Civilization.” But one of Amodei’s fiercest rivals, OpenAI CEO Sam Altman, sided with Anthropic and questioned the Pentagon’s “threatening” move in a CNBC interview and a letter to employees that said OpenAI shared the same red lines. Amodei once worked for OpenAI before he and other OpenAI leaders quit to form Anthropic in 2021.
“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety,” Altman told CNBC, hours before he gathered employees for an all-hands meeting Friday.
Retired Air Force Gen. Jack Shanahan, a former leader of the Pentagon’s AI initiatives, wrote on social media this week that “painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.” Shanahan said Claude is already being widely used across the government, including in classified settings, and Anthropic’s red lines were “reasonable.” He said the AI large language models that power chatbots like Claude, Grok and ChatGPT are also “not ready for prime time in national security settings,” particularly not for fully autonomous weapons.
Anthropic is “not trying to play cute here,” he wrote Thursday on LinkedIn. “You won’t find a system with wider & deeper reach across the military.”