Anthropic Challenges US Government Ban Amid AI-Military Dispute

Anthropic, a leading US artificial intelligence company, is contesting a government ban that labelled it a national security risk after it refused to provide unrestricted military access to its Claude AI model. The company appeared in a San Francisco federal court on Tuesday seeking an injunction against the designation, which it argues is punitive.

District Judge Rita F. Lin expressed concern at the start of the hearing that the government’s move “looks like an attempt to cripple Anthropic,” noting the possibility that the company was being penalized for publicly disagreeing with federal authorities. US President Donald Trump and Defense Secretary Pete Hegseth announced in February that Anthropic would no longer work with the Pentagon following its refusal to allow military use of Claude AI for tasks including lethal autonomous weapons and large-scale surveillance of Americans.

The US government labelled Anthropic a “supply chain risk to national security” and directed federal agencies to stop using the AI model. In response, Anthropic filed two lawsuits on March 9: one challenging the supply chain risk designation and another claiming the Trump administration violated the company’s First Amendment rights.

Judge Lin acknowledged that the Pentagon has the authority to choose which AI tools it uses but questioned whether banning all federal use and publicly urging contractors to cut ties with Anthropic crossed legal boundaries. Government lawyers argued that the action was based solely on potential security risks posed by Claude AI, rather than retaliation for Anthropic’s public statements. They also highlighted the possibility that future updates to Claude could pose additional security concerns.

The case has attracted attention from industry experts, who see the ruling as potentially precedent-setting. Ben Goertzel, computer scientist and CEO of SingularityNET, told Euronews Next that applying a supply chain risk label to a domestic AI firm is unusual and could give the executive branch broad discretion to reinterpret laws. He warned that if the designation blocks Anthropic from selling software to companies working with the government, the impact could be severe, though the company would still have opportunities in commercial markets.

Goertzel also suggested that the ban could have wider implications for the AI industry, creating pressure on other companies to comply with government demands. “He’s [President Trump] trying to teach the AI industry to fall into line like everybody else,” Goertzel said.

Judge Lin indicated that she plans to issue a ruling in the coming days on whether to temporarily pause the government’s ban while the broader case is reviewed. The decision could shape how AI companies navigate military and federal partnerships in the United States and set boundaries on government intervention in commercial AI operations.

Anthropic did not respond to requests for comment at the time of publication. The case continues to be closely watched by AI developers and legal experts concerned with the intersection of technology, national security, and free speech.
