Anthropic has publicly touted its focus on “safeguards” meant to limit military use of its technology
The US military actively used Anthropic’s Claude AI model during last month’s operation to capture Venezuelan President Nicolas Maduro, according to Axios and The Wall Street Journal. The reports reveal that the safety-focused company’s technology played a direct role in the deadly overseas raid.
Claude was used during the active operation, not merely in preparatory phases, Axios and the WSJ both reported on Friday. Its precise role remains unclear, though the military has previously used AI models to analyze satellite imagery and intelligence in real time.
The San Francisco-based AI lab’s usage policies explicitly prohibit its technology from being used to “facilitate violence, develop weapons or conduct surveillance.” No Americans lost their lives in the raid, but dozens of Venezuelan and Cuban soldiers and security personnel were killed on January 3.