AI safety researcher quits with a cryptic warning

“The world is in peril,” Anthropic’s Safeguards Research Team lead wrote in his resignation letter

A leading artificial intelligence safety researcher, Mrinank Sharma, has resigned from Anthropic with an enigmatic warning about global “interconnected crises,” announcing his plans to become “invisible for a period of time.”

Sharma, an Oxford graduate who led the Claude chatbot maker’s Safeguards Research Team, posted his resignation letter on X Monday, describing a growing personal reckoning with “our situation.”

“The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment,” Sharma wrote to colleagues.

The departure comes amid mounting tensions surrounding the San Francisco-based AI lab, which is racing to develop ever more powerful systems even as its own executives warn that those same technologies could harm humanity.

It also follows reports of a widening rift between Anthropic and the Pentagon over the military’s desire to deploy AI for autonomous weapons targeting without the safeguards the company has sought to impose.
