
Anthropic safeguards lead resigns, warns of growing AI safety crisis


Mrinank Sharma, who led safeguards research at Anthropic, resigned from the AI company yesterday and publicly shared his departure letter.

In the letter, posted to X, Sharma cited mounting unease over a widening gap between stated ethical commitments and actual decisions, both at AI organizations and in society more broadly.

Today is my last day at Anthropic. I resigned.

Here is the letter I shared with my colleagues, explaining my decision. pic.twitter.com/Qe4QyAFmxL

— mrinank (@MrinankSharma) February 9, 2026

“It is clear to me that the time has come to move on,” Sharma wrote.

Sharma spent two years at Anthropic, the developer of Claude, where he worked on defenses against AI-enabled biological threats, internal accountability tools, and early frameworks for documenting AI safety measures. He also studied how chatbots can reinforce user biases and gradually reshape human judgment.

The researcher praised his former colleagues for their technical skill and moral seriousness but signaled a shift away from corporate AI work, announcing plans to pursue writing, personal coaching, and possibly graduate study in poetry.

His departure follows a period of heightened attention on how leading AI developers manage internal dissent, disclose risks, and balance rapid capability gains against safety research.

cryptobriefing.com