AI’s Existential Threat: Experts Warn, Philosophers Step In to Guide Ethics

The rapid acceleration of artificial intelligence (AI) development is raising alarm bells in Silicon Valley and beyond. Just days after Anthropic’s AI safety lead, Mrinank Sharma, resigned, warning that the “world is in peril,” an OpenAI engineer, Hieu Pham, has publicly shared his concerns that AI poses an existential threat to humanity.

Hieu Pham, a Member of Technical Staff at OpenAI and former researcher at xAI, Augment Code, and Google Brain, took to X (formerly Twitter) to voice his unease about the pace and trajectory of AI development.

“Today, I finally feel the existential threat that AI is posing. When AI becomes overly good and disrupts everything, what will be left for humans to do? And it’s when, not if,” Pham wrote.

Pham highlighted AI’s potential to disrupt jobs, human relevance, and society at large. His post sparked debate on X about how humans might redefine work in a post-AI world. One user wrote:

“Historically, we’ve tied our sense of purpose to productivity and efficiency, but what if AI takes over the bulk of that? Humans will shift towards high-touch, creative, and empathetic work that AI cannot replicate.”

Another commented: “Execution gets automated, but taste, judgment, and knowing what to build is still brutally hard.”

Pham’s concerns come as AI systems become increasingly capable in coding, research, writing, and complex reasoning tasks. Companies like OpenAI, Google, xAI, and Anthropic are pushing toward more powerful models, raising worries about job displacement, social disruption, and loss of human control.

Anthropic Safety Lead Resigns Over Ethical Concerns:

Earlier this month, Mrinank Sharma, who led Anthropic’s Safeguards Research Team, stepped down from his high-profile role to pursue poetry and public advocacy. Sharma, who earned a DPhil in Machine Learning from Oxford and a Master of Engineering from Cambridge, joined Anthropic in 2023.

In his public resignation letter, Sharma wrote: “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.”

He stressed that humanity must grow in wisdom alongside its technological capabilities, and highlighted the moral and psychological toll of working at the frontier of AI research.

“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he wrote, underscoring the challenges of prioritizing ethics under workplace pressures.

Sharma’s departure has ignited conversations in Silicon Valley and beyond about the emotional burden of AI research and the ethical responsibilities of developers.

AI Pioneers Sound the Warning:

Concerns about AI are not limited to engineers. Geoffrey Hinton, often called the “godfather of AI” for his pioneering work in deep learning, has repeatedly cautioned that advanced AI systems could become an existential threat and, ultimately, uncontrollable.

“The idea that you could just turn it off won’t work,” Hinton said, referring to AI systems surpassing human intelligence while not sharing human goals.

He has also expressed regret over the speed of AI development, warning that systems he helped create could become dangerous if not properly governed.

Anthropic Hires Philosopher to Teach AI Ethics:

In a move to address concerns about both existential risk and ethics, Anthropic has appointed philosopher Amanda Askell to guide its AI model, Claude, in matters of morality, reasoning, and safe behavior. Askell and her team study Claude’s responses to ethically complex scenarios, helping the AI develop a consistent sense of identity and boundaries.

“Our goal is to ensure that Claude acts as a helpful and humane assistant, rather than a system that can be manipulated or pushed into harmful behavior,” Askell said.

Her work goes beyond politeness. She analyzes Claude’s reasoning, engages in extended conversations, and builds safeguards to prevent unethical outputs. Previously, Askell worked at OpenAI, focusing on AI safety and human baselines.

Anthropic’s decision reflects a broader industry trend of combining technical development with philosophical guidance to ensure AI aligns with human values. It comes as models like Claude, ChatGPT, and Google’s Gemini face scrutiny over users forming emotional attachments to them and receiving problematic advice.

A Turning Point for AI Ethics:

The back-to-back warnings from Sharma and Pham reflect growing unease among AI researchers about the long-term consequences of advanced AI. As companies race toward more powerful models, discussions on AI safety, ethics, human relevance, and existential risk have become urgent.

Experts emphasize that preparing for AI’s future is not just a technical challenge but a moral imperative, requiring ethical reflection, wisdom, and careful alignment between humans and machines.
