Sam Altman acknowledges AI’s impact on mental health and cybersecurity in a hiring post on X

The OpenAI CEO announced on X that the company is hiring a Head of Preparedness, calling it a “critical role at an important time” as AI models advance rapidly

Deepali

OpenAI is hiring for the role of Head of Preparedness, with the CEO stating in a social media post that rapidly improving AI models bring many risks related to mental health, security and exposure to critical vulnerabilities. He described the role as “a critical one at an important time”.


As much as $555,000 plus equity is being offered as compensation for the role.

Mental health challenges with AI

Sam Altman said in a post on X that the models are improving quickly and starting to present challenges. He acknowledged seeing early signs of AI’s impact on mental health in 2025, and said models were now becoming capable enough in computer security to uncover critical vulnerabilities. This comes after several lawsuits against the company’s chatbot ChatGPT alleging its involvement in teen suicides, mental health harm, and psychosis.

OpenAI’s job listing defines the role as overseeing how the company identifies and prepares for risks from its most advanced AI models. It involves leading technical evaluations, threat modelling, and safety measures to ensure new capabilities are deployed responsibly.


Cybersecurity with increased AI use

He also addressed growing cybersecurity concerns stemming from the increased use of AI, saying, “If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying.”

The increased use of agentic AI tools, systems that can act autonomously online, raises concerns. Surveys and industry polls show many cybersecurity leaders believe AI-powered attacks are already impacting organisations, with reports of attacks on generative AI infrastructure and deepfake phishing becoming more common. Experts warn AI is lowering the barrier to sophisticated cybercrime while forcing defenders to rethink security models, tools, and skills at speed.

According to the company, the role is suited to someone who can lead the development of safeguards across key risk areas, including cyber and bio risks, ensuring they are effective, technically rigorous, and grounded in clear threat models.

“This will be a stressful job and you'll jump into the deep end pretty much immediately,” Altman said in the post.

Lead role to anticipate misuse and shape strategy

The Head of Preparedness role was introduced at OpenAI in late 2023 to anticipate potential misuse and shape safety strategy as AI models grew more powerful. The function was initially led by Aleksander Madry, a computer scientist, who was later reassigned to a broader research role within the company.

Following Madry’s reassignment, Joaquin Quinonero Candela and Lilian Weng were asked to temporarily oversee the preparedness team as part of a wider safety reorganisation, a move later clarified by OpenAI CEO Sam Altman in a post on X.

After Weng’s departure from OpenAI, Candela led the preparedness function from March 2024 until April 2025. He subsequently transitioned to another role within the company, leaving the Head of Preparedness position vacant and prompting OpenAI to begin hiring for the role.
