OpenAI is hiring for the role of Head of Preparedness, the CEO said in a social media post, citing growing risks related to mental health, security, and exposure to critical vulnerabilities as AI models rapidly improve. He described the role as “a critical one at an important time”.
Compensation for the role is as much as $555,000 plus equity.
Mental health challenges with AI
Sam Altman said in a post on X that the models are improving quickly and starting to present challenges. He acknowledged seeing early signs of AI’s impact on mental health in 2025, and said models are now becoming capable enough in computer security to uncover critical vulnerabilities. This comes after several lawsuits against the company’s chatbot ChatGPT alleging its involvement in teen suicides, mental health harm, and psychosis.
"The lawsuits and the CEO’s acknowledgement signal an important shift: tech companies can no longer treat mental health impact as a secondary or unintended consequence. If an AI system is capable of influencing thoughts, emotions, or decisions, then psychological safety must be treated on par with data security or physical safety," says Priyanka MB, Founder & Chief Psychologist at Inspiron Psychological Well-Being Center Private Limited.
Priyanka points out that AI has moved beyond being a purely neutral tool to an emotionally responsive interface. Frequent interactions, especially during vulnerable moments, can feel reassuring and validating, she said.
OpenAI’s job posting defines the role as overseeing how the company identifies and prepares for risks from its most advanced AI models. The role involves leading technical evaluations, threat modelling, and safety measures to ensure new capabilities are deployed responsibly.
On organisations taking responsibility for what AI does, the expert said: "Responsibility has to move beyond disclaimers. This includes stronger guardrails around crisis-related content, real-time escalation to human support, transparent limits on what AI can and cannot help with psychologically, and ongoing audits with mental health professionals and not just engineers." The future of AI cannot be AI instead of therapy; it has to be AI alongside human care, says Priyanka.
Cybersecurity with increased AI use
Altman also addressed the cybersecurity concerns raised by the growing use of AI, saying, “If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying.”
The increased use of agentic AI tools, systems that can act autonomously online, raises concerns. Surveys and industry polls show many cybersecurity leaders believe AI-powered attacks are already impacting organisations, with reports of attacks on generative AI infrastructure and deepfake phishing becoming more common. Experts warn AI is lowering the barrier to sophisticated cybercrime while forcing defenders to rethink security models, tools, and skills at speed.
According to the company, the role is suited to someone who can lead the development of safeguards across key risk areas, including cyber and bio risks, ensuring they are effective, technically rigorous, and grounded in clear threat models.
“This will be a stressful job and you'll jump into the deep end pretty much immediately,” Altman said in the post.
Lead role to anticipate misuse and shape strategy
The Head of Preparedness role was introduced at OpenAI in late 2023 to anticipate potential misuse and shape safety strategy as AI models grew more powerful. The function was initially led by Aleksander Madry, a computer scientist, who was later reassigned to a broader research role within the company.
Following Madry’s reassignment, Joaquin Quinonero Candela and Lilian Weng were asked to temporarily oversee the preparedness team as part of a wider safety reorganisation, a move later clarified by OpenAI CEO Sam Altman in a post on X.
After Weng’s departure from OpenAI, Candela led the preparedness function from March 2024 until April 2025. He subsequently transitioned to another role within the company, leaving the Head of Preparedness position vacant and prompting OpenAI to begin hiring for the role.