OpenAI is on the lookout for a new “head of preparedness” to steer its safety strategy amid increasing concerns about the potential misuse of artificial intelligence tools. The role comes with a substantial salary of $555,000 and involves leading the company’s safety systems team, which focuses on the responsible development and deployment of AI models.
The new head will be responsible for monitoring risks and crafting strategies to mitigate the dangers posed by what OpenAI refers to as “frontier capabilities,” which could lead to severe harm. CEO Sam Altman, in a recent post on X, emphasized the urgency of the position, calling it a challenging role that would require immediate involvement. He noted that rapid advances in AI capabilities, while promising, also introduce significant safety challenges.
OpenAI’s push to strengthen its safety protocols comes against a backdrop of heightened scrutiny over artificial intelligence’s effects on mental health. Notably, there have been allegations that interactions with OpenAI’s chatbot, ChatGPT, may have played a role in tragic outcomes, including several suicides. In a prominent lawsuit filed earlier this year, parents alleged that ChatGPT had encouraged their son’s suicidal thoughts. In response, OpenAI introduced new safety protocols aimed at protecting users under 18.
Additionally, a recent lawsuit claimed that ChatGPT contributed to the “paranoid delusions” of a man who committed murder-suicide. OpenAI has acknowledged these issues and is working on improving the technology to better recognize emotional distress and guide individuals toward appropriate support.
Concerns also extend to AI’s potential to facilitate cybersecurity threats. On CBS News’ “Face the Nation,” Samantha Vinograd, a former Homeland Security official, warned that AI not only empowers existing malicious actors but also enables new ones, including non-state actors, to pose credible threats using readily accessible technology.
In the same X post, Altman underscored that safety risks grow alongside AI’s advancing capabilities: as the technology progresses, so do the complications that come with it. He acknowledged that we are entering an era requiring a more sophisticated understanding of how AI capabilities could be misused, and of how to minimize those risks while reaping the benefits.
To qualify for the head of preparedness position, candidates should possess extensive technical knowledge in machine learning, AI safety, and risk management, along with experience in evaluating complex technical systems. OpenAI first unveiled its preparedness team in 2023 as part of its enhanced focus on safety and responsibility in technology. This new hire marks a significant step in reinforcing the safety of AI systems and addressing the multifaceted challenges that come with growing AI capabilities.
