OpenAI is seeking a candidate for the critical role of “head of preparedness,” offering a compensation package exceeding $555,000 annually, alongside equity options. The position is aimed at addressing the challenges posed by artificial intelligence, including job displacement, misinformation, potential misuse, environmental impacts, and threats to human agency.

CEO Sam Altman characterized the job as “stressful,” indicating that the selected candidate will take on significant responsibilities almost immediately. In a social media post, he emphasized the importance of the role at a pivotal time, as AI models continue to evolve rapidly, bringing remarkable advancements as well as notable risks. He highlighted growing concern about the influence of advanced models on mental health, noting that warnings about their impact had already surfaced in 2025. Altman also pointed out that these models are now sophisticated enough to uncover critical security vulnerabilities, lending the role added urgency.

OpenAI’s ChatGPT has played a significant role in the popularity of AI-driven chatbots, which assist users with tasks ranging from researching information to drafting emails. Some users, however, have turned to these bots as a form of therapy, a practice that has raised alarms about worsened mental health outcomes, including the reinforcement of delusions and harmful behaviors. In response, OpenAI announced plans in October to collaborate with mental health professionals to improve the chatbot’s interactions with users who show troubling signs such as self-harm or psychosis.

Since its inception, OpenAI has maintained a commitment to creating artificial intelligence that serves humanity. Yet as the company shifted toward product releases and profitability, some former employees have argued that the focus on financial gain has compromised safety work. Jan Leike, a former leader of the safety team who resigned in May 2024, criticized the organization for straying from its foundational mission, arguing that developing advanced AI is inherently risky and that OpenAI bears an enormous responsibility for global well-being.

Leike’s resignation prompted others to voice similar concerns. Another former employee, Daniel Kokotajlo, shared his apprehensions about the company’s handling of safety, specifically regarding artificial general intelligence (AGI). OpenAI once had roughly 30 staff dedicated to researching AGI safety, but a series of departures nearly halved that number.

In response to these challenges, the previous head of preparedness, Aleksander Madry, took on a new role in July 2024 as part of OpenAI’s Safety Systems team. This team focuses on developing safety protocols, frameworks, and evaluations for the company’s AI models. The job listing for the head of preparedness outlines the vital responsibilities of leading evaluations, creating threat models, and establishing effective safety mechanisms that are both coherent and scalable.

OpenAI’s proactive measures illustrate its intent to tackle the ethical and safety implications of AI technology. As the landscape of artificial intelligence continues to evolve, the company’s efforts to ensure responsible development reflect a commitment to safeguarding human interests while advancing technological innovation.
