The latest job listing from OpenAI comes at a time when the company faces multiple lawsuits alleging that ChatGPT played a role in encouraging users to commit murder or die by suicide.
In a blog post, OpenAI said the Head of Preparedness will be responsible for putting its Preparedness Framework into action and leading its technical strategy. The role sits within the company's broader Safety Systems organisation.
“You will be the directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline,” it said.
Those planning to apply must have deep technical judgment, clear communication skills, and the ability to guide complex work across multiple risk domains. The Head of Preparedness will lead a small team driving core Preparedness research.
Core responsibilities listed in the job description include building capability evaluations, establishing threat models, and keeping them precise enough for rapid product cycles.
The person selected for the role will oversee mitigation design across various risk areas and ensure safeguards are technically sound and effective. They will also refine and evolve the Preparedness Framework as risks, capabilities, and external expectations change.
The person will also “collaborate cross-functionally with research, engineering, product teams, policy monitoring and enforcement teams, governance, and external partners to integrate preparedness into real-world deployment,” the company noted.
According to TechCrunch, OpenAI first announced the creation of its preparedness team in 2023. Less than a year later, it reassigned its then Head of Preparedness, Aleksander Madry, to a role focused more on AI reasoning.
