
The job description states that whoever fills the position will be responsible for:
“Monitoring and preparing for leading-edge capabilities that pose new risks of significant damage. You will be the directly accountable leader for developing and coordinating assessments of capabilities, models of threats, and countermeasures that constitute a systematic, thorough, and operationally scalable safety pipeline.”
Altman further notes that the role will eventually involve running the company’s “preparedness framework,” vetting the safety of AI models before “biological capabilities” are introduced, and even setting limits on self-improving systems. He also remarks that it will be an “intense job,” which feels like a considerable understatement.
Given several recent high-profile cases in which chatbots have been linked to the suicides of young people, it seems a bit late to start focusing on the mental health risks these technologies pose. AI-induced psychosis is a growing concern, with chatbots nurturing users’ delusions, promoting conspiracy theories, and helping people conceal their eating disorders.