Sam Altman resigns from OpenAI’s Safety and Security Committee. The committee, chaired by Carnegie Mellon professor Zico Kolter, will now operate independently as OpenAI raises billions in additional funding while facing growing criticism over its approach to AI governance.
OpenAI has announced the formation of an "independent" Safety and Security Committee, tasked with overseeing the safety of its AI models. Chaired by Carnegie Mellon professor Zico Kolter, the committee includes notable figures such as Quora CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone, and former Sony EVP Nicole Seligman—all of whom are already members of OpenAI’s board of directors.
This committee will play a critical role in ensuring the safe deployment of AI technologies. It recently conducted a safety review of OpenAI's latest model, o1, and will continue that oversight following Sam Altman's departure from the panel. The committee will receive regular updates from OpenAI’s safety and security teams and will retain the authority to delay releases if safety concerns arise.
In its blog post, OpenAI stated that the committee will receive ongoing reports on technical evaluations for current and future models, as well as post-release monitoring feedback. The company emphasized its commitment to enhancing its model launch processes by establishing a comprehensive safety and security framework with clearly defined criteria for success. This move aims to ensure that the development and deployment of OpenAI's models adhere to stringent safety and security standards.
Sam Altman’s departure from the Safety and Security Committee at OpenAI follows scrutiny from five U.S. senators who raised concerns about OpenAI's policies and governance. This development comes amidst broader internal and external criticism of the company's approach to AI safety and regulation.
Reports indicate that nearly half of OpenAI's staff dedicated to addressing long-term AI risks have left the organization. Some former employees and researchers have accused Altman of resisting substantial AI regulations, instead favoring policies that align more closely with OpenAI's corporate interests.
OpenAI's lobbying efforts have significantly ramped up, with the company allocating $800,000 for federal lobbying in the first half of 2024 alone, compared to $260,000 for the entire previous year. This increase in lobbying expenditure highlights the company's proactive engagement with policymakers and attempts to influence AI-related legislation.
Additionally, Altman’s involvement with the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board reflects his role in shaping AI policy and safety protocols at the national level. This board advises on the safe and secure development and deployment of AI technologies across U.S. critical infrastructure, further emphasizing the high stakes and complex regulatory landscape surrounding AI advancements.