
A co-founder of OpenAI who was involved in a failed effort to push out CEO Sam Altman says he's starting a safety-focused artificial intelligence company. Ilya Sutskever, who left the ChatGPT maker last month, said in a social media post that he has created Safe Superintelligence Inc. with two co-founders. The company's sole goal and focus is safely developing "superintelligence" - an AI system smarter than humans.
In a public statement, Sutskever and his partners emphasized their commitment to avoiding the distractions of management overhead and product cycles. They said their business model is designed to insulate safety and security work from short-term commercial pressures.
Safe Superintelligence is based in Palo Alto, California, and Tel Aviv, where the founders say they have deep roots that will help them recruit top technical talent.
The announcement follows Sutskever's role in last year's failed attempt to oust Altman, a move that exposed internal conflict at OpenAI over the balance between commercial ambitions and AI safety priorities. Sutskever has since expressed regret over the boardroom upheaval.
After his departure from OpenAI, he hinted at a "very personally meaningful" project, the details of which remained undisclosed until now.
Safe Superintelligence Inc. represents Sutskever's attempt to address those safety concerns by focusing entirely on the secure development of superintelligent AI, free from the pressures he believes compromised his previous work at OpenAI.