Ilya Sutskever, former chief scientist and co-founder of OpenAI, has embarked on a new venture aimed at advancing safe superintelligence. Following his departure from OpenAI in May, Sutskever has founded Safe Superintelligence Inc. (SSI), positioning it as a direct competitor to his former employer.
In a post on X (formerly Twitter), Sutskever described SSI as a deeply personal and meaningful project focused on achieving safe superintelligence through breakthrough engineering and scientific advances. He emphasized a singular focus on this goal, contrasting it with the broader corporate mandates and commercial pressures that shape larger AI labs.
Joining Sutskever at SSI are Daniel Gross, formerly head of AI efforts at Apple, and Daniel Levy, a former OpenAI colleague who led the company's optimization team. The team highlighted their commitment to prioritizing safety without the distractions of typical product cycles or management overhead.
The launch of SSI underscores Sutskever's continued commitment to AI safety, a concern he voiced following internal disagreements at OpenAI over the company's approach to safety.
Located in Palo Alto, California, and Tel Aviv, Israel, SSI positions itself as an American company with a global outlook, dedicated solely to its mission of safe superintelligence.
Following Sutskever's departure, OpenAI CEO Sam Altman expressed gratitude for his contributions, and Jakub Pachocki has since taken over as chief scientist, marking a new chapter for the organization.
The establishment of SSI marks a notable development in the AI research landscape, promising a concentrated effort on safe and responsible AI development under Sutskever's leadership.