Safe Superintelligence Raises $1 Billion to Develop Safe AI That Surpasses Human Capabilities

Safe Superintelligence (SSI), a new AI startup co-founded by former OpenAI chief scientist Ilya Sutskever, has raised $1 billion to advance its mission of creating superintelligent AI systems with a strong emphasis on safety. Based in Palo Alto and Tel Aviv, SSI plans to use the funds to acquire significant computing power and attract top talent; sources value the company at approximately $5 billion. Backed by major venture capital firms including Andreessen Horowitz and Sequoia Capital, SSI will focus on foundational AI research and development over the next few years. The company, led by CEO Daniel Gross and backed in part by NFDG, is committed to working with investors who share its vision of safe and ethical AI development, distinguishing itself from previous approaches in the field.

Safe Superintelligence (SSI), a new AI startup co-founded by former OpenAI chief scientist Ilya Sutskever, has secured $1 billion in funding to develop advanced artificial intelligence systems that exceed human capabilities, according to company executives. The Palo Alto and Tel Aviv-based company aims to prioritize safety in AI development, ensuring that these systems benefit humanity without posing risks.

With a small team of 10 employees, SSI plans to use the funds to acquire significant computing power and attract top-tier talent, building a highly trusted team of researchers and engineers focused on foundational AI research. The company has declined to disclose its valuation, but sources close to SSI report it is valued at approximately $5 billion. This substantial investment underscores continued investor confidence in exceptional AI talent, even amid a broader decline in funding for unprofitable startups.

SSI's backers include renowned venture capital firms such as Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. Additionally, NFDG, an investment partnership managed by Nat Friedman and SSI CEO Daniel Gross, contributed to the funding round.

Gross emphasized the importance of aligning with investors who understand and support SSI’s mission to safely advance AI capabilities. He explained that SSI will spend the next few years focused on research and development before bringing its products to market. The company is also carefully vetting potential hires to ensure they possess both extraordinary capabilities and strong character, aiming to build a culture focused on meaningful work rather than industry hype.

Sutskever’s departure from OpenAI earlier this year marked a significant shift in his career. As the company’s chief scientist, he played a pivotal role in developing the AI models behind ChatGPT. His decision to co-found SSI followed a turbulent period at OpenAI, during which he took part in the board’s controversial vote to oust CEO Sam Altman. Although he later reversed his stance, the fallout contributed to his exit from the company.

SSI’s approach to AI development differs from that of OpenAI. Sutskever hinted at a new strategy for scaling AI, one that diverges from traditional methods but could potentially lead to groundbreaking advancements. SSI is currently exploring partnerships with cloud providers and chip companies to meet its computing power needs, though no specific partnerships have been announced yet.

As AI safety becomes an increasingly critical topic, SSI’s mission to create secure, superintelligent AI aligns with growing concerns about the potential risks posed by rogue AI systems. The company’s focus on ethical AI development positions it at the forefront of the ongoing debate about how to safely harness AI’s transformative potential.