Ilya Sutskever, co-founder and former chief scientist of OpenAI, has launched a new AI company called Safe Superintelligence Inc. (SSI). Founded in June 2024, the startup is focused solely on developing safe and aligned superintelligent AI: systems that exceed human intelligence but are built with safety and ethical design at their core.
🔐 A Mission-Driven Approach
Unlike many tech startups chasing quick product releases, SSI is taking a different route. The company has declared that it will not release any commercial products in the short term, instead concentrating entirely on long-term safety research and building advanced AI responsibly.
💰 Massive Backing from Industry Giants
SSI has quickly gained traction in the tech world:
- $2 billion+ in funding raised
- Valued at $32 billion
- Backed by major investors like Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel
- Strategic support from Alphabet (Google’s parent company) and Nvidia
- Alphabet has even provided access to its advanced tensor processing units (TPUs) to assist SSI’s research
🧠 Led by AI Heavyweights
The core team includes:
- Ilya Sutskever – Chief Scientist
- Daniel Gross – CEO
- Daniel Levy – Principal Scientist
With offices in Palo Alto and Tel Aviv, SSI is building an elite technical team dedicated entirely to safe superintelligence development.
🌐 A New Era for AI Safety
SSI’s mission comes at a crucial time, as concerns grow globally around how to govern increasingly powerful AI. By keeping its research free from commercial pressure, SSI aims to lead the way in developing trustworthy and safe superintelligence.