Safe Superintelligence, often abbreviated SSI, is a startup focused on developing superintelligent AI (systems that exceed human-level capabilities) while maintaining rigorous safety, alignment, and control. Launched in mid-2024 by Ilya Sutskever (formerly OpenAI’s chief scientist), Daniel Gross, and Daniel Levy, SSI is structured around the belief that future AI systems must be not just powerful but fundamentally safe by design.
SSI works from the assumption that creating superintelligence without deeply embedded safety constraints is the single greatest technological risk humanity faces. Its stated philosophy is “safety and capability in tandem”: the path forward, it argues, is not to delay capabilities but to ensure that safety always leads the way.
In practice, SSI operates with a lean, highly focused research team, deliberately avoiding the distractions of product cycles, mass commercialization, and sprawling management overhead. The idea is to keep safety research and alignment engineering insulated from short-term business pressures. Its model is unconventional: rather than launching consumer or enterprise products early, SSI’s first goal is to architect superintelligent systems that can reliably understand, reason about, and respect human values.
By 2025, although SSI was not shipping products publicly, it had already attracted attention across the AI world, not least because of its high-profile founders, ambitious mission, and willingness to stake everything on safety-first superintelligence. Under Sutskever’s leadership, the company is positioning itself as a new kind of AI lab: one for which the defining question is not whether we can build something smarter than humans, but whether we can build something smarter than humans that remains on humanity’s side.