

OpenAI co-founder Ilya Sutskever launched Safe Superintelligence Inc.

OpenAI co-founder Ilya Sutskever has launched Safe Superintelligence Inc. (SSI), a new for-profit AI company focused solely on developing safe superintelligent AI systems using an engineering-based approach instead of the traditional guardrails enforced on most AI systems.

by Ellie Ramirez-Camara
Credit: SSI Inc. (via X)

Ilya Sutskever, former chief scientist at OpenAI, recently announced the launch of his new AI safety-focused company. A month after leaving OpenAI, Sutskever teamed up with ex-Y Combinator partner Daniel Gross and former OpenAI engineer Daniel Levy to create Safe Superintelligence Inc. (SSI). The company's sole mission is to tackle what Sutskever considers "the most important technical problem of our time": building safe superintelligent AI systems. Details on how this mission will be accomplished are sparse, beyond a statement suggesting that, unlike traditional tech firms developing AI, SSI will not concern itself with appealing (and profitable) products like the chatbots and media generators commercialized by many of the biggest players in the industry.

The statement is also reminiscent of OpenAI's original conception: a research entity attempting to build an artificial general intelligence that could surpass human capabilities across many tasks. While revisiting that goal is worthwhile, it is also important to note that most AI firms have turned to marketing their technologies partly out of necessity: computing power is expensive, and most investors expect a return on their investments sooner rather than later. Still, with AI hype at full steam, the company is unlikely to struggle to find funding. For now, Sutskever has declined several requests to name SSI's financial backers or to disclose how much he has raised.

In addition to aiming to surpass human levels of intelligence on some tasks, Sutskever claims he already has approaches in mind to bake safety into the technology itself, rather than following the industry-standard practice of applying guardrails and performing extensive safety testing only once an AI system is nearly finished. Moreover, Sutskever says that although large language models (LLMs), a fundamental building block of most current AI systems, will play a salient role at the company, SSI aims to go beyond them, concerning itself instead with the safety of systems capable of autonomously developing new technology.


Data Phoenix Digest

