Ilya Sutskever is easily the first name that comes to mind when you talk about the early days of ChatGPT, even though he no longer works at OpenAI. A co-founder of OpenAI, Sutskever is one of the brilliant minds who worked on ChatGPT and other AI initiatives at the company before leaving it to form his own startup.
Sutskever was involved in the boardroom coup that saw the brief ouster of Sam Altman in late 2023, though he later expressed regret for taking part in it. Sutskever resigned from OpenAI last May to form Safe Superintelligence (SSI), a company whose aim is right there in its name.
That sort of artificial intelligence refers to AI that outperforms humans at anything they can do. It's two steps ahead of where we are now. Companies like OpenAI are working on developing AGI, or artificial general intelligence: AI that can handle tasks with the creativity of a human. AGI would then lead to superintelligence, or at least that's the idea.
These terms aren't exactly objective, and the goalposts move depending on the commercial interests of the players involved in the race to AGI and superintelligence. That's where Sutskever's approach appears to differ from everyone else's. And it turns out that whatever he and his small team are working on, it's good enough to convince investors to give him billions to continue his research.
Now, there's a claim that Sutskever might have discovered a different way to train AI than everyone else, and that his approach is showing promising results.
If you're following AI topics, you might have come across these two paragraphs on Reddit or X about Sutskever's work:
Sutskever has told associates he isn't developing advanced AI using the same methods he and colleagues used at OpenAI. He has said he has instead identified a "different mountain to climb" that is showing early signs of promise, according to people close to the company.
"Everyone is curious about exactly what he's pushing and exactly what the insight is," said James Cham, a partner at venture firm Bloomberg Beta, which hasn't invested in SSI. "It's super-high risk, and if it works out, maybe you have the potential to be part of someone who is changing the world."
They're from a Wall Street Journal article that covered SSI's recent financing round. The company just raised $2 billion at a valuation of $30 billion. SSI was valued at $5 billion just this past September.
That sort of growth is impressive for an AI startup but not exactly surprising. Sutskever is one of the most prominent names in AI, especially considering his involvement in developing ChatGPT and his interest in safe superintelligence.
What's surprising is that investors are putting in all that money without getting anything in return immediately. Sutskever & Co. will not release any commercial products while they research superintelligence. And it's not certain that SSI will get there at all, let alone before its competitors.
Still, Sutskever and his colleagues must have some AI demos ready to woo investors. That's what makes The Journal's paragraphs above so exciting. Sutskever finding a "different mountain to climb" on the road to superintelligence sounds like some sort of breakthrough discovery in AI.
Does that mean that Sutskever won't use the methods he pioneered while at OpenAI, which involved training smarter AI with the help of vast amounts of data? We can only speculate.
The report goes on to say Sutskever is running a tight ship. The team has only about 20 employees, operating out of offices in Silicon Valley and Tel Aviv. Those who work at SSI are discouraged from disclosing their employment on platforms like LinkedIn. Job candidates are told to leave their phones in Faraday cages, which block cellular and Wi-Fi signals, before entering SSI's offices.
The WSJ also says that the SSI team doesn't feature well-known names from Silicon Valley. Instead, Sutskever is looking for promising new hires he can mentor rather than experienced people who might jump ship down the road.
While Sutskever will not share specific details about his AI work with the world, he appeared at the NeurIPS AI conference in December, where he teased the kind of superintelligence he is trying to develop. Per The Journal, he said that when superintelligent AIs arrive, they could be "unpredictable, self-aware and may even want rights for themselves."
"It's not a bad end result if you have AIs and all they want is to coexist with us," Sutskever said. This sounds a lot like something he said while still at OpenAI. "Our goal is to make a mankind-loving AGI," the AI engineer said at OpenAI's 2022 holiday party.