Ilya Sutskever, co-founder and former chief scientist of OpenAI, made headlines earlier this year after he left the company to start his own AI lab, Safe Superintelligence Inc. He has stayed out of the spotlight since his departure but made a rare public appearance at the Neural Information Processing Systems (NeurIPS) conference in Vancouver on Friday.
“Pre-training as we know it will undoubtedly end,” Sutskever said on stage. Pre-training refers to the first stage of AI model development, in which a large language model learns patterns from vast amounts of unlabeled data – typically text from the internet, books, and other sources.
During his NeurIPS talk, Sutskever said that although he believes existing data can still drive AI development further, the industry is running out of new data to train on. That dynamic will eventually force a change in the way models are trained today, he said. He compared the situation to fossil fuels: just as oil is a finite resource, the internet contains a finite amount of human-generated content.
“We have reached peak data and there will be no more,” Sutskever said. “We have to deal with the data we have. There is only one internet.”
The next generation of models, he predicted, are going to be “agentic in a real way.” Agents have become a major topic of discussion in the AI field. Although Sutskever did not define them during his talk, they are commonly understood to be autonomous AI systems that perform tasks, make decisions, and interact with software on their own.
He said that along with being agentic, future systems will also be able to reason. Unlike today’s AI, which mostly does pattern-matching based on what a model has seen before, future AI systems will be able to work things out step by step in a way that is more comparable to thinking.
According to Sutskever, the more a system reasons, “the more unpredictable it becomes.” He compared the unpredictability of “truly reasoning systems” to how advanced chess-playing AIs “are unpredictable to the best human chess players.”
“They will understand things from limited data,” he said. “They will not be confused.”
On stage, he drew a comparison between the scaling of AI systems and evolutionary biology, citing research that shows a relationship between brain and body mass across species. He noted that while most mammals follow one scaling pattern, hominids (human ancestors) show a distinctly different slope in their brain-to-body mass ratio on a logarithmic scale.
He suggested that, just as evolution found a new scaling pattern for the hominid brain, AI might find new approaches to scaling beyond how pre-training works today.
After Sutskever concluded his talk, an audience member asked him how researchers can create the right incentive mechanisms for humanity to build AI in a way that gives it “the freedoms we have as Homo sapiens.”
“I think in some ways these are questions that people should consider more,” Sutskever responded. He paused for a moment before saying that he “doesn’t feel confident answering questions like this” because it would require “a top-down government structure.” The audience member suggested cryptocurrencies, causing others in the room to laugh.
“I don’t think I’m the right person to comment on cryptocurrency, but there is a chance that what you [are] describing will happen,” Sutskever said. “You know, in some sense, it’s not a bad end result if you have AIs and all they want is to coexist with us and just have rights. Maybe that will be fine… I think things are so incredibly unpredictable. I hesitate to comment, but I encourage the speculation.”