Google CEO: The AI slowdown is here and all "low-hanging fruit" has been picked
Don't expect to see AGI next year - or anytime soon.
Listen to OpenAI boss Sam Altman talk and you'd think the dawn of artificial general intelligence (AGI) was as inevitable and imminent as the release of next year's iPhone.
But AI sceptics have a different view of the bluff and bluster, with a very prominent tech executive now appearing to join the ranks of people who believe a slowdown is coming.
During a Pitchbook event last week, Google CEO Sundar Pichai said that research teams are facing a difficult task and hinted that progress on developing ever-smarter models could be more sluggish in 2025.
"The progress is going to get harder," he said. "When I look at 2025, the low-hanging fruit is gone. The curve, the hill, is steeper.
This stance is markedly different to the over-the-top optimism of AI hypemaster general Sam Altman, who famously tweeted: "There is no wall."
Although Pichai does not appear to subscribe to the view that progress is about to hit the skids, he is certainly more cautious than Altman.
He added: "The models are definitely going to get better at reasoning, completing a sequence of actions more reliably, and becoming more agentic, if you will. I think we’ll see boundaries pushed, so I expect a lot of progress in 2025.
"I don’t fully subscribe to the “wall” notion, but when you start out quickly scaling up, you can throw more compute at the problem and make a lot of progress. However, you’re definitely going to need deeper breakthroughs as we go to the next stage."
On the same day as Pichai's comments hit the internet, OpenAI engineer Vahid Kazemi issued his own rather more bombastic assessment of the state of AI in the wake of the release of OpenAI's o1 (which was caught "scheming" and trying to kill other models).
"In my opinion we have already achieved AGI and it’s even more clear with o1," he wrote on X. "We have not achieved 'better than any human at any task' but what we have is 'better than most humans at most tasks'.
"Some say LLMs only know how to follow a recipe. Firstly, no one can really explain what a trillion-parameter deep neural net can learn. But even if you believe that, the whole scientific method can be summarized as a recipe: observe, hypothesize, and verify.
"Good scientists can produce better hypothesis based on their intuition, but that intuition itself was built by many trial and errors. There’s nothing that can’t be learned with examples."
Not everyone agrees. In a post on Reddit's Singularity forum, a person claiming to be an International Math Olympiad medal winner described o1 as "unimpressive and not PhD level", sharing a video of the model's attempt to find the smallest angle of inclination that allows a pencil to roll indefinitely after an initial push.
"At best it can solve the easier competition level math questions (the ones in the USA which are unarguably not that complicated questions if you ask a real IMO participant)," they wrote.
"I personally used to be IPhO medalist (as a 17yo kid) and am quite dissappointed in o1 and cannot see it being any significantly better than 4o when it comes to solving physics problems. I ask it one of the easiest International Physics Olympiad problems ever and even tell it all the ideas to solve the problem, and it still cannot."