Ask HN: What is your best devil's advocate argument against AI progress?
It seems that since the announcement of OpenAI's o3, the general consensus among many tech-adjacent people is that AI will soon eclipse humans in most economically productive cognitive tasks. What we've seen over the past 5 years or so is the predictable march of AI getting better, cheaper and faster month after month. Every new development or breakthrough leads to many more, and any supposed "wall" is quickly surpassed by a new training regime, inference approach or architectural advancement.
There is a segment of veteran software engineers who, for the last 3 or so years, have been consistently bearish on AI, insisting that not even in 15-30 years will LLMs and the like be more than a nifty tool that, at best, speeds up the work of human engineers and, at worst, churns out superficially reasonable code that needs to be constantly fixed and maintained. The consensus among these people is that AI is a massive deluge of hype and empty promises, and that human intelligence is special, deep, and unmatchable by AI for many decades at least.
However, now, there is a growing pro-AI consensus among industry leaders and forecasters. It seems like the default prognostication is that AGI will arrive sometime between 2025 and 2029.
What is your best argument that this won't happen?
If you're interested in my take (you probably don't know me so probably not), I talked about this extensively on a local radio broadcast on Monday: https://www.wxxinews.org/show/connections/2024-12-23/will-ai...
TL;DL: I work in AI and am also a skeptic on AGI. I don't think the current approach to LLM training, even with lots of compute and chain-of-thought bundled in at inference time, constitutes AGI, and while many tasks will be made easier or automated, I still think we need people at the wheel.
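For anyone who hasn't seen it up close, here is a minimal sketch of what "chain-of-thought bundled in" looks like from the prompting side. This is an illustration, not anyone's production setup: it assumes the openai Python SDK (v1-style client), and the model name and prompt wording are placeholders you'd adjust. The only point is that the model is asked to write out intermediate steps before answering.

    # Minimal sketch: a direct prompt vs. a chain-of-thought prompt.
    # Assumes the openai Python SDK (v1-style client); model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    question = (
        "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
        "than the ball. How much does the ball cost?"
    )

    # Direct prompt: the model answers in one shot.
    direct = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )

    # Chain-of-thought prompt: the model is asked to spell out intermediate
    # steps before giving a final answer.
    cot = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": question
            + "\nThink it through step by step, then state the final answer.",
        }],
    )

    print("Direct answer:", direct.choices[0].message.content)
    print("Chain-of-thought answer:", cot.choices[0].message.content)

Whether that step-by-step text reflects anything like genuine reasoning, rather than more elaborate pattern completion, is exactly the open question.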
What I didn't get a chance to discuss is that this is all just digital. The physical world still needs plenty of people to make things work, and if AI ends up overtaking knowledge workers, we'll all go back to doing things like building houses and working in labs.
Quite simply: hallucinations, physical limits, and costs. Hallucinations, so far intractable, will of course impede AGI. On the physical side, cooling the chips gets difficult and could slow things down. And on costs, if OpenAI or any of the other big money-hemorrhaging services finally kicks the bucket and the AI frenzy cools down, it's hard to have AGI if no one funds it.
My best anti AGI argument is that true reasoning remains elusive for LLMs and indeed any AI technology; that true reasoning is a prerequisite for AGI; and that solving true reasoning requires solutions that we don’t have yet and in fact don’t have much promising progress on. People who believe in a short timeline for AGI don’t have an actual answer for this other than “look at all this recent progress”, which is not an answer at all.
I think there is no consistent definition of AGI that everyone is working from, so it's really hard to know what people mean when they simply say "AGI is/is not happening".
From Wikipedia: "Artificial general intelligence (AGI) is a type of artificial intelligence that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks."
"Human cognitive capabilities" is often misunderstood to just mean "thoughts", but this definition is a bit limiting and there seems to be some incentive for pro-AGI people to make everyone think that this is all the brain does. The brain also receives information from an enormous number of sensors throughout the body, such as proprioceptors and other afferents. It also commands an enormous number of muscles, down to the muscle fiber level - this is what allows fine motor control, and is something robots still cannot replicate very well even after over half a century of development. Beyond that, there is so much more going on at other levels..."Society of Mind" is a great book by Marvin Minsky that tries to attack this subject.I also think AGI must be able to push knowledge in a way that the best humans have done as well. So by this definition, AGI would mean a computer which can not only think the advanced thoughts that humans think, but can also generate new, divergent insights of the sorts which visionaries have thought of (e.g. Shannon's theory of information, Einstein's relativity, Gandhi's application of ahimsa, Gautama's Buddhism etc...).
Put together, I would expect true AGI not merely to match an "average" human's capabilities, but also to be capable of exceeding them the way an Albert Einstein or a LeBron James does. I think we are still decades away from either of those things happening, if they happen at all.
Finally, this is by no means an anti-AI take. I use LLMs daily as part of my core workflow, and rely on them for tasks that a few years ago I had no way of doing myself. Rather, I define these terms to cut through the marketing jargon and BS that AI companies are flooding us with right now. Let's keep an eye on the correct target, and not just settle for whatever definition of AGI pads some company's wallet best in the near term.
> the general consensus among many tech-adjacent people
You cannot have a "consensus" among a sample of merely "many" people. At that point it's just a widely held opinion, not a consensus at all.
There is no such general consensus.