Apple recently published a paper that subtly acknowledges what many in the Artificial Intelligence (AI) community have been hinting at for some time: Large language models (LLMs) are approaching their limits. These systems — like OpenAI’s GPT-4 — have dazzled the world with their ability to generate human-like text, answer complex questions, and assist in tasks across industries. But behind the curtain of excitement, it’s becoming clear that we may be hitting a plateau. This isn’t just Apple’s perspective. AI experts such as Gary Marcus have been sounding the alarm for years, warning that LLMs, despite their brilliance, are running into significant limitations.
Yet, despite these warnings, venture capitalists (VCs) have been pouring billions into LLM startups like lemmings heading for a cliff. The allure of LLMs, driven by the fear of missing out on the next AI gold rush, has led to a frenzy of investment. VCs are chasing the hype without fully appreciating that LLMs may have already peaked. And like lemmings, most of these investors will soon find themselves tumbling off the edge, watching their me-too investments evaporate as the technology hits its natural limits.
LLMs, while revolutionary, are flawed in significant ways. They're essentially pattern-recognition engines, predicting the most statistically likely next word based on patterns in massive amounts of training data. But they don't actually understand the text they produce. This leads to well-documented issues such as hallucination, where LLMs confidently generate information that's completely false. They may excel at mimicking human conversation, but they lack true reasoning skills. For all the excitement about their potential, LLMs can't think critically or solve complex problems the way a human can.
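For the technically minded, the pattern-matching point can be made concrete with a toy sketch. The few lines of Python below implement a crude bigram model, nothing remotely like a real LLM's architecture, but the core move is the same: emit whichever word most often followed the current one in the training data, with no notion of whether the result is true.

```python
# Toy bigram "language model": purely illustrative, not how real LLMs work,
# but it shows the core move of predicting the most likely next word.
from collections import Counter, defaultdict

training_text = (
    "the moon orbits the earth . the earth orbits the sun . "
    "the sun is a star . the moon is made of cheese ."  # data contains a falsehood
).split()

# Count, for each word, which words followed it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Greedily emit the statistically most likely next word. There is no
    model of truth here, so fluent-looking output can still be false."""
    out = [start]
    for _ in range(length):
        candidates = follows[out[-1]]
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # "the moon orbits the moon orbits ...": fluent, confident, false
```

The model has seen true and false sentences alike and cannot tell them apart; it can only reproduce their statistics. That, in miniature, is a hallucination.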
Moreover, the resources required to build and run these models are astronomical. Training LLMs requires enormous amounts of data and computational power, making them inefficient and costly to scale. Simply making these models larger or training them on more data isn't going to solve the underlying problems. As Apple's paper and others suggest, the current approach to LLMs has significant limitations that cannot be overcome by brute force.
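To get a feel for the scale, here is a back-of-envelope calculation using the widely cited heuristic that training a transformer takes roughly 6 × parameters × tokens floating-point operations. Every number below is an assumption chosen for illustration, not the spec of any actual model or cluster.

```python
# Back-of-envelope training cost, via the common ~6 * params * tokens
# heuristic for transformer training FLOPs. All figures are assumptions.

params = 70e9            # a hypothetical 70-billion-parameter model
tokens = 1.4e12          # trained on an assumed 1.4 trillion tokens
train_flops = 6 * params * tokens            # ~5.9e23 FLOPs

gpu_peak = 312e12        # assumed peak FLOP/s of one data-centre GPU
utilisation = 0.4        # assumed sustained fraction of peak in practice
gpu_seconds = train_flops / (gpu_peak * utilisation)

print(f"{train_flops:.1e} FLOPs = {gpu_seconds / 86400 / 365:.0f} GPU-years")
# -> roughly 150 GPU-years of compute on these assumptions
```

Even with generous assumptions, a single training run consumes on the order of a hundred GPU-years, which is why "just make it bigger" is an expensive dead end.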
This is why AI experts such as Marcus have been calling LLMs “brilliantly stupid”. They can generate impressive outputs but are fundamentally incapable of the kind of understanding and reasoning that would make them truly intelligent. The diminishing returns we’re seeing from each new iteration of LLMs are making it clear that we’re nearing the top of the S-curve for this particular technology.
But this doesn’t mean AI is dead — not even close. The fact that LLMs are hitting their limits is just a natural part of how exponential technologies evolve. Every major technological breakthrough follows a predictable pattern, often called the S-curve of innovation. At first, progress is slow and filled with false starts and failures. Then comes a period of rapid acceleration, where breakthroughs happen quickly, and the technology begins to change industries. But eventually, every technology reaches a plateau as it hits its natural limits.
We’ve seen this pattern play out with countless technologies before. Take the internet, for example. In the early days, sceptics dismissed it as a tool for academics and hobbyists. Growth was slow, and adoption was limited. But then came a rapid acceleration, driven by improvements in infrastructure and user-friendly interfaces, and the internet exploded into the global force it is today.
The same happened with smartphones. Early versions were clunky and unimpressive, and many doubted their long-term potential. But with the introduction of the iPhone, the smartphone revolution took off, transforming nearly every aspect of modern life.
AI is following the same trajectory, and the next curve is already taking shape. One of the most promising areas of AI development is neurosymbolic AI. This hybrid approach combines the pattern-recognition capabilities of neural networks with the logical reasoning of symbolic AI. Unlike LLMs, which generate text based on statistical probabilities, neurosymbolic AI systems are designed to truly understand and reason through complex problems. This could enable AI to move beyond merely mimicking human language and into the realm of true problem-solving and critical thinking.
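As a rough sketch of what that hybrid might look like (the facts, rules, and function below are invented for illustration, not drawn from any real neurosymbolic system): a statistical component assigns confidence scores to perceived facts, and a symbolic layer then derives conclusions by applying explicit rules rather than by predicting likely text.

```python
# Hypothetical neurosymbolic toy: statistical perception feeds a rule engine.

# Stand-in for a neural network's output: facts with confidence scores.
perceived = {("socrates", "is_human"): 0.97, ("socrates", "is_robot"): 0.12}

# Symbolic knowledge: explicit rules of the form premise -> conclusion.
rules = [
    ("is_human", "is_mortal"),
    ("is_mortal", "will_die"),
]

def reason(perceived, rules, threshold=0.5):
    """Accept high-confidence perceptions as facts, then forward-chain the
    rules: conclusions follow by logic, not by statistical mimicry."""
    facts = {(e, p) for (e, p), conf in perceived.items() if conf >= threshold}
    changed = True
    while changed:  # keep applying rules until no new facts are derived
        changed = False
        for premise, conclusion in rules:
            for entity, pred in list(facts):
                if pred == premise and (entity, conclusion) not in facts:
                    facts.add((entity, conclusion))
                    changed = True
    return facts

print(reason(perceived, rules))
# Derives ('socrates', 'is_mortal') and ('socrates', 'will_die') by rule,
# while the low-confidence 'is_robot' perception is rejected outright.
```

The point of the hybrid: the conclusion is guaranteed by the rules, not merely probable, which is exactly the kind of reliability LLMs cannot offer.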
Another key area of research is focused on making AI models smaller, more efficient, and more scalable. LLMs are incredibly resource-intensive, but the future of AI may lie in building models that are more powerful while being less costly and easier to deploy. Rather than making models bigger, the next wave of AI innovation may focus on making them smarter and more efficient, unlocking a broader range of applications and industries.
Context-aware AI is also a major focus. Today’s LLMs often lose track of the context in conversations, leading to contradictions or nonsensical responses. Future models could maintain context more effectively, allowing for deeper, more meaningful interactions.
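One simplified way to picture why this happens: models see only a fixed-size context window, and older turns are silently dropped once that budget is exhausted. The sketch below is a caricature (real systems count tokens rather than words, and budgets are far larger), but the failure mode is the same.

```python
# Simplified sketch of a fixed context window: the 30-word budget and the
# word-count "tokenizer" are illustrative, not any real system's behaviour.

def build_context(turns, budget=30):
    """Keep the most recent turns that fit the budget; older ones vanish."""
    kept, used = [], 0
    for turn in reversed(turns):
        cost = len(turn.split())  # crude stand-in for token counting
        if used + cost > budget:
            break  # everything older than this is simply never seen
        kept.append(turn)
        used += cost
    return list(reversed(kept))

conversation = [
    "User: My name is Priya and I am allergic to peanuts.",
    "Bot: Noted, Priya.",
    "User: Suggest a three-course dinner menu for six guests this weekend.",
    "Bot: Here is a menu with a peanut-free satay alternative ...",
    "User: Great. What was my allergy again?",
]

# The earliest turn, which holds the allergy, falls outside the window,
# so the model literally cannot answer the final question correctly.
print(build_context(conversation))
```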
The ethical challenges that have plagued LLMs — such as bias, misinformation, and their potential for misuse — are also being tackled head-on in the next wave of AI research. The future of AI will depend on how well we can align these systems with human values and ensure they produce accurate, fair, and unbiased results. Solving these issues will be critical for the widespread adoption of AI in high-stakes industries like health care, law, and education.
Every great technological leap is preceded by a period of frustration and false starts, but when the inflection point hits, it leads to breakthroughs that change everything. That’s where we’re headed with AI. When the next S-curve hits, it will make today’s technology look primitive by comparison. The lemmings may have run off a cliff with their investments, but for those paying attention, the real AI revolution is just beginning.
Vivek Wadhwa is CEO, Vionix Biosciences. The views expressed are personal