But then, last month, GPT-5 dropped like a dead albatross and suddenly all bets were off.
“Horrible”, “disaster” and “screwup” were some of the labels applied to the launch by outlets like The Verge, Medium and Bloomberg Law. The new model was so underwhelming that it made many rethink when, if ever, Artificial General Intelligence - the point at which AI will be better than humans at all intellectual tasks - will be reached. Redditors led a successful campaign to have the previously withdrawn GPT-4 series models reinstated. In a world first, OpenAI boss Sam Altman expressed contrition, admitting: “We totally screwed up.”
What was so bad about GPT-5? One thing was a personality makeover that made it seem cold and uninterested in your stupid questions. And even though it performed better on benchmark tests in theory, in practice users found its responses often superficial and riddled with hallucinations.
On closer examination, it seemed that the reason was that GPT-5 was optimized less to benefit all humanity - as promised in OpenAI’s mission statement - than to save some green. At its core is a "router" that analyses your incoming prompt and directs it to the lightest, most computationally inexpensive model in OpenAI’s lineup that might just satisfy your query. It will only give you access to the really intelligent models if you ask a very hard question in a very clever way.
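To make the idea concrete, here is a minimal sketch of what cost-aware routing might look like. The model names, the difficulty heuristic and the thresholds are illustrative assumptions for this column, not OpenAI’s actual implementation.

# A minimal sketch of cost-aware prompt routing. Everything here - model names,
# heuristic, thresholds - is an illustrative assumption, not OpenAI's router.

# Candidate models, ordered from cheapest/least capable to priciest/most capable.
TIERS = [
    ("mini", 0.4),                 # handles prompts with difficulty below 0.4
    ("standard", 1.0),             # handles prompts with difficulty below 1.0
    ("reasoning", float("inf")),   # fallback: the expensive, most capable model
]

def estimate_difficulty(prompt: str) -> float:
    """Crude heuristic: longer, more reasoning-flavoured prompts score higher."""
    score = min(len(prompt) / 500, 1.0)
    keywords = ("prove", "step by step", "derive", "debug")
    score += 0.5 * sum(word in prompt.lower() for word in keywords)
    return score

def route(prompt: str) -> str:
    """Send each prompt to the cheapest model whose ceiling covers its difficulty."""
    difficulty = estimate_difficulty(prompt)
    for model, ceiling in TIERS:
        if difficulty < ceiling:
            return model
    return TIERS[-1][0]

print(route("What's the capital of France?"))                     # -> mini
print(route("Prove, step by step, that sqrt(2) is irrational."))  # -> reasoning

Ask a trivially easy question and the cheapest model answers; only a prompt that trips the difficulty heuristic reaches the expensive one - which is exactly why cleverly phrased hard questions were the way to coax the best answers out of GPT-5.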
This "AI shrinkflation" naturally infuriated OpenAI’s paying customers. They believed they were paying for access to OpenAI’s most advanced model, but in reality, many of their queries were now being handled by a less capable, cheaper alternative.
The experience was so bad it gave credence to critics like Gary Marcus, who argue there are diminishing returns to the current AI development paradigm of scaling - that is, simply adding more data and computational power. He says that GPT-5 shows AI development has plateaued, and that a completely new architecture will be needed for AI to achieve AGI, if it ever does.
So is that it then - is the AI bust upon us? Will you get to keep your job after all?
No. It was left to the ever-sober Economist to put things in perspective. In a thorough analysis, it showed that GPT-5’s performance improvements are exactly on trend, continuing the relentless rise in AI’s capabilities. That it feels underwhelming has more to do with users’ inflated expectations of revolutionary breakthroughs every few months.
Much of this hype can be laid at the door of Altman himself. Competitors will have taken some satisfaction in the knowledge that the GPT-5 fiasco will have users exploring alternatives. Another effect, some argue, may be to redirect attention from the abstract, pure-research goal of AGI to practical applications that might actually benefit all humanity. And given the environmental and energy costs of AI, the development of more cost-effective systems has to be welcome - as long, of course, as the benefits are fairly distributed.
Most tellingly, perhaps, one thing OpenAI’s disastrous roll-out of GPT-5 didn’t do was dent investors’ enthusiasm. Tech stocks went up after the release. Plenty of evidence suggests these stocks are overvalued, but the money keeps flowing. Stanford University and The Guardian put US Big Tech and corporate investment in AI at well over half a trillion dollars this year. The AI juggernaut hurtles on.
So, do quit your day job - but maybe wait until after Christmas.
Joe Smith is the founder of the AI consultancy 2Sigma Consultants. He studied AI at Imperial College Business School and is researching AI’s effects on cognition at Chulalongkorn University. He is the author of The Optimized Marketer, a book on how to use AI to promote your business and yourself. Contact joe@2Sigmaconsultants.com.