How tech gets smarter
A fascinating story doing the rounds this week centred on an academic assault on tech’s latest darling: reports that ChatGPT is getting “dumber”.
I found this episode super interesting, and it got me thinking not just about AI, but about the painstaking process of developing any revolutionary piece of technology.
A little less wow
The story goes that a select group of ChatGPT users have spent recent months testing whether their trusty AI companion is becoming less intelligent. The investigation led to a paper, published by researchers at Stanford and Berkeley, claiming that recent versions of ChatGPT respond less impressively than they did a few months ago.
One example showed the tech producing less accurate answers to complex maths questions. In its infancy, the system correctly answered questions about large prime numbers nearly every time; more recently, it has returned correct responses only a fraction of the time.
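For the curious, the spirit of that test is easy to reproduce. Below is a minimal sketch, in Python, of how one might score a chatbot’s yes/no answers to “is this number prime?” questions against ground truth. The query_model function is a hypothetical placeholder for whatever chat API you happen to use, and the number range is arbitrary – this is not the researchers’ actual harness.

```python
import random

from sympy import isprime  # deterministic ground-truth primality check


def query_model(question: str) -> str:
    """Hypothetical placeholder for a real chatbot API call.

    Swap in your own client code (e.g. an HTTP request to whichever
    model snapshot you want to benchmark).
    """
    raise NotImplementedError


def primality_accuracy(n_questions: int = 50, seed: int = 0) -> float:
    """Ask the model whether random large odd numbers are prime and
    return the fraction of answers that match the ground truth."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_questions):
        n = rng.randrange(10_001, 100_000, 2)  # large-ish odd numbers
        reply = query_model(f"Is {n} a prime number? Answer yes or no.")
        model_says_prime = reply.strip().lower().startswith("yes")
        correct += int(model_says_prime == isprime(n))
    return correct / n_questions
```

Run the same harness against model snapshots taken a few months apart and you get exactly the kind of before-and-after comparison the paper draws.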
Some commentators pushed back, suggesting the results were merely a case of the “wow factor” wearing off, or of people failing to use the tech correctly. But the paper makes a compelling case for the limitations of what, just a few weeks ago, was thought to be tectonic-shifting tech.
Not so fast
Putting aside the minutiae of ChatGPT’s supposed Benjamin Button-style return to infancy, this episode demonstrates that, while exciting, the tech remains nascent. In any case, it’s clear we are a long way off Terminator-esque “AGI” (phew), and in terms of real-world impact, despite many B2B use cases, we remain in the hobbyist phase.
For me, the current state of AI is a phase of tech evolution like many others in history. It is exciting, the potential is clear, and of late we have likely seen a big step rather than linear progression. But it has largely been the bonfire of over-hyped headlines that has led many to believe that its advent, and its real-world integration via the likes of ChatGPT (which is, bluntly, little more than a question-and-answer-formatted Google), meant we had entered a new epoch.
Not so fast, I say – and “not so fast” should be heeded by politicians, who are doing all they can to hitch their old-school wagons to the straining AI horse, seeking to harvest as much as they can of the ‘revolutionary’ and ‘world-changing’ discourse for their own political gain.
Rishi, really excited
When I hear politicians jumping on the AI bandwagon, the cynical part of me leaps to assume they are hoping that what some clever people invented in Cambridge or on the West Coast will be the silver bullet that (re)invigorates their political ambitions or (re)activates their nation’s economy, in the absence of a willingness to take less voter-friendly policy decisions.
This is super prevalent in the UK, right across the political spectrum. This week, Labour leader Sir Keir Starmer met Google CEO Sundar Pichai to discuss the “possibilities of AI”, in an effort to energise his voter base ahead of next year’s general election, which will be fought on the economy. AI could reinvigorate an ageing workforce and ailing economy – but whether this can be communicated to voters without terrifying them with words like ‘automation’ remains to be seen.
The man in power, PM Sunak, is approaching the topic from a different position, seeking to use it to cement the status quo (UK tech influence, Conservative tech investment, Conservative party rule) by ensuring that the UK is at the heart of the global conversation around AI. Whether that ambition is realistic remains to be seen, and this snarky column sums up my views on the subject, as does its headline: “The AI delusion: Britain can’t wish itself into a ‘global leadership’ role.”
Central support
Rather than trying to pick winners, be they companies or technologies, governments all over the world need to focus on creating the best environment for success. That includes simplifying regulation, removing trade barriers, improving access to talent and capital, and utilising procurement budgets so they truly act as an enabler and accelerator of technology.
We forget how many foundational pieces of tech over the past century (in industries ranging from computing to healthcare to transportation) were developed thanks to the US Government or military, which seeded, invested in, and bought tech from US companies.
The US Government’s “buying” from companies doesn’t get as much airtime as it should, but it has been a vital driver of innovation and development. In Maslow’s Hierarchy of Tech Startups, there is one thing that I believe ranks higher than funding and talent, and that is a startup’s need for customers – and there is no greater customer than the US Government for a scaling business. Current and future UK Governments should take note.
Long term smarts
Whether ChatGPT is getting better or worse at maths shouldn’t really be a major discussion point for anyone interested in the future of tech. What should be is the way we, as a collective, are seeking to develop these technologies for the greater good over time.
Technology’s evolution rarely happens in great shifts. Rather, it is typically achieved in a largely linear fashion, fuelled by diligent development, investment and regulation from all participants. Of course, governments are central to that effort, and the smart move for them right now would be to do all they can to enable the development of AI over the long term, rather than simply seeking to use it for short-term political gain.