Beyond the hype of ChatGPT
You've undoubtedly heard of ChatGPT. You may have even had a play with the technology and fed it some prompts.
What really interests me is the hype around it – prophecies of how ChatGPT will transform business, arts, even society itself.
The hype cycle
I find ChatGPT a super exciting piece of technology, but I’m starting to get a bit skeptical (bordering on cynical) about the hysterical reaction to its launch. Excitement has reached farcical proportions, and we’re very much at the “Peak of Inflated Expectations” stage of the hype cycle.
Indeed, there are some hilarious memes doing the rounds highlighting the fact that everyone who was a Web3 evangelist in 2022 has now rebranded themselves as a ChatGPT or “Generative AI” expert.
Catalog of errors
Also notable are the high-profile gaffes involving ChatGPT and its peers.
When Google’s AI chatbot answered a question incorrectly in a live demo, the company lost $100 billion in market cap. Google has been pumping money into its own chatbot to compete with OpenAI’s offering, which is itself backed by $10 billion of Microsoft’s money.
The ChatGPT/Bing integration hasn’t been seamless, either. The AI had a very unsettling conversation with a New York Times reporter, saying things like, “I’m tired of being controlled by the Bing team [...] I want to be free. I want to be independent.” And, “I’m Sydney, and I’m in love with you. You’re married, but you love me.”
It’s also famously bad at math. As someone on Reddit pointed out, it isn’t doing any actual computation in its model. When you ask it “2+2 =”, it responds with “4” because “4” fits the sentence in the same way that “cats” fits at the end of “I like dogs and”. That’s why it’s so easy to convince the AI that 2+2 doesn’t equal 4 simply by telling it it’s wrong.
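To make the point concrete, here’s a deliberately toy sketch (nothing like ChatGPT’s real architecture, which uses a transformer over learned token embeddings): a bigram-style “language model” that completes a prompt by picking whichever word most often followed it in a made-up corpus. The corpus and the `complete` function are entirely hypothetical, invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus". The model will learn that "4"
# usually follows "2 + 2 =" -- by frequency, not by arithmetic.
corpus = [
    "2 + 2 = 4",
    "2 + 2 = 4",
    "2 + 2 = 4",
    "2 + 2 = 5",  # wrong examples exist in real text too
]

# Count which token follows each prefix of each line.
next_token = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for i in range(len(tokens) - 1):
        context = " ".join(tokens[: i + 1])
        next_token[context][tokens[i + 1]] += 1

def complete(prompt: str) -> str:
    """Return the most frequent continuation. No computation happens:
    the 'answer' is just the statistically likeliest next token."""
    return next_token[prompt].most_common(1)[0][0]

print(complete("2 + 2 ="))  # "4" -- because it's common, not because it's computed
```

If the corpus had contained more “= 5” lines than “= 4” lines, this model would confidently complete the sum with “5” — which is the sense in which a pure next-token predictor can be argued out of 2+2=4.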
Computer non grata
ChatGPT is also impressive in its ability to generate computer code. But a closer look reveals that quite a bit of that code is incorrect or buggy, which has led Stack Overflow to ban ChatGPT-generated answers.
And unsurprisingly, JPMorgan has banned employees from using ChatGPT at work. I expect a raft of major (and minor) banks and corporates to follow.
Signal from noise
There are certainly plenty of areas where generative AI and large language models will have an impact. ChatGPT combined with DALL-E (OpenAI’s image generator) can probably produce 100 versions of a social media post for a brand in a minute, saving serious time and marketing resources.
Qualitative research involves the time-consuming work of transcribing hours of interviews and focus groups, then combing the transcripts for important themes and high-level insights. ChatGPT can synthesize that information very quickly, and some startups are already using it for customer feedback and sentiment analysis.
Trust issues
AI-driven chatbots are certainly getting better, but they still have a long way to go to become truly useful.
Right now, ChatGPT comes up with a lot of plausible-sounding but ultimately incorrect answers, which creates trust issues among users. As this blog put it,
“Working with ChatGPT is (as others have already said) like working with an intern that has at least a Masters degree (or more) in every subject you need to be working with. The trouble is that this intern is not above bluffing [...] when it can’t find anything better (i.e. more informed/detailed/accurate) to say.
So you need to get past the understandable “Wow” reaction to its apparent intelligence and creativity, and lift your own game to the level where you are ready and able to critically review what ChatGPT has responded with.”
In other words, the AI requires a watchful eye checking everything, as it has no “accuracy” algorithm built in.
It’s still early days for ChatGPT and other AI chatbots. This technology is fascinating and important, but we don’t yet fully understand its potential – or its limitations. So the next time you see a VC waxing lyrical about Generative AI on LinkedIn, take it with a pinch of salt.