Steve Jobs used to tell a great story about what a computer is; he called it a "bicycle of the mind". It's that story I think of when I look at the new generation of artificial intelligence (AI). Some people treat AI like it's a machine with a brain, but I reckon we should treat AI like a new bicycle.
Huh? Bicycle? If you're not familiar with the story, take a moment to watch the first minute or so of his interview for The Machine That Changed The World TV series in 1990:
I remember reading an article when I was about twelve years old, I think it might have been Scientific American, where they measured the efficiency of locomotion for all these species on planet Earth. How many kilocalories did they expend to get from point A to point B? And the Condor won; came in at the top of the list, surpassed everything else, and humans came in about a third of the way down the list which was not such a great showing for the crown of creation.
But somebody there had the imagination to test the efficiency of a human riding a bicycle. A human riding a bicycle blew away the Condor all the way up the top of the list. And it made a really big impression on me that we humans are tool builders, and that we can fashion tools that amplify these inherent abilities that we have to spectacular magnitudes.
And so for me, a computer has always been a bicycle of the mind. Something that takes us far beyond our inherent abilities.
And I think we're just at the early stages of this tool. Very early stages, and we've come only a very short distance. And it's still in its formation but already we've seen enormous changes. I think that's nothing compared to what's coming in the next hundred years.
His final line is spot on—"that's nothing compared to what's coming in the next hundred years". With the explosive arrival of ChatGPT, our conception of what a computer is capable of has shifted. People, real people who aren't tech nerds, are embracing artificial intelligence and finding ways to use it.
What worries me is that some people treat AI like it can think, like it is a brain. That is the wrong way to look at it. Instead we should be treating AI as a tool that accelerates our thinking rather than replaces it. In a word: a bicycle.
Let's first define what I mean by AI: I mean the AI everyone is talking about today. For the last few years, when people talked about AI they mostly meant machine learning (ML); in recent months, all the talk has been about specific kinds of ML known as large language models (LLMs) and generative AI.
LLMs are computer systems trained on large amounts of data, such as millions of web pages, to find associations within it. Generative AI uses LLMs to generate text, images, and other media in response to prompts given by a human. When built into an app that can chat with you, like ChatGPT, these generative AIs do a remarkably good job of responding to questions in a convincing and natural way.
It's important to note that experts do not consider this kind of AI to be an artificial general intelligence (AGI), meaning that it cannot think like a human or other animal. Even OpenAI—which has made what I reckon are unsound claims of early artificial general intelligence in ChatGPT—stops short of saying ChatGPT can actually think for itself.
AI doesn't think; instead it predicts what a thinking person might say. It has immense knowledge of what people say thanks to all those scraped web pages it learned from. This is far more knowledge than any of us could ever view, let alone remember, which should make AI a useful tool for discovering existing knowledge.
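To make that "prediction, not thinking" idea concrete, here's a toy sketch of my own. It's nothing like how a real LLM is built (those use neural networks over billions of parameters, not word counts), but it shows how a machine can produce plausible-looking continuations purely from counted associations, with no understanding at all:

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for millions of scraped web pages.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: pure association, no comprehension.
following = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    following[word][next_word] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat", the most frequent follower, not a reasoned answer
```

The predictor will happily continue a sentence, and it will sound roughly right, yet it has no idea what a cat or a mat is. Scale that idea up enormously and you get something that sounds far more convincing while still only predicting.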
But AI struggles to understand that knowledge. It still can't reliably handle negation words like 'not', it can't do basic maths reliably, and it makes up bullshit which sounds plausible. Presumably it will get better as new AI models are created, but the lack of basic logic in AI right now demonstrates that a machine doesn't need to think or comprehend its data to give plausible responses. If AI doesn't think right now, then why would fixing up mistakes in its responses suddenly turn it into a thinking machine?
One day we might have computers that can think, but we don't have that today, nor do I reckon it is imminent.
This matters because our society is at risk of asking too much from AI. If we believe AI can think then we'll be tempted to replace humans with machines, only to discover later that the AI has made arbitrary, capricious decisions that did a lot of harm.
We've already seen examples where replacing humans with AI failed us. Our corporate regulator ASIC is using AI that has rejected reports of serious criminal conduct without anyone looking at them. The Robodebt scheme automatically issued hundreds of thousands of false debts to Australians, mostly without a human reviewing them. We should not blame the machine for making bad decisions; we must blame ourselves for letting a machine make a decision it is not capable of.
We should be using AI as a tool to accelerate us, to help us make decisions, but not to make decisions for us.
AI chatbots are a good example of how to use AI to accelerate. When I see a company chatbot I don't see it as a salesperson or a customer support representative talking to me; I see it as the company making me do the work. That's not necessarily a bad thing; a good AI chatbot could help me solve a problem myself by providing the information that I need more quickly than a person ever could. But the company is giving me a tool to help myself, not a person that can solve any problem I encounter. The AI is not a substitute for being able to speak with a person when I encounter a problem that I cannot resolve myself. I must be able to decide when the problem is resolved, not an AI.
Perhaps the need for people to be ultimately responsible seems self-evident when I put it that way. But with AI being the hot new tech, surely there will be hucksters out there shilling the impossible, promising AI can make decisions and make people redundant. When you hear that, please remember the limitations of AI. Keep in mind that AI is a bicycle and not a brain.
For those of you who work in tech, like me, you have a special responsibility. You should know the limits of AI better than most. So if you are asked to make AI do something that you know it can't, then it's up to you to say no. It's up to you to help humans ride in tandem with AI.