For the last few months, there’s one question that I’ve been asked countless times.
It comes up without fail during idle moments: coffee breaks at work, or standing around at the dog park.
What do you think about AI?
Usually, the tone is quietly sceptical.
For me, the way it’s asked conveys a weary distrust of tech hype, but also a hint of concern. People are asking: Should I be paying attention to this?
Sure, at the start of 2023, many of us were amazed by new generative artificial intelligence (AI) tools like ChatGPT.
But, as the months have passed, these tools have lost their novelty.
The tech industry makes big claims about how AI is going to change everything.
But this is an industry that has made big claims before — and been proved wrong. It’s happened with virtual reality, cryptocurrency, NFTs and the metaverse. And that’s just in the past three years.
So, what do I think of AI?
For the past few months I’ve been working on a podcast series about AI for the ABC, looking broadly at this topic.
It’s been a bit like trekking through a blizzard of press releases and product announcements.
Everything solid dissolves into a white-out of jargon and dollar signs.
There’s so much excitement, and so much money invested, that it can be hard to get answers to the big underlying questions.
And, of course, we’re talking about the future! That’s one topic on which no-one ever agrees, anyway.
But here’s what I’ve learned from speaking to some of the top AI experts.
1. The AI we have is nothing like AI in the movies
Forget Terminator. Forget 2001: A Space Odyssey.
Hollywood’s long-ago visions of the future are getting in the way of understanding the AI we have today.
If you picture a skeletal robot with red eyes every time someone says “AI”, you’ll have totally the wrong idea about what AI can do, what it can’t, and what risks we should reasonably worry about.
Most of the AI tools we use, from ChatGPT to Google Translate, are built on machine learning (ML).
If AI is the broad concept of machines being able to carry out tasks in ways that we would consider “smart”, ML is one way of achieving this.
The general idea is that, instead of telling a machine how to do a task, you give it lots of examples of right and wrong ways of doing the task, and let it learn for itself.
So for driverless cars, you give an ML system lots of video and other data of cars being driven correctly, and it learns to do the same.
For translation, you give an ML tool the same sentences in different languages, and it figures out its own method of translating between them.
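The telling-versus-learning idea can be sketched in a few lines of code. This is a toy illustration, not how ChatGPT or Google Translate actually work: a simple nearest-neighbour classifier (the example data and the function name here are invented for illustration) that is never told the rule separating “right” from “wrong”, and instead infers labels from the labelled examples it is given.

```python
# Toy machine learning: no rule is programmed in. The program
# labels a new point by copying the label of the closest
# labelled example (1-nearest-neighbour).

def nearest_neighbour(examples, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(examples, key=lambda ex: dist(ex[0], point))
    return closest[1]

# "Training data": feature vectors paired with right/wrong labels,
# standing in for the driving videos or sentence pairs above.
examples = [
    ((0.0, 0.0), "wrong"),
    ((0.1, 0.2), "wrong"),
    ((1.0, 1.0), "right"),
    ((0.9, 0.8), "right"),
]

print(nearest_neighbour(examples, (0.2, 0.1)))  # near the "wrong" cluster
print(nearest_neighbour(examples, (0.8, 0.9)))  # near the "right" cluster
```

The point of the sketch is the absence of any explicit rule: change the examples and the behaviour changes, with no code rewritten. Real systems use far richer models, but the learn-from-examples principle is the same.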
Why does this distinction between telling and learning matter?
Because an ML tool that can navigate a roundabout or help you order coffee in French isn’t plotting to take over the world.
The fact it can do these narrow tasks is very impressive, but that’s all it’s doing.
It doesn’t even “know” the world exists, says Rodney Brooks, a world-leading Australian roboticist.
“We confuse what it does with real knowledge,” he says.
2. AI is probably dumber than you realise
Rodney Brooks has one of the most impressive resumes in AI. Born, raised and educated in Adelaide, during the 1990s he ran the largest computer science department in the world, at MIT. He’s even credited with inventing the robotic vacuum cleaner.
“Because I’ve built more robots than any other human in the world, I can’t quite be ignored,” he told me when I called him at his home in San Francisco, one evening.
Professor Brooks, now emeritus at MIT, says the abilities of today’s AI, though amazing, are wildly over-estimated.
He makes a distinction between “performance” and “competence”.
Performance is what the AI actually does: translating a sentence, for example. Competence is its underlying knowledge of the world.
With humans, someone who performs well is also generally competent.
Say you walk up to a stranger and ask them for directions. If they answer with confidence, we figure we can also ask them other things about the city: where’s the train station? How do you pay for a ticket?
But that doesn’t apply to AI. An AI that can give directions doesn’t necessarily know anything else.
“We see ChatGPT do things … and people say ‘It’s really amazing’. And then they generalise and imagine it can do all kinds of things there’s no evidence it can do,” Professor Brooks says.
“And then we see the hype cycle we’ve been in over the last year.”
Another way of putting this is we have a tendency to anthropomorphise AI — to see ourselves in the tools we’ve trained to mimic us.
As a result, we make the wrong assumptions about the scale and type of intelligence beneath the performance.
“I think it’s difficult for people, even within AI, to figure out what is deep and what is a technique,” Professor Brooks says.
Now, many people in AI say it’s not so clear cut.
Rodney Brooks and others may be completely wrong.
Maybe future, more advanced versions of ChatGPT will have an underlying model of the world. Performance will equate to competence. AI will develop a general intelligence, similar to humans.
Maybe. But that’s a big unknown.
For the moment, AI systems are generally very narrow in what they can do.
3. Among experts, there’s furious disagreement about the future
From the buzz out of Silicon Valley, you could be forgiven for thinking the course of the future is pretty much decided.
Sam Altman, the boss of OpenAI, the company that built ChatGPT, has been telling everyone that AI smarter than any human is right around the corner. He calls this dream Artificial General Intelligence, or AGI.
Perhaps as a result of this, minor advances are often communicated to the public as though they’re proof that AI is becoming super-intelligent. The future is coming, get out of the way.
ChatGPT can pass a law exam? This changes everything.
Google has a new chatbot? This changes everything.
Beyond this hype, there are lots of varying, equally valid, expert perspectives on what today’s AI is on track to achieve.
The machine learning optimists, people like Sam Altman, are just one particularly vocal group.
They say that not only will we achieve AGI, but it will be used for good, ushering in a new age of plenty.
“We are working to build tools that one day can help us make new discoveries and address some of humanity’s biggest challenges, like climate change and curing cancer,” Mr Altman told US lawmakers in May.
Then there are the doomers. They broadly say that, yes, AI will be really smart, but it won’t be addressing climate change or curing cancer.
Some believe that AI will become sentient and aggressively pursue its own goals.
Other doomers fear powerful AI tools will fall into the wrong hands and be misused to generate misinformation, hack elections, and generally spread murder and mayhem.
Then there are the AI sceptics. People like Rodney Brooks.
The real danger, they say, isn’t that AI will be too smart, but that it will be too dumb, and we won’t recognise its limits.
They point to examples of this happening already.
Driverless cars are crashing into pedestrians in San Francisco. Journalists are being replaced by faulty bots. Facial recognition is leading to innocent people being locked up.
“Today’s AI is a very powerful trick,” Professor Brooks says.
“It’s not approaching, or it’s not necessarily even on the way, to a human-level intelligence.”
And there’s a fourth group (these groups overlap in complicated ways), who say that all of the above misses the point.
We should worry less about what AI will become, and talk more about what we want it to be.
Rumman Chowdhury, an expert in the field of responsible AI, says talking about the future as something that will happen to us, rather than something we shape, is a cop out by tech companies.
AI isn’t a sentient being, but just another tech product.
“In anthropomorphising and acting like artificial intelligence is an actor that makes independent decisions, people in tech absolve themselves of the sins of the technology they built,” she says.
“In their story, they’re a good guy trying to make this thing to help people.
“They’ve made us believe this AI is alive and making independent decisions and therefore they’re not at fault.“
4. Worry less about the AI and more about the AI companies
Most of the popular discussion about AI and the future focuses on what happens when AI gets too powerful.
This is sometimes called the “alignment problem”. It’s the idea that, in the end, sentient AI will not do what we want.
Within the AI community, the term “p(doom)” is used to describe the probability of this happening: a percentage chance that AI will wipe out humanity. “My p(doom) is 20 per cent,” and so on.
But the most chilling vision of the future I heard wasn’t one where robots stage an uprising.
Instead, it was much more mundane and plausible. A boring dystopia.
It’s a future where AI pervades every aspect of our lives, from driving a car to writing an email, and a handful of companies that control this technology get very rich and powerful.
Maybe in this future AI is super-intelligent, or maybe not. But it’s at least good enough to displace workers in many industries.
New jobs are created, but they’re not as good, because most people aren’t as economically useful as they were. The skills these jobs require — skills that were once exclusively human — can be done by AI.
High-paying creative jobs become low-paying ones that mostly involve interacting with AI.
This is the fear that partly motivated US actors and screenwriters to go on strike this year. It’s why some authors are suing AI companies.
It’s a vision of the future where big tech’s disruptions of certain industries over the past 20 years — Google and Facebook sucking advertising revenue out of media and publishing, for instance — are just the preamble to a much larger, global transfer of wealth.
“The thing I worry about is there are fewer and fewer people holding more and more wealth and power and control,” Dr Chowdhury says.
“As these models become more expensive to build and make, fewer and fewer people actually hold the keys to what’s going to be driving essentially the economy of the entire world.”
Michael Wooldridge, a computer scientist at Oxford University and one of the world’s leading AI researchers, is also worried about this kind of future.
The future he envisions is less like The Terminator, and more like The Office.
Not only are most people paid less for the same work, but they’re micromanaged by AI productivity software.
In this “deeply depressing” scenario, humans are the automata.
“A nagging concern I have is that we end up with AI as our boss,” Professor Wooldridge says.
“Imagine in a very near future we’ve got AI monitoring every single keystroke that you type. It’s looking at every email that you send. It’s monitoring you continually throughout your working day.
“I think that future, unless something happens, feels like it’s almost inevitable.”
5. The history of AI is one of broken dreams and bad predictions
Sixty years ago, in the glory days of early AI research, some leading experts were convinced that truly intelligent, thinking machines were a decade or two away.
About 20 years later, in the early 1980s, the same thing happened: a few breakthroughs led to a flurry of excitement. This changes everything.
But as we know now, it didn’t change everything. The future that was imagined never happened.
The third AI boom started in the 2010s and has accelerated through to 2023.
It’s either still going, or tapering off slightly. In recent months, generative AI stocks have fallen in the US.
ChatGPT set the record for the fastest-growing user base ever, in early 2023. But it hasn’t maintained this momentum. Visits to the site fell from June through to August this year.
To explain what’s going on, some analysts have referenced Amara’s Law, which states that we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
They’ve also pointed to something called the Gartner Hype Cycle, which is a graphical representation of the excitement and disappointment often associated with new technologies.
The “innovation trigger” for the latest boom in AI probably happened around the early 2010s, with a sudden advance in machine learning (this is sometimes called the ImageNet Moment).
The “peak of inflated expectations” may have arrived in early 2023, with ChatGPT (a machine learning tool).
We may now be sliding into the “trough of disillusionment”, as some applications of the technology fail to meet these very high expectations.
That’s not to dismiss the potential of AI.
In fact, precisely because AI has so much potential, it’s hard to make accurate predictions about how it’ll be used.
Rodney Brooks quotes the great science fiction author Arthur C. Clarke, when talking about the public reaction to today’s AI: “Any sufficiently advanced technology is indistinguishable from magic.”
“If it’s sufficiently advanced that we don’t understand it, we don’t know what the limits on it are,” Professor Brooks says.
“We don’t have a good model in our heads to say, ‘Oh, it can do this, but it can’t do that’, because it’s doing something that surprised us.
“And too many people think, therefore, it can do just about everything.”
The AI we have is nothing like AI in the movies, but maybe some movies can help us prepare for this future.
Like Dorothy in The Wizard of Oz, beware of fake magicians.
There’s a lot of hype around AI — especially driverless taxis. They’re already here. Would you take a ride in one? Listen to the full story of San Francisco’s driverless cars experiment and subscribe to RN Science Friction.