
For the last few months, there’s one question that I’ve been asked countless times.

It comes up without fail during idle moments: coffee breaks at work, or standing around at the dog park.

What do you think about AI?

Usually, the tone is quietly sceptical.

For me, the way it’s asked conveys a weary distrust of tech hype, but also a hint of concern. People are asking: Should I be paying attention to this?


Sure, at the start of 2023, many of us were amazed by new generative artificial intelligence (AI) tools like ChatGPT.

But, as the months have passed, these tools have lost their novelty.

The tech industry makes big claims about how AI is going to change everything.

But this is an industry that has made big claims before — and been proved wrong. It’s happened with virtual reality, cryptocurrency, NFTs and the metaverse. And that’s just in the past three years.

So, what do I think of AI?

For the past few months I’ve been working on a podcast series about AI for the ABC, looking broadly at this topic.

It’s been a bit like trekking through a blizzard of press releases and product announcements.

Everything solid dissolves into a white-out of jargon and dollar signs.

There’s so much excitement, and so much money invested, that it can be hard to get answers to the big underlying questions.

And, of course, we’re talking about the future! That’s one topic on which no-one ever agrees, anyway.

But here’s what I’ve learned from speaking to some of the top AI experts.

1. The AI we have is nothing like AI in the movies

Forget Terminator. Forget 2001: A Space Odyssey.

Hollywood’s long-ago visions of the future are getting in the way of understanding the AI we have today.

If you picture a skeletal robot with red eyes every time someone says “AI”, you’ll have totally the wrong idea about what AI can do, what it can’t, and what risks we should reasonably worry about.

The Terminator movie was released during a 1980s AI boom that proved to be over-hyped.

Most of the AI tools we use, from ChatGPT to Google Translate, are built on machine learning (ML).

If AI is the broad concept of machines being able to carry out tasks in a way that we would consider “smart”, ML is one way of achieving this.

The general idea is that, instead of telling a machine how to do a task, you give it lots of examples of wrong and right ways of doing the task, and let it learn for itself.

So for driverless cars, you give an ML system lots of video and other data of cars being driven correctly, and it learns to do the same.

For translation, you give an ML tool the same sentences in different languages, and it figures out its own method of translating between them.
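
To make that concrete, here’s a minimal sketch of “learning from examples” in Python, using the scikit-learn library. It’s a toy spam filter of my own devising, not something from the podcast: the four messages and their labels are invented for illustration. The point is what’s missing from the code: nowhere do we write a rule saying “free prize” means spam.

    # A toy illustration of machine learning: label some examples,
    # let the model infer its own rule, then apply that rule to
    # text it has never seen.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # "Wrong and right" examples: 1 = spam, 0 = not spam
    messages = [
        "win a free prize now",      # spam
        "claim your free reward",    # spam
        "meeting moved to 3pm",      # not spam
        "see you at the dog park",   # not spam
    ]
    labels = [1, 1, 0, 0]

    # Turn each message into word counts the model can work with
    vectorizer = CountVectorizer()
    features = vectorizer.fit_transform(messages)

    # "Learning": the model works out which words signal spam
    model = MultinomialNB()
    model.fit(features, labels)

    # The inferred rule now applies to an unseen message
    test = vectorizer.transform(["free prize waiting for you"])
    print(model.predict(test))  # [1] -- flagged as spam

Swap the emails for dashcam footage or paired sentences, and you have, in spirit, the driverless car and translation examples above.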

Why does this distinction between telling and learning matter?

Because an ML tool that can navigate a roundabout or help you order coffee in French isn’t plotting to take over the world.

The fact it can do these narrow tasks is very impressive, but that’s all it’s doing.

It doesn’t even “know” the world exists, says Rodney Brooks, a world-leading Australian roboticist.

“We confuse what it does with real knowledge,” he says.

2. AI is probably dumber than you realise

Rodney Brooks has one of the most impressive resumes in AI. Born, raised and educated in Adelaide, during the 1990s he ran the largest computer science department in the world, at MIT. He’s even credited with inventing the robotic vacuum cleaner.

“Because I’ve built more robots than any other human in the world, I can’t quite be ignored,” he told me when I called him at his home in San Francisco, one evening.

Professor Brooks, now emeritus at MIT, says the abilities of today’s AI, though amazing, are wildly over-estimated.

He makes a distinction between “performance” and “competence”.

Performance is what the AI actually does — translate a sentence, for example. Competence is its underlying knowledge of the world.


Australian roboticist Rodney Brooks has developed robots that labour in factories, defuse bombs in war zones, and handle uranium in nuclear power plants. (Supplied: MIT)

With humans, someone who performs well is also generally competent.

Say you walk up to a stranger and ask them for directions. If they answer with confidence, we figure we can also ask them other things about the city: Where’s the train station? How do you pay for a ticket?

But that doesn’t apply to AI. An AI that can give directions doesn’t necessarily know anything else.

“We see ChatGPT do things … and people say ‘It’s really amazing’. And then they generalise and imagine it can do all kinds of things there’s no evidence it can do,” Professor Brooks says.

“And then we see the hype cycle we’ve been in over the last year.”

Another way of putting this is we have a tendency to anthropomorphise AI — to see ourselves in the tools we’ve trained to mimic us.

As a result, we make the wrong assumptions about the scale and type of intelligence beneath the performance.

“I think it’s difficult for people, even within AI, to figure out what is deep and what is a technique,” Professor Brooks says.

Now, many people in AI say it’s not so clear cut.

Rodney Brooks and others may be completely wrong.

Maybe future, more advanced versions of ChatGPT will have an underlying model of the world. Performance will equate to competence. AI will develop a general intelligence, similar to humans.

Maybe. But that’s a big unknown. 

For the moment, AI systems are generally very narrow in what they can do.

3. Among experts, there’s furious disagreement about the future

From the buzz out of Silicon Valley, you could be forgiven for thinking the course of the future is pretty much decided.

Sam Altman, the boss of OpenAI, the company that built ChatGPT, has been telling everyone that AI smarter than any human is right around the corner. He calls this dream Artificial General Intelligence, or AGI.

Perhaps as a result of this, minor advances are often communicated to the public as though they’re proof that AI is becoming super-intelligent. The future is coming, get out of the way.

ChatGPT can pass a law exam? This changes everything.

Google has a new chatbot? This changes everything.

Beyond this hype, there are plenty of varying, equally valid expert perspectives on what today’s AI is on track to achieve.

The machine learning optimists, people like Sam Altman, are just one particularly vocal group.


OpenAI CEO Sam Altman is mostly optimistic about AI, although he says it could be misused. (AP Photo: Patrick Semansky)

They say that not only will we achieve AGI, but it will be used for good, ushering in a new age of plenty.

“We are working to build tools that one day can help us make new discoveries and address some of humanity’s biggest challenges, like climate change and curing cancer,” Mr Altman told US lawmakers in May.

Then there are the doomers. They broadly say that, yes, AI will be really smart, but it won’t be addressing climate change or curing cancer.

Some believe that AI will become sentient and aggressively pursue its own goals.

Other doomers fear powerful AI tools will fall into the wrong hands and be misused to generate misinformation, hack elections, and generally spread murder and mayhem.

