
Is Australia lagging on AI?

Jan 25, 2024

All over the world, humanity is rushing to regulate the development of artificial intelligence. Now, the Australian government has announced its first steps toward controlling the development of AI. But is it already too late?

Today, Professor Toby Walsh, author of ‘Faking It’, on whether Australia is going far enough to regulate AI, and the consequences of getting it wrong.


[Theme Music Starts]

ANGE:

From Schwartz Media, I’m Ange McCormack. This is 7am.

All over the world, humanity is rushing to regulate the development of artificial intelligence.

Now, the Australian government has announced its first steps toward controlling the development of AI. But is it already too late? And do we really understand what the risks are?

The technology is advancing at such a fast pace that some examples are becoming indistinguishable from real life. Just like this: a speech-synthesis clone of our host, Ange McCormack.

Today, Professor Toby Walsh, author of Faking It, on whether Australia is going far enough, and the consequences of getting it wrong.

It’s Thursday, January 25.

And, hi, this is me, the real, human Ange McCormack now. Don’t worry, I’m back with you for the rest of this episode.

[Theme Music Ends]

ANGE:

Toby, at this point most of us have used AI in some way, but it can still be kind of shocking to learn how clever and powerful it is. I think some of our listeners, right now, will be a little surprised at how accurate the AI we used in the introduction to this episode really was. But I imagine something like voice cloning is just scratching the surface of what this technology can accomplish.

As an expert in artificial intelligence yourself, what are you personally impressed with when it comes to what AI can already do?

TOBY:

It is just scratching the surface in terms of what AI can already do. One of the greatest promises is in healthcare, in medicine. There was a recent study where they took the UK gene bank, the codes, the letters, that make up terabytes of data, far too much for humans to look at. But AI can plumb that and look into it, and they've already started to make some discoveries.

They can now tell, just from your genotype, for example, how tall you're going to be, to within an inch. And they could do that at birth: whether you're going to be a good basketball player or not.

Now, of course, that's not particularly, you know, medically useful, but there are lots of other actually quite useful medical things that they can tell. For example, they can tell whether you're likely to have bowel cancer, the third most common and deadly cancer. But unfortunately, by the very nature of bowel cancer, by the time people notice they've got it, it's often sadly too late. Well, now we can tell at birth whether you're one of those people.

So things like that, it's going to completely transform the way we go about medicine.
And I think one thing is clear. You know, it's not the first technology that's touched our lives; many other technologies have touched our lives in other ways. But I think what's different this time is the speed at which it is happening.

The internet has been very transformative, but it took a decade or so for that to happen. We had to get people online, we had to get people connected. Smartphones, again, took the best part of a decade for everyone to go out and buy a smartphone and start using all those apps.

I don't think it's a coincidence that ChatGPT was the fastest-growing app ever. A million people had discovered it by the end of the first week, 100 million by the end of the second month. And today, just over a year later, it's in the hands of over a billion people. We've never had technologies that could so quickly reach into people's lives and change the way they go about their work.

ANGE:

Yeah, it is staggering how quickly it has entered our lives and not just entered them in a novel way, but already in quite a meaningful way. So there is a sense of urgency here to make sure that AI doesn't get out of hand, so to speak.

I want to ask you what getting out of hand might look like. What are the most dire risks or threats that are in front of us if AI doesn't have strong regulation around it?

TOBY:

Well, AI is a very pervasive technology. It's going to be in many different parts of our lives: in how we work, how we play, you know, in politics, in the way we go about war. It's hard, in fact, to think of a part of our lives it's not going to touch in some sense. Some people are very concerned about, you know, the longer-term risks that AI poses, the possibly existential risk it poses to humanity itself.

Audio excerpt – News Reporter:

“Elon Musk has joined artificial intelligence experts and industry executives worried about AI's impact on society. He is among signatories to an open letter calling for a six-month pause in developing systems stronger than OpenAI's GPT-4. The letter was issued by the non-profit Future of Life Institute and signed by more than 1,000 people.”

TOBY:

Technologies have always transformed the way that we work. They've taken jobs away and created new jobs, and AI's going to be no different. What we don't know, and I don't think anyone has any real idea of, is what the net effect is going to be: whether it takes away more jobs than it creates.

Audio excerpt – News Reporter:

“Recent analysis from investment firm Goldman Sachs looked at the global impact and found AI could replace 300 million full time jobs, including positions in the legal and engineering fields.”

TOBY:

There are some jobs that we won't be doing in 20 or 30 years' time. I'm not sure that many people are going to be truck drivers in 20 or 30 years' time. We're going to have autonomous trucks, and they're going to drive much more efficiently and much more safely than human drivers. People in graphic design are already being impacted by generative AI, by some of these AI tools coming along.

We're already starting to see its impact upon politics.

Actually, this year is a very critical year. Over 4 billion people around the world go to the polls. And we've already seen, in the elections at the end of last year in Argentina and Slovakia, the use of deepfakes: fake imagery, fake audio and fake video generated by AI.

Audio excerpt – News Reporter:

“AI warnings continue to grow around the use of deepfakes, AI-altered media depicting fake representations of others, usually celebrities, and some are sounding the alarm that 2024 could be the first, quote, deepfake election. Quite scary.”

TOBY:

Just this week, for example, there were fake robocalls made that supposedly had President Biden trying to persuade people not to turn out and write his name in on the New Hampshire ballot.

Audio excerpt – Fake Biden:

“Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump. Again.”

TOBY:

So it, you know, potentially is going to impose very real costs on our democratic processes.

ANGE:

Toby, can you explain how the Australian government is responding to these questions? How far does the government's response go to addressing some of those risks that we've just been talking about?

TOBY:

So last week we saw the Minister for Industry and Science step up at Parliament and release the government's response on how it's going to deal with the safe and responsible use of AI.

Audio excerpt – Ed Husic:

“We want to get the benefits of AI, uh, while also, uh, shoring up and fencing off the risks as much as we can, uh, and design modern laws for modern technology.”

TOBY:

So there were three parts to the government's response.

Audio excerpt – Ed Husic:

“First, working with industry to develop a voluntary AI safety standard. Uh, in the near term, we'll also look to introduce voluntary labelling and watermarking of AI-generated material, which I'm happy to discuss, uh, through the course of our discussions today. And we'll also set up an expert advisory group to help guide the development of mandatory guardrails.”

TOBY:

One feels this is a bit late in the day; many other countries are much further down the road on this. And a bit light. I mean, just setting up a committee? It's going to take a while for that committee to come back with its recommendations.

This is a very fast-moving field, but we're not seeing the sort of scale of government investment that reflects the opportunity there. And it's hard not to draw a comparison.

Last year, the UK government put out a similar report about its regulatory response to the threats that AI poses. And at the same time, they announced that they were going to invest another billion pounds on top of the more than a billion pounds that they've already invested, you know, into artificial intelligence.

And I can only compare that to the Australian government's investment in AI. Over the last 5 or 10 years, they've invested less than $200 million, which is a twentieth of what the UK government is investing, despite the fact that the UK has only got three times the population and three times the GDP of Australia. So I do wonder if the government isn't moving just a little slowly on this.

ANGE:

After the break - how other countries are beating Australia in the race to regulate AI.

[Advertisement]

ANGE:

So, Toby, Australia is in the early stages of developing its AI regulation. You've said it might be coming a bit late in the day and might be a bit light on. What are other countries doing in this space? How do we compare to how they're approaching this problem?

TOBY:

I think it's fair to say we're pretty much at the back of the pack here compared to Europe, compared even to China. China has actually been much more proactive in regulation compared to the United States.

Europe has been well in front. The EU AI Act, which is soon going to come into force, is something European regulators have been working on since 2020. And even though it is about to be signed into law, they're just putting the finishing touches, the final tweaking of words, as we speak, and it won't be law for another year.

So it takes a while, you know, the best part of five years or so, to get these things on the statute books and being applied. So we are quite well behind. The US was lagging a bit, but they leapfrogged to the front with a presidential executive order.

Audio excerpt – Joe Biden:

“To ban targeted advertising to children. To limit the personal data that these companies collect on us.”

TOBY:

This executive order, one of the largest executive orders ever written, actually, came out a few months ago.

Audio excerpt – Joe Biden:

“There's no greater change that I can think of in my life that AI presents as a potential: exploring the universe, fighting climate change, ending cancer as we know it, and so much more.”

TOBY:

And the countries now setting these laws are setting important precedents. So the EU AI Act is probably going to set an important precedent for what sort of AI regulation we end up with here.

ANGE:

And countries are all coming at it from different angles, and I guess with different levels of urgency. The thing is, AI isn't exactly bound by borders, being a kind of global technology used everywhere.

I'm wondering how much regulation in a country like Australia, or even somewhere as big and powerful as the US, will help if, in the meantime, it's being used kind of without checks and balances everywhere else in the world.

TOBY:

Yeah. I mean, there are various angles to this question. One is that, you know, it's worth pointing out that nations can effectively regulate these tech giants, even though they're multinational and very powerful, you know, trillion-dollar corporations that are as wealthy as small countries.

And we have indeed, in Australia, been remarkably effective. As an example, after the terrible tragedy that happened in Christchurch, we enacted laws to actually hold the social media platforms responsible for content and to ensure that it's taken down promptly.

Those were groundbreaking laws, the first of their type anywhere on the planet. And analysis has said that although they're not perfect, there's still harmful, hateful content that appears on social media platforms, it is now taken down quicker, and a number of other countries have since adopted similar laws. It's worth saying that we can pass laws that have teeth and have an impact on the tech companies.

And then the other thing I think worth pointing out is that you're going to expect a variety of different laws and approaches and standards, because, you know, all these countries are different.

In the US, you're going to have, you know, a different approach to what you're going to have in China, and a different approach again to what you have in Europe.

There's a greater emphasis, perhaps, in the US on the freedoms of the individual. And if you move towards the east, you find, you know, a greater respect for the collective good of society as a whole. And that's going to require different types of regulation.

And I don't really hold out much hope for international regulation. I mean, the United Nations is a fine institution, but it does struggle to do anything effectively.

And so the lowest-common-denominator agreement that we're going to get amongst, you know, these different trading blocs is going to be hardly worth the paper it's written on. So actually I think regulation is perhaps best effected at the level of the nation state.

ANGE:

And, Toby, we've talked about the risks and threats of AI and what happens if we get this task of regulation wrong. And it's easy to get alarmed by all of that. But what if we get it right? How could a well-regulated AI industry transform our lives for the better?

TOBY:

Oh, I mean, that's why I get up in the morning. That's why I've spent the whole of my life, the last 40 years, working on AI, because I think the positives are going to easily outweigh any of the negatives. The government report that was just released outlined some of those benefits. They predicted that by 2030, the end of this decade, AI might have added $600 billion annually to the Australian economy. That's a 40% increase in our GDP.

Now, I don't see almost any other technology coming along that is going to help us grow our wealth that much, and we are living with significant financial headwinds at the moment. The way it's going to transform education, with personal tutors that allow us to deliver really wonderful education to people; the way it's going to transform so many aspects of our business.

So I'm very optimistic that, even if it's going to be somewhat difficult over the next 10 or 20 years as we navigate some of these problems, it's by embracing technologies like these that we are going to actually come through it, live on the planet perhaps in a more sustainable way, and live healthier, longer, happier, wealthier lives.

ANGE:

Toby, thanks so much for your time today.

TOBY:

Been a pleasure.

[Advertisement]

[Theme Music Starts]

ANGE:

Also in the news today …

Former News Limited chief executive, Kim Williams, has been named the new chair of the ABC.

The announcement came the day after the outgoing chair, Ita Buttrose, dismissed a statement from journalists that they had lost confidence in the ABC’s leaders to defend them and their editorial independence from external pressure.

When asked about the ABC’s Middle East reportage, Mr Williams stated his commitment to independence.

And …

Prime Minister Anthony Albanese is expected to amend planned stage 3 tax cuts, legislated by the Morrison government in 2019.

Critics of the tax cuts have argued they disproportionately benefit the highest income earners - and it’s predicted the government will put forward amendments to shift the benefit more toward those on lower and middle incomes.

I’m Ange McCormack. This is 7am. We’ll be back again tomorrow with an episode about the culture war around January 26.

[Theme Music Ends]


Guest: Author of Faking It, Professor Toby Walsh


7am is a daily show from The Monthly and The Saturday Paper.

It’s produced by Kara Jensen-Mackinnon, Cheyne Anderson and Zoltan Fesco.

Our senior producer is Chris Dengate. Our technical producer is Atticus Bastow.

Our editor is Scott Mitchell. Sarah McVeigh is our head of audio. Erik Jensen is our editor-in-chief.

Mixing by Andy Elston, Travis Evans and Atticus Bastow.

Our theme music is by Ned Beckley and Josh Hogan of Envelope Audio.

