
Jeremy Howard on ABC Weekend Breakfast


Transcript

We must be clear-eyed and vigilant about the threats that emerging technologies can pose, don't have to, but can pose, to our democracy and our values. Well, that's US President Joe Biden there, speaking after announcing a voluntary agreement with seven leading tech companies, including Meta, Google, Microsoft, and OpenAI.

So AI and tech entrepreneur Jeremy Howard wrote about this very topic just a few months ago. He joins us now from Brisbane. Jeremy, thank you so much for taking the time to speak with us. You're welcome, Fazia. So let's start off with the very obvious here. These safeguards are voluntary.

They're signed by seven major tech companies. These tech companies really compete with each other. So what's the point of this agreement? Well, the basic idea here is that AI has come a long way in the last year or two. And if you've ever used something like ChatGPT, you'll have seen that you can now literally hold a conversation with an artificial intelligence model on almost any topic.

Sometimes it says ridiculous things, but most of the time you'll have a pretty decent conversation. The technology's come to a whole new level. This is going to be a great productivity booster. There's a lot of things that are much easier to do now for more people than there used to be.

But of course, some of those things could be bad things. And so people do worry about the misuse of this technology as well. We do understand the pros and cons of AI, but it's a voluntary agreement. It's a voluntary consensus. Is this just a PR job for the tech companies?

I worry it actually might be something more than that. My concern is this is the American government starting an agreement with seven American firms on a vastly powerful technology that America's the clear leader in. I find this an overly cozy regulatory relationship, to be honest. And it might be pretty bad news for Australia.

Australia relies on the ability to access what's called open source, which is to say, we don't have any of these models ourselves. We haven't built any of these in Australia. So we rely on being able to access the weights of these models when they're released on the internet. And under this voluntary agreement, they're saying, actually, they're not going to do that anymore.

And it's going to put Australia and other countries in a pretty challenging situation. It does seem, too, that these tech companies are able to get a jump there on actually shaping policy when it comes to regulating AI. Am I reading that right? Yes, exactly. I mean, we're seeing the head of OpenAI, Sam Altman, visiting world leaders, hanging out with Joe Biden, and we're seeing Eric Schmidt from Google going around Congress lobbying.

There's a huge amount of lobbying going on and a very, very cozy relationship developing between big tech and the American government. And we're also starting to see signs of that in Europe as well. I think this should be a big worry, particularly for countries like Australia. If we're going to get onto the frontier of this, then it's going to be harder and harder the more regulatory barriers Australia faces in accessing these big markets.

So what can Australia do then? Be involved in the lobbying, make sure that they're in the room as well? I think first and foremost, Australia needs to really get our game together when it comes to the technology. So all of these models are built on a single set of technologies called deep learning and neural networks.

That's an area which Australia unfortunately is not at the forefront of. There's basically no funding for this. If you think about groups like Google's DeepMind or Google Brain, you can see from their names that they're all about building and harnessing these neural networks. So Australia actually just needs to get back to the technology leadership here.

We need funding for this kind of technology. We need to be working hard to bring back to Australia the many brilliant expats who have been leaving the country in recent years because of the lack of technical leadership in this area. Once we've managed to return to a position of technical leadership, like we had in the early days of computing, for example, then we'll be much better placed to actually lobby, because we'll have Australian companies and Australian projects that we can lobby for.

Jeremy, just going back to this voluntary agreement that the major tech companies have signed with the White House. The four main takeaways that I'm getting from this agreement are security testing of AI, watermarks to flag something that is AI generated, being transparent about AI capabilities, and working against bias, discrimination and invasion of privacy. All well and good.

But I do wonder, if we take the speed at which AI is evolving, can it even be regulated? So that's a really great question, but I first want to mention a key fifth component that's been little noticed, which is that they're also committing to not sharing these models with other people, to keeping them secret.

My guess is actually that this kind of regulation will not only be pointless, but actually damaging to global safety and to global innovation. As you're kind of implying, this probably isn't possible to directly regulate. I think what we need instead is a much more democratic approach where everybody has access to this powerful technology, just like everybody has access to the vote and everybody has access to education, everybody has access to the internet.

It's not by restricting these things to an elite few that we see society thrive. It's actually through giving society access to these things that we see it thrive. And I think that's the safer approach, most likely. Jeremy, it's such a fascinating topic. Thank you so much for sharing your insights with us on this Sunday morning.

We're glad to have you on the show. That was Jeremy Howard there, tech entrepreneur. Thank you. Thank you.