
Michael Kearns: Game Theory and Machine Learning


Chapters

0:00 What is game theory
0:52 What is algorithmic game theory
1:52 Most beautiful idea in game theory

Whisper Transcript

00:00:00.000 | Speaking of markets, a lot of fascinating aspects of this world arise not from individual
00:00:07.400 | humans but from the interaction of human beings.
00:00:12.080 | You've done a lot of work in game theory.
00:00:14.320 | First can you say what is game theory and how does it help us model and study things?
00:00:19.680 | Yeah, game theory of course, let us give credit where it's due, you know, comes from the economists
00:00:25.360 | first and foremost, but as I've mentioned before, like, you know, computer scientists
00:00:30.680 | never hesitate to wander into other people's turf and so there is now this 20-year-old
00:00:36.320 | field called algorithmic game theory.
00:00:38.760 | But you know, game theory first and foremost is a mathematical framework for reasoning
00:00:45.480 | about collective outcomes in systems of interacting individuals.
00:00:52.520 | So you need at least two people to get started in game theory and many people are probably
00:00:58.680 | familiar with prisoner's dilemma as kind of a classic example of game theory and a classic
00:01:03.320 | example where everybody looking out for their own individual interests leads to a collective
00:01:10.040 | outcome that's kind of worse for everybody than what might be possible if they cooperated,
00:01:16.160 | for example.
00:01:17.440 | But cooperation is not an equilibrium in prisoner's dilemma.
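
As a concrete version of the dilemma described here, a minimal Python sketch follows, using standard textbook payoffs (the specific numbers are illustrative assumptions, not figures from the conversation); it checks that mutual defection is the only equilibrium even though mutual cooperation leaves both players better off.

# Prisoner's dilemma with standard illustrative payoffs (years in prison, lower is better).
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}
ACTIONS = ("cooperate", "defect")

def best_response(player, opponent_action):
    """The action minimizing this player's sentence, given the opponent's action."""
    def cost(action):
        profile = (action, opponent_action) if player == 0 else (opponent_action, action)
        return PAYOFFS[profile][player]
    return min(ACTIONS, key=cost)

for a in ACTIONS:
    for b in ACTIONS:
        # (a, b) is an equilibrium if each action is a best response to the other.
        stable = best_response(0, b) == a and best_response(1, a) == b
        print(f"{a:9s} {b:9s} payoffs={PAYOFFS[(a, b)]}" + ("  <- equilibrium" if stable else ""))
# Only (defect, defect) is stable, with sentences (2, 2), even though
# (cooperate, cooperate) would give both players the shorter sentence (1, 1).
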
00:01:22.000 | And so my work and the field of algorithmic game theory more generally in these areas
00:01:28.360 | kind of looks at settings in which the number of actors is potentially extraordinarily large
00:01:37.320 | and their incentives might be quite complicated and kind of hard to model directly, but you
00:01:43.400 | still want kind of algorithmic ways of kind of predicting what will happen or influencing
00:01:48.360 | what will happen in the design of platforms.
00:01:52.120 | So what to you is the most beautiful idea that you've encountered in game theory?
00:01:59.400 | There's a lot of them.
00:02:00.400 | I'm a big fan of the field.
00:02:03.000 | I mean, you know, I mean technical answers to that of course would include Nash's work
00:02:08.440 | just establishing that, you know, there's a competitive equilibrium under very, very
00:02:13.720 | general circumstances, which in many ways kind of put the field on a firm conceptual
00:02:20.240 | footing because if you don't have equilibrium, it's kind of hard to ever reason about what
00:02:24.920 | might happen since, you know, there's just no stability.
00:02:28.400 | So just the idea that stability can emerge when there's multiple-
00:02:31.920 | Or that, I mean, not that it will necessarily emerge, just that it's possible, right?
00:02:35.560 | It's possible.
00:02:36.560 | Like the existence of equilibrium doesn't mean that sort of natural iterative behavior
00:02:40.780 | will necessarily lead to it.
00:02:42.840 | In the real world, yes.
00:02:43.840 | Yeah.
00:02:44.840 | Maybe answering slightly less personally than you asked the question, I think within the
00:02:48.240 | field of algorithmic game theory, perhaps the single most important kind of technical
00:02:55.960 | contribution that's been made is the realization of the close connections between machine
00:03:01.840 | learning and game theory, and in particular between game theory and the branch of machine
00:03:06.040 | learning that's known as no regret learning.
00:03:09.020 | And this sort of provides a very general framework in which a bunch of players interacting in
00:03:16.400 | a game or a system, each one kind of doing something that's in their self-interest will
00:03:22.000 | actually kind of reach an equilibrium and actually reach an equilibrium in a pretty,
00:03:28.720 | you know, a rather small number of steps.
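
The connection described here can be made concrete with a small sketch: two multiplicative-weights ("Hedge") learners, each a standard no-regret algorithm, playing rock-paper-scissors against each other. The game, learning rate, and horizon are assumptions chosen for illustration; the known result is that the time-averaged play of no-regret learners approaches equilibrium, here the uniform mix.

import math

# Two Hedge (multiplicative-weights) learners in self-play on rock-paper-scissors.
PAYOFF = [[0, -1, 1],    # row player's payoff: rows = rock/paper/scissors,
          [1, 0, -1],    # columns = opponent's rock/paper/scissors
          [-1, 1, 0]]
N = 3

def hedge_self_play(rounds=5000, eta=0.05):
    # Start from deliberately lopsided weights so the dynamics are nontrivial.
    w_row, w_col = [1.0, 2.0, 3.0], [3.0, 1.0, 2.0]
    avg_row, avg_col = [0.0] * N, [0.0] * N
    for _ in range(rounds):
        p_row = [w / sum(w_row) for w in w_row]
        p_col = [w / sum(w_col) for w in w_col]
        avg_row = [s + p / rounds for s, p in zip(avg_row, p_row)]
        avg_col = [s + p / rounds for s, p in zip(avg_col, p_col)]
        # Expected payoff of each pure action against the opponent's current mix.
        u_row = [sum(PAYOFF[i][j] * p_col[j] for j in range(N)) for i in range(N)]
        u_col = [sum(-PAYOFF[i][j] * p_row[i] for i in range(N)) for j in range(N)]
        # Multiplicative-weights update: upweight actions that would have done well.
        w_row = [w * math.exp(eta * u) for w, u in zip(w_row, u_row)]
        w_col = [w * math.exp(eta * u) for w, u in zip(w_col, u_col)]
    return avg_row, avg_col

print(hedge_self_play())  # both time averages approach the equilibrium mix (1/3, 1/3, 1/3)
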
00:03:33.600 | So you kind of mentioned acting greedily can somehow end up pretty good for everybody.
00:03:42.320 | Or pretty bad.
00:03:43.540 | Or pretty bad.
00:03:44.540 | Yeah.
00:03:45.540 | It will end up stable.
00:03:46.540 | Yeah, right.
00:03:47.540 | And, you know, stability or equilibrium by itself is not necessarily either a good thing
00:03:54.220 | or a bad thing.
00:03:55.420 | So what's the connection between machine learning and the ideas of-
00:03:58.060 | Well, I mean, I think we've kind of talked about these ideas already in kind of a non-technical
00:04:03.180 | way, which is maybe the more interesting way of understanding them first, which is, you
00:04:08.300 | know, we have many systems, platforms, and apps these days that work really hard to use
00:04:16.180 | our data and the data of everybody else on the platform to selfishly optimize on behalf
00:04:22.980 | of each user.
00:04:24.260 | Okay?
00:04:25.260 | So, you know, let me give, I think, the cleanest example, which is just driving apps, navigation
00:04:30.980 | apps, like, you know, Google Maps and Waze, where, you know, miraculously compared to
00:04:36.340 | when I was growing up, at least, you know, the objective would be the same when you wanted
00:04:40.900 | to drive from point A to point B, spend the least time driving.
00:04:44.620 | Not necessarily minimize the distance, but minimize the time, right?
00:04:48.760 | And when I was growing up, like, the only resources you had to do that were like maps
00:04:52.820 | in the car, which literally just told you what roads were available.
00:04:57.660 | And then you might have like half-hourly traffic reports just about the major freeways, but
00:05:02.820 | not about side roads.
00:05:03.940 | So you were pretty much on your own.
00:05:06.300 | And now we've got these apps.
00:05:08.180 | You pull it out and you say, "I want to go from point A to point B."
00:05:11.020 | And in response kind of to what everybody else is doing, if you like, what all the other
00:05:15.620 | players in this game are doing right now, here's the, you know, the route that minimizes
00:05:21.420 | your driving time.
00:05:22.420 | So it is really kind of computing a selfish best response for each of us in response to
00:05:28.820 | what all of the rest of us are doing at any given moment.
00:05:32.500 | And so, you know, I think it's quite fair to think of these apps as driving or nudging
00:05:38.520 | us all towards the competitive or Nash equilibrium of that game.
00:05:44.820 | Now you might ask like, "Well, that sounds great.
00:05:47.020 | Why is that a bad thing?"
00:05:48.660 | Well, you know, it's known both in theory and with some limited studies from actual
00:05:57.540 | like traffic data that all of us being in this competitive equilibrium might cause our
00:06:04.300 | collective driving time to be higher, maybe significantly higher than it would be under
00:06:10.100 | other solutions.
00:06:12.020 | And then you have to talk about what those other solutions might be and what the algorithms
00:06:16.580 | to implement them are, which we do discuss in the kind of game theory chapter of the
00:06:19.980 | book.
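
The gap being pointed to, between equilibrium travel times and what a coordinated solution could achieve, is usually illustrated with toy congestion examples; the sketch below uses Pigou's classic two-road network, an illustration chosen here rather than an example taken from the book or the conversation.

# Pigou's two-road example: everyone drives from A to B.
# Road 1's travel time equals the fraction x of drivers using it (congestible);
# road 2 always takes time 1 regardless of traffic.
def average_travel_time(x):
    """Average travel time when a fraction x of drivers take the congestible road."""
    return x * x + (1.0 - x) * 1.0  # x drivers each spend x, the rest each spend 1

# Selfish equilibrium: road 1 never takes longer than road 2, so every driver
# takes it (x = 1) and the average travel time is 1.0.
print(average_travel_time(1.0))   # 1.0
# Coordinated solution: split traffic evenly and the average drops to 0.75,
# so equilibrium driving time is 4/3 of what a planner could achieve here.
print(average_travel_time(0.5))   # 0.75
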
00:06:20.980 | But similarly, you know, on social media platforms or on Amazon, you know, all these
00:06:28.540 | algorithms that are essentially trying to optimize on our behalf, they're driving us in
00:06:33.620 | a colloquial sense towards some kind of competitive equilibrium.
00:06:37.540 | And you know, one of the most important lessons of game theory is that just because we're
00:06:40.900 | at equilibrium doesn't mean that there's not a solution in which some or maybe even all
00:06:45.340 | of us might be better off.
00:06:47.980 | And then the connection to machine learning, of course, is that in all these platforms
00:06:51.220 | I've mentioned, the optimization that they're doing on our behalf is driven by machine learning.
00:06:56.660 | You know, like predicting where the traffic will be, predicting what products I'm going
00:07:00.340 | to like, predicting what would make me happy in my news feed.
00:07:03.020 | And so, you know, I think that's a really important lesson to learn.
00:07:06.020 | [end]