
Escaping the Local Optimum of Low Expectation


Chapters

0:00 Overview - The Voice poem
6:46 Artificial intelligence
13:44 Open problems in AI
14:10 Problem 1: Learning to understand
17:15 Problem 2: Learning to act
19:28 Problem 3: Reasoning
20:44 Problem 4: Connection between humans & AI systems
23:57 Advice about life as an optimization problem
24:10 Advice 1: Listen to your inner voice - ignore the gradient
25:12 Advice 2: Carve your own path
26:28 Advice 3: Measure passion not progress
28:10 Advice 4: Work hard
29:05 Advice 5: Forever oscillate between gratitude and dissatisfaction
31:10 Q&A: Meaning of life
33:11 Q&A: Simulation hypothesis
36:15 Q&A: How do you define greatness?

Whisper Transcript

00:00:00.000 | It's wonderful to be here, wonderful to see so many faces
00:00:04.080 | that I've come to love over the years.
00:00:08.880 | My advisor, my family's here, my mom, brother.
00:00:13.880 | You know, I did ask security to make sure my dad doesn't,
00:00:20.200 | is not allowed in, but he somehow found his way in,
00:00:26.520 | so good job.
00:00:28.640 | (audience laughing)
00:00:30.040 | The topic of today's talk reminds me
00:00:33.640 | of something my dad once told me.
00:00:35.440 | I wrote it down.
00:00:37.340 | Few are those who see with their own eyes
00:00:40.920 | and feel with their own hearts.
00:00:42.600 | No, wait, that actually was Albert Einstein,
00:00:46.840 | different Jew, similar haircut for those of you.
00:00:51.240 | (audience laughing)
00:00:52.440 | Similar.
00:00:53.280 | You know, there's a saying, there's an old saying
00:00:58.040 | that goes, "Give a man a fish and you feed him for a day,
00:01:02.120 | "teach a man to fish and you feed him for a lifetime."
00:01:05.040 | A little known fact, it actually goes on to say,
00:01:09.800 | "So that he may never discover how much he loves steak."
00:01:13.960 | Or vegetarian lasagna for those of you
00:01:16.120 | who are vegetarian in the audience.
00:01:18.680 | And the key idea there is that society tries
00:01:24.280 | to impose lessons, to teach, to drive the human being,
00:01:29.280 | each of us, but it's you discovering your own passion
00:01:33.240 | that is the key, and that's what
00:01:34.560 | I'd like to talk about today.
00:01:36.000 | And there'll be a lot of poems throughout.
00:01:39.000 | And the central poem by Shel Silverstein called "The Voice"
00:01:43.560 | is one I think that will resonate throughout the talk.
00:01:48.080 | There's a voice that's in the air.
00:02:15.760 | And that's the poem. We're here together over two small topics,
00:02:20.760 | life and artificial intelligence.
00:02:23.840 | Now, from an optimization perspective,
00:02:29.000 | and one of my co-advisors has always told me
00:02:32.880 | when you show a plot, you have to describe the X axis
00:02:36.040 | and the Y axis as a good engineer.
00:02:38.800 | There you go, that's lesson number one.
00:02:41.040 | The X axis is competence, the Y axis is confidence.
00:02:45.520 | And there's something called the Dunning-Kruger effect,
00:02:51.240 | which is captured by this plot.
00:02:53.320 | And that is at the beginning of your journey of competence,
00:02:57.080 | when you're not very good at something,
00:02:58.560 | when you're first taking the first steps
00:03:01.000 | of learning something, as some of you here are
00:03:03.440 | in the engineering fields, you're overly confident.
00:03:06.720 | It's the peak of confidence,
00:03:08.480 | and you're at the lowest stage of actually
00:03:11.440 | of your abilities, of your expertise.
00:03:14.280 | And it's funny that I'm speaking here before you today
00:03:19.280 | in a place of a complete sort of self-doubt and despair
00:03:27.640 | and not knowing what I'm doing at all.
00:03:30.680 | And I feel like I have zero expertise to impart on you.
00:03:33.960 | And so in that sense, it's a funny position
00:03:38.120 | to be speaking with, especially some of the lessons,
00:03:41.040 | some of the advice I'll try to give.
00:03:42.920 | So take that with a grain of salt.
00:03:44.720 | And some of you sitting in the audience today
00:03:48.600 | may be at the very peak, especially if you're
00:03:50.720 | at the beginning of the college journey, university journey.
00:03:53.880 | And I'd say to me, the biggest positive,
00:03:57.840 | the biggest impact of college and university education
00:04:01.580 | is the dismantling of the ego that's involved
00:04:07.440 | in going from that peak overconfidence
00:04:10.680 | to the valley of despair that I'm currently in.
00:04:13.220 | Oh, and I should mention that this is also the time for me
00:04:17.880 | and perhaps for you where folks like Dostoevsky
00:04:21.800 | start making a lot of sense,
00:04:23.760 | talking about suffering and pain
00:04:27.120 | and how the really great men and women must,
00:04:31.400 | I think as he says, have great sadness on earth.
00:04:35.160 | This resonates with everybody
00:04:36.880 | in their undergraduate years in engineering.
00:04:39.080 | Now, the real thing I'd like to talk about
00:04:42.480 | is the broader optimization problem
00:04:44.680 | formed by the Dunning-Kruger effect,
00:04:48.480 | which is after the peak of confidence
00:04:52.120 | and the valley of despair, there's a gradient
00:04:55.320 | provided to you by your advisors, by your parents,
00:04:58.720 | by your friends, your loved ones, society in general.
00:05:03.040 | The gradient over which you're optimized
00:05:05.880 | to achieve some definition of success.
00:05:10.280 | This is what I call the local optimum.
00:05:12.360 | What everybody else tells you you're supposed to do.
00:05:15.020 | What everybody else, on a daily scale,
00:05:18.280 | on a weekly scale, monthly, yearly,
00:05:20.720 | and for the rest of your life,
00:05:22.120 | tells you the definition of success is.
00:05:24.620 | That's the local optimum.
00:05:26.240 | What I'd like to argue is some ideas
00:05:29.400 | of how to break out of that convention,
00:05:32.000 | of how to listen just enough to hear the lessons
00:05:37.000 | in society, advisors, friends, and parents,
00:05:40.720 | but for the rest of it, ignore their voices
00:05:43.480 | and only listen to your own voice.
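The optimization metaphor here, gradient descent settling into a local optimum unless you occasionally ignore the gradient, can be sketched in a few lines. The landscape, learning rate, and restart scheme below are purely illustrative assumptions, not anything from the talk:

```python
import random

def f(x):
    # Toy landscape (invented for illustration): a shallow local optimum
    # near x ~ +0.96 and a deeper, global one near x ~ -1.04.
    return (x * x - 1) ** 2 + 0.3 * x

def grad(x):
    # Derivative of f.
    return 4 * x * (x * x - 1) + 0.3

def descend(x, lr=0.01, steps=2000):
    # Plain gradient descent: obediently follows the slope it is given.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Starting where convention puts you, descent settles into the local optimum.
local = descend(1.5)

# Occasionally ignoring the gradient (random restarts) can find the global one.
random.seed(0)
best = min((descend(random.uniform(-2, 2)) for _ in range(20)), key=f)

print(round(local, 2), round(best, 2))
```

Following the provided gradient alone never leaves the basin you start in; the restarts play the role of the inner voice that jumps elsewhere.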
00:05:45.260 | And I'll tell you through my own story here.
00:05:50.420 | So I was introduced as a research scientist at MIT.
00:05:53.180 | And very recently, I decided to step down from MIT
00:05:57.560 | to do my own startup.
00:05:59.640 | I'm still affiliated there,
00:06:00.880 | but sort of give up the salary, give up everything,
00:06:04.640 | give up what I'm supposed to be,
00:06:06.840 | the definition under academic colleagues of what success is,
00:06:11.360 | of what the pursuit of the academic life is,
00:06:15.500 | because I'm listening to the voice inside.
00:06:17.600 | And so I'm speaking to you
00:06:20.700 | at the very beginning of this journey,
00:06:22.880 | again, full of self-doubt.
00:06:24.940 | And so take with a grain of salt,
00:06:26.680 | but perhaps it's interesting to speak from this position,
00:06:30.240 | 'cause I would argue it's the most beautiful position
00:06:33.320 | to be in in life.
00:06:34.640 | The opportunity, the freedom in the struggles
00:06:39.280 | that I'm undergoing now is really a gift
00:06:43.100 | that comes at the end of this journey of college.
00:06:46.060 | Now, who am I?
00:06:49.400 | And what is the dream that I mentioned there at the end?
00:06:53.160 | The global optimum.
00:06:54.500 | For me, that's understanding the human mind
00:06:58.080 | and engineering artificial intelligence systems.
00:07:01.520 | Visualized on the left here is just 3% of the neurons
00:07:07.120 | in the human brain.
00:07:08.160 | It's a mysterious, beautiful thing.
00:07:10.020 | It's easy to forget how little we know about this mystery
00:07:14.720 | that's just between our two ears.
00:07:16.760 | And engineering machines that can reason,
00:07:20.160 | that can think, that can perceive the world
00:07:22.680 | is one of the ways we can understand
00:07:26.960 | this mysterious, beautiful thing that brings to life
00:07:30.680 | everything around us.
00:07:31.820 | And the dream of creating intelligent systems,
00:07:36.800 | companions, ones that you can have a deep connection with.
00:07:40.040 | That's what drives me.
00:07:41.340 | That's my startup work.
00:07:43.040 | That's what my entrepreneurial work
00:07:45.200 | and my research work are focused on.
00:07:47.320 | Most of the work at MIT and before that
00:07:49.720 | has been on robotics and autonomous vehicles.
00:07:52.260 | But now the dream is to create a system that you can love
00:07:57.020 | and it can love you back.
00:07:58.300 | A brief history of artificial intelligence
00:08:02.540 | to give you a sense, to give you a quick review
00:08:05.260 | if this is a totally new field.
00:08:07.140 | Again, if you're an undergraduate,
00:08:09.460 | perhaps this is a field that you want to,
00:08:11.900 | that you want to take on as your journey.
00:08:17.180 | So it started on the theoretical end with Alan Turing
00:08:21.020 | and many of the ideas from philosophy to mathematics
00:08:24.140 | that he presented and from whom the field was born.
00:08:27.680 | And on the engineering side, Frank Rosenblatt,
00:08:31.620 | in building the Perceptron, the first such machine.
00:08:34.220 | So engineering machines that can do some aspect of learning,
00:08:37.420 | some aspect of search that we associate
00:08:39.580 | with artificial intelligence.
00:08:41.020 | And then there have been accomplishments throughout,
00:08:44.220 | none greater, at least to me,
00:08:47.400 | than, at least for now, in the space of games.
00:08:49.920 | There's been two branches of artificial intelligence
00:08:52.900 | that have dominated the field.
00:08:54.740 | The early days have been what you can think of as search,
00:08:58.380 | as brute-force search.
00:08:59.740 | It's not quite as captivating to our imagination.
00:09:03.740 | It doesn't quite feel like intelligence
00:09:05.780 | because it's brute force searching through possible answers
00:09:08.940 | until you find one that's optimal.
00:09:11.460 | It's converting every single problem to a search problem
00:09:14.220 | and then bringing computational power to it
00:09:16.840 | to try to solve it.
00:09:18.340 | But nevertheless, the peak of that,
00:09:21.040 | especially for those who play chess,
00:09:22.440 | especially for those who might be a Russian,
00:09:24.040 | is when IBM Deep Blue defeated Garry Kasparov in 1997.
00:09:29.040 | This is a seminal moment in artificial intelligence
00:09:31.940 | where the game that was associated with thought,
00:09:34.760 | with intelligence, with reason, was overcome,
00:09:38.320 | when the greatest human champion
00:09:42.440 | was defeated by a machine.
00:09:44.240 | And the seminal moment on the second branch
00:09:47.160 | of artificial intelligence, which is learning systems,
00:09:49.960 | systems that learn from scratch,
00:09:51.460 | knowing nothing, with zero human assistance,
00:09:55.640 | was able to defeat the greatest player in the world.
00:10:01.560 | Little side note, the first moment did have human assistance,
00:10:06.200 | in the AlphaGo system from Google DeepMind.
00:10:09.520 | And then the follow on a few months later,
00:10:11.900 | the system called AlphaZero was able to learn from scratch
00:10:15.540 | by playing itself.
00:10:16.800 | This is, to me, the greatest accomplishment
00:10:19.160 | of artificial intelligence.
00:10:20.520 | And I'll mention when I discuss it
00:10:22.320 | about open problems in the field.
00:10:24.320 | And then in a real world application,
00:10:27.140 | like I said, I worked a lot in autonomous vehicles.
00:10:29.200 | This is one of the most exciting applications
00:10:31.240 | with autonomous and semi-autonomous vehicles.
00:10:33.600 | There's been deployments, lessons, explorations,
00:10:38.340 | a lot of different debates.
00:10:39.840 | This is the most exciting space of artificial intelligence.
00:10:42.400 | If you wanna have an impact as an engineer,
00:10:44.720 | autonomous vehicles is the space where you will do so
00:10:47.160 | in the 2020s.
00:10:50.480 | And a quick whirlwind overview of key ideas
00:10:54.920 | in artificial intelligence that were key breakthroughs.
00:10:58.040 | So neural networks and Perceptron, like I said,
00:11:00.120 | was born in the '40s, '50s, and '60s.
00:11:03.160 | The algorithms that dominate today's world
00:11:05.280 | of deep learning and machine learning
00:11:07.120 | were invented many, many decades ago,
00:11:10.840 | in the '70s and '80s,
00:11:13.080 | with convolutional networks for the computer vision aspect
00:11:16.040 | of things in the '80s and '90s,
00:11:20.000 | and with LSTMs, recurrent neural networks
00:11:23.400 | that work with language, with sequences of data,
00:11:26.440 | developed in the '90s and proven out in the aughts.
00:11:31.000 | And then the deep learning quote unquote revolution,
00:11:33.640 | the term and the ideas of large-scale machine learning
00:11:36.960 | using neural networks was reborn in 2006
00:11:41.760 | in the early aughts,
00:11:43.160 | and then proven out in the seminal ImageNet moment,
00:11:46.560 | when, in the challenge of object recognition
00:11:49.920 | and image recognition
00:11:54.040 | on the ImageNet dataset,
00:11:56.840 | in the ImageNet challenge,
00:11:58.560 | neural networks were able to far outperform the competition
00:12:03.560 | and do so easily, just by learning from data.
00:12:07.120 | And a few other developments.
00:12:10.880 | There's a lot of unsupervised learning,
00:12:12.760 | self-supervised learning ideas
00:12:14.200 | that were born in 2014, '15, '16, just a few years ago,
00:12:18.040 | and a lot of exciting ideas in the past few years.
00:12:20.760 | The past few years have been dominated
00:12:22.440 | by ideas in natural language processing
00:12:24.480 | with ideas of transformers.
00:12:26.120 | Anyway, this might be outside the scope
00:12:28.000 | of what you're familiar with.
00:12:29.840 | I encourage you to look into it.
00:12:31.480 | Transformers in particular, with natural language,
00:12:34.880 | hold some of the most beautiful and exciting ideas:
00:12:36.960 | without any human supervision,
00:12:39.000 | you can learn to model language sufficiently well
00:12:42.920 | to outperform anything we've done previously,
00:12:46.920 | to do things like machine translation
00:12:49.000 | to a level that's unprecedented.
00:12:51.480 | It's really exciting.
00:12:52.840 | And especially exciting is that bigger is better,
00:12:56.760 | meaning that as long as we can scale compute,
00:12:59.320 | we can perform better and better and better.
00:13:01.480 | And it's a totally open question
00:13:03.360 | what the ceiling of that is.
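As a hedged illustration of that self-supervised objective, here is a toy character-bigram counter, nowhere near a transformer, with a made-up training string. The point it carries over from the passage above: the raw text supplies its own labels, so no human annotation is needed:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    # Self-supervised "training": for every character, count what follows it.
    # The next character IS the label, so raw text needs no human annotation.
    counts = defaultdict(Counter)
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(model, ch):
    # Most frequently seen continuation in the training text.
    return model[ch].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat, the cat ate")
print(predict_next(model, "h"))  # -> e
```

Scaling this same predict-the-next-token objective to billions of parameters and vast corpora is, loosely, what the "bigger is better" observation is about.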
00:13:05.560 | And finally, the most exciting thing
00:13:07.680 | in artificial intelligence is the idea,
00:13:11.240 | there's a concept of Big Bang for the start of the universe,
00:13:16.320 | a silly name for one of the most incredible mysteries
00:13:20.480 | of our human existence.
00:13:22.320 | Same way, self-play is one of the silliest names
00:13:25.240 | for one of the most powerful ideas
00:13:27.160 | in artificial intelligence.
00:13:28.640 | It's the mechanism behind AlphaZero.
00:13:30.960 | It's a system playing against itself
00:13:32.960 | to improve continuously without any human supervision.
00:13:36.360 | That is the most exciting aspect,
00:13:38.440 | the most exciting area, one that I'm excited about,
00:13:41.480 | and that I recommend you explore if you love learning.
00:13:44.880 | So the open problems in artificial intelligence
00:13:49.680 | and possible solutions.
00:13:50.920 | And I'll focus on number four,
00:13:53.160 | which is something that is my dream,
00:13:57.160 | that is sort of my life aspiration,
00:14:02.160 | but I'll give a whirlwind introduction to all of them.
00:14:04.040 | Learning to understand, learning to act, reason,
00:14:08.360 | and a deep connection between humans and AI systems.
00:14:11.880 | So learning to understand,
00:14:13.040 | there's a lot of exciting possibilities here.
00:14:15.480 | A lot of the breakthroughs in machine learning
00:14:18.320 | have been in something called supervised learning,
00:14:21.360 | where you have a set of data and you have a neural network
00:14:24.640 | or a model that's able to learn from that data
00:14:28.000 | in order to generalize sufficiently to infer
00:14:31.080 | on cases it hasn't seen before.
00:14:33.600 | You could recognize cat versus dog.
00:14:35.960 | In domains like autonomous driving,
00:14:39.760 | you can recognize lane markings,
00:14:41.920 | you could recognize other vehicles, pedestrians,
00:14:44.320 | all the different subtasks involved
00:14:47.040 | in solving a particular problem.
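The supervised-learning loop just described, fit on labeled data, then infer on cases never seen before, can be sketched minimally. The features and data here are invented toys (a nearest-centroid classifier on the "cat versus dog" example), not any real system:

```python
def fit(examples):
    # examples: list of (feature_vector, label).
    # Compute one mean vector ("centroid") per label: a bare-bones model.
    sums, counts = {}, {}
    for x, label in examples:
        s = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in s] for lbl, s in sums.items()}

def predict(model, x):
    # Infer: pick the label whose centroid is closest to the input.
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda lbl: dist(model[lbl]))

# Invented toy features: (ear_pointiness, snout_length) -> species.
train = [([0.9, 0.2], "cat"), ([0.8, 0.3], "cat"),
         ([0.2, 0.9], "dog"), ([0.3, 0.8], "dog")]
model = fit(train)

print(predict(model, [0.85, 0.25]))  # an input the model has never seen
```

Real systems replace the centroids with deep networks and the two toy features with raw pixels, but the train-then-generalize shape is the same.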
00:14:49.360 | Now that's all good, but to solve real world problems,
00:14:53.000 | you have to actually, you have to deal
00:14:55.640 | with endless edge cases that we human beings
00:14:58.640 | effortlessly take care of,
00:15:01.760 | that our ability to do reasoning and common sense reasoning
00:15:05.040 | effortlessly takes care of.
00:15:06.520 | So to be able to learn over those edge cases,
00:15:08.320 | you have to do much larger scale learning.
00:15:10.920 | And for that, you have to be much more selective
00:15:13.600 | and clever about which data you annotate with human beings.
00:15:16.720 | And that's the idea of active learning.
00:15:18.680 | Same way with, as children, we explore the world,
00:15:22.600 | we interact with the world to pick up the lessons from it.
00:15:25.520 | The same way you can interact with a dataset
00:15:28.560 | to select only small parts of it to learn from.
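One common concrete form of the active-learning idea sketched above is uncertainty sampling; the scoring function and data below are invented for illustration and are not the pipeline described in the talk:

```python
def uncertainty(score):
    # score in [0, 1] is the model's confidence that an example is positive;
    # 0.5 means maximally unsure, 0 or 1 means certain.
    return 1.0 - abs(score - 0.5) * 2

def select_for_annotation(pool, score_fn, budget):
    # Instead of annotating everything, pick only the `budget` examples
    # the current model is least sure about and send those to a human.
    return sorted(pool, key=lambda x: -uncertainty(score_fn(x)))[:budget]

# Invented toy "model": confidence rises linearly around a boundary at x = 5.
score = lambda x: min(max((x - 3) / 4, 0.0), 1.0)

pool = [0.5, 2.0, 4.8, 5.1, 7.0, 9.0]
print(select_for_annotation(pool, score, budget=2))  # -> [5.1, 4.8]
```

The selected examples sit right at the decision boundary, which is exactly where human labels buy the most learning per annotation dollar.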
00:15:31.520 | And I'll take Tesla, which is a car company
00:15:34.000 | working on autonomous driving, whose Autopilot system
00:15:38.240 | uses deep learning to learn
00:15:40.560 | how to solve all these different problems.
00:15:43.400 | I'll use them as a case study.
00:15:45.200 | What they're doing is quite interesting
00:15:47.040 | in the space of active learning.
00:15:49.120 | They're creating a pipeline for each individual task.
00:15:51.880 | They take the task of driving and break it apart
00:15:54.560 | into now over a hundred different subtasks.
00:15:59.200 | Each subtask gets its own pipeline, its own dataset.
00:16:04.200 | And there's a machine learning system
00:16:06.240 | that learns from that dataset
00:16:07.840 | and is then deployed back into the vehicles.
00:16:12.000 | And when the vehicle fails in a particular case,
00:16:15.440 | that's an edge case that's marked for the system
00:16:19.400 | and is brought back to the pipeline to annotate.
00:16:22.800 | So there's an ongoing pipeline that continuously runs.
00:16:27.320 | The system is not very good in the beginning,
00:16:29.280 | but the whole purpose of it is to discover edge cases.
00:16:32.920 | It's the same way that us humans learn something.
00:16:37.160 | You can think of our existence in the world
00:16:40.880 | as an edge-case discovery mechanism.
00:16:44.600 | So you learn something,
00:16:46.080 | you construct a mental model of the world,
00:16:48.280 | and you move about the world
00:16:49.640 | until you run up against a case,
00:16:52.680 | a situation that you totally didn't expect.
00:16:55.200 | And we do that thousands of times a day still,
00:16:59.080 | and we learn from those.
00:17:00.200 | And that pipeline of active learning
00:17:02.960 | is a really exciting area
00:17:04.760 | that very few people are working on,
00:17:06.400 | especially in the space of research.
00:17:08.320 | To me, that's the most exciting area
00:17:11.400 | in terms of impact at scale in the next few years.
00:17:14.840 | Learning to act,
00:17:17.880 | the second set of open problems in artificial intelligence.
00:17:20.920 | This is where the idea of self-play comes in,
00:17:24.440 | is learning to build systems,
00:17:27.560 | whether through a reinforcement learning mechanism
00:17:29.720 | or otherwise, that are actually acting in the world.
00:17:33.920 | In the case of self-play,
00:17:36.000 | the idea is that you have a really dumb system
00:17:38.880 | in the beginning that knows nothing.
00:17:40.600 | Again, no human supervision.
00:17:43.120 | And through randomization,
00:17:46.280 | you have other systems that also know nothing,
00:17:48.560 | but know a different set of nothing.
00:17:50.840 | And they compete against each other.
00:17:52.360 | So you formulate the problem as a competitive setting.
00:17:56.120 | And when you have two dumb systems
00:17:58.680 | that compete against each other,
00:18:00.440 | a magical thing happens.
00:18:02.440 | The one that's slightly less dumb starts winning.
00:18:05.520 | And this little incremental step
00:18:11.280 | can be repeated arbitrarily
00:18:13.320 | and without any constraints
00:18:16.680 | on human supervision, annotation costs,
00:18:20.240 | without any constraints on having to
00:18:22.320 | sort of bring the human in the loop
00:18:24.920 | or bring the physical world in the loop.
00:18:26.280 | It can all be done in computation in a distributed sense.
00:18:29.240 | So you can, in a matter of hours
00:18:32.400 | on a distributed compute setting,
00:18:34.160 | create a system that beats the world champion at go.
00:18:37.480 | And in fact, DeepMind, across many games,
00:18:39.840 | has defeated the world champion in chess,
00:18:42.720 | and not just the world champion,
00:18:44.440 | but the best chess-playing program, Stockfish,
00:18:48.800 | in a matter of hours of training.
00:18:51.520 | And the ceiling hasn't yet been reached.
00:18:54.680 | This is both the exciting and the scary thing
00:18:56.720 | about self-play is very few times
00:19:00.160 | is the ceiling ever reached.
00:19:02.840 | What we hit is the limits of our computational power,
00:19:06.720 | and computational power,
00:19:08.840 | especially with the kinds of mechanisms
00:19:11.880 | and developments happening now,
00:19:13.880 | keeps growing; Moore's law is continuing in many ways.
00:19:16.440 | So if you just wait a few years,
00:19:18.400 | computation is increasing.
00:19:19.480 | So we have yet to see the ceiling of the capabilities
00:19:23.960 | that these approaches are able to achieve.
00:19:26.400 | This should be both exciting and terrifying.
00:19:29.320 | Okay, the biggest open problem,
00:19:32.160 | one that nobody even knows how to approach.
00:19:34.080 | This is an example of a state-of-the-art
00:19:40.800 | dog intelligence system solving a particular problem.
00:19:48.120 | So we know almost nothing about how to build reasoning systems
00:19:53.120 | in artificial intelligence.
00:19:54.840 | This is the actually not very often talked about area
00:19:58.760 | because nobody knows what to do about it.
00:20:01.760 | There's been subsets called program synthesis,
00:20:06.120 | communities that kind of try to formulate a subset
00:20:10.120 | of the reasoning problem and try to solve it,
00:20:11.920 | but we don't know much to do,
00:20:13.720 | particularly common sense reasoning,
00:20:16.320 | how to formulate enough about the world
00:20:18.480 | to be able to reason about the physics of the world,
00:20:21.160 | about the basics, especially with human beings,
00:20:24.120 | human-to-human and human-to-physical-world dynamics.
00:20:27.600 | There are seemingly millions of facts
00:20:30.080 | that are intricately connected, that we learn
00:20:35.080 | and accumulate in a knowledge base.
00:20:38.480 | This process is a really exciting area of research
00:20:42.560 | that nobody knows what to do with.
00:20:45.640 | The things I've described previously
00:20:47.360 | don't really have anything to do with humans necessarily.
00:20:51.160 | But my passion and my interest is that space
00:20:56.160 | between machine and human.
00:20:58.800 | The community broadly could be called
00:21:01.320 | human-robot interaction,
00:21:03.120 | but there's a lot of different areas
00:21:05.160 | in which there's a deep connection
00:21:09.880 | between the human and machine
00:21:11.120 | that you all experience every day.
00:21:13.200 | So recommender systems from Netflix
00:21:15.800 | to much more importantly, social networks,
00:21:18.760 | the recommendation engines behind social networks,
00:21:21.320 | recommending what you see next
00:21:23.000 | in terms of both advertisement
00:21:24.280 | and about the content of your friends that you see,
00:21:29.280 | which friends you get to see more from.
00:21:33.280 | The personalization of IOT, of smart systems,
00:21:37.600 | semi-autonomous systems like Tesla Autopilot
00:21:41.600 | and different semi-autonomous vehicles
00:21:44.240 | like the Cadillac Super Cruise systems.
00:21:47.000 | Whenever you have AI systems between you and a machine.
00:21:51.600 | So there's a machine
00:21:53.160 | that automates some particular task.
00:21:55.000 | There's you, the human, tasked with sitting there
00:21:58.440 | and supervising the machine.
00:21:59.960 | And there is an AI system in the middle that manages that.
00:22:03.680 | It manages the tension, the dance, the uncertainty,
00:22:05.880 | the T word, the trust,
00:22:09.360 | all the mess of human beings; it manages that.
00:22:12.680 | That's a really exciting space
00:22:15.680 | that is in the very early days.
00:22:18.520 | What I show there is my sense of where we stand.
00:22:23.520 | In 1998, there were a lot of search engines.
00:22:28.720 | Some of you may even be old enough to have used them.
00:22:31.720 | AltaVista, Excite, Ask Jeeves, Lycos, and so on.
00:22:35.360 | Then Google came along, the Google search engine
00:22:38.680 | and blew them all out of the water.
00:22:40.440 | They were all working on a very interesting,
00:22:43.240 | very important problem, but the approach
00:22:46.960 | and the fundamental ideas behind their approach were flawed.
00:22:50.000 | I believe that with personal assistants
00:22:54.820 | and a personal deep, meaningful connection
00:22:57.760 | between an AI system and a human being,
00:23:01.520 | that's exactly where we're at today.
00:23:03.200 | Many people have in their home an Alexa device,
00:23:05.920 | a Google home device.
00:23:08.280 | But most people don't use it for much of anything
00:23:11.600 | except to play music or check the weather.
00:23:13.700 | Many of you use Twitter and social networks,
00:23:18.720 | but artificial intelligence plays a minimal role
00:23:21.640 | and understands almost nothing about you
00:23:24.360 | in recommending how you interact with the platform
00:23:27.640 | or the advertisements you see.
00:23:29.940 | And autonomous vehicles, robotics platforms
00:23:33.400 | know almost nothing about you.
00:23:36.280 | So shown there is the Tesla vehicle.
00:23:38.920 | It knows almost nothing about you
00:23:41.860 | except whether your hands are on the steering wheel or not.
00:23:44.800 | I believe it'll be obvious in retrospect
00:23:50.000 | how much opportunity there is to learn about human beings
00:23:54.160 | from the devices and from that
00:23:56.280 | to form a deep, meaningful connection.
00:23:58.180 | So now to return to my valley of despair
00:24:03.000 | to give some words of advice.
00:24:06.120 | And again, take them with a grain of salt.
00:24:08.760 | So in this context, in this optimization context,
00:24:14.800 | my first piece of advice is to listen to your inner voice.
00:24:19.240 | I think a lot of people,
00:24:20.880 | including a lot of very smart professors, advisors,
00:24:24.200 | parents, friends, significant others,
00:24:29.200 | have in them a kind of mutually agreed upon gradient
00:24:34.560 | along which they push you.
00:24:36.060 | It's so difficult for me to articulate this in a clear way.
00:24:42.320 | But early on, I heard within myself
00:24:46.960 | a silly sounding, crazy voice that told me to do things.
00:24:52.340 | One of which was to try to put a robot in every home.
00:24:57.680 | There's dreams that are difficult for me to articulate.
00:25:00.840 | But if you allow your mind to be quiet enough,
00:25:03.840 | you'll hear such voices, you'll hear such dreams.
00:25:07.480 | And it's important to really listen and to pursue them.
00:25:12.480 | Advice number two is carve your own path.
00:25:17.840 | And if that means taking a few detours, take the detours.
00:25:23.880 | Again, this is coming from the valley of despair.
00:25:27.780 | (audience laughing)
00:25:31.200 | So I hope this pans out in the end.
00:25:34.700 | But I had many detours.
00:25:36.640 | In music, I was in a band, I had long hair.
00:25:39.400 | I gave a lot of myself to the practice of martial arts.
00:25:48.440 | And both music and martial arts have given me,
00:25:53.360 | again, very difficult to put into words,
00:25:55.140 | but they have given me something quite profound.
00:25:59.060 | It gave flavor and color to the pursuit of that dream
00:26:03.480 | that's hard to articulate.
00:26:04.960 | It's because I listened to my instinct,
00:26:07.280 | listened to my heart in pursuing these detours.
00:26:09.600 | From poetry to excessive reading, like I mentioned,
00:26:13.860 | I took a James Joyce course here.
00:26:16.400 | So pursuing these avenues of knowledge
00:26:19.840 | through philosophy and history
00:26:21.020 | that seemingly have nothing to do with the main pursuit.
00:26:24.200 | And starting the silliest of pursuits, starting a podcast.
00:26:29.200 | Advice number three is to measure passion, not progress.
00:26:35.780 | So most of us get an average of about 27,000 days of life.
00:26:42.280 | I think a good metric by which you should live
00:26:47.000 | is to maximize the number of those days
00:26:49.800 | that are filled with a passionate pursuit of something.
00:26:53.400 | Not by how much you've progressed
00:26:55.780 | towards a particular goal.
00:26:58.020 | Because goals are grounded in your comparison
00:27:00.040 | to other human beings,
00:27:01.640 | to something that's already been done before.
00:27:04.880 | Passionate pursuit of something
00:27:08.320 | is the way you achieve something totally new.
00:27:10.740 | And a quick warning about passion.
00:27:17.380 | Again, I'm a little bit of Russian,
00:27:19.760 | so maybe I romanticize this whole suffering
00:27:21.720 | and passion thing.
00:27:22.920 | (audience laughing)
00:27:24.960 | But the people who love you,
00:27:26.960 | the people who care for you,
00:27:28.640 | like I mentioned, your friends, your family,
00:27:32.680 | should not be trusted.
00:27:35.200 | Accept their love, but not their advice.
00:27:41.160 | Parents and significant others
00:27:43.960 | will tell you to find a secure job
00:27:45.880 | because passion looks dangerous.
00:27:48.780 | It looks insecure.
00:27:51.720 | Advisors, colleagues will tell you to be pragmatic
00:27:54.600 | because passion looks like a distraction
00:27:57.120 | from the main effort that you should be focusing on.
00:28:00.440 | And society will tell you to find balance,
00:28:05.260 | work-life balance in your life
00:28:07.660 | because passion looks unhealthy.
00:28:10.800 | Advice number four,
00:28:14.640 | continuing on the unhealthy part,
00:28:18.180 | is work hard.
00:28:19.920 | Make a habit of working hard every day,
00:28:23.960 | putting in the hours.
00:28:26.560 | There's a lot of books and a lot of advice
00:28:30.560 | that have been written on working smart
00:28:33.440 | and not working hard.
00:28:34.700 | I've yet to meet anyone
00:28:38.320 | who has not truly worked hard for thousands of hours
00:28:42.160 | in order to accomplish something great.
00:28:44.120 | In order to work smart,
00:28:47.640 | you first have to put in those few tens of thousands
00:28:50.160 | of hours of really dumb, brute force,
00:28:52.400 | hard work of all-nighters.
00:28:54.000 | The key there is to minimize stress,
00:28:58.480 | not to minimize the amount of hours of work.
00:29:02.000 | And to do that, you have to love what you do.
00:29:07.280 | And the final piece of advice,
00:29:09.000 | I love that picture, okay,
00:29:12.120 | is to look up to the stars
00:29:13.840 | and appreciate every single moment you're alive,
00:29:17.400 | at the mystery of this world,
00:29:19.240 | at the beauty of this world.
00:29:21.100 | Again, this is my perspective,
00:29:24.200 | take it with a grain of salt,
00:29:26.100 | but I advise to forever oscillate
00:29:28.200 | between deep, profound doubt and self-dissatisfaction
00:29:33.200 | and a deep gratitude for the moment,
00:29:37.360 | for just being alive,
00:29:38.960 | for all the people around you that give you their love,
00:29:42.160 | with whom you get to share those moments
00:29:44.280 | and share the love.
00:29:46.280 | A poem by Stephen Crane
00:29:47.800 | that I especially like, "In the Desert."
00:29:49.680 | In the desert, I saw a creature, naked, bestial,
00:29:54.000 | who, squatting upon the ground,
00:29:55.480 | held his heart in his hands and ate of it.
00:29:57.720 | I said, "Is it good, friend?"
00:30:01.640 | "It is bitter, bitter,"
00:30:04.000 | he answered.
00:30:05.900 | "But I like it, because it is bitter,
00:30:09.280 | "and because it is my heart."
00:30:11.040 | So I would say the bitter is the self-dissatisfaction,
00:30:15.360 | and that's the restless energy that drives us forward.
00:30:19.120 | And then enjoying that bitterness
00:30:22.120 | and enjoying the moment
00:30:24.400 | and enjoying the sweetness that comes
00:30:27.000 | from eating your own heart in this poem
00:30:31.600 | is a thing that makes life worthwhile.
00:30:37.600 | And that is, to me, happiness.
00:30:44.060 | So with those silly few pieces of advice,
00:30:47.960 | I'd like to continue on the gratitude and say thank you.
00:30:52.160 | Thank you to my advisor.
00:30:55.440 | Thank you to this university
00:30:57.840 | for giving me a helping hand.
00:31:00.320 | There you go.
00:31:01.680 | And thank you to my family
00:31:03.400 | and all the friends that I've had along the way.
00:31:06.120 | Thank you for their love.
00:31:07.200 | I appreciate it.
00:31:08.040 | (audience applauding)
00:31:12.720 | I've never been introduced with this much energy.
00:31:14.580 | I really appreciate it.
00:31:15.680 | (audience laughing)
00:31:18.260 | - You're hanging out at the wrong places, man.
00:31:20.060 | (laughing)
00:31:21.420 | - Yes.
00:31:22.260 | - First of all, great to see you in person, Dr. Fridman.
00:31:25.820 | Big fan of your lectures,
00:31:27.100 | big fan of your show, the podcast.
00:31:29.420 | Just listening to your conversation
00:31:30.580 | on my headphones this morning
00:31:31.680 | on my way here.
00:31:33.700 | My question for you was,
00:31:35.140 | is your perspective in any way influenced
00:31:38.460 | by the ultimate meaninglessness of it all?
00:31:40.700 | (laughing)
00:31:42.380 | - By the way, thank you for that question.
00:31:44.500 | How is your daily life affected
00:31:47.000 | by the meaninglessness of it all?
00:31:48.940 | (audience laughing)
00:31:51.940 | So the answer is yes.
00:31:57.500 | And it's hard to use reason to justify
00:32:00.700 | that life is meaningful.
00:32:02.100 | I think you have to listen to,
00:32:03.740 | there's something in you that makes life beautiful.
00:32:09.500 | So if you look at somebody like Elon Musk,
00:32:11.580 | he believes that interplanetary travel,
00:32:14.900 | colonizing Mars,
00:32:16.460 | that's one of the most exciting things
00:32:21.820 | we human beings can do.
00:32:23.320 | And so if you allow yourself to think,
00:32:26.180 | what is the most exciting thing
00:32:28.320 | that we human beings can do?
00:32:29.940 | And see that the work you're doing is part of that.
00:32:35.460 | For me, if I were to psychoanalyze myself,
00:32:40.260 | there's something in me that's deeply fulfilling
00:32:43.660 | about creating intelligent systems.
00:32:47.300 | That's so exciting to me,
00:32:50.180 | that we human beings can create intelligent systems.
00:32:53.140 | I see artificial intelligence
00:32:54.580 | as the next evolution of human civilization.
00:32:57.900 | And to me, that makes it somehow deeply exciting,
00:33:01.820 | even though eventually the whole universe
00:33:05.300 | will collapse on itself
00:33:06.340 | or reach the cold death of the universe.
00:33:10.620 | There's something within that that's so exciting.
00:33:13.060 | - There was an interview with Elon Musk
00:33:16.860 | and he basically said that we're in a simulation,
00:33:20.660 | so this might not be actual reality.
00:33:23.980 | What's your take on that?
00:33:25.340 | - So my first take is,
00:33:27.220 | I love it how much fellow colleagues
00:33:30.460 | and scientists are uncomfortable with this question.
00:33:33.820 | So I love it.
00:33:34.900 | I love to ask it just 'cause it makes them uncomfortable.
00:33:37.700 | (audience laughing)
00:33:39.900 | Yeah, I appreciate it.
00:33:41.780 | It's a good, I don't know, maybe in French cuisine,
00:33:48.660 | you have to cleanse the palate.
00:33:50.860 | It's a good question to ask.
00:33:52.660 | We're not now talking about the latest paper.
00:33:55.980 | We're now talking about the bigger questions of life.
00:33:58.620 | The simulation question is a nice one to do that.
00:34:02.680 | In terms of actually practically,
00:34:05.180 | I think there's two interesting things to say.
00:34:09.500 | So one, it's interesting to me,
00:34:12.620 | I'm a big fan of virtual reality.
00:34:14.260 | I love entering virtual worlds,
00:34:19.140 | even as primitive as they are now.
00:34:21.580 | I can already imagine that more and more people
00:34:24.420 | would wanna live in those worlds.
00:34:25.940 | It's an interesting question to me,
00:34:27.820 | how real do those worlds need to become
00:34:30.300 | in order for you to wanna stay there
00:34:32.100 | and not return to the real world?
00:34:34.120 | So the question of the simulation is,
00:34:36.080 | how real do we need to simulate the world
00:34:38.720 | in order for you to enjoy it better than this one?
00:34:41.220 | That's a computer science question.
00:34:43.660 | That's really interesting.
00:34:44.500 | It's like a
00:34:45.980 | practical engineering question,
00:34:48.100 | 'cause you can create virtual reality systems
00:34:49.820 | that'll make a lot of money,
00:34:51.880 | perhaps have a detrimental effect on society
00:34:53.860 | by having people wanna stay in the virtual worlds.
00:34:56.860 | And then the other question is the physics question
00:34:59.580 | of quantum mechanics of,
00:35:01.380 | like what is the fundamental fabric of reality?
00:35:04.300 | And is it,
00:35:05.220 | what does it take to simulate that reality?
00:35:10.340 | And that's like a physics question.
00:35:12.500 | How, is it finite, is it infinite?
00:35:15.000 | What are the mechanisms, the underlying mechanisms?
00:35:18.220 | Does it go as low as string theory?
00:35:19.620 | Does it go below string theory?
00:35:21.420 | And there are actually people who have written papers
00:35:24.220 | on how big a computer needs to be
00:35:26.180 | in order to simulate that kind of system.
00:35:28.860 | And now quantum computers are coming forward,
00:35:31.100 | which is one of the exciting applications
00:35:33.060 | of quantum computing is to be able
00:35:34.860 | to simulate quantum mechanical systems.
00:35:37.780 | And this is the question,
00:35:38.780 | how big does a quantum computer have to be
00:35:40.420 | to simulate the universe?
00:35:41.680 | It's a fun, but a real physics question,
00:35:46.180 | way out of reach of our engineering capabilities.
00:35:48.780 | But it's just a nice thing to bring up
00:35:51.340 | over beers with scientists.
00:35:55.140 | There's two things that make scientists uncomfortable
00:35:59.740 | that I love bringing up.
00:36:00.940 | One is the simulation question.
00:36:03.820 | And the other is, what do you think about the idea
00:36:06.940 | that's become popular recently
00:36:08.540 | that the earth might be flat?
00:36:10.280 | (audience laughing)
00:36:12.980 | They get really, they get angry actually.
00:36:17.540 | - I wanna say, I appreciate your work
00:36:20.180 | and I love the podcast and stuff like that.
00:36:23.660 | So people talk about athletes and academics
00:36:26.660 | being the greatest of their field.
00:36:28.420 | People consider Jesse Owens
00:36:29.780 | to be one of the greatest runners of all time,
00:36:32.100 | even though he's quite outpaced by the runners today.
00:36:34.900 | People consider scientists like Isaac Newton
00:36:37.420 | one of the greatest scientists ever
00:36:38.900 | because of his advancements in classical mechanics
00:36:40.980 | and calculus, which is considered
00:36:42.900 | pretty basic physics nowadays.
00:36:44.420 | What do you define greatness as
00:36:47.260 | when it comes to the pursuit of an endeavor?
00:36:49.620 | Does it involve looking for the most advancements
00:36:52.320 | in the field given your starting point?
00:36:54.940 | Does it come from the journey and the work associated with it,
00:36:57.820 | or from the destination?
00:36:59.660 | Is it a personal concept
00:37:01.420 | or is it something you understand across humanity?
00:37:04.300 | - So thank you for that question.
00:37:07.380 | Very well written out and thought out.
00:37:10.580 | There's a personal greatness
00:37:13.020 | from the perspective of the individual for me.
00:37:15.540 | Like for me, greatness is doing what I love.
00:37:18.340 | That ignores the rest of society.
00:37:21.060 | It's just like, to me, I'm the greatest human
00:37:25.340 | to have ever lived in my own little world
00:37:28.500 | for getting to do the things I love.
00:37:31.020 | And that's from my perspective.
00:37:33.340 | And I love the craftsmanship of it.
00:37:36.860 | Anything, it could be anything.
00:37:38.500 | It's just doing the skill.
00:37:40.940 | So that's not about accomplishment.
00:37:42.380 | That's not about anything.
00:37:43.220 | That's about just doing the things you love.
00:37:45.540 | From the perspective of society,
00:37:47.100 | they tend to then tell stories about these pursuits.
00:37:50.980 | And greatness is something
00:37:54.980 | that people invent.
00:37:56.260 | They give the Nobel Prize, they give prizes for accomplishment.
00:37:59.540 | They kind of tell stories about human beings,
00:38:01.900 | about Steve Jobs, about different icons.
00:38:06.860 | And some are completely ignored through history.
00:38:09.460 | Some are glorified through history, like over glorified.
00:38:13.620 | I recently found out that the Pythagorean theorem
00:38:18.620 | was not developed by Pythagoras.
00:38:24.740 | But I read it on Wikipedia.
00:38:27.180 | I don't know if it's true.
00:38:28.540 | But that's an example of somebody I at least thought
00:38:32.140 | was kind of an actual entity, an actual human being
00:38:34.740 | that was great and associated with this idea.
00:38:37.660 | So to me, I think greatness is doing the things you love.
00:38:42.660 | And the rest is just luck,
00:38:45.740 | whether they tell a good story about you or not.
00:38:48.500 | - Give it up for our speaker, Dr. Lex Fridman.
00:38:50.740 | (audience applauding)
00:38:53.700 | (audience cheering)