
MIT AGI: Artificial General Intelligence


Chapters

0:00 Intro
0:23 MIT AGI Mission: Engineer Intelligence
3:41 Balance Between Paralyzing Technophobia and Blindness to Big Picture Consequences
11:02 Human Drive to Explore and Uncover the Mysteries of the Universe
17:23 ANGEL: Artificial Neural Generator of Emotion and Language
18:56 EthicalCar: Machine Learning Approach
21:53 Josh Tenenbaum, MIT Computational Cognitive Science
24:16 Lisa Feldman Barrett, NEU Emotion Creation
25:40 Re-Enacting Intelligence
26:41 Sophia: Embodied Re-Enactment
29:32 Nate Derbinsky, NEU Cognitive Modeling
30:37 Deep Learning is Representation Learning (aka Feature Learning)
33:09 Neuron: Biological Inspiration for Computation
35:49 Deep Learning from Human and Machine
37:28 Past and Future of Deep Learning Breakthroughs
38:45 Current Challenges
40:17 Stephen Wolfram Knowledge-Based Programming
41:38 "Artificial Life Simulation": Cellular Automata and Emerging Complexity
43:13 Richard Moyes, Article 36: AI Safety and Autonomous Weapon Systems
43:46 Marc Raibert, CEO, Boston Dynamics Robots in the Real World

Whisper Transcript

00:00:00.000 | Welcome to course 6S099 Artificial General Intelligence
00:00:06.500 | We will explore the nature of intelligence
00:00:10.500 | From as much as possible an engineering perspective
00:00:14.500 | You will hear many voices
00:00:17.500 | My voice will be that of an engineer
00:00:21.500 | Our mission is to engineer intelligence
00:00:28.500 | The MIT motto is "Mind and Hand"
00:00:31.500 | What that means is we want to explore the fundamental science
00:00:38.500 | of what makes an intelligent system
00:00:42.500 | The core concepts behind our understanding of what is intelligence
00:00:48.500 | But we always want to ground it in the creation of intelligent systems
00:00:55.500 | We always want to be in the now, in today
00:00:59.500 | In understanding how today we can build artificial intelligence systems
00:01:03.500 | that can make for a better world
00:01:05.500 | That is the core for us here at MIT
00:01:09.500 | First and foremost, we're scientists and engineers
00:01:13.500 | Our goal is to engineer intelligence
00:01:16.500 | With this approach, we want to provide a balance to
00:01:23.500 | the very important but over-represented view of artificial general intelligence
00:01:31.500 | The black box reasoning view
00:01:34.500 | Where the idea is once we know how to create a human level intelligence system
00:01:41.500 | How will society be impacted?
00:01:43.500 | Will robots take over and kill everyone?
00:01:50.500 | Will we achieve a utopia that will remove the need to do any of the messy jobs
00:01:55.500 | that will make us all extremely happy?
00:01:58.500 | Those kinds of beautiful philosophical concepts are interesting to explore
00:02:03.500 | But that's not what we're interested in doing
00:02:05.500 | I believe that from an engineering perspective
00:02:09.500 | We want to focus on the black box of AGI
00:02:13.500 | Start to build insights and intuitions about how we create systems
00:02:18.500 | that approach human level intelligence
00:02:21.500 | I believe we're very far away from creating
00:02:26.500 | anything resembling human level intelligence
00:02:30.500 | However, the dimension of the metric behind the word far
00:02:38.500 | may not be time
00:02:40.500 | In time, perhaps through a few breakthroughs
00:02:47.500 | Maybe even one breakthrough
00:02:49.500 | Everything can change
00:02:51.500 | But as we stand now
00:02:53.500 | Our current methods as we will explore from the various ideas and approaches
00:02:57.500 | and the guest speakers coming here over the next two weeks and beyond
00:03:02.500 | Our best understanding, our best intuition and insights
00:03:07.500 | are not yet at the level of reaching
00:03:12.500 | without a major leap and breakthrough
00:03:14.500 | and paradigm shift towards human level intelligence
00:03:17.500 | So it's not constructive to consider the impact of artificial intelligence
00:03:23.500 | To consider questions of safety and ethics
00:03:29.500 | Fundamental, extremely important questions
00:03:32.500 | It's not constructive to consider those questions
00:03:36.500 | without also deeply considering the black box
00:03:40.500 | of the actual methods of artificial intelligence
00:03:44.500 | Human level artificial intelligence
00:03:46.500 | And that's what I see, what I hope this course can be
00:03:50.500 | Its first iteration, its first exploratory attempt
00:03:56.500 | to try to look at different approaches of how we can engineer intelligence
00:04:01.500 | That's the role of MIT
00:04:03.500 | Its tradition of mind and hand
00:04:05.500 | It's to consider the big picture
00:04:07.500 | The future impact of society 10, 20, 30, 40 years out
00:04:11.500 | But fundamentally grounded in what kind of methods do we have today
00:04:16.500 | And what are their limitations and possibilities of achieving that
00:04:21.500 | The black box of AGI
00:04:23.500 | And in the future impact on society
00:04:29.500 | of creating artificial intelligence systems
00:04:32.500 | that get, become increasingly more intelligent
00:04:35.500 | The fundamental disagreement lies in the fact
00:04:40.500 | The very core of that black box
00:04:43.500 | Which is how hard is it to build an AGI system
00:04:48.500 | How hard is it to create a human level artificial intelligence system
00:04:52.500 | That's the open question for all of us
00:04:54.500 | From Josh Tenenbaum to Andrej Karpathy
00:05:00.500 | To folks from OpenAI to Boston Dynamics
00:05:04.500 | To brilliant leaders in various fields of artificial intelligence
00:05:08.500 | that will come here
00:05:09.500 | That's the open question
00:05:10.500 | How hard is it?
00:05:11.500 | There's been a lot of incredibly impressive results
00:05:15.500 | In deep learning, in neuroscience, in computational cognitive science
00:05:20.500 | In robotics
00:05:23.500 | But how far do we still have to go to AGI?
00:05:28.500 | That's the fundamental question that we need to explore
00:05:31.500 | Before we consider the questions, the future impact on society
00:05:38.500 | And the goal for this class is to build intuition
00:05:41.500 | One talk at a time, a project at a time
00:05:46.500 | Build intuition about where we stand
00:05:48.500 | About what the limitations of current approaches are
00:05:51.500 | How can we close the gap?
00:05:54.500 | A nice meme that I caught on Twitter recently
00:05:59.500 | Of the difference between the engineering approach
00:06:04.500 | At its very simplest: a Google intern
00:06:08.500 | Typing a for loop that just does a grid search
00:06:11.500 | On parameters for a neural network
00:06:14.500 | And on the right is the way media would report this for loop
00:06:19.500 | The Google AI created its own baby AI
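For concreteness, the left side of that meme really can be this small. A minimal sketch of such a grid search follows; `train_and_evaluate` is a hypothetical stand-in, faked here so the snippet runs end to end:

```python
# The "Google intern for loop" from the meme: a plain grid search
# over two neural-network hyperparameters.

def train_and_evaluate(learning_rate, hidden_units):
    # Placeholder: in real use, train a network with these settings
    # and return a validation score. Faked here so the sketch runs.
    return 1.0 / (1.0 + abs(learning_rate - 0.01)) + hidden_units / 1000.0

best_score, best_params = float("-inf"), None
for learning_rate in [0.1, 0.01, 0.001]:
    for hidden_units in [32, 64, 128]:
        score = train_and_evaluate(learning_rate, hidden_units)
        if score > best_score:
            best_score, best_params = score, (learning_rate, hidden_units)

print("Best hyperparameters:", best_params)
```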
00:06:25.500 | I think it's easy for us to go one way or the other
00:06:31.500 | But we'd like to do both
00:06:33.500 | Our first goal is to avoid the pitfalls of black box thinking
00:06:38.500 | Of the futurism thinking that results in hype
00:06:41.500 | That's detached from scientific engineering understanding
00:06:44.500 | Of what the actual systems are doing
00:06:47.500 | That's what the media often reports
00:06:49.500 | That's what some of our speakers will explore in a rigorous way
00:06:56.500 | It's still an important topic to explore
00:06:59.500 | Ray Kurzweil on Wednesday will explore this topic
00:07:03.500 | Next week, talking about AI safety and autonomous weapon systems
00:07:07.500 | Will explore this topic
00:07:09.500 | The future impact 10, 20 years out
00:07:11.500 | How do we design systems today that would lead to safe systems tomorrow?
00:07:15.500 | Still very important
00:07:17.500 | But the reality is a lot of us need to put a lot more emphasis
00:07:21.500 | On the left, on the for loops, on creating these systems
00:07:25.500 | At the same time, the second goal of what we're trying to do here
00:07:29.500 | Is not emphasize the silliness, the simplicity
00:07:33.500 | The naive basic nature of this for loop
00:07:37.500 | In the same way as was the process in creating nuclear weapons
00:07:43.500 | Before, during World War II
00:07:47.500 | The idea that as an engineer, as a scientist
00:07:50.500 | That I'm just a scientist is also a flawed way of thinking
00:07:55.500 | We have to consider the big picture impact
00:07:58.500 | The near term, negative consequences that are preventable
00:08:02.500 | The low hanging fruit that can be prevented through that very engineering process
00:08:07.500 | We have to do both
00:08:09.500 | And in this engineering approach
00:08:13.500 | We always have to be cautious
00:08:16.500 | That just because we don't understand
00:08:20.500 | Just because we, our intuition
00:08:24.500 | Our best understanding of the capabilities of modern systems
00:08:28.500 | That learn, that act in this world
00:08:30.500 | Seem limited, seem far from human level intelligence
00:08:33.500 | Our ability to learn and represent common sense reasoning
00:08:37.500 | Seems limited
00:08:39.500 | The exponential, potentially exponential
00:08:42.500 | It could be argued, and Ray Kurzweil will argue it
00:08:44.500 | Growth of technology, of these ideas
00:08:48.500 | Means that just around the corner is a singularity
00:08:52.500 | Is a breakthrough idea that will change everything
00:08:56.500 | We have to be cautious of that
00:08:58.500 | Moreover, we have to be cautious of the fact that
00:09:04.500 | Every decade over the past century
00:09:07.500 | Our adoption of new technologies has gotten faster and faster
00:09:11.500 | The rate at which a new technology
00:09:14.500 | From its birth to its wide mass adoption
00:09:18.500 | Has shortened and shortened and shortened
00:09:21.500 | That means that new idea
00:09:25.500 | The moment it drops into the world
00:09:28.500 | Can have widespread effects overnight
00:09:32.500 | So, I think the engineering approach
00:09:36.500 | Is fundamentally cynical about artificial general intelligence
00:09:39.500 | Because every aspect of it is so difficult
00:09:42.500 | We have to always remember that overnight everything can change
00:09:46.500 | Through this question of beginning to approach
00:09:50.500 | From a deep learning perspective
00:09:52.500 | From deep reinforcement learning
00:09:53.500 | From brain simulation, computational cognitive science
00:09:57.500 | From computational neuroscience
00:10:00.500 | From cognitive architecture, from robotics
00:10:03.500 | From legal perspectives
00:10:07.500 | And autonomous weapon systems
00:10:09.500 | As we begin to approach these questions
00:10:12.500 | We need to start to build intuition
00:10:14.500 | How far away are we from creating intelligent systems
00:10:18.500 | The singularity here is that spark
00:10:21.500 | That moment when we're truly surprised
00:10:25.500 | By the intelligence of the systems we create
00:10:29.500 | I'd like to visualize it
00:10:32.500 | By the certain analogy that we're in this dark room
00:10:37.500 | Looking for a light switch
00:10:39.500 | With no knowledge of where the light switch is
00:10:43.500 | There's going to be people that say
00:10:45.500 | Well, the room is small
00:10:47.500 | We're right there
00:10:48.500 | Wherever it is, we'll be able to find it in no time
00:10:51.500 | The reality is we know very little
00:10:53.500 | So we have to stumble around
00:10:54.500 | Feel our way around to build the intuition
00:10:57.500 | Of how far away we really are
00:11:03.500 | And many will, speakers here will talk about
00:11:07.500 | How we define intelligence
00:11:08.500 | How we can begin to see intelligence
00:11:11.500 | What are the fundamental impacts
00:11:13.500 | Of creating intelligent systems
00:11:15.500 | I'd like to sort of see the positive reason
00:11:21.500 | For this little class
00:11:23.500 | And for these efforts that have fascinated
00:11:26.500 | People throughout the century
00:11:27.500 | Of trying to create intelligent systems
00:11:30.500 | Is that there's something about human beings
00:11:33.500 | That craves to explore
00:11:39.500 | To uncover the mysteries of the universe
00:11:42.500 | Fundamental in itself a desire
00:11:44.500 | To uncover the mysteries of the universe
00:11:46.500 | Not for a purpose
00:11:48.500 | And there's often an underlying purpose
00:11:50.500 | Of money, of greed, of power
00:11:55.500 | Craving for power and so on
00:11:57.500 | But there seems to be an underlying desire
00:11:59.500 | To explore
00:12:01.500 | A nice little book, Exploration:
00:12:03.500 | A Very Short Introduction, by Stewart Weaver
00:12:05.500 | He says for all the different forms it takes
00:12:09.500 | In different historical periods
00:12:11.500 | For all the worthy and unworthy motives
00:12:14.500 | That lie behind it
00:12:16.500 | Exploration
00:12:19.500 | Travel for the sake of discovery and adventure
00:12:21.500 | Is a human compulsion
00:12:24.500 | A human obsession even
00:12:26.500 | It is a defining element
00:12:28.500 | Of a distinctly human identity
00:12:30.500 | And it will never rest at any frontier
00:12:33.500 | Whether terrestrial or extraterrestrial
00:12:37.500 | From 325 BCE
00:12:40.500 | With a long 7500 mile journey
00:12:45.500 | On the ocean
00:12:47.500 | To explore the Arctic
00:12:49.500 | To Christopher Columbus and his
00:12:53.500 | Flawed, harshly criticized
00:12:55.500 | In modern scholarship
00:12:57.500 | Trip that ultimately paved the way
00:12:59.500 | Didn't discover
00:13:01.500 | Paved the way to colonization of the Americas
00:13:08.500 | To the Darwin trip
00:13:10.500 | The voyage of the Beagle
00:13:13.500 | Whilst this planet has gone cycling on
00:13:15.500 | According to the fixed law of gravity
00:13:17.500 | From so simple a beginning
00:13:19.500 | Endless forms most beautiful
00:13:21.500 | And most wonderful have been
00:13:23.500 | And are being evolved
00:13:26.500 | To the first venture into space
00:13:33.500 | By Yuri Gagarin
00:13:35.500 | First human in space in 1961
00:13:40.500 | What he said over the radio
00:13:42.500 | Is the earth is blue
00:13:43.500 | It is amazing
00:13:46.500 | These are the words that I think drive
00:13:49.500 | Our exploration in the sciences
00:13:52.500 | In the engineering
00:13:54.500 | And today in AI
00:13:56.500 | From the first walk on the moon
00:14:00.500 | And now the desire to
00:14:03.500 | Colonize Mars and beyond
00:14:11.500 | That's where I see this desire
00:14:13.500 | To create intelligent systems
00:14:16.500 | Talking about the positive or negative
00:14:18.500 | Impact of AI on society
00:14:20.500 | Talking about the business case
00:14:21.500 | Of the jobs lost, jobs gained, jobs created
00:14:25.500 | Diseases cured
00:14:28.500 | The autonomous vehicles
00:14:30.500 | The ethical questions
00:14:31.500 | The safety of autonomous weapons
00:14:33.500 | Of the misuse of AI
00:14:35.500 | In the financial markets
00:14:37.500 | Underneath it all
00:14:38.500 | And there are people
00:14:39.500 | Many people have spoken about this
00:14:41.500 | What drives myself and many in the community
00:14:44.500 | Is the desire to explore
00:14:46.500 | To uncover the mystery of the universe
00:14:48.500 | And I hope that you join me
00:14:49.500 | In that very effort
00:14:50.500 | With the speakers that come here
00:14:51.500 | In the next two weeks and beyond
00:14:57.500 | The website for the course is
00:14:59.500 | AGI.MIT.EDU
00:15:02.500 | I am a part of an amazing team
00:15:06.500 | Many of whom you know
00:15:09.500 | AGI@MIT.EDU is the email
00:15:12.500 | We're on slack, deep-mit.slack
00:15:18.500 | For registered MIT students
00:15:21.500 | You create an account on the website
00:15:23.500 | And submit five new links
00:15:26.500 | And vote on ten, on Vote AI
00:15:28.500 | Which is an aggregator of information
00:15:31.500 | And material we've put together
00:15:32.500 | For the topic of AGI
00:15:34.500 | And submit an entry
00:15:38.500 | To one of the competitions
00:15:39.500 | One of the three competitions
00:15:41.500 | And projects that we have in this course
00:15:45.500 | And the projects are
00:15:46.500 | Dream Vision
00:15:47.500 | I'll go over them in a little bit
00:15:48.500 | Dream Vision, Angel, Ethical Car
00:15:52.500 | And the aggregator of material, Vote AI
00:15:55.500 | We have guest speakers
00:15:56.500 | Incredible guest speakers
00:15:57.500 | I will go over them today
00:15:59.500 | And as before
00:16:03.500 | With the deep learning
00:16:04.500 | For self-driving cars course
00:16:05.500 | We have shirts
00:16:07.500 | And they're free for in-person
00:16:10.500 | For people that attend in person
00:16:11.500 | For the last lecture most likely
00:16:13.500 | Or you can order them online
00:16:17.500 | Okay, Dream Vision
00:16:19.500 | We take the Google Deep Dream idea
00:16:22.500 | We explore the idea of creativity
00:16:26.500 | Where, in Einstein's view of intelligence
00:16:28.500 | The mark of intelligence is creativity
00:16:32.500 | This idea is something we explore
00:16:35.500 | By using neural networks
00:16:36.500 | And interesting ways to visualize
00:16:40.500 | What the networks see
00:16:42.500 | And in so doing
00:16:43.500 | Create beautiful visualizations
00:16:45.500 | In time through video
00:16:47.500 | So taking the ideas of deep dream
00:16:50.500 | And combining them together
00:16:52.500 | With multiple video streams
00:16:54.500 | To mix dream and reality
00:16:58.500 | And the competition
00:17:00.500 | Is through Mechanical Turk
00:17:02.500 | We set up a competition of
00:17:05.500 | Who produces the most beautiful visualization
00:17:09.500 | We provide code
00:17:10.500 | To generate this visualization
00:17:12.500 | And ideas of how you can make it
00:17:14.500 | More and more beautiful
00:17:16.500 | And how to submit it
00:17:18.500 | To the competition
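To make the mechanics concrete, here is a minimal Deep-Dream-style sketch: gradient ascent on the input image to amplify whatever features a chosen layer already responds to. It assumes PyTorch and torchvision are installed; the VGG16 backbone, layer index, and step settings are illustrative choices, not the course's provided code:

```python
import torch
import torchvision.models as models

# Pretrained convolutional backbone; we only need its feature layers.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
layer = 20  # which layer's activations to amplify (an assumption)

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(50):
    optimizer.zero_grad()
    x = image
    for i, module in enumerate(model):
        x = module(x)
        if i == layer:
            break
    loss = -x.norm()   # negative, so minimizing it amplifies activations
    loss.backward()
    optimizer.step()
    image.data.clamp_(0, 1)  # keep pixels in a displayable range
```

Running video frames through a loop like this, and blending consecutive frames, is the "mix dream and reality" direction the project describes.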
00:17:21.500 | ANGEL
00:17:23.500 | The artificial neural generator
00:17:24.500 | Of emotion and language
00:17:27.500 | Is a different twist on the Turing test
00:17:31.500 | Where we don't use words
00:17:34.500 | We only use emotions to speak
00:17:37.500 | Expression of those emotions
00:17:39.500 | And we create
00:17:41.500 | We use an agent
00:17:43.500 | A face
00:17:45.500 | Customizable
00:17:47.500 | With 26 muscles
00:17:48.500 | All of which can be controlled
00:17:50.500 | With an LSTM
00:17:52.500 | We use a neural network
00:17:53.500 | To train the generation of emotion
00:17:58.500 | And the competition
00:18:00.500 | When you submit your code
00:18:03.500 | To the competition
00:18:05.500 | Is you get 10 seconds
00:18:08.500 | To impress
00:18:10.500 | With these expressions of emotion
00:18:12.500 | The viewer
00:18:14.500 | It's A/B testing
00:18:15.500 | Your goal is to impress the viewer
00:18:18.500 | Enough to where they choose
00:18:20.500 | Your agent versus another agent
00:18:23.500 | And those that are most loved
00:18:25.500 | The agents most loved
00:18:27.500 | Will be the ones that are declared winners
00:18:31.500 | In a twist
00:18:32.500 | We will add human beings into this mix
00:18:36.500 | So we've created a system
00:18:38.500 | That maps our human faces
00:18:40.500 | Myself and the TAs
00:18:43.500 | To where we ourselves
00:18:44.500 | Enter in the competition
00:18:47.500 | And try to convince you
00:18:49.500 | To keep us as your friend
00:18:51.500 | That's the Turing test
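A rough sketch of how such a generator might be wired, assuming PyTorch; the hidden size, frame rate, and feedback loop are my assumptions, not the actual ANGEL implementation:

```python
import torch
import torch.nn as nn

class EmotionGenerator(nn.Module):
    """LSTM that emits activations for 26 facial muscles per timestep."""
    def __init__(self, hidden_size=64, num_muscles=26):
        super().__init__()
        self.lstm = nn.LSTM(num_muscles, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_muscles)

gen = EmotionGenerator()
state = None                       # LSTM hidden state, carried through time
frame = torch.zeros(1, 1, 26)      # neutral face: all muscles relaxed
frames = []
for _ in range(100):               # ~10 seconds at 10 fps (an assumption)
    out, state = gen.lstm(frame, state)
    frame = torch.sigmoid(gen.head(out))  # 26 activations in [0, 1]
    frames.append(frame)
```

Training would then adjust the weights so the generated muscle sequences read as convincing emotion, which is exactly the signal the Mechanical Turk A/B testing provides.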
00:18:57.500 | ETHICAL CAR
00:18:58.500 | Building in the ideas
00:19:00.500 | Of the trolley problem
00:19:01.500 | And the moral machine
00:19:02.500 | Done here in the media lab
00:19:04.500 | The incredible interesting work
00:19:06.500 | We take a machine learning approach to it
00:19:08.500 | And take what we've developed
00:19:10.500 | The deep reinforcement learning competition
00:19:12.500 | For 6S094
00:19:15.500 | The deep traffic
00:19:17.500 | And we add pedestrians into it
00:19:20.500 | Stochastic
00:19:24.500 | Irrational
00:19:25.500 | Unpredictable pedestrians
00:19:28.500 | And we add human life to the loss function
00:19:31.500 | Where there's a trade-off
00:19:33.500 | Between getting from point A to point B
00:19:36.500 | So in deep traffic
00:19:37.500 | The deep reinforcement learning competition
00:19:39.500 | The goal was to go as fast as possible
00:19:41.500 | Here it's up to you
00:19:43.500 | To decide
00:19:45.500 | What your agent's goal is
00:19:47.500 | There's a Pareto front trade-off
00:19:50.500 | Between getting from point A to point B
00:19:52.500 | As fast as possible
00:19:58.500 | And hurting pedestrians
00:20:08.500 | This is not an ethical question
00:20:11.500 | It's an engineering question
00:20:17.500 | And it's a serious one
00:20:19.500 | Because fundamentally in creating
00:20:22.500 | Autonomous vehicles that function in this world
00:20:25.500 | We want them to get
00:20:27.500 | From point A to point B as quickly as possible
00:20:31.500 | The United States government
00:20:33.500 | Insurance companies
00:20:35.500 | Put a price tag on human life
00:20:38.500 | We put that power in your hands
00:20:40.500 | In designing these agents
00:20:42.500 | To ask the question of
00:20:44.500 | How can we create machine learning systems
00:20:48.500 | Where the objective function
00:20:49.500 | The loss function
00:20:50.500 | Has human life as part of it
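A minimal sketch of that kind of objective, with the price on human life as an explicit free parameter; the function name, units, and constant are illustrative assumptions, and choosing the constant is exactly the design question the competition puts in your hands:

```python
# Reward for a driving agent where human life enters the objective directly.

def reward(distance_advanced_m, pedestrians_hit, value_of_life=1e6):
    """Per-episode reward: progress toward the goal minus a penalty
    for every pedestrian harmed. Units here are illustrative."""
    return distance_advanced_m - value_of_life * pedestrians_hit

# Two hypothetical policies over an episode:
print(reward(500.0, 0))   # slower but safe:     500.0
print(reward(900.0, 1))   # faster, one victim: -999100.0
```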
00:20:53.500 | And Vote.ai is an aggregator
00:20:58.500 | Of different links
00:21:01.500 | Different articles, papers, videos
00:21:04.500 | On the topic of artificial general intelligence
00:21:06.500 | Where people vote
00:21:08.500 | Quality articles up and down
00:21:12.500 | And choose on the sentiment
00:21:15.500 | Of positive and negative
00:21:17.500 | We'd like to explore the different ways
00:21:19.500 | The different arguments for and against
00:21:21.500 | Artificial general intelligence
00:21:26.500 | There is an incredible list of speakers
00:21:29.500 | The best in their disciplines
00:21:32.500 | From Josh Tenenbaum here at MIT
00:21:35.500 | To Ray Kurzweil at Google
00:21:37.500 | To Lisa Feldman Barrett
00:21:39.500 | And Nate Derbinsky from Northeastern University
00:21:43.500 | Andrej Karpathy
00:21:45.500 | Stephen Wolfram
00:21:47.500 | Richard Moyes
00:21:48.500 | Marc Raibert
00:21:49.500 | Ilya Sutskever
00:21:52.500 | And myself
00:21:54.500 | Josh Tenenbaum, tomorrow
00:21:56.500 | I'd like to go through each of these speakers
00:21:59.500 | And talk about the perspectives they bring
00:22:03.500 | To try to see the approach
00:22:06.500 | The ideas they bring to the table
00:22:08.500 | They're not, in most cases, interested
00:22:11.500 | In the discussion of the future impact on society
00:22:17.500 | Without grounding it into their expertise
00:22:20.500 | Into their actual engineering
00:22:22.500 | Into creating these intelligence systems
00:22:24.500 | So Josh is a computational cognitive science
00:22:28.500 | Expert, a professor here at MIT
00:22:31.500 | He will talk about
00:22:33.500 | How we can create common sense understanding
00:22:35.500 | Systems that
00:22:37.500 | See a world of physical objects
00:22:39.500 | And their interactions
00:22:41.500 | And our own possibilities to act and interact with others
00:22:44.500 | The intuitive physics
00:22:45.500 | How do we build into systems
00:22:47.500 | The intuitive physics of the world
00:22:49.500 | More than just the deep learning memorization engines
00:22:53.500 | That take patterns
00:22:55.500 | And learn in a supervised way
00:22:57.500 | To map those patterns to classification
00:23:00.500 | Actually begin to understand the intuitive
00:23:03.500 | The common sense physics of the world
00:23:06.500 | And learn rapid model-based learning
00:23:09.500 | Learn from nothing
00:23:10.500 | Learn from very little
00:23:11.500 | Just like we do as children
00:23:13.500 | Just like we do as human beings successfully
00:23:15.500 | Often only need one example to learn a concept
00:23:18.500 | How do we create systems
00:23:20.500 | That learn from very few
00:23:23.500 | Sometimes a single example
00:23:25.500 | And integrate ideas from various disciplines
00:23:28.500 | Of course from neural networks
00:23:30.500 | But also probabilistic generative models
00:23:33.500 | And symbol processing architectures
00:23:36.500 | It's gonna be incredible
00:23:38.500 | Of course from a different area of the world
00:23:42.500 | Another incredible thinker
00:23:45.500 | Intellectual speaker is Ray Kurzweil
00:23:48.500 | He'll be here on Wednesday at 1pm
00:23:51.500 | And he will do a whirlwind discussion
00:23:56.500 | Of where we stand with intelligence
00:23:59.500 | Creating intelligence systems
00:24:00.500 | How we see natural intelligence
00:24:02.500 | Our own human intelligence
00:24:03.500 | How we define it
00:24:05.500 | How we understand it
00:24:06.500 | And how that transfers to the increasing exponential growth
00:24:11.500 | Of development of artificial general intelligence
00:24:16.500 | Something I'm myself very excited about
00:24:20.500 | Is Lisa Feldman Barrett coming here on Thursday
00:24:25.500 | She's written a book
00:24:27.500 | I believe "How Emotions Are Made"
00:24:30.500 | She argues that emotions are created
00:24:34.500 | That there is a distinction
00:24:36.500 | There is a detachment between what we feel in our bodies
00:24:40.500 | The physical state of our bodies
00:24:42.500 | And the expression of emotion
00:24:44.500 | From body to the contextually grounded
00:24:48.500 | To the face expressing that emotion
00:24:50.500 | Which means
00:24:52.500 | Now why is a psychologist speaking
00:24:56.500 | In a fundamentally engineering, computer science topic like AGI
00:25:00.500 | Because if emotions are created in the way she argues
00:25:04.500 | And she'll systematically break it down
00:25:06.500 | That means, as human beings
00:25:10.500 | We're learning societal norms
00:25:12.500 | Of how to express emotion
00:25:14.500 | The idea of emotional intelligence is learned
00:25:16.500 | Which means we can have machines learn this idea
00:25:21.500 | Just like it's a human learning problem
00:25:23.500 | It's a machine learning problem
00:25:26.500 | In a little bit of a twist
00:25:28.500 | She asked that instead of giving a talk
00:25:30.500 | I have a conversation with her
00:25:33.500 | So it's going to be a little bit challenging and fun
00:25:36.500 | And she's great, I'm looking forward to it
00:25:39.500 | And we'll explore different ways
00:25:42.500 | That we can get emotion expressed
00:25:46.500 | Through video, through audio, through the project
00:25:50.500 | The Angel project that I mentioned
00:25:52.500 | So there's been work in reenacting intelligence
00:25:55.500 | So well reenacting mapping face to face
00:25:59.500 | Mapping different emotions on video that was previously recorded
00:26:03.500 | So if you can imagine
00:26:05.500 | That means we can take emotions that we've created
00:26:09.500 | The kind of emotion creation we've been discussing
00:26:12.500 | And remap it on previous video
00:26:16.500 | That's one way to see intelligence
00:26:18.500 | Is taking raw human data that we already have
00:26:21.500 | And mapping new computer generated
00:26:24.500 | The underlying fundamentals of human
00:26:28.500 | But the surface appearance
00:26:30.500 | The representation of emotion visual or auditory
00:26:35.500 | Is generated by computer
00:26:39.500 | It could be in the embodied form like with Sophia
00:26:43.500 | Sophia, on robots versus humans:
00:26:48.500 | I think we will be similar in a lot of ways
00:26:52.500 | But different in a few others
00:26:54.500 | It will take a long time for robots
00:26:56.500 | To develop complex emotions
00:26:58.500 | And possibly robots can be built
00:27:00.500 | Without the more problematic emotions
00:27:02.500 | Like rage, jealousy, hatred and so on
00:27:06.500 | It might be possible to make them more ethical
00:27:08.500 | Than humans
00:27:10.500 | So I think it will be a good partnership
00:27:12.500 | Where one of them completes the other
00:27:14.500 | Very important to note
00:27:19.500 | For those captivated by Sophia in the press
00:27:22.500 | Or have seen these videos
00:27:24.500 | Sophia is an art exhibit
00:27:27.500 | She's not a strong natural language processing system
00:27:32.500 | This is not an AGI system
00:27:34.500 | But it's a beautiful visualization
00:27:38.500 | Of embodying
00:27:39.500 | It's a beautiful visualization
00:27:42.500 | Of how easy it is to trick us human beings
00:27:44.500 | That there's intelligence underlying something
00:27:47.500 | That the emotional expression
00:27:50.500 | The physical embodiment
00:27:51.500 | And the emotional expression
00:27:53.500 | That has some degree of humor
00:27:57.500 | That has some degree of wit and intelligence
00:28:00.500 | Is enough to captivate us
00:28:02.500 | So that's an argument for
00:28:04.500 | Not creating intelligence from scratch
00:28:06.500 | But having machines at the very surface
00:28:09.500 | The display of that emotion
00:28:11.500 | The generation, the mapping of the visual
00:28:14.500 | And the auditory elements
00:28:16.500 | Where underneath it is really trivial technology
00:28:20.500 | That's fundamentally relying on humans
00:28:22.500 | Like in the Sophia's case
00:28:24.500 | And in the simplest form
00:28:27.500 | We remove all elements of
00:28:30.500 | How should I say
00:28:32.500 | Attractive appearance from an agent
00:28:35.500 | We really keep it to the simplest muscles
00:28:38.500 | Aspect characteristics of the face
00:28:40.500 | And see with 26 muscles
00:28:42.500 | Controlled by a neural network
00:28:43.500 | Through time
00:28:44.500 | So a recurrent neural network, an LSTM
00:28:46.500 | How can we explore the generation of emotion
00:28:50.500 | Can we get this thing
00:28:51.500 | And this is an open question for us too
00:28:53.500 | We just created this system
00:28:54.500 | We don't know if we can
00:28:55.500 | Can we get it to make us feel something
00:28:59.500 | Make us feel something by
00:29:01.500 | Watching it express its feelings
00:29:06.500 | Can it become human before our eyes
00:29:08.500 | Can it learn to
00:29:09.500 | By competing against other agents
00:29:12.500 | A/B testing on Turk
00:29:15.500 | On Mechanical Turk
00:29:17.500 | Can the winners be very convincing
00:29:19.500 | To make us feel entertained
00:29:27.500 | Maybe some of you will fall in love
00:29:28.500 | With Angel here
00:29:32.500 | Nate Derbinsky on Friday
00:29:34.500 | Will talk about
00:29:35.500 | Cognitive modeling architectures
00:29:37.500 | So he will speak about
00:29:38.500 | The cognitive modeling aspect
00:29:40.500 | Can we have a
00:29:42.500 | Can we model cognition
00:29:44.500 | In some kind of systematic way
00:29:46.500 | To try to build intuition
00:29:48.500 | Of how complicated cognition is
00:29:53.500 | Andrej Karpathy
00:29:56.500 | Famous for being the state of the art human
00:29:59.500 | On the ImageNet challenge
00:30:01.500 | The representative
00:30:02.500 | The 95% accuracy performance
00:30:06.500 | Among other things
00:30:07.500 | He's also famous for
00:30:08.500 | He's now at Tesla
00:30:10.500 | He will talk about
00:30:12.500 | The role
00:30:14.500 | The limitations
00:30:15.500 | The possibilities of deep learning
00:30:19.500 | We'll talk
00:30:21.500 | As I have spoken about
00:30:24.500 | In the past few weeks
00:30:25.500 | And throughout
00:30:26.500 | About our misunderstanding
00:30:29.500 | Or our flawed intuition
00:30:31.500 | About what are the difficult
00:30:32.500 | And what are the easy problems
00:30:34.500 | In deep learning
00:30:36.500 | And the power
00:30:37.500 | Of the representational learning
00:30:39.500 | The ability of neural networks
00:30:41.500 | To form deeper and deeper representations
00:30:43.500 | Of the underlying raw data
00:30:45.500 | That ultimately forms
00:30:48.500 | That takes complex information
00:30:50.500 | That's hard to make sense of
00:30:52.500 | And convert it into useful
00:30:54.500 | Actionable knowledge
00:30:57.500 | That is
00:30:59.500 | From a certain lens
00:31:04.500 | In a certain problem space
00:31:06.500 | Can be clearly defined
00:31:08.500 | As understanding
00:31:10.500 | Of the complex information
00:31:11.500 | Understanding is ultimately
00:31:13.500 | Taking complex information
00:31:15.500 | And reducing it to
00:31:16.500 | Its simple essential elements
00:31:19.500 | Representational learning
00:31:21.500 | In the trivial case here
00:31:23.500 | In drawing
00:31:24.500 | Having to draw a straight line
00:31:26.500 | To separate the blue and the red curves
00:31:29.500 | That's impossible to do
00:31:31.500 | In a natural input space on the left
00:31:34.500 | What the act of learning is
00:31:36.500 | For deep neural networks
00:31:37.500 | In this formulation
00:31:39.500 | Is to construct a topology
00:31:40.500 | Under which there exists a straight line
00:31:43.500 | To accurately classify blue versus red
00:31:46.500 | That's the problem
00:31:48.500 | And for a simple blue and red line
00:31:50.500 | It seems trivial here
00:31:51.500 | But this works in a general case
00:31:53.500 | For arbitrary input spaces
00:31:55.500 | For arbitrary nonlinear
00:31:56.500 | Highly dimensional input spaces
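A tiny runnable version of that picture, assuming PyTorch. XOR is the classic case where no straight line separates the classes in the raw inputs, and a small hidden layer learns a representation in which one does:

```python
import torch
import torch.nn as nn

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([0., 1., 1., 0.]).unsqueeze(1)  # XOR labels

model = nn.Sequential(
    nn.Linear(2, 8), nn.Tanh(),  # learned change of representation
    nn.Linear(8, 1),             # a straight line in the new space
)
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print(torch.sigmoid(model(X)).round().squeeze())  # should print 0, 1, 1, 0
```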
00:31:59.500 | And the ability to automatically learn features
00:32:02.500 | To learn hierarchical representations
00:32:07.500 | Of the raw sensory data
00:32:09.500 | Which means that you could do
00:32:10.500 | A lot more with data
00:32:11.500 | Which means you can expand
00:32:13.500 | Further and further and further
00:32:15.500 | To create intelligent systems
00:32:17.500 | That operate successfully
00:32:19.500 | With real world data
00:32:20.500 | That's what representational learning means
00:32:22.500 | That deep learning allows
00:32:24.500 | Because the arbitrary number of features
00:32:26.500 | That can be automatically determined
00:32:28.500 | You can learn a lot of things
00:32:30.500 | About a pretty complex world
00:32:32.500 | Unfortunately
00:32:34.500 | There needs to be a lot of supervised data
00:32:36.500 | There still needs to be a lot of human input
00:32:41.500 | Andrej and others, Josh
00:32:43.500 | Will talk about the difference
00:32:45.500 | Between our human brain
00:32:47.500 | Our biological neural network
00:32:49.500 | And the artificial neural network
00:32:52.500 | The full human brain
00:32:53.500 | With a hundred billion neurons
00:32:55.500 | One thousand trillion synapses
00:32:58.500 | And the biggest neural networks out there
00:33:01.500 | The artificial neural networks
00:33:03.500 | Having much smaller
00:33:05.500 | 60 million synapses for ResNet-152
00:33:09.500 | The biggest difference
00:33:11.500 | The parameters of the human brain
00:33:13.500 | Being several orders of magnitude
00:33:15.500 | More synapses
00:33:17.500 | The topology being much more complex
00:33:20.500 | Chaotic
00:33:21.500 | The asynchronous nature of the human brain
00:33:24.500 | And the learning algorithm
00:33:25.500 | Of artificial neural networks
00:33:27.500 | Is trivial and constrained
00:33:30.500 | With back propagation
00:33:31.500 | Is essentially an optimization function
00:33:33.500 | Over a clearly defined loss function
00:33:37.500 | From the output to the input
00:33:40.500 | Using back propagation to teach
00:33:42.500 | To adjust the weights on that network
00:33:44.500 | The learning algorithm
00:33:46.500 | For our human brain
00:33:48.500 | Is mostly unknown
00:33:50.500 | But it's certainly much more complicated
00:33:52.500 | Than back propagation
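For contrast with the brain's unknown algorithm, here is how small the artificial version is: one sigmoid neuron trained by the chain rule, backpropagation in miniature, in plain NumPy. The data and learning rate are arbitrary illustrative choices:

```python
import numpy as np

x = np.array([0.5, -1.0, 2.0])    # one input example
target = 1.0
w, b = np.zeros(3), 0.0

for step in range(100):
    z = w @ x + b                 # forward pass
    y = 1.0 / (1.0 + np.exp(-z))  # sigmoid activation
    loss = (y - target) ** 2      # clearly defined loss function
    dloss_dz = 2 * (y - target) * y * (1 - y)  # chain rule, output to pre-activation
    w -= 0.5 * dloss_dz * x       # gradient step on the weights
    b -= 0.5 * dloss_dz           # and on the bias

print(round(float(loss), 6))      # loss shrinks toward 0
```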
00:33:54.500 | The power consumption
00:33:57.500 | The human brain is a lot more efficient
00:33:59.500 | Than artificial neural networks
00:34:01.500 | And there's a very kind of artificial
00:34:04.500 | Trivial supervised learning process
00:34:09.500 | For training artificial neural networks
00:34:11.500 | You have to have a training stage
00:34:13.500 | And you have to have an evaluation stage
00:34:15.500 | And once the network is trained
00:34:17.500 | There's no clear way to continue training it
00:34:19.500 | Or there's a lot of ways
00:34:21.500 | But they're inefficient
00:34:22.500 | It's not designed to do online learning
00:34:25.500 | Naturally
00:34:27.500 | To always be learning
00:34:29.500 | It's designed to learn
00:34:31.500 | And then be applied
00:34:33.500 | Obviously our human brains are always learning
00:34:36.500 | But the beautiful fascinating thing is
00:34:39.500 | That they're both distributed
00:34:41.500 | Computation systems on a large scale
00:34:43.500 | So it's not
00:34:46.500 | It doesn't ultimately boil down
00:34:48.500 | To a single compute unit
00:34:51.500 | The computation is distributed
00:34:53.500 | The back propagation learning process
00:34:55.500 | Is distributed
00:34:56.500 | Can be parallelized on a GPU
00:34:58.500 | Massively parallelized
00:35:00.500 | The underlying computational unit of a neuron
00:35:03.500 | Is trivial
00:35:04.500 | But can be stacked together
00:35:06.500 | To form feedforward neural networks
00:35:07.500 | Recurrent neural networks
00:35:09.500 | To represent both spatial information with images
00:35:13.500 | And temporal information
00:35:16.500 | With audio speech
00:35:20.500 | Sequences of images and video
00:35:23.500 | And so on
00:35:25.500 | Mapping from one to one
00:35:26.500 | One to many
00:35:27.500 | Many to one
00:35:28.500 | So mapping any kind of structure
00:35:31.500 | Vector and time data
00:35:33.500 | As an input
00:35:34.500 | To any kind of classification regression
00:35:37.500 | Sequences
00:35:38.500 | Captioning
00:35:39.500 | Video
00:35:40.500 | Audio as output
00:35:41.500 | Learning in the general sense
00:35:44.500 | But in a domain
00:35:46.500 | That's precisely defined
00:35:48.500 | For the supervised training process
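In code terms, the same recurrent core supports those mappings depending simply on which outputs you keep; a shape-level sketch, assuming PyTorch, with arbitrary sizes:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
x = torch.randn(4, 10, 16)   # batch of 4 sequences, 10 steps, 16 features

out, (h, c) = lstm(x)
print(out.shape)             # torch.Size([4, 10, 32]) -> many to many
print(h[-1].shape)           # torch.Size([4, 32])     -> many to one

classify = nn.Linear(32, 5)  # e.g. 5-way sequence classification
print(classify(h[-1]).shape) # torch.Size([4, 5])
```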
00:35:52.500 | We can think of
00:35:55.500 | In the deep learning case
00:35:57.500 | You can think of the supervised methods
00:35:59.500 | Where humans have to annotate the data
00:36:02.500 | As memorization of the data
00:36:04.500 | We can think of the exciting new
00:36:06.500 | And growing field of
00:36:07.500 | Semi-supervised learning
00:36:09.500 | When most of the data
00:36:10.500 | Through generative adversarial networks
00:36:13.500 | Or through significant data augmentation
00:36:16.500 | Clever data augmentation
00:36:17.500 | Most of it is done automatically
00:36:19.500 | The annotation process
00:36:20.500 | Or through simulation
00:36:22.500 | And then reinforcement learning
00:36:23.500 | Where most of the
00:36:25.500 | Most of the labels are extremely sparse
00:36:28.500 | And come rarely
00:36:29.500 | And so the system has to figure out
00:36:31.500 | How to operate in the world
00:36:32.500 | With very little human input
00:36:33.500 | Very little human data
00:36:36.500 | We can think of that as reasoning
00:36:40.500 | Because you take very little information
00:36:41.500 | From our teachers
00:36:42.500 | The humans
00:36:43.500 | And transfer it across
00:36:44.500 | Generalize it across
00:36:46.500 | To reason about the world
00:36:48.500 | And finally unsupervised learning
00:36:50.500 | The excitement of the community
00:36:52.500 | The promise, the hope
00:36:54.500 | You could think of that as understanding
00:36:56.500 | Because ultimately it's taking data
00:36:58.500 | With very little or no human input
00:37:00.500 | And forming representations of that data
00:37:03.500 | That is how we think of understanding
00:37:04.500 | It requires making sense of the world
00:37:08.500 | Without strict input
00:37:10.500 | Of how to make sense of the world
00:37:13.500 | The kind of process
00:37:14.500 | Of discovering information
00:37:17.500 | Maybe discovering new ideas
00:37:19.500 | New ways to simplify the world
00:37:21.500 | To represent the world
00:37:22.500 | That you can do new things with it
00:37:24.500 | The new is the key element there
00:37:26.500 | Understanding
00:37:28.500 | And Andrej and Ilya
00:37:31.500 | And others will talk about
00:37:33.500 | Certainly the past
00:37:34.500 | But the future of deep learning
00:37:36.500 | Where is it going to go?
00:37:38.500 | Is it overhyped?
00:37:40.500 | Underhyped?
00:37:41.500 | What is the future?
00:37:42.500 | Will the compute of CPU, GPU, ASICs
00:37:46.500 | Continue?
00:37:47.500 | Will the breakthroughs
00:37:48.500 | The Moore's law
00:37:49.500 | And its various forms
00:37:50.500 | Of massive parallelization continue?
00:37:52.500 | And the large data sets
00:37:54.500 | With tens of millions of images
00:37:56.500 | Grow to billions and trillions
00:37:59.500 | Will the algorithms improve?
00:38:01.500 | Is there a groundbreaking idea
00:38:03.500 | That's still coming?
00:38:05.500 | With Geoff Hinton's capsule networks
00:38:07.500 | Is there fundamental architectural changes
00:38:10.500 | To neural networks
00:38:11.500 | That we can come up with
00:38:13.500 | That will change everything
00:38:14.500 | That will ease the learning process
00:38:16.500 | That will make the learning process
00:38:17.500 | More efficient
00:38:19.500 | Or we'll be able to represent
00:38:21.500 | Higher and higher orders of information
00:38:24.500 | Such that you can
00:38:26.500 | Transfer knowledge between domains
00:38:29.500 | And the software architectures
00:38:31.500 | That support it, from TensorFlow to PyTorch
00:38:34.500 | I would say the last year
00:38:36.500 | And this year will be the year
00:38:37.500 | Of deep learning frameworks
00:38:39.500 | Those will certainly keep coming
00:38:40.500 | In their various forms
00:38:42.500 | And the financial backing
00:38:44.500 | Is growing and growing
00:38:47.500 | The open challenges for deep learning
00:38:50.500 | Really a lot of this course
00:38:52.500 | Is kind of connected to deep learning
00:38:54.500 | Because that's where a lot
00:38:56.500 | Of the recent breakthroughs
00:38:58.500 | That inspire us
00:39:00.500 | To think about intelligence systems
00:39:02.500 | Come from
00:39:03.500 | But the challenges are many
00:39:05.500 | The need, the ability to transfer
00:39:07.500 | Between different domains
00:39:09.500 | As in reinforcement learning and robotics
00:39:11.500 | The need for huge data
00:39:13.500 | And inefficient learning
00:39:15.500 | We still need supervised data
00:39:18.500 | An ability to learn
00:39:20.500 | In an unsupervised way
00:39:22.500 | Is a huge problem
00:39:23.500 | And not fully automated learning
00:39:26.500 | There's still a degree
00:39:27.500 | A significant degree of hyperparameter tuning
00:39:29.500 | That's necessary
00:39:30.500 | And the reward functions
00:39:32.500 | The loss functions
00:39:33.500 | Are ultimately defined by humans
00:39:36.500 | And therefore are deeply flawed
00:39:38.500 | When we release those systems
00:39:40.500 | Into the real world
00:39:41.500 | Where there is no ground truth
00:39:43.500 | For the testing set
00:39:45.500 | And the goal isn't achieving
00:39:47.500 | A high classification accuracy
00:39:49.500 | On a trivial image classification
00:39:52.500 | Localization, detection problem
00:39:54.500 | But rather to have an autonomous vehicle
00:39:56.500 | That doesn't kill pedestrians
00:39:58.500 | Or an industrial robot
00:40:02.500 | That operates jointly
00:40:04.500 | With other human beings
00:40:05.500 | And all the edge cases that come up
00:40:08.500 | How do deep learning methods
00:40:10.500 | How do machine learning methods
00:40:11.500 | Generalize over the edge cases
00:40:12.500 | The weird stuff that happens
00:40:13.500 | In the real world
00:40:14.500 | Those are all the problems there
00:40:16.500 | Stephen Wolfram will be here on Monday
00:40:19.500 | Evening at 7 p.m.
00:40:22.500 | He's done a lot of amazing things
00:40:25.500 | I would say
00:40:26.500 | It's very interesting
00:40:27.500 | From his recent interest
00:40:29.500 | In knowledge-based programming
00:40:31.500 | Wolfram Alpha
00:40:32.500 | I think is the fuel
00:40:34.500 | For most middle school
00:40:36.500 | And high school students now
00:40:38.500 | For the first time taking calculus
00:40:41.500 | They probably go to Wolfram Alpha
00:40:43.500 | To answer their own questions
00:40:44.500 | But more seriously
00:40:46.500 | There is a deep connected graph
00:40:49.500 | Of knowledge that's being built there
00:40:51.500 | With Wolfram Alpha
00:40:53.500 | And Wolfram language
00:40:55.500 | That Stephen will explore
00:40:57.500 | In terms of language
00:40:59.500 | An interesting thing
00:41:01.500 | He was part of the team on Arrival
00:41:04.500 | That worked on the language
00:41:06.500 | For those of you who are familiar
00:41:08.500 | Arrival, where an alien species spoke with
00:41:11.500 | Us humans
00:41:14.500 | Through a very interesting
00:41:16.500 | Beautiful complicated language
00:41:18.500 | And he was brought in
00:41:19.500 | As a representative human
00:41:21.500 | To interpret that language
00:41:23.500 | Just like in the movie
00:41:25.500 | He's represented that in real life
00:41:27.500 | And he used the skills
00:41:29.500 | He and his son Christopher
00:41:31.500 | Used to analyze this language
00:41:33.500 | Very interesting
00:41:34.500 | That process is extremely interesting
00:41:35.500 | I hope he talks about it
00:41:37.500 | And his background with Mathematica
00:41:39.500 | And A New Kind of Science
00:41:42.500 | The sort of
00:41:45.500 | Another set of ideas
00:41:48.500 | That have inspired people
00:41:50.500 | In terms of creating
00:41:53.500 | Intelligence systems
00:41:55.500 | Is the idea that
00:41:59.500 | From very simple things
00:42:02.500 | From very simple rules
00:42:04.500 | Extremely complex
00:42:06.500 | Patterns can emerge
00:42:08.500 | His work with cellular automata
00:42:10.500 | Did just that
00:42:11.500 | Taking extremely simple
00:42:13.500 | Mathematical constructs
00:42:16.500 | Here with cellular automata
00:42:18.500 | These are
00:42:19.500 | These are grids
00:42:21.500 | Of computational units
00:42:22.500 | That switch on and off
00:42:24.500 | In some kind of predefined way
00:42:26.500 | And only operate locally
00:42:27.500 | Based on their local neighborhood
00:42:29.500 | And somehow based on
00:42:30.500 | Different kinds of rules
00:42:32.500 | Different patterns emerge
00:42:33.500 | Here's a three-dimensional cellular automaton
00:42:35.500 | With a simple rule
00:42:36.500 | Starting with nothing
00:42:37.500 | With a single cell
00:42:38.500 | They grow in really
00:42:40.500 | Interesting complex ways
00:42:41.500 | This emergent complexity
00:42:43.500 | Is inspiring
00:42:45.500 | It's the same kind of
00:42:46.500 | Thing that inspires us
00:42:47.500 | About neural networks
00:42:49.500 | You can take a simple
00:42:50.500 | Computational unit
00:42:51.500 | And when combined together
00:42:52.500 | In arbitrary ways
00:42:53.500 | Can form complex representations
00:42:56.500 | That's also very interesting
00:42:59.500 | You can see knowledge
00:43:00.500 | From a knowledge perspective
00:43:01.500 | You can see knowledge formation
00:43:02.500 | In the same kind of way
00:43:04.500 | Simplicity
00:43:06.500 | At a mass distributed scale
00:43:09.500 | Resulting in complexity
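A one-dimensional stand-in for that demonstration (the lecture's example is three-dimensional, but the principle is identical): an elementary cellular automaton, Wolfram's Rule 30, where each cell updates from only its immediate neighborhood, yet complex structure emerges from a single cell:

```python
RULE = 30
WIDTH, STEPS = 64, 32

cells = [0] * WIDTH
cells[WIDTH // 2] = 1   # start from a single "on" cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    # Each cell's next state is a bit of RULE, indexed by its
    # (left, self, right) neighborhood read as a 3-bit number.
    cells = [
        (RULE >> (cells[(i - 1) % WIDTH] * 4 + cells[i] * 2
                  + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```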
00:43:11.500 | Next Tuesday
00:43:13.500 | Richard Moyes from Article 36
00:43:16.500 | Coming all the way from UK
00:43:18.500 | For us
00:43:19.500 | He'll talk about
00:43:20.500 | His work with autonomous weapon systems
00:43:23.500 | He also works with
00:43:25.500 | Nuclear weapons
00:43:26.500 | But primarily autonomous weapon systems
00:43:29.500 | And the legal, policy
00:43:32.500 | And technological aspects
00:43:34.500 | Of banning these weapons
00:43:35.500 | There's been a lot of agreement
00:43:37.500 | About the safety hazards
00:43:39.500 | Of autonomous systems
00:43:41.500 | That make decisions
00:43:42.500 | To kill a human being
00:43:44.500 | Marc Raibert
00:43:46.500 | CEO of Boston Dynamics
00:43:48.500 | Previously
00:43:50.500 | A long time ago
00:43:51.500 | Faculty here at MIT
00:43:53.500 | He'll talk about
00:43:54.500 | He'll bring robots
00:43:55.500 | And talk to us about
00:43:57.500 | His work
00:43:58.500 | Of robots in the real world
00:44:01.500 | Boston Dynamics is doing a lot of exciting stuff
00:44:03.500 | With humanoid robotics
00:44:04.500 | And any kind of robots
00:44:05.500 | Operating on legs
00:44:07.500 | It's incredible work
00:44:08.500 | Extremely exciting
00:44:10.500 | And he gets to explore the idea
00:44:12.500 | Of how difficult it is
00:44:13.500 | To build these robot systems
00:44:14.500 | That operate in the real world
00:44:17.500 | From both the control aspect
00:44:21.500 | And from the way
00:44:24.500 | The final result is perceived
00:44:26.500 | By our society
00:44:28.500 | It's very interesting
00:44:29.500 | To see how intelligence
00:44:31.500 | In robotics is embodied
00:44:33.500 | And then taken in by us
00:44:35.500 | And what that inspires
00:44:37.500 | Fear, excitement, hope, concern
00:44:41.500 | And all the above
00:44:43.500 | Ilya Sutskever is
00:44:46.500 | An expert in many aspects
00:44:48.500 | Of machine learning
00:44:49.500 | He's a co-founder of OpenAI
00:44:52.500 | He'll talk about their
00:44:55.500 | Different aspects of game playing
00:44:58.500 | That they've recently been exploring
00:45:00.500 | About using
00:45:01.500 | Deep reinforcement learning
00:45:02.500 | To play arcade games
00:45:04.500 | On the DeepMind side
00:45:08.500 | Using deep reinforcement learning
00:45:10.500 | To beat the best in the world
00:45:11.500 | At the game of Go
00:45:13.500 | In 2017
00:45:15.500 | The big fascinating breakthrough
00:45:17.500 | Achieved by that team
00:45:18.500 | With AlphaGo Zero
00:45:19.500 | Training an agent
00:45:20.500 | That through self-play
00:45:21.500 | Playing itself
00:45:22.500 | Not on expert games
00:45:24.500 | So truly from scratch
00:45:26.500 | Learning to beat
00:45:27.500 | The best in the world
00:45:28.500 | Including the previous
00:45:29.500 | Iteration of AlphaGo
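To make learning from self-play concrete at toy scale, here is a hedged sketch: tabular Monte Carlo learning on the game of Nim rather than Go, using only the standard library. It illustrates the self-play idea, not AlphaGo Zero's actual algorithm, which adds deep networks and Monte Carlo tree search:

```python
import random
from collections import defaultdict

# Nim: take 1-3 stones from a pile of 21; taking the last stone wins.
Q = defaultdict(float)   # Q[(stones_left, stones_taken)], from the mover's view
EPS, LR = 0.2, 0.5

for episode in range(20000):
    stones, history = 21, []
    while stones > 0:     # the agent plays BOTH sides: pure self-play
        moves = [m for m in (1, 2, 3) if m <= stones]
        if random.random() < EPS:
            move = random.choice(moves)
        else:
            move = max(moves, key=lambda m: Q[(stones, m)])
        history.append((stones, move))
        stones -= move
    ret = 1.0             # +1 for the winning (last) move...
    for state, move in reversed(history):
        Q[(state, move)] += LR * (ret - Q[(state, move)])
        ret = -ret        # ...alternating sign back up the game

# Starting from random play, this should typically discover the known
# strategy: leave the opponent a multiple of 4 (take 1 from 21).
print(max((1, 2, 3), key=lambda m: Q[(21, m)]))
```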
00:45:31.500 | We'll explore
00:45:32.500 | What aspects of the stack
00:45:35.500 | Of intelligent robotic systems
00:45:37.500 | Intelligent agents
00:45:38.500 | Can be learned in this way
00:45:40.500 | So deep learning
00:45:41.500 | The memorization
00:45:42.500 | The supervised learning
00:45:44.500 | Memorization approach
00:45:45.500 | It looks at the
00:45:47.500 | Sensor data feature extraction
00:45:49.500 | Representation learning aspect of this
00:45:52.500 | Taking the sensor data
00:45:53.500 | From camera, lidar, audio
00:45:55.500 | Extracting the features
00:45:57.500 | Forming higher order representations
00:46:00.500 | And on those representations
00:46:01.500 | Learning to actually accomplish
00:46:02.500 | Some kind of classification
00:46:04.500 | Regression task
00:46:05.500 | Figuring out
00:46:07.500 | Based on the representation
00:46:08.500 | What is going on
00:46:09.500 | In the raw sensory data
00:46:11.500 | And then combining that data together
00:46:13.500 | To reason about it
00:46:15.500 | And finally in the robotic domains
00:46:18.500 | Taking it all together
00:46:19.500 | As with humanoid robotics
00:46:22.500 | Industrial robotics
00:46:23.500 | Autonomous vehicles
00:46:24.500 | Taking it all together
00:46:25.500 | And actually acting in this world
00:46:27.500 | With the effectors
00:46:28.500 | And the open question is
00:46:31.500 | How much of this AI stack
00:46:32.500 | Can be learned?
00:46:34.500 | That's something for us to discuss
00:46:37.500 | To think about
00:46:38.500 | That Ilya will touch on
00:46:40.500 | With deep reinforcement learning
00:46:42.500 | We can certainly learn representations
00:46:44.500 | And perform classifications
00:46:46.500 | State of the art
00:46:47.500 | Better than human
00:46:48.500 | In image classification
00:46:49.500 | On ImageNet
00:46:50.500 | And segmentation tasks
00:46:53.500 | And the excitement of deep learning
00:46:55.500 | Is what's highlighted there in the red box
00:46:57.500 | Can be done end to end
00:46:58.500 | Raw sensory data
00:46:59.500 | Out to the knowledge
00:47:00.500 | To the output
00:47:01.500 | To the classification
00:47:02.500 | Can we begin to reason?
00:47:04.500 | Is the open question
00:47:05.500 | With the knowledge-based programming
00:47:08.500 | That Stephen Wolfram will talk about
00:47:09.500 | Can we begin to
00:47:11.500 | Take these automatically generated
00:47:13.500 | High order representations
00:47:15.500 | And combine them together
00:47:17.500 | To form knowledge bases
00:47:19.500 | To form aggregate graphs of ideas
00:47:22.500 | That can then be used to reason
00:47:24.500 | And can we then combine them together
00:47:28.500 | To act in the world
00:47:31.500 | For whether in simulation
00:47:32.500 | With arcade games
00:47:34.500 | Or simulation of autonomous vehicles
00:47:36.500 | Robotic systems
00:47:37.500 | Or actually in the physical world
00:47:38.500 | With robots moving about
00:47:40.500 | Can that end to end
00:47:42.500 | From raw sensory data
00:47:43.500 | To action be learned?
00:47:46.500 | That's the open question
00:47:48.500 | For artificial general intelligence
00:47:51.500 | For this class
00:47:52.500 | Can this entire process
00:47:55.500 | Be end to end?
00:47:57.500 | Can we build systems
00:47:59.500 | And how do we do it
00:48:01.500 | That achieve this process end to end
00:48:03.500 | In the same way that humans do?
00:48:05.500 | We're born in this raw sensory environment
00:48:07.500 | Taking in very little information
00:48:10.500 | And learn to operate successfully
00:48:13.500 | In arbitrary constraints
00:48:17.500 | Arbitrary goals
00:48:19.500 | And to do so
00:48:21.500 | We have lectures
00:48:22.500 | We have three projects
00:48:24.500 | And we have guest speakers
00:48:27.500 | From various disciplines
00:48:29.500 | I hope that all these voices
00:48:31.500 | Will be heard
00:48:33.500 | And will feed a conversation
00:48:35.500 | About artificial intelligence
00:48:38.500 | And its positive
00:48:41.500 | And its concerning effects in society
00:48:44.500 | And how do we move forward
00:48:45.500 | From an engineering approach
00:48:47.500 | The topics will be deep learning
00:48:49.500 | Deep reinforcement learning
00:48:51.500 | Cognitive modeling
00:48:52.500 | Computational cognitive science
00:48:54.500 | Emotion creation
00:48:55.500 | Knowledge based programming
00:48:56.500 | AI safety with autonomous weapon systems
00:48:59.500 | And personal robotics
00:49:01.500 | With human centered artificial intelligence
00:49:03.500 | That's for the first two weeks of this class
00:49:05.500 | That's the part
00:49:07.500 | Where if you're actually registered students
00:49:09.500 | That's where you need to submit the project
00:49:11.500 | That's when we all meet here
00:49:13.500 | Every night with the incredible speakers
00:49:15.500 | But this will continue
00:49:17.500 | We already have several speakers scheduled
00:49:19.500 | In the next couple of months
00:49:21.500 | Yet to be announced
00:49:22.500 | But they're incredible
00:49:24.500 | And we have conversations
00:49:26.500 | On video
00:49:27.500 | And we have new projects
00:49:29.500 | I hope this continues throughout 2018
00:49:32.500 | On the topics of AI ethics
00:49:34.500 | And bias
00:49:35.500 | There's a lot of incredible work
00:49:37.500 | In this area, and we have a speaker coming
00:49:40.500 | On the topic of
00:49:41.500 | How do we create artificial intelligence systems
00:49:43.500 | That do not discriminate
00:49:45.500 | Do not form the kind of biases
00:49:47.500 | That us humans do in this world
00:49:50.500 | That are operating under social norms
00:49:53.500 | But are reasoning beyond the flawed aspects
00:49:57.500 | Of those social norms
00:49:59.500 | With bias
00:50:01.500 | Creativity as with the project
00:50:03.500 | Of Dream Vision and beyond
00:50:05.500 | There's so much exciting work
00:50:07.500 | In using machine learning methods
00:50:10.500 | To create beautiful art and music
00:50:13.500 | Brain simulation, neuroscience
00:50:17.500 | Computational neuroscience
00:50:18.500 | Shockingly, in the first two weeks
00:50:20.500 | We don't have a computational neuroscience speaker
00:50:23.500 | Which is a fascinating perspective
00:50:26.500 | Brain simulation or neuroscience in general
00:50:30.500 | Computational neuroscience
00:50:31.500 | Is a fascinating approach
00:50:33.500 | From the muck of actual brain work
00:50:37.500 | To get the perspective of how our brain works
00:50:39.500 | And how we can create something that mimics
00:50:41.500 | That resembles the fundamentals
00:50:44.500 | Of what makes our brain intelligent
00:50:46.500 | And finally, the Turing test
00:50:48.500 | The traditional definition of intelligence
00:50:51.500 | Defined by Alan Turing
00:50:52.500 | Was grounded in natural language processing
00:50:55.500 | Creating chatbots that impress us
00:50:57.500 | That amaze us
00:50:58.500 | And trick us into thinking they're human
00:51:00.500 | We will have a project
00:51:03.500 | And a speaker
00:51:04.500 | On natural language processing
00:51:06.500 | In March
00:51:08.500 | With that, I'd like to thank you for coming today
00:51:10.500 | And look forward to seeing your submissions
00:51:12.500 | For the three projects
00:51:14.500 | Thank you very much