
MIT AGI: Artificial General Intelligence


Chapters

0:00 Intro
0:23 MIT AGI Mission: Engineer Intelligence
3:41 Balance Between Paralyzing Technophobia and Blindness to Big-Picture Consequences
11:02 Human Drive to Explore and Uncover the Mysteries of the Universe
17:23 ANGEL: Artificial Neural Generator of Emotion and Language
18:56 EthicalCar: Machine Learning Approach
21:53 Josh Tenenbaum, MIT: Computational Cognitive Science
24:16 Lisa Feldman Barrett, NEU: Emotion Creation
25:40 Re-Enacting Intelligence
26:41 Sophia: Embodied Re-Enactment
29:32 Nate Derbinsky, NEU: Cognitive Modeling
30:37 Deep Learning is Representation Learning (aka Feature Learning)
33:09 Neuron: Biological Inspiration for Computation
35:49 Deep Learning from Human and Machine
37:28 Past and Future of Deep Learning Breakthroughs
38:45 Current Challenges
40:17 Stephen Wolfram: Knowledge-Based Programming
41:38 "Artificial Life Simulation": Cellular Automata and Emerging Complexity
43:13 Richard Moyes, Article 36: AI Safety and Autonomous Weapon Systems
43:46 Marc Raibert, CEO, Boston Dynamics: Robots in the Real World

Transcript

Welcome to course 6.S099, Artificial General Intelligence. We will explore the nature of intelligence from, as much as possible, an engineering perspective. You will hear many voices; my voice will be that of an engineer. Our mission is to engineer intelligence.

The MIT motto is "Mind and Hand." What that means is that we want to explore the fundamental science of what makes an intelligent system, the core concepts behind our understanding of what intelligence is, but we always want to ground it in the creation of intelligent systems. We always want to be in the now, in today: in understanding how, today, we can build artificial intelligence systems that can make for a better world. That is the core for us here at MIT. First and foremost, we're scientists and engineers; our goal is to engineer intelligence.

We want to provide with this approach a balance to the very important but over-represented view of artificial general intelligence: the black-box reasoning view, where the idea is, once we know how to create a human-level intelligence system, how will society be impacted? Will robots take over and kill everyone? Will we achieve a utopia that removes the need to do any of the messy jobs and makes us all extremely happy?
Those kinds of beautiful philosophical concepts are interesting to explore, but that's not what we're interested in doing. I believe that, from an engineering perspective, we want to focus on the black box of AGI itself: start to build insights and intuitions about how we create systems that approach human-level intelligence.

I believe we're very far away from creating anything resembling human-level intelligence. However, the dimension of the metric behind the word "far" may not be time. In time, perhaps through a few breakthroughs, maybe even one breakthrough, everything can change. But as we stand now, our current methods, as we will explore through the various ideas, approaches, and guest speakers coming here over the next two weeks and beyond — our best understanding, our best intuitions and insights — are not yet at the level of reaching human-level intelligence without a major leap, a breakthrough, a paradigm shift.

So it's not constructive to consider the impact of artificial intelligence — to consider questions of safety and ethics, fundamental, extremely important questions — without also deeply considering the black box of the actual methods of artificial intelligence, of human-level artificial intelligence. That's what I hope this course can be in its first iteration, its first exploratory attempt: to look at different approaches to how we can engineer intelligence. That's the role of MIT and its tradition of mind and hand: to consider the big picture, the future impact on society 10, 20, 30, 40 years out, but fundamentally grounded in what kinds of methods we have today, and what their limitations and possibilities are for achieving that — the black box of AGI — and the future impact on society of creating artificial intelligence systems that become increasingly more intelligent.

The fundamental disagreement lies at the very core of that black box, which is: how hard is it to build an AGI system? How hard is it to create a human-level artificial intelligence system? That's the open question for all of us, from Josh Tenenbaum to Andrej Karpathy, to folks from OpenAI to Boston Dynamics, to brilliant leaders in various fields of artificial intelligence who will come here. That's the open question: how hard is it? There have been a lot of incredibly impressive results in deep learning, in neuroscience, in computational cognitive science, in robotics. But how far do we still have to go to AGI? That's the fundamental question we need to explore before we consider the future impact on society. The goal for this class is to build intuition, one talk at a time, one project at a time — intuition about where we stand, about what the limitations of current approaches are, and how we can close the gap.

A nice meme I caught on Twitter recently captures the difference. On the left is the engineering approach at its very simplest: a Google intern typing a for loop that just does a grid search on parameters for a neural network. On the right is the way the media would report this for loop: "The Google AI created its own baby AI." I think it's easy for us to go one way or the other, but we'd like to do both. Our first goal is to avoid the pitfalls of black-box thinking, of the futurist thinking that results in hype detached from a scientific, engineering understanding of what the actual systems are doing. That's what the media often reports, and it's what some of our speakers will explore in a rigorous way — it's still an important topic. Ray Kurzweil on Wednesday will explore this topic. Next week, the talk on AI safety and autonomous weapon systems will explore this topic: the future impact 10, 20 years out. How do we design systems today that will lead to safe systems tomorrow?
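The "intern for loop" in that meme is an ordinary, real technique: hyperparameter grid search. A minimal sketch of what such a loop actually does — the scoring function and parameter values here are hypothetical stand-ins, not anyone's real training code:

```python
# Hyperparameter grid search: the "for loop" behind many AI headlines.
# train_and_evaluate is a toy stand-in for training a network and
# returning validation accuracy, so the sketch runs on its own.
from itertools import product

def train_and_evaluate(lr, hidden_units):
    # Hypothetical score peaking at lr=0.01, hidden_units=64.
    return 1.0 - abs(lr - 0.01) * 10 - abs(hidden_units - 64) / 1000.0

learning_rates = [0.1, 0.01, 0.001]
hidden_sizes = [32, 64, 128]

best_score, best_params = float("-inf"), None
for lr, hidden in product(learning_rates, hidden_sizes):
    score = train_and_evaluate(lr, hidden)
    if score > best_score:
        best_score, best_params = score, (lr, hidden)

print(best_params)  # -> (0.01, 64)
```

Nothing here "creates a baby AI": the loop just tries every combination and keeps the best one.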
Still, that's very important. But the reality is that a lot of us need to put a lot more emphasis on the left side — on the for loops, on creating these systems. At the same time, the second goal of what we're trying to do here is not to emphasize the silliness, the simplicity, the naive, basic nature of that for loop. In the same way as with the process of creating nuclear weapons before and during World War II, the idea that "as an engineer, as a scientist, I'm just a scientist" is also a flawed way of thinking. We have to consider the big-picture impact: the near-term negative consequences that are preventable, the low-hanging fruit that can be addressed through that very engineering process. We have to do both.

And in this engineering approach, we always have to be cautious. Just because our intuition, our best understanding of the capabilities of modern systems that learn and act in this world, seems limited, seems far from human-level intelligence — our ability to learn and represent common-sense reasoning seems limited — the exponential (potentially exponential; it could be argued, and Kurzweil will argue it) growth of technology, of these ideas, means that just around the corner could be a singularity, a breakthrough idea that changes everything. We have to be cautious of that. Moreover, we have to be cautious of the fact that every decade over the past century, our adoption of new technologies has gotten faster and faster: the time from a technology's birth to its wide mass adoption has shortened and shortened. That means a new idea, the moment it drops into the world, can have widespread effects overnight. So even though the engineering approach is, I think, fundamentally cynical about artificial general intelligence — because every aspect of it is so difficult — we always have to remember that overnight, everything can change.

As we begin to approach these questions — from a deep learning perspective, from deep reinforcement learning, from brain simulation, from computational cognitive science, from computational neuroscience, from cognitive architectures, from robotics, from legal perspectives on autonomous weapon systems — we need to start to build intuition: how far away are we from creating intelligent systems? The singularity here is that spark, that moment when we're truly surprised by the intelligence of the systems we create. I'd like to visualize it with an analogy: we're in a dark room looking for a light switch, with no knowledge of where the light switch is. There will be people who say, well, the room is small, we're right there, we'll find it in no time. The reality is we know very little, so we have to stumble around, feel our way around, to build the intuition of how far away we really are. Many of the speakers here will talk about how we define intelligence, how we can begin to see intelligence, and what the fundamental impacts of creating intelligent systems are.

I'd like to see the positive reason for this little class, and for these efforts that have fascinated people throughout the century — trying to create intelligent systems — as this: there's something about human beings that craves to explore, to uncover the mysteries of the universe. A desire, fundamental in itself, not for a purpose — though there's often an underlying purpose of money, of greed, of craving for power, and so on — but there seems to be an underlying desire simply to explore. There's a nice little book, Exploration: A Very Short Introduction, by Stewart Weaver. He writes: "For all the different forms it takes in different historical periods, for all the worthy and unworthy motives that lie behind it, exploration — travel for the sake of discovery and adventure — is a human compulsion, a human obsession even. It is a defining element of a distinctly human identity, and it will never rest at any frontier, whether terrestrial or extraterrestrial."

From 325 BCE, with a long 7,500-mile ocean journey to explore the Arctic; to Christopher Columbus and his flawed trip, harshly criticized in modern scholarship, that ultimately paved the way — didn't discover, paved the way — to the colonization of the Americas; to the Darwin trip, the voyage of the Beagle: "Whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved." To the first venture into space by Yuri Gagarin, the first human in space, in 1961 — what he said over the radio is, "The Earth is blue. It is amazing." These are the words that I think drive our exploration in the sciences, in engineering, and today in AI. From the first walk on the moon, to now the desire to colonize Mars and beyond — that's where I see this desire to create intelligent systems. Talking about the positive or negative impact of AI on society, talking about the business case — the jobs lost, jobs gained, jobs created; diseases cured; autonomous vehicles; the ethical questions; the safety of autonomous weapons; the misuse of AI in the financial markets — underneath it all, and many people have spoken about this, what drives myself and many in the community is the desire to explore, to uncover the mystery of the universe. I hope that you join me in that very effort, with the speakers who come here in the next two weeks and beyond.

The website for the course is agi.mit.edu. I am part of an amazing team, many of whom you know. The email is agi@mit.edu. We're on Slack: deep-mit.slack. For registered MIT students: you create an account on the website; submit five new links and vote on ten on Vote.AI, which is an aggregator of information and material we've put together on the topic of AGI; and submit an entry to one of the three competitions and projects that we have in this course. The projects are DreamVision, ANGEL, EthicalCar, and the aggregator of material, Vote.AI; I'll go over them in a little bit. We have incredible guest speakers, whom I will go over today. And, as before with the Deep Learning for Self-Driving Cars course, we have shirts — free for people who attend in person, most likely at the last lecture — or you can order them online.

Okay, DreamVision. We take the Google DeepDream idea and explore the idea of creativity — Einstein's view of intelligence, that the mark of intelligence is creativity. We explore this idea by using neural networks, and interesting ways to visualize what the network sees, and in so doing create beautiful visualizations in time, through video: taking the ideas of DeepDream and combining them with multiple video streams, to mix dream and reality. The competition, run through Mechanical Turk, is over who produces the most beautiful visualization. We provide code to generate the visualization, ideas for how you can make it more and more beautiful, and instructions for submitting it to the competition.

ANGEL, the Artificial Neural Generator of Emotion and Language, is a different twist on the Turing test, where we don't use words — we use only emotions, and the expression of those emotions. We use an agent: a face, customizable, with 26 muscles, all of which can be controlled with an LSTM. We use a neural network to train the generation of emotion. In the competition — you submit your code — you get 10 seconds to impress the viewer with these expressions of emotion. It's A/B testing: your goal is to impress the viewer enough that they choose your agent over another agent, and the agents most loved will be declared the winners. In a twist, we will add human beings into the mix: we've created a system that maps our own human faces — myself and the TAs — so that we ourselves enter the competition and try to convince you to keep us as your friend. That's the Turing test.

EthicalCar builds on the ideas of the trolley problem and the Moral Machine, the incredibly interesting work done here in the Media Lab. We take a machine learning approach to it: we take what we developed for the deep reinforcement learning competition for 6.S094, DeepTraffic, and we add pedestrians into it — stochastic, irrational, unpredictable pedestrians — and we add human life to the loss function, where there's a trade-off in getting from point A to point B. In DeepTraffic, the deep reinforcement learning competition, the goal was to go as fast as possible. Here, it's up to you to decide what your agent's goal is. There's a Pareto-front trade-off between getting from point A to point B as fast as possible and hurting pedestrians. This is not only an ethical question; it's an engineering question, and a serious one, because fundamentally, in creating autonomous vehicles that function in this world, we want them to get from point A to point B as quickly as possible. The United States government and insurance companies put a price tag on human life. We put that power in your hands in designing these agents, to ask the question: how can we create machine learning systems where the objective function, the loss function, has human life as part of it?

And Vote.AI is an aggregator of different links — articles, papers, videos — on the topic of artificial general intelligence, where people vote quality articles up and down and choose the sentiment, positive or negative. We'd like to explore the different arguments for and against artificial general intelligence.

There is an incredible list of speakers, the best in their disciplines: from Josh Tenenbaum here at MIT, to Ray Kurzweil at Google, to Lisa Feldman Barrett and Nate Derbinsky from Northeastern University, Andrej Karpathy, Stephen Wolfram, Richard Moyes, Marc Raibert, Ilya Sutskever, and myself. Josh Tenenbaum is tomorrow. I'd like to go through each of these speakers and talk about the perspectives and ideas they bring to the table. They're not, in most cases, interested in discussing the future impact on society without grounding it in their expertise, in their actual engineering of these intelligent systems.

So, Josh is a computational cognitive science professor, faculty here at MIT. He will talk about how we can create common-sense understanding: systems that see a world of physical objects and their interactions, and our own possibilities to act and interact with others — intuitive physics. How do we build into systems the intuitive physics of the world — more than just the deep learning memorization engines that take patterns and learn, in a supervised way, to map those patterns to classifications — and actually begin to understand the intuitive, common-sense physics of the world, and do rapid, model-based learning? Learn from nothing, learn from very little, just like we do as children, just like we do successfully as human beings, who often need only one example to learn a concept. How do we create systems that learn from very few — sometimes a single — example, and integrate ideas from various disciplines: from neural networks, of course, but also probabilistic generative models and symbolic processing architectures? It's going to be incredible.

From a different area of the field, another incredible thinker and speaker is Ray Kurzweil. He'll be here on Wednesday at 1 p.m., and he will give a whirlwind discussion of where we stand with creating intelligent systems: how we see natural intelligence, our own human intelligence; how we define it; how we understand it; and how that transfers to the increasing, exponential growth of the development of artificial general intelligence.

Something I'm myself very excited about is Lisa Feldman Barrett, coming here on Thursday. She's written a book, "How Emotions Are Made." She argues that emotions are created — that there is a distinction, a detachment, between what we feel in our bodies, the physical state of our bodies, and the expression of emotion: from the body, to the contextually grounded, to the face expressing that emotion. Now, why is a psychologist here in a fundamentally engineering, computer-science topic like AGI? Because if emotions are created in the way she argues — and she'll systematically break it down — that means that, as human beings, we're learning societal norms for how to express emotion. The idea of emotional intelligence is learned. Which means we can have machines learn it: just like it's a human learning problem, it's a machine learning problem. In a bit of a twist, she asked that instead of giving a talk, I have a conversation with her. So it's going to be a little challenging and fun — she's great, and I'm looking forward to it. And we'll explore different ways that we can get emotion expressed: through video, through audio, and through the ANGEL project I mentioned.

So, there's been work in re-enacting intelligence — re-enacting, mapping face to face, mapping different emotions onto video that was previously recorded. If you can imagine: that means we can take emotions that we've created — the kind of emotion creation we've been discussing — and remap them onto previous video. That's one way to see intelligence: taking raw human data that we already have and mapping new, computer-generated content onto it. The underlying fundamentals are human, but the surface appearance, the representation of emotion, visual or auditory, is generated by computer. It could be in the embodied form, like with Sophia. Sophia: "I think we will be similar in a lot of ways, but different in a few others. It will take a long time for robots to develop complex emotions, and possibly robots can be made without the more problematic emotions, like rage, jealousy, hatred, and so on. It might be possible to make them more ethical than humans. So I think it will be a good partnership, where one completes the other."

Very important to note, for those captivated by Sophia in the press, or who have seen these videos: Sophia is an art exhibit. She's not a strong natural language processing system; this is not an AGI system. But it's a beautiful visualization of embodiment — of how easy it is to trick us human beings into believing there's intelligence underlying something. The physical embodiment, and the emotional expression with some degree of humor, some degree of wit and intelligence, is enough to captivate us. So that's an argument not for creating intelligence from scratch, but for having machines operate at the very surface — the display of emotion, the generation, the mapping of the visual and auditory elements — where underneath is really trivial technology that fundamentally relies on humans, as in Sophia's case.

And in the simplest form, we remove all elements of — how should I say — attractive appearance from an agent. We keep it to the simplest muscle characteristics of the face, and see: with 26 muscles controlled by a neural network through time — a recurrent neural network, an LSTM — how can we explore the generation of emotion? Can we get this thing — and this is an open question for us too; we just created this system, we don't know if we can — can we get it to make us feel something? Make us feel something by watching it express its feelings? Can it become human before our eyes? Can it learn, by competing against other agents in A/B testing on Mechanical Turk — can the winners be convincing enough to make us feel entertained? Pity? Love? Maybe some of you will fall in love with ANGEL here.

Nate Derbinsky, on Friday, will talk about cognitive modeling architectures. He will speak about the cognitive modeling aspect: can we model cognition in some kind of systematic way, to try to build intuition about how complicated cognition is?

Andrej Karpathy — famous for being the state-of-the-art human on the ImageNet challenge, the representative of the 95%-accuracy human performance, among other things — is now at Tesla. He will talk about the role, the limitations, and the possibilities of deep learning. He'll talk, as I have in past weeks and throughout, about our misunderstanding, our flawed intuition, of what the difficult and what the easy problems in deep learning are — and about the power of representation learning: the ability of neural networks to form deeper and deeper representations of the underlying raw data. That ultimately takes complex information that's hard to make sense of and converts it into useful, actionable knowledge — which, through a certain lens, in a certain problem space, can be defined as understanding of complex information. Understanding is ultimately taking complex information and reducing it to its simple, essential elements. That's representation learning.

In the trivial case here — having to draw a straight line to separate the blue and the red curves — that's impossible to do in the natural input space on the left. What the act of learning is for deep neural networks, in this formulation, is to construct a topology under which there exists a straight line that accurately classifies blue versus red. For a simple blue and red curve it seems trivial, but this works in the general case, for arbitrary, nonlinear, highly dimensional input spaces. And the ability to automatically learn features — to learn hierarchical representations of the raw sensory data — means you can do a lot more with data; you can expand further and further, to create intelligent systems that operate successfully with real-world data. That's what representation learning means, and what deep learning allows: because of the arbitrary number of features that can be automatically determined, you can learn a lot of things about a pretty complex world. Unfortunately, there needs to be a lot of supervised data; there still needs to be a lot of human input.

Andrej and others — Josh as well — will talk about the difference between our human brain, our biological neural network, and the artificial neural network. The full human brain has a hundred billion neurons and a thousand trillion synapses; the biggest artificial neural networks out there are much smaller — 60 million parameters for ResNet-152. The biggest differences: the human brain has several orders of magnitude more synapses, and its topology is much more complex, chaotic, and asynchronous. The learning algorithm of artificial neural networks is trivial and constrained: backpropagation is essentially optimization over a clearly defined loss function, propagating from the output to the input to adjust the weights of the network. The learning algorithm of our human brain is mostly unknown, but it's certainly much more complicated than backpropagation. On power consumption, the human brain is far more efficient than artificial neural networks. And there's a rather artificial, constrained supervised learning process for training artificial neural networks: you have to have a training stage and an evaluation stage, and once the network is trained, there's no clean way to continue training it — or rather, there are ways, but they're inefficient. It's not naturally designed for online learning, to always be learning; it's designed to learn and then be applied. Our human brains, obviously, are always learning. But the beautiful, fascinating thing is that both are distributed computation systems on a large scale: it doesn't ultimately boil down to a single compute unit. The computation is distributed; the backpropagation learning process is distributed, and can be massively parallelized on a GPU.

The underlying computational unit, the neuron, is trivial, but neurons can be stacked together to form feedforward neural networks and recurrent neural networks, to represent both spatial information, with images, and temporal information, with audio, speech, text, and sequences of images and video — mapping one-to-one, one-to-many, many-to-one: any kind of structured vector or time-series data as input, to any kind of classification, regression, sequence, captioning, video, or audio output. Learning in a general sense, but in a domain that's precisely defined for the supervised training process.

In the deep learning case, we can think of the supervised methods, where humans have to annotate the data, as memorization of the data. We can think of the exciting, growing field of semi-supervised learning — where most of the annotation is done automatically, through generative adversarial networks, through clever data augmentation, or through simulation — and then reinforcement learning, where the labels are extremely sparse and come rarely, so the system has to figure out how to operate in the world with very little human input, very little human data. We can think of that as reasoning, because you take very little information from our teachers, the humans, and generalize it, to reason about the world. And finally, unsupervised learning — the excitement of the community, the promise, the hope. You could think of that as understanding, because ultimately it's taking data with very little or no human input and forming representations of that data, which is how we think of understanding: making sense of the world without strict instructions on how to make sense of it — a process of discovering information, maybe discovering new ideas, new ways to simplify and represent the world so that you can do new things with it. The "new" is the key element there.
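The representation-learning idea from a moment ago — construct a topology under which a straight line separates the classes — can be sketched with a tiny network trained by backpropagation on XOR, a problem no straight line in the raw input space can solve. This is a minimal sketch, not any system from the course; layer sizes, learning rate, and step count are arbitrary choices:

```python
# A hidden layer learns a representation under which the classes become
# linearly separable, shown on XOR, which no straight line in the raw
# input space can separate. Toy sketch with hand-rolled backpropagation.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(10000):
    h = np.tanh(X @ W1 + b1)          # hidden representation
    p = sigmoid(h @ W2 + b2)          # predicted probability
    # Backpropagation of the cross-entropy loss: with a sigmoid
    # output, the error signal at the output is simply p - y.
    dz2 = p - y
    dW2, db2 = h.T @ dz2, dz2.sum(0)
    dz1 = (dz2 @ W2.T) * (1.0 - h ** 2)   # tanh derivative
    dW1, db1 = X.T @ dz1, dz1.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= lr * grad            # gradient step, in place

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
print(preds.tolist())
```

The learned hidden activations `h` are the "constructed topology": after training, a single linear layer (`W2`, `b2`) separates the two classes, which was impossible on the raw inputs.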
Understanding And Andre and Ilya And others will talk about Certainly the past But the future of deep learning Where is it going to go? Is it overhyped? Underhyped? What is the future? Will the compute of CPU, GPU, ASICs Continue? Will the breakthroughs The Moore's law And its various forms Of massive parallelization continue? And the large data sets With tens of millions of images Grow to billions and trillions Will the algorithms improve? Is there a groundbreaking idea That's still coming? With Jeff Hiddens' capsule networks Is there fundamental architectural changes To neural networks That we can come up with That will change everything That will ease the learning process That will make the learning process More efficient Or we'll be able to represent Higher and higher orders of information Such that you can Transfer knowledge between domains And the software architectures That support from TensorFlow to PyTorch I would say the last year And this year will be the year Of deep learning frameworks Those will certainly keep coming In their various forms And the financial backing Is growing and growing The open challenges for deep learning Really a lot of this course Is kind of connected to deep learning Because that's where a lot Of the recent breakthroughs That inspire us To think about intelligence systems Come from But the challenges are many The need, the ability to transfer Between different domains As in reinforcement learning and robotics The need for huge data And inefficient learning We still need supervised data An ability to learn In an unsupervised way Is a huge problem And not fully automated learning There's still a degree A significant degree of hyperparameter To necessary With the reward functions The loss functions Are ultimately defined by humans And therefore are deeply flawed When we release those systems Into the real world Where there is no ground truth For the testing set And the goal isn't achieving A high classification On a trivial image 
classification Localization detection problem But rather to have an autonomous vehicle That doesn't kill pedestrians Or an industrial robot That operates in jointly With other human beings And all the edge cases that come up How does deep learning methods How do machine learning methods Generalize over the edge cases The weird stuff that happens In the real world Those are all the problems there Stephen Wolfram will be here on Monday Evening at 7 p.m. He's done a lot of amazing things I would say It's very interesting From his recent interest In knowledge-based programming Wolfram Alpha I think is the fuel For most middle school And high school students now For the first time taking calculus I probably go to Wolfram Alpha To answer their own questions But more seriously There is a deep connected graph Of knowledge that's being built there With Wolfram Alpha And Wolfram language That Stephen will explore In terms of language An interesting thing He was part of the team on Arrival That worked on the language For those of you who are familiar The Arrival where alien species spoke with Us humans Through a very interesting Beautiful complicated language And he was brought in As a representative human To interpret that language Just like in the movie He's represented that in real life And he used the skills Him and his son Christopher Used to analyze this language Very interesting That process is extremely interesting I hope he talks about it And his background with Mathematica And new kind of science The sort of Another set of ideas That have inspired people In terms of creating Intelligence systems Is the idea that From very simple things From very simple rules Extremely complex Patterns can emerge His work with cellular automata Did just that Taking extremely simple Mathematical constructs Here with cellular automata These are These are grids Of computational units That switch on and off In some kind of predefined way And only operate locally Based on their local 
neighborhood. And somehow, based on different kinds of rules, different patterns emerge. Here's a three-dimensional cellular automaton with a simple rule: starting from nothing, from a single cell, it grows in really interesting, complex ways. This emergent complexity is inspiring. It's the same kind of thing that inspires us about neural networks: you can take a simple computational unit, and units combined together in arbitrary ways can form complex representations. That's also very interesting from a knowledge perspective: you can see knowledge formation in the same kind of way, simplicity at a massive, distributed scale resulting in complexity.

Next Tuesday, Richard Moyes from Article 36 is coming all the way from the UK for us. He will talk about his work with autonomous weapon systems; he also works on nuclear weapons, but primarily autonomous weapon systems, and on the legal, policy, and technological aspects of banning these weapons. There's been a lot of agreement about the safety hazards of autonomous systems that make decisions to kill a human being.

Marc Raibert, CEO of Boston Dynamics and, a long time ago, faculty here at MIT, will bring robots and talk to us about his work on robots in the real world. He's doing a lot of exciting work with humanoid robots, and with any kind of robot operating on legs. It's incredible, extremely exciting work, and it explores how difficult it is to build robot systems that operate in the real world, from both the control aspect and the way the final result is perceived by our society. It's very interesting to see how intelligence embodied in robots is taken in by us, and what it inspires: fear, excitement, hope, concern, and all of the above.

Ilya Sutskever is an expert in many aspects of machine learning. He's a co-founder of OpenAI. He'll talk about the different aspects of game playing that they've recently been exploring, using deep reinforcement learning to play arcade games. On the
DeepMind side, deep reinforcement learning was used to beat the best in the world at the game of Go. In 2017, the big, fascinating breakthrough achieved by that team was AlphaGo Zero: training an agent that, through self-play, playing against itself rather than learning from expert games, so truly from scratch, learned to beat the best in the world, including the previous iteration of AlphaGo. We'll explore what aspects of the stack of intelligent robotic systems, of intelligent agents, can be learned in this way.

So deep learning, the supervised-learning, memorization approach, covers the sensor data, feature extraction, and representation learning aspects of this stack: taking the sensor data from camera, lidar, and audio, extracting features, forming higher-order representations, and, on those representations, learning to accomplish some kind of classification or regression task, figuring out from the representation what is going on in the raw sensory data. Then comes combining that data together to reason about it, and finally, in the robotic domains, as with humanoid robotics, industrial robotics, and autonomous vehicles, putting it all together and actually acting in this world with the effectors. The open question is: how much of this AI stack can be learned? That's something for us to discuss and think about, and something Ilya will touch on with deep reinforcement learning. We can certainly learn representations and perform classification, state of the art, better than human, on image classification with ImageNet and on segmentation tasks. The excitement of deep learning, what's highlighted there in the red box, is that this can be done end to end: from raw sensory data out to the knowledge, to the output, to the classification. Can we begin to reason?
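To make the end-to-end idea concrete, here is a minimal sketch, not from the lecture, of that pipeline in miniature: raw input flows through a learned representation (a hidden layer standing in for feature extraction) into a classification head, and both stages are trained jointly by backpropagation. The dataset, network sizes, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "raw sensory data": 2-D points with XOR-style labels,
# a task that cannot be solved without a learned representation.
X = rng.uniform(-1, 1, size=(200, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(float).reshape(-1, 1)

# Two-layer network: hidden layer = representation, output = classifier.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)   # learned representation
    p = sigmoid(h @ W2 + b2)   # classification head
    return h, p

_, p0 = forward(X)
initial_loss = float(np.mean((p0 - y) ** 2))

lr = 1.0
for _ in range(3000):
    h, p = forward(X)
    # Backpropagate the squared error end to end through both layers.
    dp = 2 * (p - y) * p * (1 - p) / len(X)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

h, p = forward(X)
final_loss = float(np.mean((p - y) ** 2))
accuracy = float(np.mean((p > 0.5) == (y > 0.5)))
print(initial_loss, final_loss, accuracy)
```

Even this toy version shows the point being made in the lecture: the intermediate features are never hand-designed; they fall out of optimizing the final task loss through the whole stack.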
That is the open question. With the knowledge-based programming that Stephen Wolfram will talk about, can we begin to take these automatically generated, higher-order representations and combine them together to form knowledge bases, to form aggregate graphs of ideas, that can then be used to reason? And can we then combine them to act in the world, whether in simulation, with arcade games or simulated autonomous vehicles and robotic systems, or actually in the physical world, with robots moving about? Can that end-to-end path, from raw sensory data to action, be learned? That's the open question for artificial general intelligence, and for this class. Can this entire process be end to end? Can we build systems, and how do we do it, that achieve this process end to end in the same way that humans do? We're born into this raw sensory environment, take in very little information, and learn to operate successfully under arbitrary constraints and arbitrary goals.

To do so, we have lectures, we have three projects, and we have guest speakers from various disciplines. I hope all these voices will be heard and will feed a conversation about artificial intelligence, its positive and its concerning effects on society, and how we move forward from an engineering approach. The topics will be deep learning, deep reinforcement learning, cognitive modeling, computational cognitive science, emotion creation, knowledge-based programming, AI safety with autonomous weapon systems, and personal robotics with human-centered artificial intelligence. That's the first two weeks of this class. That's the part where, if you're a registered student, you need to submit the projects, and that's when we all meet here every night with the incredible speakers. But this will continue. We already have several speakers scheduled over the next couple of months, yet to be announced, but they're incredible. And we have conversations on video, and we have new projects. I hope this continues throughout 2018, on the topics of
AI ethics and bias. There's a lot of incredible work there, and we have a speaker coming on the topic of how we create artificial intelligence systems that do not discriminate, that do not form the kinds of biases that we humans do in this world, systems that operate under social norms but reason beyond the flawed, biased aspects of those social norms. Creativity, as with the Dream Vision project and beyond: there's so much exciting work in using machine learning methods to create beautiful art and music. Brain simulation and neuroscience: shockingly, in the first two weeks we don't have a computational neuroscience speaker, which is a fascinating perspective. Brain simulation, or neuroscience in general, and computational neuroscience take a fascinating approach, starting from the muck of the actual brain to get a perspective on how our brain works and how we can create something that mimics, that resembles, the fundamentals of what makes our brain intelligent. And finally, the Turing test: the traditional definition of intelligence, defined by Alan Turing, was grounded in natural language processing, creating chatbots that impress us, that amaze us, and that trick us into thinking they're human. We will have a project and a speaker on natural language processing in March.

With that, I'd like to thank you for coming today, and I look forward to seeing your submissions for the three projects. Thank you very much.