
Deep Learning State of the Art (2019) - MIT


Chapters

0:00 Introduction
2:00 BERT and Natural Language Processing
14:00 Tesla Autopilot Hardware v2+: Neural Networks at Scale
16:25 AdaNet: AutoML with Ensembles
18:32 AutoAugment: Deep RL Data Augmentation
22:53 Training Deep Networks with Synthetic Data
24:37 Segmentation Annotation with Polygon-RNN
26:39 DAWNBench: Training Fast and Cheap
29:06 BigGAN: State of the Art in Image Synthesis
30:14 Video-to-Video Synthesis
32:12 Semantic Segmentation
36:03 AlphaZero & OpenAI Five
43:34 Deep Learning Frameworks
44:40 2019 and beyond

Transcript

The thing I would very much like to talk about today is the state of the art in deep learning. Here we stand in 2019 really at the height of some of the great accomplishments that have happened but also stand at the beginning. And it's up to us to define where this incredible data-driven technology takes us.

And so I'd like to talk a little bit about the breakthroughs that happened in 2017 and 2018 that take us to this point. This lecture is not about state-of-the-art results on the main machine learning benchmarks: the various image classification, object detection, NLP, or GAN benchmarks.

This isn't about the cutting edge algorithm that's available on GitHub that performs best on a particular benchmark. This is about ideas. Ideas and developments that are at the cutting edge of what defines this exciting field of deep learning. And so I'd like to go through a bunch of different areas that I think are really exciting.

Now of course this is also not a lecture that's complete. There are other things that I may be totally missing that happened in 2017 and 2018 that are particularly exciting to people here and beyond. For example, medical applications of deep learning are something I don't touch on at all, and the same goes for protein folding and all kinds of other applications where there have been exciting developments from DeepMind and others.

So forgive me if your favorite developments are missing, but hopefully this encompasses some of the really fundamental things that have happened, both on the theory side, on the application side, and on the community side of all of us being able to work together on these kinds of technologies. I think 2018, in terms of deep learning, is the year of natural language processing.

Many have described this year as the equivalent of the 2012 ImageNet moment for computer vision, when AlexNet was the first neural network that really gave that big jump in performance on computer vision and started to inspire people about what's possible with deep learning, with purely learning-based methods. In the same way, there's been a series of developments from 2016 and '17 that led up to '18 and the development of BERT, which has produced a total leap, both on benchmarks and in our ability to apply NLP to solve various natural language processing tasks.

So let's tell the story of what takes us there. There are a few developments. I mentioned a little bit on Monday the encoder-decoder recurrent neural networks: the idea that recurrent neural networks encode sequences of data and output something, either a single prediction or another sequence. The input sequence and the output sequence are not necessarily the same size, as in machine translation.

We have to translate from one language to another. The encoder-decoder architecture works as follows: it takes in the sequence of words, or the sequence of samples, as the input and uses recurrent units, whether LSTMs or GRUs or beyond, to encode that sentence into a single vector.

So it forms an embedding of that sentence, a representation of what it means, and then feeds that representation into the decoder recurrent neural network, which generates the sequence of words that form the sentence in the language being translated to. So first you encode by taking a sequence and mapping it to a fixed-size vector representation, and then you decode by taking that fixed-size vector representation and unrolling it into a sentence that can be of a different length than the input sentence.
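
To make that concrete, here is a minimal encoder-decoder sketch in PyTorch. The vocabulary sizes and dimensions are made up for illustration, and this is not the exact architecture of any particular system mentioned here.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder: encode a source sentence into one vector,
    then unroll a decoder to emit the target sentence."""
    def __init__(self, src_vocab, tgt_vocab, emb_dim=256, hidden_dim=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, tgt_vocab)

    def forward(self, src_tokens, tgt_tokens):
        # Encode: the final hidden state is the fixed-size sentence vector.
        _, sentence_vec = self.encoder(self.src_emb(src_tokens))
        # Decode: condition on that single vector and unroll over the
        # (teacher-forced) target sequence.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_tokens), sentence_vec)
        return self.out(dec_out)  # (batch, tgt_len, tgt_vocab) logits

model = Seq2Seq(src_vocab=10000, tgt_vocab=12000)
logits = model(torch.randint(0, 10000, (8, 21)),   # source batch
               torch.randint(0, 12000, (8, 17)))   # target batch, different length
```
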

Okay, that's the encoder-decoder structure for recurrent neural networks. It's been very effective for machine translation and for dealing with arbitrary-length input and output sequences. Next step: attention. What is attention? Well, it's the next step beyond, an improvement on the encoder-decoder architecture. It provides a mechanism that allows the decoder to look back at the input sequence.

So as opposed to the entire input sentence getting collapsed into a single vector representation, you're allowed to look back at particular samples from the input sequence as part of the decoding process. That's attention. And you can also learn which aspects of the input sequence are important for which aspects of the decoding process, for which parts of the output sequence.

Visualized another way, and there are a few visualizations here that are quite incredible, done by Jay Alammar; I highly recommend you follow the links and look at the further details of these visualizations of attention. If we look at neural machine translation, the encoder RNN takes a sequence of words and, at every step, forms a hidden state that captures a representation of the words seen so far.

Those hidden representations, as opposed to being collapsed into a single fixed-size vector, are all pushed forward to the decoder, which uses them to translate, but in a selective way. Here this is visualized with the input language on the y-axis and the output language on the x-axis.

The decoder weighs the different parts of the input sequence differently in order to determine how best to generate the next word of the translated output sentence. Okay, that's attention: expanding the encoder-decoder architecture to allow selective attention to the input sequence, as opposed to collapsing everything down into a fixed representation.

Okay, next step: self-attention. In the encoding process, the encoder is also allowed to selectively look at other parts of the input sequence when forming its hidden representations. That lets you determine, for a given word, which aspects of the input sequence are most relevant to encoding that word well.

So it improves the encoding process by allowing the encoder to look at the entirety of the context. That's self-attention. Building on that, the transformer uses the self-attention mechanism in the encoder to form these sets of representations of the input sequence, and then, as part of the decoding process, does the same in reverse, with attention that's able to look back at the encoded input.

So it's self-attention on the encoder, attention on the decoder, and that's where the entirety of the magic is: it's able to capture the rich context of the input sequence in order to generate the output sequence in a contextual way.
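
Here is a minimal sketch of the scaled dot-product self-attention computation at the heart of the transformer. The dimensions are illustrative, and real implementations add multiple heads, masking, and learned projection layers per head.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence x of shape
    (seq_len, d_model): every position attends to every other position."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v      # queries, keys, values
    scores = q @ k.T / k.shape[-1] ** 0.5    # (seq_len, seq_len) relevance scores
    weights = F.softmax(scores, dim=-1)      # how much each word attends to each other word
    return weights @ v                       # context-mixed representations

d_model, d_head, seq_len = 64, 32, 10
x = torch.randn(seq_len, d_model)                                 # embedded input sequence
w_q, w_k, w_v = (torch.randn(d_model, d_head) for _ in range(3))  # toy projection matrices
contextual = self_attention(x, w_q, w_k, w_v)                     # (seq_len, d_head)
```
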

So let's take a step back and look at what is critical to natural language in order to be able to reason about words: constructing a language model so you can classify a sentence, translate a sentence, compare two sentences, and so on.

Sentences are collections of words or characters, and those characters and words have to have an efficient representation that's meaningful for that kind of understanding. That's what the process of embedding is. We talked a little bit about it on Monday: the traditional Word2Vec process of embedding uses some kind of trick, in an unsupervised way, to map words into a compressed representation.

So language modeling is the process of determining which words usually follow each other. One way to do it is with a skip-gram model: take huge datasets of text, there's writing all over the place, and feed them to a neural network that, in a supervised way, looks at which words usually follow the input.

So the input is a word and the output is which words are statistically likely to follow it, and the same for the preceding words. Doing this kind of unsupervised learning is what Word2Vec does: if you throw away the input and output layers and just keep the hidden representation formed in the middle, that's how you get this compressed embedding, a meaningful representation in which two words that are related in a language-modeling sense end up close to each other, and two words that are totally unrelated, that have nothing to do with each other, end up far apart.
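
A minimal skip-gram-style sketch of that idea follows, with a made-up vocabulary size and word ids; the real Word2Vec adds tricks such as negative sampling that are omitted here.

```python
import torch
import torch.nn as nn

class SkipGram(nn.Module):
    """Predict context words from a center word; the hidden layer
    (self.embed) is the compressed representation we keep."""
    def __init__(self, vocab_size, emb_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # the embedding we care about
        self.out = nn.Linear(emb_dim, vocab_size)       # thrown away after training

    def forward(self, center_word_ids):
        return self.out(self.embed(center_word_ids))    # logits over context words

model = SkipGram(vocab_size=5000)
center = torch.tensor([42, 7])    # center word ids (made up)
context = torch.tensor([43, 6])   # observed neighboring word ids (made up)
loss = nn.CrossEntropyLoss()(model(center), context)
loss.backward()                   # after training, keep model.embed as the word vectors
```
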

ELMo is the approach of using bi-directional LSTMs to learn that representation. Bi-directionally means looking not just at the sequence that led up to the word but in both directions: the sequence that follows and the sequence that came before. That allows you to learn the rich, full context of the word.

In learning the rich, full context of the word, you're forming representations that are much better able to capture the statistical language model behind the corpus of language you're looking at. And this produced a big leap for downstream algorithms that reason with the language model, doing things like sentence classification, sentence comparison, translation, and so on; that representation is much more effective for working with language.
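
As a rough sketch of the bi-directional encoding idea (ELMo itself stacks bi-directional LSTM language models and learns a weighted mix of their layers, which is not shown here):

```python
import torch
import torch.nn as nn

# A bi-directional LSTM reads the sentence left-to-right and right-to-left,
# so each word's representation reflects context on both sides.
embed = nn.Embedding(5000, 128)                       # toy vocabulary
bilstm = nn.LSTM(input_size=128, hidden_size=256,
                 batch_first=True, bidirectional=True)

token_ids = torch.randint(0, 5000, (1, 12))           # one 12-token sentence
contextual, _ = bilstm(embed(token_ids))              # (1, 12, 512): forward + backward states
```
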

The OpenAI transformer is the next step forward: take the same transformer I mentioned previously, the encoder with self-attention and the decoder with attention looking back at the input sequence, use the decoder part to learn a language model, and then chop off layers and train on a specific language task like sentence classification.

Now BERT is the thing that made the big leap in performance. With that transformer formulation there's no bi-directional element; it's always moving forward through the encoding step and the decoding step. BERT is richly bi-directional: it takes in the full sequence of the sentence, masks out some percentage of the tokens, 15% of them, and tasks the entire self-attention encoding mechanism with predicting the words that are missing.

That's the construct, and then you stack a ton of those encoders together, self-attention, feed-forward network, self-attention, feed-forward network, and that allows you to learn the rich context of the language and then, at the end, perform all kinds of tasks. First of all, like ELMo and like Word2Vec, you can create rich contextual embeddings: take a set of words and represent them in a space that's very efficient to reason with.

You can do language classification, sentence-pair classification, similarity of two sentences, multiple-choice question answering, general question answering, and tagging of sentences. Okay, I lingered on that one a little too long, but it's also the one I'm really excited about, and if there was a breakthrough this year, it's thanks to BERT.
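
Here is a sketch of the masked-language-model objective described above: hide roughly 15% of the tokens and train the encoder to predict them. The token ids and mask id below are made up, and the real BERT recipe also sometimes swaps in random tokens or leaves the chosen token unchanged.

```python
import torch

def mask_tokens(token_ids, mask_token_id, mask_prob=0.15):
    """BERT-style masking: hide ~15% of tokens; the model is trained to
    predict the original ids at exactly those positions."""
    labels = token_ids.clone()
    masked = torch.rand(token_ids.shape) < mask_prob  # choose ~15% of positions
    labels[~masked] = -100                            # ignore unmasked positions in the loss
    corrupted = token_ids.clone()
    corrupted[masked] = mask_token_id                 # replace chosen tokens with [MASK]
    return corrupted, labels

tokens = torch.randint(5, 30000, (2, 16))             # a toy batch of token ids
inputs, labels = mask_tokens(tokens, mask_token_id=103)
# `inputs` go through the stacked self-attention encoder; a cross-entropy loss
# with ignore_index=-100 is then computed only at the masked positions.
```
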

The other thing I'm very excited about, totally jumping away from NeurIPS, the theory, those kinds of academic developments in deep learning, is the world of applied deep learning. Tesla has a system called Autopilot, and hardware version 2 of that system is an implementation of the NVIDIA Drive PX2 platform, which runs a ton of neural networks.

There are eight cameras on the car, and a variant of the Inception network takes in all eight cameras at different resolutions as input and performs various tasks like drivable-area segmentation, object detection, and some basic localization. So you now have a huge fleet of vehicles driven not by engineers, well, some, I'm sure, are engineers, but really by regular consumers, people who have purchased the car and in many cases have no understanding of what a neural network's limitations and capabilities are.

Now a neural network is controlling the well-being of a human being: its perceptions, and the control decisions based on those perceptions, are controlling a human life. And that to me is one of the great breakthroughs of '17 and '18 in terms of what AI can do in a practical sense, in impacting the world.

And so over one billion miles have been driven on Autopilot. Now there are two types of systems currently operating in Teslas: hardware version one and hardware version two. Hardware version one was the Intel Mobileye monocular camera perception system; as far as we know it was not using a neural network, and it was a fixed system that wasn't learning, at least not online in the Teslas.

The other is hardware version two, and it's about half and half now in terms of miles driven. Hardware version two has a neural network that's always learning: there are weekly updates, always improving the model, shipping new weights, and so on. That's an exciting set of breakthroughs.

In terms of AutoML, the dream is automating some, or as many aspects as possible, of the machine learning process, where you can just drop in the dataset you're working on and the system automatically determines all the parameters: the details of the architecture, the size of the architecture, the different modules in that architecture, the hyperparameters used for training it, running it, doing inference, everything.

All of it is done for you; all you feed it is data. That's been the success of neural architecture search in '16 and '17, and there have been a few efforts, with Google AutoML, that are really trying to create almost an API where you just drop in your dataset, and it uses reinforcement learning and recurrent neural networks to take a few modules and stitch them together in such a way that the objective function optimizes the performance of the overall system. Google and others showed a lot of exciting results that outperform state-of-the-art systems both in terms of efficiency and in terms of accuracy.
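
The systems above use a reinforcement-learning controller to propose architectures, which is too involved to sketch here; as a much simpler stand-in, this is roughly what the outer loop of automated architecture search looks like with plain random search over a small, hypothetical search space.

```python
import random

# Hypothetical search space: AutoML systems search over choices like these,
# but with a learned controller rather than random sampling.
search_space = {
    "num_layers":    [2, 4, 8],
    "width":         [64, 128, 256],
    "activation":    ["relu", "swish"],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def sample_architecture():
    return {k: random.choice(v) for k, v in search_space.items()}

def train_and_evaluate(config):
    # Placeholder: build a model from `config`, train it briefly,
    # and return validation accuracy.
    return random.random()

best = max((sample_architecture() for _ in range(20)), key=train_and_evaluate)
print("best configuration found:", best)
```
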

Now in '18 there have been a few improvements in this direction, and one of them is AdaNet, which uses the same reinforcement-learning AutoML formulation to build ensembles of neural networks. In many cases state-of-the-art performance can be achieved not with a single architecture but by building up a multitude, an ensemble, a collection of architectures, and that's what it's doing here: given candidate architectures, stitching them together to form an ensemble that gets state-of-the-art performance. Now, that state-of-the-art performance is not a breakthrough leap forward, but it's nevertheless a step forward, and it's a very exciting field that's going to receive more and more attention.

There's an area of machine learning that's heavily understudied and that I think is extremely exciting. If you look at 2012, with AlexNet achieving the breakthrough performance that showed what deep networks are capable of, from that point to today there have been non-stop, extremely active developments of different architectures that, even on ImageNet alone, on the image classification task, have improved performance over and over with totally new ideas. On the other side, the data side, there have been very few ideas about how to do data augmentation. Data augmentation is the process, it's what kids always do when they learn about an object, right, you look at it and kind of twist it around, of taking the raw data and messing with it in such a way that it gives you a much richer representation of what that data can look like in other forms, in other contexts, in the real world. There have been very few developments there, I think, and AutoAugment is just a tiny step in that direction, one that I hope we as a community invest a lot of effort in.

So what AutoAugment does is say: okay, there are these data augmentation methods, like translating the image, shearing the image, doing color manipulation like color inversion. Take those as the basic actions, and then use reinforcement learning, again with an RNN construct, to stitch those actions together in such a way that training on the augmented data, ImageNet for example, gets state-of-the-art performance. So: mess with the data in a way that optimizes how you mess with the data (a sketch of this kind of augmentation pipeline appears below). They also showed that, given the set of data augmentation policies learned to optimize for ImageNet with some kind of architecture, you can take that learned set of policies and apply it to a totally different dataset, through the process of transfer learning.
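
As a sketch of the kind of augmentation pipeline AutoAugment searches over, here is a fixed policy written with standard torchvision transforms; the operations, magnitudes, and probabilities below are illustrative choices, not a learned policy.

```python
from torchvision import transforms

# AutoAugment searches over sequences of operations like these, plus their
# magnitudes and probabilities; this fixed pipeline is just an illustration.
policy = transforms.Compose([
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1), shear=10),   # translate / shear
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),  # color manipulation
    transforms.RandomInvert(p=0.2),                                        # color inversion
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# augmented = policy(pil_image)  # applied to each training image on the fly
```
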

So what is transfer learning? We talked about transfer learning: you have a neural network that learns to do cat versus dog, or rather learns the thousand-class classification problem on ImageNet, and then you transfer: you chop off a few layers and retrain on the task of your own dataset of cat versus dog. What you're transferring are the weights learned on the ImageNet classification task, and you're then fine-tuning those weights on the specific personal cat-versus-dog dataset you have (a minimal fine-tuning sketch appears at the end of this section). You can do the same thing here: as part of the transfer learning process, take the data augmentation policies learned on ImageNet and transfer those, so you transfer both the weights and the policies. That's a really exciting idea. It wasn't demonstrated extremely well here in terms of performance, it got an improvement and so on, but it inspired an idea we really need to think about: how to augment data in an interesting way such that, given just a few samples, we can generate huge datasets from which meaningful, complex, rich representations can be formed. I think that's really exciting, and it's one of the ways you break open the problem of how to learn a lot from a little.

Training deep neural networks with synthetic data is also a really exciting topic, one that a few groups, but especially NVIDIA, have invested a lot in. From CVPR 2018, probably my favorite work on this topic: they really went crazy and said, okay, let's mess with synthetic data in every way we possibly can. On the left there is a set of backgrounds; then there's also a set of artificial objects, and you have a car or some other object you're trying to classify. So take that car and mess with it in every way possible: apply every lighting variation, rotate everything, it's crazy. What NVIDIA is really good at is creating realistic scenes, and they said, okay, let's create realistic scenes, but let's also go way above board and not be realistic at all, do things that can't possibly happen in reality. They generate these huge datasets to train on and achieve quite good performance on image classification. Of course, you're not going to outperform networks that were trained on the real ImageNet images, but they show that with just a small sample of those real images they can fine-tune a network trained on synthetic, totally fake images to achieve state-of-the-art performance. Again, another way to learn a lot from very little: by generating fake worlds synthetically.
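
Here is the standard fine-tuning sketch referenced above: take a network pretrained on ImageNet (or on synthetic images), chop off the classification head, and retrain on a small target dataset such as cat versus dog. The specific model choice and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load weights learned on ImageNet, then replace the final layer for 2 classes.
model = models.resnet18(pretrained=True)
for p in model.parameters():
    p.requires_grad = False                      # freeze the transferred weights
model.fc = nn.Linear(model.fc.in_features, 2)    # new head: cat vs. dog

# Fine-tune: here only the new head is updated on the small target dataset;
# optionally the whole network can be unfrozen at a smaller learning rate.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```
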
Now, annotation: for supervised learning, annotation is what you need in order to train the network. You need to be able to provide ground truth, to label whatever entity is being learned. For image classification that means saying what is going on in the image, and part of that was done for ImageNet by doing a Google search to create candidates; saying what's going on in an image is a pretty easy task. Then there's the object detection task of drawing the bounding box, which is a little more difficult, but it's still a couple of clicks. Then if we take probably the highest-complexity task of perception, of image understanding, segmentation, it's actually drawing, at the pixel level or with polygons, the outline of a particular object. If you have to annotate that by hand, it's extremely costly. So the work with Polygon-RNN is to use recurrent neural networks to make suggestions for polygons, and it's really interesting. There are a few tricks to form these high-resolution polygons: the idea is you draw a bounding box around an object, a convolutional neural network drops the first point, and then a recurrent neural network draws around it. The performance is really good, and the tool is available online; it's a really interesting idea. Again, the dream with AutoML is to remove the human from the picture as much as possible; with data augmentation, remove the human from the picture as much as possible; and here, for menial data annotation, automate the boring stuff: take the act of drawing a polygon and automate it as much as possible.

The other interesting dimension along which deep learning has recently been optimized is: how do we make deep learning fast, cheap, and accessible? The DAWNBench benchmark from Stanford formulated an interesting competition which got a lot of attention and drove a lot of progress. It says: if we want to achieve 93% accuracy on ImageNet and 94% on CIFAR-10, that's the requirement, now let's compete on doing the training in the least amount of time and for the least amount of dollars, literally the dollars you're allowed to spend. And fast.ai, an awesome, renegade group of deep learning researchers, was able to train a network on ImageNet in 3 hours, so this is the training process, for 25 bucks, achieving 93% accuracy, and 94% accuracy on CIFAR-10 for 26 cents. The key idea they were playing with is quite simple: it boils down to messing with the learning rate throughout the process of training. The learning rate is how much you adjust the weights based on the loss, the error the neural network observes. They found that if they crank up the learning rate while decreasing the momentum, which is a parameter of the optimization process, and do that jointly, they're able to make the network learn really fast (a rough sketch of that kind of schedule follows below). That's really exciting, and the benchmark itself is also really exciting, because that's exactly what, for people sitting in this room, opens up the door to doing all kinds of fundamental deep learning work without the computational resources of Google DeepMind or OpenAI or Facebook and so on. That's important for academia, for independent researchers, and so on.
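
This recipe, a large cyclical learning rate with momentum varied in the opposite direction, is often described as the one-cycle policy; here is a rough sketch of the shape of such a schedule, with made-up numbers rather than the exact values fast.ai used.

```python
# Rough shape of a one-cycle schedule: the learning rate ramps up and then
# back down while momentum does the opposite. Numbers are illustrative.
def one_cycle(step, total_steps, lr_min=0.05, lr_max=1.0,
              mom_min=0.85, mom_max=0.95):
    half = total_steps / 2
    phase = step / half if step < half else (total_steps - step) / half
    lr = lr_min + (lr_max - lr_min) * phase           # low -> high -> low
    momentum = mom_max - (mom_max - mom_min) * phase  # high -> low -> high
    return lr, momentum

for step in range(0, 1001, 250):
    print(step, one_cycle(step, total_steps=1000))
```
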
Okay, GANs. There's been a lot of work on generative adversarial networks, and in some ways there have not been breakthrough ideas in GANs for quite a bit. BigGAN from Google DeepMind showed the ability to generate incredibly high-resolution images, and it's the same GAN technique in terms of breakthrough innovations, but scaled: increase the model capacity and increase the batch size, the number of images fed to the network. It produces incredible images; I encourage you to go online and look at them, it's hard to believe they're generated. So 2018 for GANs was a year of scaling and parameter tuning as opposed to breakthrough new ideas.

Video-to-video synthesis: this work from NVIDIA looks at the following problem. There's been a lot of work on going from image to image, generating one image from another, whether it's colorizing an image or the traditionally defined GAN mappings. The idea with video-to-video synthesis, which a few people have been working on but NVIDIA took a good step forward on, is to make the temporal consistency, the temporal dynamics, part of the optimization process, so the generated video doesn't look jumpy. If you look at the comparison here, the input is the labels in the top left and the output of the NVIDIA approach is in the bottom right; you can see it's very temporally consistent.

If you look at the state-of-the-art image-to-image mapping, pix2pixHD, it's very jumpy, not temporally consistent at all, and there are some naive approaches for trying to maintain temporal consistency shown in the bottom left. You can apply this to all kinds of tasks, all kinds of video-to-video mapping.

Here it's mapping face edges, edge detection on faces, to faces: generating faces from just edges. You can also take body pose as the input to the network: given the pose of a person, generate video of that person.

Okay, semantic segmentation. The problem of perception began with AlexNet and ImageNet and then further and further developments, where the basic problem is image classification: the input is an image and the output is a classification of what's going on in that image. The fundamental architecture can be reused for more complex tasks like detection and segmentation, interpreting what's going on in the image. These large networks, VGGNet, GoogLeNet, ResNet, SENet, DenseNet, all form rich representations that can then be used for all kinds of tasks. One of those tasks is object detection. Shown here are the region-based methods, where the convolutional layers make region proposals, a bunch of candidates to be considered, and then a second step determines what's in those different regions and forms bounding boxes around them, in a for-loop way. Then there are the single-shot methods, where in a single pass all of the bounding boxes and their classes are generated. There has been a tremendous amount of work in the space of object detection, some single-shot methods, some region-based methods, a lot of exciting work but not, I would say, breakthrough ideas.

Then we take it to the highest level of perception, which is semantic segmentation. There's also been a lot of work there; the state-of-the-art performance, at least among open-source systems, is DeepLabv3+ on the PASCAL VOC challenge. To catch everything up: semantic segmentation started in 2014 with fully convolutional neural networks, chopping off the fully connected layers and outputting a heat map, very grainy, very low resolution, then improving on that with SegNet and its use of max pooling indices for upsampling. A breakthrough idea that's reused in a lot of cases is dilated convolutions, atrous convolutions: having some spacing in the filter, which increases the field of view of the convolutional filter. The key idea behind DeepLabv3+, the current state of the art, is multi-scale processing without increasing the number of parameters. The multiple scales are achieved by the so-called atrous rate: taking those atrous convolutions and increasing the spacing, which you can think of as enlarging the model's field of view. So you can consider all these different scales of processing, looking at the layers of features, allowing you to grasp the greater context as part of the upsampling, deconvolutional step. That's what produces the state-of-the-art performance, and we have a notebook tutorial on GitHub showing this DeepLab architecture trained on Cityscapes; Cityscapes is a driving segmentation dataset that is one of the most commonly used for the task of driving-scene segmentation.
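
Here is a small sketch of dilated (atrous) convolutions in PyTorch: the dilation argument inserts spacing between filter taps, enlarging the field of view without adding parameters, which is the core trick behind the multi-scale processing described above. The feature-map size and channel counts are made up.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 256, 64, 64)  # a feature map: batch, channels, height, width

# Same 3x3 filter and same parameter count each time; only the spacing between
# filter taps (the atrous rate) changes, so the field of view grows.
for rate in (1, 2, 4, 8):
    conv = nn.Conv2d(256, 256, kernel_size=3, padding=rate, dilation=rate)
    y = conv(x)
    extent = rate * (3 - 1) + 1  # effective kernel extent in pixels
    print(f"rate={rate}: output {tuple(y.shape)}, field of view {extent}x{extent}")
```
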
Okay, on the deep reinforcement learning front. This touches a bit on 2017, but I think the excitement really settled in in 2018. This is work from Google DeepMind and from OpenAI. It started with the DQN paper from Google DeepMind, where they beat a bunch of Atari games, achieving superhuman performance with deep reinforcement learning methods that take in just the raw pixels of the game. The same kind of architecture is able to learn how to beat these games, a super exciting idea that has echoes of what general intelligence is: taking in the raw information and understanding the game, the sort of physics of the game, sufficiently well to beat it.

Then in 2016, AlphaGo, with some supervision and some self-play, supervised learning on expert world-champion players plus play against itself, was able to beat the top world champion at Go. Then in 2017, AlphaGo Zero, a specialized version of AlphaZero, was able to beat AlphaGo with just a few days of training and zero supervision from expert games, purely through self-play. Again, this is getting the human out of the picture more and more, which is why AlphaGo Zero was probably the cleanest demonstration of all the nice progress in deep reinforcement learning. I think if you look at the history of AI, when you're sitting on a porch 100 years from now reminiscing back, AlphaZero will be remembered as a key moment in time. The AlphaZero paper was in 2017, and this year it played Stockfish in chess, one of the best chess-playing engines, and was able to beat it with just 4 hours of training. Of course, the 4 hours comes with caveats, because 4 hours for Google DeepMind means highly distributed training; it's not 4 hours for an undergraduate student sitting in their dorm room. But it means that through self-play it was able to very quickly learn to beat the state-of-the-art chess engine, and also the state-of-the-art shogi engine, Elmo.

The interesting thing here is that in perfect-information games like chess you have a tree of all the decisions you could possibly make, and presumably the farther you look down that tree, the better you do. That's how Deep Blue beat Kasparov in the 90s: you just look as far as possible down the tree to determine which action is optimal. If you look at the way human grandmasters think, it certainly doesn't feel like they're looking down a tree. There's something like creative intuition: you see the patterns on the board, you do a few calculations, but really it's on the order of hundreds of positions, not the millions or billions that the Stockfish, state-of-the-art chess engine, approach considers. AlphaZero moves closer and closer to the human grandmaster, considering very few future moves: through a neural network estimator that estimates the quality of the current board and the quality of the moves that follow, it's able to do much, much less look-ahead. The neural network learns the fundamental information, just as a grandmaster can look at a board and tell how good it is. That's, again, interesting: it's a step toward, at least, echoes of what human intelligence is, in the very structured, formal, constrained worlds of chess and Go and shogi.

Then there's the other side, the world that's messy. It's still games, still constrained in that way, but OpenAI has taken on the challenge of playing games that are much messier, that have some semblance of the real world: you have to do teamwork, you have to reason over long time horizons, with huge amounts of imperfect information, hidden information, uncertainty. Within that world, they've taken on the challenge of the popular game Dota 2. On the human side of it, there's the competition, The International, hosted every year, where in 2018 the winning team got 11 million dollars; it's a very popular, very active competition that's been going on for a few years.
OpenAI has been improving their system and achieved a lot of interesting milestones: in 2017, their 1v1 bot beat a top professional Dota 2 player. The way you achieve great things is you try, so in 2018 they went for 5v5, and the OpenAI Five team lost two games against top Dota 2 players at the 2018 International. Their MMR ranking in Dota 2 has of course been increasing over and over, but there are a lot of challenges that make it extremely difficult to beat the top human players. And, you know, in every story, Rocky or whatever you think of, losing is an essential element of the story that leads to a movie, a book, and greatness, so you'd better believe they're coming back next year, and there are going to be a lot of exciting developments there.

There are currently really two games that hold the public eye as benchmarks for AI to take on. We solved Go, an incredible accomplishment, but what's next? Last year, the best-paper award at NeurIPS went to the heads-up Texas no-limit hold'em AI that was able to beat top-level players. What's currently out of reach, well, not completely, but currently, is not heads-up one-versus-one but the general, multi-player Texas no-limit hold'em. And on the gaming side there's this dream of Dota 2; that's the benchmark everybody's targeting, it's an incredibly difficult one, and some people think it will be a long time before we can win.

On the more practical side of things, 2018, starting in 2017, has been a year of the deep learning frameworks growing up, maturing, and creating ecosystems around them. TensorFlow, with a history dating back a few years, has with TensorFlow 1.0 come to be a mature framework. PyTorch 1.0 came out in 2018 and has matured as well. And now there are really exciting developments in TensorFlow with eager execution and beyond, coming in TensorFlow 2.0 in 2019. Those two players have made incredible leaps in standardizing deep learning, and the fact that a lot of the ideas I talked about today and Monday, and will keep talking about, all have a GitHub repository with implementations in TensorFlow and PyTorch makes them extremely accessible. That's really exciting.

It's probably best to end by quoting Geoff Hinton, the quote-unquote godfather of deep learning and one of the key people behind backpropagation, who said recently of backpropagation: "My view is throw it all away and start again." He believes backpropagation is totally broken, an ancient idea that needs to be completely revolutionized, and the practical protocol for doing that, he said, is that the future depends on some graduate student who is deeply suspicious of everything he has said. That's probably a good way to end the discussion about what the state of the art in deep learning holds, because everything we're doing is fundamentally based on ideas from the 60s and the 80s, and in terms of new ideas there have not been many; the state-of-the-art results I've mentioned are all fundamentally based on stochastic gradient descent and backpropagation. The field is ripe for totally new ideas, so it's up to us to define the real breakthroughs and the real state of the art, in 2019 and beyond.

So with that, I'd like to thank you. The material is on the website: deeplearning.mit.edu