Hi everybody, today we are covering lesson 23 and we're here with Johno and Tanishk. How are you guys both doing? Doing well, excited for another lesson. Yeah, likewise. Great. I shamefully have to start by admitting to a bug which, well, kind of messed up things in a sense, but I think what actually happened is really interesting.
The bug was in notebook 23, the Karras notebook, and it's about measuring the FID. To recall, FID measures how similar a bunch of samples from a model are to a bunch of samples of real images, and that similarity is defined as a kind of distance between the distributions of the features in a classifier or some kind of model.
So that means that to get FID we have to load a model, and we have to pass it some data loaders so that it can calculate what samples of real images look like. Now the problem is that the data loaders I was passing actually had images whose pixels were between negative 0.5 and positive 0.5, but you might recall this model that I trained has pixels between negative 1 and 1.
So what this image eval class would have seen, and specifically this classifier model we are getting the features from, is a whole bunch of unusually low contrast images. They wouldn't really have looked like many things in the data set, because in the data set, particularly for Fashion MNIST, things are pretty consistently normalized in terms of going all the way from 0 to 1, or negative 1 to 1, or I guess 0 to 255 in the original.
And so as a result, I think what would have happened is that the features that came out of this would have been kind of weird, and they might not have consistently said oh, these are t-shirt features and these are shoe features, but rather oh, this is a weird low contrast image feature.
And then the shame continues in that I added another bug on top of this bug, which is that when I then did the sampling, I didn't multiply by two, and the data that I trained it on actually used the same data loaders, or specifically the same transform (not Noisify, the transform) which previously went from negative 0.5 to 0.5. So I trained the model using this restricted input space as well, and therefore it was spitting out things that were between negative 0.5 and 0.5.
And so the FID then said wow, these are so similar: the samples are consistently spitting out features of low contrast things, and all of the real samples are low contrast things, so those are really similar, and that's how we got really low numbers. So those low numbers are wrong. I was a bit surprised, I guess, that the Karras model was doing so much better, and it certainly made me a big believer in the Karras model, but actually it's not doing so much better: once we fix that, the FIDs are actually around five or six, and the reals are two and a half.
To compare, we were getting some pretty good results with cosine. With cosine, yeah, we were getting three to four depending on how many DDIM steps we were doing. So the result of this is this somewhat odd situation where the cosine model, where we scaled it accidentally to be negative 0.5 to 0.5 and then post-sampling multiplied by two (so we're not cheating like the Karras one used to be), is working better than Karras. Which, yeah, is a surprise to me, because I was thinking Karras was in theory optimally scaling things, but I guess the truth is it was scaling things to unit variance, and there's nothing particularly to say that's optimally scaling things. So empirically we've found, kind of accidentally, a better way to scale things. And also our dependent variable is different: our dependent variable is not that Karras-style mixed combination, it's just the noise, the zero-one noise, you know, the noise before it's multiplied by alpha.
Okay, so that's the bug. Anyway, I promised last time we would stop looking at Fashion MNIST for a while, so let's move on to Tiny ImageNet. And the reason we're going to do this is because we're going to try and create U-Nets today, and I wanted to show an example of a nice U-Net we can create that combines a lot of the ideas we've been looking at. It's going to be a super-resolution U-Net, and doing super-resolution on Fashion MNIST isn't going to be very interesting because the maximum training size we have is 28 by 28.
So I thought we'd go a little bit bigger than that, to Tiny ImageNet, which is 64 by 64. I found it quite difficult actually to find the Tiny ImageNet data, but eventually I discovered that it's still on the Stanford servers where it was originally created; it's just not linked to anywhere.
So if this disappears we will keep our forum and website up to date with other places to find it. Anyway, for now we can grab the URL from there and unpack it. shutil is a very handy little module inside the Python standard library, and one of the things it has is a very handy unpack_archive, which can handle zip files, and it's going to put it in our data directory.
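Just to make that concrete, here's what that standard-library call looks like; the file and directory names are placeholders rather than the notebook's exact paths:

```python
# shutil.unpack_archive handles zip (and tar.gz etc.) archives in one call.
# The names below are illustrative placeholders.
import shutil
shutil.unpack_archive('tiny-imagenet-200.zip', 'data')
```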
So yeah, there's a few different ways we could process this, and I thought we might experiment with some things, but I thought it wouldn't be a bad idea to try doing things the reasonably manual way, just to see what that looks like. And often this is the easiest way to do things, because it's a very well-defined set of steps, right.
So step one is to create a data set. A data set is just literally something that has a length and that you can index into, so it has to have these two things defined. You don't have to inherit from anything, you just have to define these two things.
Broadly speaking, in Python you generally don't have to inherit from things, you just have to provide the methods that are expected. So our data set is in a directory called tiny-imagenet-200, and then there's a train directory and a val directory for the training and the validation set. The train directory is the pretty classic, normal arrangement: each category has its images in a separate folder, and specifically they're in an images subfolder.
So what I wanted to do was to start by grabbing all of the image files in path slash train. The Python standard library has a glob function which searches recursively, if asked to, for everything that matches this specification. So this specification is path slash train slash star star slash star dot JPEG, and this star star here, I don't know why we need to say it twice, it's a bit weird, but you also need to say that it should be recursive.
So to be recursive you both have to say recursive equals true here and also put the star star before the slash here. That's going to give us a list of all the files inside path slash train, and so then, if we index into that training data set with zero, that will call get item passing an i of zero, and we will then return a tuple.
One element is the thing in self dot files at i, which is this file, and the other is the label for it, and the label is the parent's parent's name, so that's the category name. Okay, so there's a data set that returns a tuple of two strings when you index into it: the first is the path of the image file, and the second is the name of the category it's in.
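To make that concrete, here's a minimal sketch of such a dataset, assuming the standard Tiny ImageNet folder layout described above (the class name and the details are my own assumptions, not necessarily the notebook's exact code):

```python
from glob import glob
from pathlib import Path

class TinyDS:
    "A minimal dataset: just __len__ and __getitem__, no base class required."
    def __init__(self, path):
        self.path = Path(path)
        # recursively grab every JPEG under e.g. tiny-imagenet-200/train/
        self.files = glob(str(self.path/'**/*.JPEG'), recursive=True)
    def __len__(self):
        return len(self.files)
    def __getitem__(self, i):
        f = Path(self.files[i])
        # the label is the grandparent directory name:
        # .../train/n01443537/images/n01443537_0.JPEG -> 'n01443537'
        return str(f), f.parent.parent.name

# usage sketch: tds = TinyDS('data/tiny-imagenet-200/train'); tds[0] -> (filepath, wordnet_id)
```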
These weird names are called WordNet categories; they're codes that basically indicate concepts in English. One of the reasons I actually used this particular data set is because it's going to force us to do some more data processing, which I think is good practice. That's because, weirdly, although the validation set is in tiny-imagenet-200 slash val, which is the not-weird part, the weird part is that the images are not then in subdirectories organized by label; instead there is a separate val_annotations.txt file, which looks like this: it says, for each file name, what category it is. It's also got the bounding box of whereabouts that is, but we're not going to be using that today.
So I decided to create a dictionary that would tell us, for each file, what category it's in. That means I want to create, in this case, something exactly like a list comprehension, but because it's not in square brackets it's a generator comprehension, so it'll kind of stream out the results. We're going to go through each line in this file, and we're going to split on tab, so that's going to give us this, and then this, and then this, and then we're going to grab the first two. And if you pass a list of lists, or a list of tuples or whatever, to dict, it will create a dictionary using those pairs as key-value pairs. So if we have a look, there it is; that's quite a nice, neat way to do it. And if you're not sure, you can just type dict, open brackets, and hit shift-tab a couple of times, and it'll show you the various options, and you can see here I'm doing dict(iterable), because my generator is iterable, and it says oh, that's exactly as if you'd created a dictionary and then gone for k, v in iterable: d[k] = v. So there's a nice little trick. Okay, now we need a data set that works just like TinyDS, but whose get item is going to label things differently, so I just inherited from TinyDS, which means we don't need to do init or len again. Then get item, again, is going to return the i-th file, but this time the label will not be the parent's parent's name: we will look up the name of the file in the annotations dictionary. And so that works, and we can check the length works. So then a fairly generally useful thing that I thought we'd create is something that lets us transform any data set. So here's a class: you can pass it a data set, and you can pass it a transformation for the x, the independent variable, and a transformation for the y, and both of them default to noop, that is, no operation, so it just doesn't change it at all. So for a transformed data set, the length of it is just the length of the original data set, but when we call get item, it'll grab the tuple from the data set we passed in and it will return that tuple with the x transform and the y transform applied. Does that make sense so far? Great. Okay, so I don't like working with these n03-whatever IDs, but the data set luckily has a WordNet IDs file (wnids) in it, so if I just open it up... oh sorry, this one actually is not quite going to help us, this is just a list of all of the WordNet IDs that they have images for. We could have actually got this by simply listing this directory, which would have told us all the IDs, but they've also got a text file containing all of them, so we can see that there are 200 categories. Okay, and that's useful, because we're going to want to change these n03-whatever codes into an int, and the way we can change one into an int is by simply saying oh, we'll call this one zero and this one one and so forth, right. So the int-to-string, or id-to-string, version of this is literally this list, so zero will be that. But for the string-to-int version, where you do this all the time, you basically use enumerate, which gives us the index and the value for everything in the list. Those are going to be our keys and values, but actually we're going to invert it to become value colon key, and that's what str2id will be. So note here that we have a dictionary comprehension: you can tell because it's got curly brackets and a colon. And so here's our dictionary comprehension; we could have used a dictionary comprehension for this earlier thing as well.
But yeah, there's lots of ways of doing things, and none of them is any better or worse than any other. Okay, so that's that. Those word tags, whatever, do we have the names for them, or is that something... Yes, the names I'm going to get to shortly; there's a words.txt. All right, so I grabbed one batch of data and grabbed its mean and standard deviation, and then I've just copied and pasted them in here for normalizing. So my transform for x is going to be: read the image, and if you read it as RGB that's going to force it to be three channels, because actually some of them are only one channel; divide by 255 so it'll be between zero and one; and then we will normalize. And then for our y's we will go through str2id to get the ID and just use that as our tensor. So, you know, doing it manually is actually pretty straightforward, right, because now we just pass those to our TfmDS, our transformed data set, and we can check that: you can see yi is a tensor, but we can look it up to get its value, and xi is an image tensor with three channels, so channel by height by width, as is normal for PyTorch. For showing images it's nice to denormalize them, so that's just denormalizing, and if we show the image that we just grabbed, it's a water jug, I guess. All right, so now we can create a data loader for our training set: it's going to contain our transformed training data set, and we pass in a batch size, and this one has to be shuffled. Not sure why I put num_workers equals zero there; generally eight's pretty good if you've got at least eight cores. Yeah, so we can now grab an x batch and a y batch and take a look at a denormalized image from there, and there we've got a nice little kitty cat, so I think this is already looking better than Fashion MNIST. So there's this thing, words.txt, that they've also provided, and this is actually a list of the entire WordNet hierarchy: the top of the hierarchy is entity, one of the entity types is a physical entity or an abstract entity, entities can be things, and so forth; that's how WordNet is handled. This is quite a big file actually, so we go through each item of that file and again split on tabs (that's what backslash t means), which is going to give us the WordNet ID and then the name of it. So now we can go through all of those, they call them synsets, and if the key is in our list of the 200 that we want, we'll keep it. And we don't really want something like "causal agent, cause, causal agency"; the first one generally seems to be the most normal, so I just split on comma and grab the first one. All right, so we could then go through our y batch and just turn each of those numbers into strings, then look each of those up in our synsets and join them up, and then use those as titles to see our Egyptian cat and our cliff and our guacamole; that's a monarch butterfly, and so forth. And you can see that this is going to be quite tricky, because, like, a cliff (this is a cliff dwelling, for instance) could be quite complicated. I have a feeling that for this they intentionally, well, a hundred of the categories might have come from the normal ImageNet, and I think they might have then picked a hundred that are designed to be particularly difficult or something, if memory serves correctly. All right, so then we could define a tfm_batch function with the same basic idea, and that's just going to transform the x and the y in a batch. Oh yes, we're about to use that; I should move it down a bit because we're not quite there yet.
Okay, so before that we can create our data loaders. We created a get_dls back in an earlier lesson which simply turns that into a data loader and that into a data loader, and this one gets shuffled and that one doesn't, and so forth. Oh, I see, this is where we do our num_workers, cool. All right, so then, yeah, we want to add our data augmentation. I noticed that training a Tiny ImageNet model is a much harder thing to do than Fashion MNIST, and overfitting was actually a real challenge, and I guess it's because 64 by 64 isn't that many pixels. So yeah, I found I really needed data augmentation to make much progress at all. Now, a very common data augmentation is called RandomResizedCrop, which is basically to pick one area inside the image and then zoom into it and make that your image, but for such low resolution images that tends to work really poorly, because it's going to introduce a lot of blurring artifacts. So instead, for small images, I think it's better to add a bit of padding around them and then randomly pick a 64 by 64 area from that padded image. So it's just going to shift them slightly; it's not a lot of augmentation, but it's something. And then we do our random horizontal flips, and then we'll use that RandErase thing that we created earlier; this is just something I was experimenting with. So yeah, now we can use that batch transform callback, using tfm_batch, passing in those transforms, with torchvision transforms (this capital T is torchvision transforms). Because these are all nn.Modules, you can pass them to nn.Sequential to just have each of them called one at a time in a row. There's nothing magic about this, it's just doing function composition; we could easily create our own. In fact there's also transforms.Compose that does the same thing. Yeah, I was going to say, we've got a fastcore compose which, as you can see, basically just says for f in funcs: x = f(x). Yeah, I don't know, there's a torchvision Compose, which I think might be kind of the old way to do it. Is that right? I'm not sure; I have a feeling maybe this is considered the better way now because it's scriptable, I'm not promising that though, but yeah, it does basically the same thing. Okay, so we can now create a model as usual. Basically, I copied the get_dropmodel (the get-model-with-dropout) from our earlier Fashion MNIST stuff, and I started with a kernel size five convolution and then, yeah, a bunch of res blocks, so this is what we're used to seeing before. And so we can take a look; in this case, as quite often seems to be the case, we accidentally end up with no random erasing, so let's just run it again... it really doesn't want to do random erasing... here we go, so we can see it. There's this very small border you can hardly see sometimes, and a bit of random erasing has been done. You know, all of the batch is being transformed, or augmented, in the same way, which is kind of okay; it's certainly faster. It can be a bit of a problem if you have, like, one batch that has lots and lots of augmentation being done to it, and it could be really hard to recognize, and that could cause the loss to be large for that batch, and if you've been training for ages, that could kind of jump you out of the smooth part of the loss surface. That's the one downside of this, so I'm not going to say it's always a good idea to do augmentation at batch level, but it can certainly speed things up a lot if you don't have heaps of CPUs.
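Here's a sketch of that kind of batch-level augmentation pipeline, composing torchvision transforms with nn.Sequential; the padding amount and erasing settings are assumptions rather than the notebook's values:

```python
import torch
import torch.nn as nn
import torchvision.transforms as T

# pad a little, randomly crop back to 64x64 (a small shift), flip, then erase a patch;
# applied to a whole batch at once, so every image in the batch gets the same augmentation
augs = nn.Sequential(
    T.Pad(4, padding_mode='reflect'),
    T.RandomCrop(64),
    T.RandomHorizontalFlip(),
    T.RandomErasing(p=0.5, scale=(0.02, 0.1)),  # standing in for the course's RandErase
)

xb = torch.rand(8, 3, 64, 64)   # a fake batch just to show the call
print(augs(xb).shape)           # torch.Size([8, 3, 64, 64])
```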
All right, so you can use that summary thing we created, and there's our model. And yeah, because we're doubling the number of channels as we're decreasing the grid size, our number of megaflops per layer is constant, so that's a pretty good sign that we're using compute throughout. So then we can train it with AdamW, mixed precision, and our augmentations. I then did the learning rate finder and trained it for 25 epochs and got nearly 60, 59 percent, and yeah, this took quite a while actually to get close to 60 percent, I've got to admit. And you can see that the training set's already up to 91, so we're kind of on the verge of overfitting. Okay, so then I thought, all right, how do we do better? I wanted to have a sense of how much better we could get, and I tend to like to look at Papers with Code, which is a site that shows papers with their code and also how good the results they got were. So this is image classification on Tiny ImageNet, and at first I was pretty disheartened to see all these 90-plus things, but as I looked at the papers I realized something. The first thing is I noticed that these ticks here represent extra training data, so these are actually pre-trained models that are only fine-tuned on Tiny ImageNet, which is a total cheat. And then I looked more closely at this one, and actually these are also using pre-trained data, so Papers with Code is actually incorrect. And so the first ones I could see which I could clearly kind of replicate and make sense of was this one, so the highest one that I'm confident of is this 72. And so then I wanted to get a sense of, right, how much work is there to get from, like, 60 to 70, and how good is this? So I opened up the paper, and here's Tiny ImageNet. This paper turns out to be about a new type of mixup data augmentation: this is the normal kind of mixup, and this is their special kind of mixup, and on a ResNet-18, yeah, I see they're getting like 63, 64, 65 with various different types of mixup, and around 64 or 65 for their special one. And then if they use much bigger models than we're using, they can get up to 66-ish. So that kind of made me think, okay, this classifier is not bad, but there's clearly room to improve it, and I can't help myself, I always have to try to do better. So this is a good opportunity to learn about a trick that is used in real ResNets, which is that in a real ResNet we don't just say how many filters, or channels, or activations per layer and then just go through and do a stride-two conv each time; instead, you can also say the number of res blocks per downsampling stage. So this would say do three res blocks and then downsample, or downsample and then do three res blocks, or something like that; or do three res blocks, the first of which, or the last of which, is a downsample, then two res blocks with a downsample, then two res blocks with a downsample. So this has got a total of one, two, three, four, five downsamples, but rather than having one, two, three, four, five res blocks, it's going to have three, four, five, six, seven, eight, nine res blocks, so it's nearly twice as deep.
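Here's a sketch of that deeper-stage idea, with a deliberately simplified residual block standing in for the course's ResBlock (so treat the details as an approximation, not the notebook's code):

```python
import torch
import torch.nn as nn

class SimpleResBlock(nn.Module):
    "A stripped-down residual block: two convs plus an identity (or downsampling) path."
    def __init__(self, ni, nf, stride=1):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(ni, nf, 3, stride=stride, padding=1), nn.BatchNorm2d(nf), nn.ReLU(),
            nn.Conv2d(nf, nf, 3, padding=1), nn.BatchNorm2d(nf))
        # the identity path only needs work if the channels or resolution change
        self.idpath = (nn.Identity() if ni == nf and stride == 1
                       else nn.Conv2d(ni, nf, 1, stride=stride))
        self.act = nn.ReLU()
    def forward(self, x): return self.act(self.convs(x) + self.idpath(x))

def res_blocks(n_blocks, ni, nf):
    "n_blocks residual blocks in a row; only the last one downsamples (stride 2)."
    return nn.Sequential(*[
        SimpleResBlock(ni if i == 0 else nf, nf, stride=2 if i == n_blocks-1 else 1)
        for i in range(n_blocks)])

# e.g. a stage of 3 blocks going from 32 to 64 channels while halving the grid size
stage = res_blocks(3, 32, 64)
print(stage(torch.rand(1, 32, 64, 64)).shape)   # torch.Size([1, 64, 32, 32])
```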
And the way we do that is we just replace the places where it was saying ResBlock with res_blocks, and that's just a sequential which goes through the number of blocks and creates a ResBlock for each. You can do it a couple of ways; in this case I said if it's the last one then make it stride two, otherwise stride one, so it's going to be downsampling at the end of each set of res blocks. So that's the only thing I changed: I changed ResBlock to res_blocks and passed in the number of blocks, which is this. Okay, so the number of megaflops is now 710-ish, which is more than double, so it should have more opportunity to learn stuff, which also could mean more opportunity to overfit. So again we do our LR find, and yeah, let's do 25 epochs, and I didn't actually add more augmentation. Okay, and that got up to nearly 62, so that was a good improvement. And interestingly, it's not overfitting more; it's actually, if anything, overfitting less, so there's something about its ability to actually learn this which is slowing it down or something. So I thought, yeah, it'd be nice to train it for longer, so I decided to add more augmentation, and to do that I decided to use something called TrivialAugment, which is not a very well known approach, but it deserves to be. It comes from Frank Hutter's lab; Frank Hutter is somebody who consistently creates extremely practical, useful improvements, with much less of the nonsense that we often see from some of the huge well-funded labs. And so this one's kind of a reaction to some previous approaches, such as one called AutoAugment and one called RandAugment (they might have both come from Google Brain, I'm not quite sure), where they used many, many thousands of TPU hours to optimize how every image, or how each set of images, is augmented. And what these guys did is they said, well, what if we don't do that, and we just randomly pick a different augmentation for each image? And that's what they did; they just said algorithm one is the procedure: pick an augmentation, pick an amount, do it. I feel like they're almost trying to make a point by writing this algorithm out here. And they basically find this is at least as good, or often better actually, than the incredibly resource-intensive ones. The incredibly resource-intensive ones also kind of require a different version for every data set, which is why they describe this one as tuning-free. And rather nicely, and surprisingly for me, it's actually built into PyTorch, so if we go to PyTorch's website and go to TrivialAugmentWide, they show you some examples of TrivialAugmentWide; we can create our own as well. Now the thing is, I found that doing this at a batch level worked poorly, and I think the reason is what I described earlier: I think sometimes it will pick a really challenging augmentation, one that totally messes up the loss, and if every single image in the batch is like that, then it'll shoot it off into distant parts of the weight space. Which is a good excuse for me to show how to do augmentations at a per-item level. Now, these actually require, or some of them require, having a PIL image, a Python Imaging Library image, not a tensor, so I had to change things around: we have to import Image from PIL, and we have to change our tfm_x so that we do the augmentations in there instead.
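A sketch of what a per-item transform along those lines might look like; the normalization statistics and the erasing step here are placeholders (the notebook computes its own stats and uses its own RandErase):

```python
import torchvision.transforms as T
import torchvision.transforms.functional as TF
from PIL import Image

aug_tfms = T.TrivialAugmentWide()                        # one random augmentation per image
norm = T.Normalize([0.5, 0.5, 0.5], [0.25, 0.25, 0.25])  # placeholder mean/std
erase = T.RandomErasing(p=0.5)                           # standing in for the course's RandErase

def tfm_x(path, aug=False):
    "Open one image as PIL, optionally augment it, then tensor-ify, normalize and erase."
    img = Image.open(path).convert('RGB')   # force 3 channels (some images are greyscale)
    if aug: img = aug_tfms(img)
    x = norm(TF.to_tensor(img))             # to_tensor scales to [0,1]; then normalize
    if aug: x = erase(x)                    # erasing uses normalized values, so do it after
    return x
```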
So for the training set and the validation set we're going to use one function for both, and we're going to pass in something that just says: do you want to do augmentations? For the training set we're going to pass aug equals true, and for the validation set we won't. So yeah, Image.open is how you create a PIL image object, and then, if we wanted augmentations, do these augmentations, and then convert it into a tensor (torchvision has a to_tensor we can call), and then we can normalize it. And actually I decided just to use torchvision's Normalize; either is fine, this one works well. And then, again, if you want augmentation, do your RandErase. If you remember, our RandErase was designed to use zero-one distributed Gaussian noise, so you want that to happen after normalization, which is why we do this order. So now we don't need to use the batch transform thing; we're just doing it all directly in the data set. So you can see you can do data augmentation in very simple ways without almost any framework help; in fact nothing's really coming from a framework here, it's just this little TfmDS we made. And so now we just pass that into our get_dls data loaders, and we don't need any augmentation callback. All right, so now we can keep improving things by doing something called pre-activation ResNets. If we go back to our original ResNet, you might recall that the way we did it, we have this conv block which consists of two convolutions in a row, the second one has no activation, and to remind you what conv is: we first of all do a conv, then optionally we do a normalization, and then optionally we do our activation function. And the second of those has act equals none. So basically what this is saying is: go convolution, norm, activation, convolution, norm, and that's what self.convs is; and then this is the identity path, which does nothing at all if there's no downsampling or change of channels; and then we apply the final activation function to the whole thing. So that was how the original res block was designed, which is kind of an accident, because, to be honest, when I wrote that I didn't bother looking at the paper, I just did whatever seemed reasonable in my head. But then, looking into it further, I looked at this slightly later paper by the same author as the ResNet paper, Kaiming He, and Kaiming He et al. drew this version here on the left: as you can see it's conv, norm, relu, conv, norm, add, relu. And he basically pointed out, yeah, you know what, maybe that's not great, because the relu is being applied to the addition, so there isn't actually really an identity path at all. So wouldn't it be nice if we could have a pure identity path? And to do that, he proposed reordering things to go norm, relu, conv, norm, relu, conv, add. And so this is called a pre-act, or pre-activation, res block. So that means I had to redefine conv to do norm, then act, and then conv, so my sequential now has the activation in both places. And other than that, of course, there's no activation happening in the res block itself, because it's all happening in the convs. Does that make sense? Yeah, makes sense. Yeah, cool. So this is now exactly the same, except we now need to have an activation and a batch norm after all those blocks, because previously it finished with a norm and an activation, whereas now it starts with them, so we have to put those at the end.
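To make the reordering concrete, here's a hedged sketch of a pre-activation conv helper of the kind being described (an approximation, not the notebook's exact function):

```python
import torch.nn as nn

def preact_conv(ni, nf, ks=3, stride=1):
    "Pre-activation ordering: norm, then activation, then conv (instead of conv, norm, act)."
    return nn.Sequential(
        nn.BatchNorm2d(ni),
        nn.ReLU(),
        nn.Conv2d(ni, nf, ks, stride=stride, padding=ks//2))

# inside a pre-activation res block, two of these are simply added to the identity path,
# with no extra activation applied after the addition:
#   def forward(self, x): return self.convs(x) + self.idpath(x)
```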
It also means we can't start with a res block anymore, because if we started with a res block it would have an activation function at the start, which would throw away half of our data, which would be a bad idea. So you've got to be a bit careful with some of the details. But yeah, so now you can see that each image is getting its own augmentation: this one's been sheared (it looks like it's a door or something, it's really hard to tell what the hell it is), this one's been moved, and it looks like this one's also been sheared; and you can also see they've got different amounts of random erasing on them. So I thought I'd try training that for 50 epochs, and that got us to 65 percent, which is nearly as good as the normal mixup things are getting even on a ResNet-50, so this is looking really good. So I won't spend time on this, but I'll just mention, I was kind of curious; one of the things I should mention also is they trained all of these for 400 epochs, so I was curious what would happen if we trained it a bit longer. I wasn't patient enough to train it for 400 epochs, but I thought I could do 200 epochs, so I just duplicated that last one, made it 200 epochs, and that got us to 67 and a half, which is better than any of their non-special mixups. So I think it just goes to show you can get genuinely state-of-the-art results. If we used their special mixup, that would be interesting to try as well, to see if we can match their results there; but, you know, we've built all this from scratch (well, we didn't do the data augmentation from scratch, because it's not very interesting, but other than that), so I think that's really cool. So I know that you did some other experiments with the pre-activation... Oh right, yeah. When I saw the pre-activation success I was quite enthusiastic about it, so I actually thought, oh, maybe I should go back and actually use it everywhere. But, weirdly enough, it was worse for Fashion MNIST, and worse with less data augmentation. I mean, maybe it's not that weird, because the idea, when He et al. introduced it, was that this is to train deeper models: there's a more pure identity path, and with that more pure identity path the gradients should flow through more easily, and so there should be a smoother loss surface. So I guess it makes sense that you don't really see the benefits on less deep models. The bit I'm surprised by... can you elaborate? Because it seems like that sort of justification should be true for smaller models too, right? Well, yeah, it does, but smaller models are going to have a less bumpy surface anyway, they've just got fewer dimensions to be bumpy on, and, more importantly, they're less deep, so there's less room for gradients to explode exponentially, so they're not as sensitive. But yeah, I can see why they don't necessarily help as much, but I don't have any idea why they were worse, and they were quite consistently worse. Yeah, I find it quite interesting too. Yeah, it's quite curious. And it's interesting that when we do these experiments on things that nowadays are considered pretty fundamental and foundational, you kind of all the time discover things that nobody seems to have noticed or written about; there's plenty of room, as a kind of more experimental researcher, to do experiments and then go, oh, that's interesting, and then try and figure out what's going on.
Yeah. I think a lot of researchers go in the opposite direction: they try to start with theoretical assumptions and then test them. When I think about it, I feel like maybe a lot of the more successful folks, in terms of people who build stuff that actually gets used, are more experimental-first, maybe. Okay, so shall we have a five minute break, since we're kind of on the hour? Sure. All right, so let's now look at notebook 25, super-res. I've just copied a few things from the previous notebook: some transforms, our data sets, our denorm, our tfm_batch and our tfm_x. Let me show you; are we using tfm_batch here? We're not even using tfm_batch, so let's get rid of that, because that's just confusing. Okay, so let's figure this out, what are we doing here? We've got our two data sets. All right, so the goal of this is we're going to do super-resolution, not classification, so let's talk about what that means. What we're going to do is: the independent variable will be scaled down to a 32 by 32 pixel image, and the dependent variable will be the original image. And to do a random crop within a padded image, and random flips, both the independent and the dependent variable need to have had exactly the same random cropping and exactly the same flipping; otherwise you can't say, oh, this is how you do super-res to go from the 32 by 32 to the 64 by 64, because it might be like, oh, it has to be flipped around and moved around. So yes, for this kind of image reconstruction task, it's important to make sure that your augmentation is done the same way on the independent and the dependent variable, and that's why we've put it into our data set. This is something people often get confused about and don't know how to do, but it's actually pretty straightforward if we do it this way: we just put it straight in the data set, and it doesn't require any framework fanciness. Now, what I did do is I then added random erasing just to the training set, and the reason for that is I wanted to make the super-resolution task a bit more difficult, which means sometimes it doesn't just do super-resolution but it also has to replace some of the deleted pixels with proper pixels, and so it gives it a little bit more to do, which can be quite helpful: it's a data augmentation technique, and also something to give it more of an opportunity to learn what the pictures really look like. Okay, so with that in place, these are going to do the padding, random cropping and flipping, the training set will also add random erasing, and then we create data loaders from those. Would it make sense to use the trivial augment here? The trivial augment, did you say? Yeah. Maybe, yeah, I don't particularly see a reason not to, well, only if you found that overfitting was a problem; and if you did do it, you would do it to both the independent and the dependent variables. So yeah, here you can see an example: the independent variables, some of them, in this case all of them actually, have some random erasing; the dependent doesn't, so it has to figure out how to replace that with that. And you can also see that this is very blocky and this is less blocky; that's because this one has been taken down to 32 by 32 pixels and this one's still at 64 by 64.
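Here's a sketch of the general trick: do the random crop and flip inside the dataset so the same parameters are applied to both x and y (the names and padding values are assumptions, and the downscaling and erasing of x is only indicated by a comment):

```python
import random
import torch
import torchvision.transforms.functional as TF

class SuperResDS:
    "Return (x, y) pairs where x and y get identical random crops and flips."
    def __init__(self, imgs, pad=4, size=64, train=True):
        self.imgs, self.pad, self.size, self.train = imgs, pad, size, train
    def __len__(self): return len(self.imgs)
    def __getitem__(self, i):
        y = self.imgs[i]                                     # a (3, H, W) tensor
        if self.train:
            y = TF.pad(y, self.pad, padding_mode='reflect')
            top  = random.randint(0, 2*self.pad)
            left = random.randint(0, 2*self.pad)
            y = TF.crop(y, top, left, self.size, self.size)  # same crop for x and y...
            if random.random() < 0.5: y = TF.hflip(y)        # ...and same flip
        x = y.clone()
        # (downscaling x, and adding random erasing to it for the training set, would go here)
        return x, y

ds = SuperResDS([torch.rand(3, 64, 64) for _ in range(4)])
x, y = ds[0]
print(x.shape, y.shape)   # torch.Size([3, 64, 64]) torch.Size([3, 64, 64])
```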
So, in fact, once you go down that far the cat's lost its eyes entirely, so it's going to be quite challenging; it's lost its lines entirely. So super-resolution is quite a good task for trying to get a model to learn what pictures look like, because it has to figure out how to draw an eye, and how to draw a cat's whiskers, and things like that. Were you going to say something, Johno? Sorry. Oh, I was just going to point out that the data sets are also simpler, because you don't have to load the labels, so there's no difference between the train and the validation now, it's just finding the images. Good point, yeah, because the label, the dependent variable, is just the picture. And so, okay, because TfmDS has a tfm_x which is only applied to the independent variable, the independent variable has applied to it this pair of resize to 32 by 32 and then interpolate. And what that actually does is it ends up still with a 64 by 64 image, but the pixels in that image are all, like, doubled up. So that means that it's still doing super-resolution, but it's not actually going from 32 by 32 to 64 by 64; it's going from the 64 by 64 where all of the pixels are two by two blocks. And it's just a little bit easier that way: we could certainly create a U-Net that goes from 32 to 64, but if you have the input and output image the same size, it can make the code a little bit simpler. I originally started doing it without this interpolate thing, and then I decided it was just getting a little bit confusing, and there's no reason not to do it this way, frankly. Okay, so that's our task, and the idea is that, if it does a good job of this, you could pass 64 by 64 images into it and hopefully it might turn them into 128 by 128 images, particularly if you trained it on a few different resolutions; you'd expect it to get pretty good at resizing things to a bunch of different resolutions, and you could even call it multiple times. But anyway, for this I was just doing it to demonstrate. We have, in previous courses, trained bigger ones, for longer, with larger images, and one of the interesting things is they tend to not only do super-resolution, but they often make the images look better, because the pixels it fills in, it kind of fills in with what that image looks like on average, which tends to average out imperfections; so often these super-resolution models actually improve image quality as well, funnily enough. Okay, so let's consider the dumb way to do things. We've seen a kind of dumb way to do things before, which is an autoencoder; so go in with low expectations here, because when we did an autoencoder before it was so bad it actually inspired us to create the Learner, if you remember; that was back in notebook 8. So basically what we're going to do is have a model which looks a lot like previous models: it starts with a res block of kernel size five, and then it's got a bunch of res blocks of stride two, but then we're going to have an equal number of up blocks. And what an up block is going to do is, sequentially, first of all an UpsamplingNearest2d, which is actually identical to this, right, it's going to just double all the pixels, and then we're going to pass that through a res block.
So it's basically a res block with a stride of a half, if you like: it's undoing a stride two, it's upsampling rather than downsampling. Okay, and then we'll have an extra res block at the end to get it down to three channels, which is what we need. Okay, so we can do our learning rate finder on that, and I just trained it pretty briefly, for five epochs. So this model is basically trying to take the image that we start with, really squeeze it into, I guess, a small representation, and then try to bring that small representation back up to the full super-resolution? Yeah, exactly right, Tanishk. And we could have done it without any of the stride twos; I guess we could have just had a whole bunch of stride one layers. There's a few reasons not to do it that way, though. One is obviously just that the computation requirements are very high, because the convolution has to scan over the image, and when you keep it at 64 by 64 that's a lot of scanning. Another is that you're never forcing it to learn higher level abstractions by recognizing how to use more channels on a smaller grid size to represent things. So yeah, it's the same reason that in classifiers we don't leave it at stride one the whole time: you end up with something that's inefficient and generally not as good. Exactly, yeah, thanks for clarifying, Tanishk. Okay, so the loss goes down, and the loss function I'm using is just MSE here, so it's how similar is each pixel to the pixel it's meant to be. And then I can call capture_preds to get the predictions and the targets and the inputs (or probabilities, targets and inputs, I can't quite remember now). So here's our input images, and they're pretty low resolution, and, oh dear, here's our predicted images, pretty terrible. So why is that? Well, basically it's kind of like the problem we had with our earlier autoencoder: it's really difficult to go from a two by two or four by four or whatever image up to a 64 by 64 image. We're asking it to do something that's just really challenging, and that would require a much bigger model trained for a much longer amount of time. I'm sure it's possible, and in fact latent diffusion, as we've talked about, has a model that kind of does exactly that. But in our case there's no need to make it so complicated; we can actually do something dramatically easier, which is we can create a U-Net. So U-Nets were originally developed in 2015, and they were originally developed for medical imaging, but they've been used very, very widely since. I was involved in medical imaging at the time they came out, and certainly they quite quickly got recognized in medical imaging; they took a little bit longer to get recognized elsewhere, but nowadays they're pretty universal, and they are used in stable diffusion. Basically, some of the details don't matter here (this is the original paper), so let's focus on the broad idea. This thing here is what we're going to call the downsampling path. In this case they started with 572 by 572 images (it looks like they started with one channel images), and then, as we've seen, they took them down to 284 by 284 by 128, and then down to 140 by 140 by 256, and then down to 68 by 68 by 512, and 32 by 32 by 1024. So here's this downsampling path, right, and then the upsampling path is exactly what we've seen before: we upsample.
I mean, in the original thing they didn't use ResNets or res blocks, they just used convs, but the idea is the same. But the trick is these extra things across here, these arrows, which are "copy and crop". What we can do is, during the upsampling, we've got a 512-channel thing here; we can upsample to a 512-channel thing, we can then put it through a conv to make it into a 256-channel thing, and then what we can do is copy across the activations from here. Now, they actually do things in a slightly weird way: where they're downsampling they had 136 pixels by 136, and over here they have 104 by 104, so they crop out the center bit. That's because of the slightly weird way they did it, they basically weren't padding things; nowadays we don't have to worry about that cropping. So what we do is we literally copy over these activations, and we then either concatenate or add, and you can see in this case they're concatenating (see how there's the white bit and the blue bit), so they have concatenated the two lots together. So actually I think what they did here is they went from a 52 by 52 by 512 to a 104 by 104 by 256, and I think that's what this little blue rectangle here is, and then they copied out the 104 by 104 by 256 and put the two together to get a 104 by 104 by 512. And so these activations are half from the upsampling and half from the downsampling, from earlier in this whole process. It might be easiest to understand why that's interesting when we get all the way back up to the top, where we've got this 392 by 392 thing: the thing we're copying across now is just two convolutions away from the original image. So, like, for super-resolution, for example, we want the output to look a lot like the original image, so in this case we're actually going to have an entire copy of something very much like the original image that we can include in these final convolutions. And ditto here: we have something that's kind of like the somewhat downsampled version we can use here, and the more downsampled version we can use here. So yeah, that's how the U-Net works. Do either of you guys have anything to add, things that you found helpful to understand, or anything surprising? I guess it's a fascinating thing; these days a lot of people tend to just add, so the outputs from the down layer are the same shape as the inputs to the corresponding up block, and then they just kind of add them. Yeah, particularly for super-resolution, adding might make more sense than concatenating, because you're literally saying, oh, this little two by two bit is basically the right pixels, it just has to be slightly modified around the edges. Yeah, it also makes me think of a boosting sort of thing, where, if you think about the fact that a lot of information from the original image is being passed all the way across at that highest skip connection, then the rest of the network can effectively be producing an update to that, rather than having to recreate the whole image. Or, to put it another way, it's like a ResNet, but the skip connections are jumping from the start to the end, and from a bit after the start to a bit before the end; and I guess a ResNet's a bit like boosting too. Hmm, yeah. Yeah, I mean, I was kind of going to say the same thing, but basically, compared to the autoencoder, where we saw results that were even worse than, I guess, the original image, here I guess the worst it could be is basically the original image, so it's just a similar sort of intuition behind the ResNet and how that works.
I mean, it could be worse if these convs at the end are incapable of undoing what these convs did, which is one argument for maybe why there should also be a connection from here over to here, and maybe a few more convs after that, which is something I'm kind of interested in and not enough people do, in my opinion. Another thing to consider is that they've only got two convs down here, but at this point you have the benefit of only being 28 by 28, so why not do more computation at this point? So there's a couple of things that sometimes people consider, but maybe not enough. So let me try to remember what I did. In my U-Net here, we've got the downsampling path, which is a list of res blocks. Now, a ModuleList is just like a Sequential except it doesn't actually do anything by itself; so then, in the forward, we have to go through the down path and do x = l(x) each time, so it's basically a sequential that doesn't actually do anything on its own. And the up path is exactly the same as we saw before, a bunch of up blocks, and then, like we saw before, the final one's going to have to go to three channels. But now, for our forward, since we're going to be copying this over here and copying this over here, we have to save it during the downsampling path, so we're going to save it in something called layers. I actually decided to do the little trick I mentioned, which is to save the very first input. So I saved the very first input, I then put it through the very first res block, and then we go through each layer in the downward path; there's actually no need at all for there to be an i, l here, it doesn't have to be enumerated, because we don't use i. So for each layer in the downward path, append the activations (so that, as we go through each one, we're going to be able to copy them over later by saving them) and then call the layer. Okay, so how many layers have we got? There are n layers that we stored away. So now we're going to go through the upsampling path and again call each one, but before we do, we're going to actually do the thing that Johno mentioned, which is adding rather than concatenating: unless this is the very first upsampling layer (because for the very first upsampling layer there's nothing to copy, right), we add the saved activations, and then call the layer. And then right at the very end we'll add back the very first layer and pass it through the very final res block. All right, maybe that last one should be concatenated, I'm not sure. Anyhow, this is what I did.
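Here's a compact sketch of that kind of U-Net forward pass, with plain conv blocks standing in for the course's ResBlocks and additive skip connections as described (so, again, an approximation of the idea rather than the notebook's code):

```python
import torch
import torch.nn as nn

def conv_block(ni, nf, stride=1):
    return nn.Sequential(nn.Conv2d(ni, nf, 3, stride=stride, padding=1),
                         nn.BatchNorm2d(nf), nn.ReLU())

def up_block(ni, nf):
    # the "stride 1/2" idea: double the grid size with nearest-neighbour upsampling, then conv
    return nn.Sequential(nn.UpsamplingNearest2d(scale_factor=2), conv_block(ni, nf))

class TinyUnet(nn.Module):
    def __init__(self, nfs=(32, 64, 128)):
        super().__init__()
        self.start = conv_block(3, nfs[0])
        self.dn = nn.ModuleList([conv_block(nfs[i], nfs[i+1], stride=2)
                                 for i in range(len(nfs)-1)])
        self.up = nn.ModuleList([up_block(nfs[i+1], nfs[i])
                                 for i in reversed(range(len(nfs)-1))])
        self.end = nn.Conv2d(nfs[0], 3, 3, padding=1)
    def forward(self, x):
        layers = [x]                          # the "little trick": save the raw input too
        x = self.start(x)
        for l in self.dn:
            layers.append(x)                  # save activations on the way down...
            x = l(x)
        for i, l in enumerate(self.up):
            if i > 0: x = x + layers.pop()    # ...and add them back in on the way up
            x = l(x)
        x = x + layers.pop()                  # add the activations saved right after `start`
        return self.end(x) + layers.pop()     # final conv to 3 channels, plus the raw input

model = TinyUnet()
print(model(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```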
Now, the next thing that I wondered about was how to initialize this. Basically, what I wanted to do is initialize it so that, when it's untrained, the output of the model would be identical to the input, because that's a reasonable starting point for what the super-resolution output should look like. So I just created this little zero_weights thing, which zeros out the weights and biases of a layer. So I created the model, and then I said okay, let's look at the very end of the upsampling path, we'll call that the last res block, and let's zero out the very last convolutions and also the identity connection. And so that means that, whatever it does for all this, at the very end it's going to have nothing in there: this will be zero, so this will be equal to layer zero. And then we also want to make sure that this doesn't change anything, so we can just zero out the weights there. That's probably not quite right, is it; I guess I should have actually set those to, like, an identity matrix, maybe I'll try to do that later, but at least it's something that would be very easy for it to... I have a question, Jeremy. Yeah? This zero_weights: I see a lot of people do a thing where they instead multiply by 1e-3 or 1e-4 to make the weights really small but not completely zero, and I don't have a good intuition whether, in some sense, having everything set to zero fires off some warnings for me that maybe this is going to be perfectly balanced on some saddle point, or it's not going to have any signal to work with, and very small but not quite zero random weights might be better. Do you have an intuition there? I think so, or not so much intuition, more empirical, or both. I don't think it's an issue, and I think it comes from the fact that a lot of people's PhD supervisors and so forth come from back in an era when they were doing, like, linear regression with one layer or whatever, and in those cases, yeah, if all the weights are the same then no learning can happen, because every weight update is identical. But in this case all the previous weights are different, so they all have different gradients, and there's definitely nothing to worry about. I mean, multiplying by a small number would work too, it's not a problem, but setting it to zeros... honestly, I have to stop myself, not because it's a problem, but I always have this natural inclination to not want to set them to zeros, because of years of being told not to; but there's no reason it should be a problem. All right. I was just going to say, again, that U-Net code is very concise, and it's very interesting to see that the basic idea is very simple. Yeah, it's helpful, I think, to just get it into a little bit of code, isn't it. Yeah, thanks; that's very simple code too. Okay, so we do an LR find and then we train, and you can see that previously our loss, even after five epochs, was 0.207, and in this case our loss after one epoch is 0.086, so it's obviously much easier, and we end up at 0.073. Okay, so we can take a look: there's our inputs, and there's our outputs, so it's actually better rather than dramatically worse now, which is good. Yeah, some of it's actually not bad at all, I would say, this car definitely, though I think it's a little over-smoothed, you could say. If we look at the kid's eyes, they still aren't great; in the original he's actually got proper pupils. So yeah, it's definitely not recreated the original, but, given limited compute and limited data, the basic idea is not bad. I do worry about the poor koala: it didn't have eyes here, but it ought to have known there should be eyes, in a sense, and it didn't create any, and maybe it should have done a better job on the eyes.
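Going back to the zero-initialization trick mentioned a moment ago, here's a minimal sketch of what such a helper might look like (assuming the layer has ordinary weight and bias attributes):

```python
import torch
import torch.nn as nn

def zero_weights(layer):
    "Zero a layer's weight and bias in place, so that initially it contributes nothing."
    with torch.no_grad():
        if getattr(layer, 'weight', None) is not None: layer.weight.zero_()
        if getattr(layer, 'bias', None) is not None: layer.bias.zero_()
    return layer

# e.g. zeroing the final conv of a model so that, at the start of training,
# the skip-connection path dominates and the output stays close to the input
last = zero_weights(nn.Conv2d(32, 3, 3, padding=1))
print(last.weight.abs().sum().item())   # 0.0
```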
My feeling, and this is a pretty common way of thinking about this, is that when you use mean squared error (MSE) as your loss function on these kinds of models, you tend to get rather blurry results, because if the model's not sure what to do, it's just going to predict kind of the average. So one good way to fix that is to use perceptual loss, and I think it was Johno who taught us about perceptual loss, wasn't it, when we did the style transfer stuff. So perceptual loss is this idea (it's kind of similar to the FID idea as well) that we could look at some intermediate layer of a pre-trained model and try to make sure that our output images have the same features as the real images. And in this case it ought to be saying, for the real image, if we went to kind of midway through a ResNet, it should be saying there should be an eye here, and in this case this would not represent an eye very well, so that should give it some useful feedback to improve how it draws an eye here. So that's the basic idea. So to do perceptual loss we need a classifier model. I just used the little 25 epoch one; I don't know why I used the little 25 epoch one, I guess maybe that's all I had trained at that time, so let's use the little 25 epoch model. So then, yeah, just grab a batch of the validation set, and then we can just try it out by calling the classifier model; and here I'm doing it in fp16, just keeping my memory use down. I don't think this .half() would be necessary since I've got autocast; anyway, never mind. Okay, this is the same code we had before for the synsets. So here are our images, and, huh, just looking at some of them, they're a bit weird, aren't they: I mean, the koalas are fine, but I wouldn't have picked this as a parking meter, and I wouldn't have picked this as a bow tie. So basically what this is doing here is showing us the predictions, and the predictions are not amazing. Trolleybus, that looks right; this is weird, it's called this one a neck brace and this one a basketball, and that one looks more like a neck brace; the Labrador retriever it's got right; the tractor it's got right; centerpiece, right; mushrooms, right; those probably aren't punching bags. Okay, so you can see our classifier is okay, but it's not amazing; I think this was the one with, like, a 60 percent accuracy, but the important thing is it's got enough features to be able to do an okay job. I have no idea what this is, but I'm pretty sure it's not a goose. Okay, so the model: the model is very simple, just a bunch of res blocks (three, four, five), and then at the end we've got our pooling, flatten, dropout, linear, batch norm. So what we're going to do, just to keep things simple, is grab, I think, the output at the end of the third res block, and a simple way to do that is we'll just go from index four to the end of the model and delete those layers. So if we do that and then look at the model again, you can now see I've got zero, one, two, three, and that's it. So this model is going to return the activations after the fourth res block. For perceptual losses, I think we talked about how you could pick a couple of different places; there's various ways to do it, this is just the simplest, and I didn't even have to use hooks or anything, we can just call c_model. In fact, just to take a look at what this looks like (and again we're going to use mixed precision here), we can grab our y batch as before and put it through our classifier model, and now that we've done this, it's going to give us those intermediate-level features.
In fact, just to take a look at what this looks like, and again we're going to use mixed precision here, we can grab our y batch as before and put it through our classifier model, and now that's going to give us those intermediate-level features. What's the shape of them? It's the batch size (1024) by the number of channels of that layer by the height and width of that layer, so these are 8 by 8, 256-channel features we're going to be using for the perceptual loss.
When I was doing this I wanted to check that things were looking vaguely reasonable, so I would have expected that these features from the actual y should be similar to what we get from our model. So one thing I did was think: okay, if we take the model that we trained, we would hope that the features from the model's output are at least of the same sign as they are in the real images. This is just me comparing that, and yes, they are generally the same sign. These are just little checks I was doing along the way, and I also looked at the MSE loss along the way. There's no need to keep all of those in there; it was just stuff I was doing, not even to debug, but to identify any problems ahead of time.
So now we can create our loss function. Our loss function is going to be the MSE loss, just like before, between the input and the target, which is just what's being passed in here, plus the MSE loss between the features we get out of the C model for the input and the features we get for the actual target image. The features for the target image we calculate with no gradient, because we're not going to be modifying the target at all, but we do want to be able to modify the thing that's generating our input, which is the model we're actually trying to optimize, so we do have gradients for that. So in each case we're calling the classifier model, once on the target and once on the input, and those give us our features. Then we add the two losses together, but they're not particularly similar numerically, they're on very different scales, and we wouldn't want training to focus entirely on one or the other. So I just ran it for an epoch or two, checked what the losses looked like, and noticed that the feature loss was about ten times bigger, so my very hacky way of handling that was just to divide it by ten. Honestly, that detail doesn't tend to matter very much in my opinion, so there's nothing wrong with doing it in a rather hacky way, although there are papers which suggest more elegant ways to handle it, which isn't a bad idea if you're doing a lot of messing around with this and want to save yourself some time.
Jeremy, I don't know if you know this, but for the new VAE decoder from Stability AI for the stable diffusion autoencoder, they trained it partly with just mean squared error and partly with mean squared error combined with perceptual loss, and they used a scaling factor of 0.1. So exactly, there you go, the answer is 0.1, that's official. And Andrej Karpathy says that the correct learning rate to use is always 3e-4, so we're getting all of this sorted out now, that's good.
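A minimal sketch of the combined loss just described, assuming cmodel is the truncated feature extractor from above (the divide-by-ten is the hacky rebalancing just mentioned):

```python
import torch
import torch.nn.functional as F

def comb_loss(inp, targ):
    # Plain pixel-space MSE, exactly as before.
    loss = F.mse_loss(inp, targ)
    # Feature-space (perceptual) term: target features need no gradient...
    with torch.no_grad():
        targ_feat = cmodel(targ)
    # ...but the input comes from the model we're optimizing, so gradients flow here.
    inp_feat = cmodel(inp)
    # Scale the feature term down so the two losses are roughly comparable.
    return loss + F.mse_loss(inp_feat, targ_feat) / 10
```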
All right, so for my unet we're going to do the same stuff as before in terms of initializing it, do our LR find and train it for 20 epochs, and obviously the loss is not comparable, because the loss now incorporates the perceptual loss as well. This is one of the challenges with these things: is it better or worse? Well, we just tend to have to take a look and compare, I guess, and maybe I should copy over our previous model's images so we can compare. Okay, there are our inputs, there are our outputs, and look, he's got pupils now, which he didn't used to have. The koala still doesn't quite have eyeballs, but it's definitely less out-of-focus looking. There's some flipping going on there? Yes, some of them are going to be flipped, because this is copied from earlier, so there's flipping and cropping going on and they won't be identical. You can also see that the background was all just blurred before, whereas now it's got texture, and if we look at the real image, the real one has texture. So clearly the perceptual loss has improved matters quite significantly.
There's an interesting thing here, which is that there's not really any metric we can use now, right? Because if we used mean squared error, the one that's trained with mean squared error would probably do better, but visually it looks worse. And if we used something like FID, well, that's based on the features of a pre-trained network, so it would probably be biased towards the one that's trained using those features for the perceptual loss. So you get back to this very old-school thing where the way we choose is just by looking and evaluating. And when you speak to someone like Jason Antic, who's made a whole career out of image restoration and super resolution and colorization, that is a big part of his process even now: still looking at a bunch of images to decide whether something is better, rather than relying on these metrics. Yeah, some PhD student yelled at me on Twitter a few weeks ago for saying, look at this cool thing our student made, don't they look better, and he was like, don't you know there are rigorous ways to measure these things, this is not a rigorous approach at all. PhD students, man, they've got all the answers; you can't have a human looking at a picture and deciding whether they like it or not, that's insane. Well, I'm a PhD student, and I agree with you on this, so... Yeah, okay, some PhD students are better than others, that's fair enough.
What's this? Oh right, okay, so talking of cheating, let's do that. We're going to do something which is kind of fast.ai's favorite trick, and has been since we first launched, which is gradually unfreezing pre-trained networks. In a sense it seems a bit funny to initialize all of this down path randomly, because we already have a model that's perfectly capable of doing something useful on Tiny Imagenet images, which is this classifier. So what if we took our unet, and for model.start, which to remind you is the res block right at the front, we used the actual weights of the pre-trained model, and then for each of the bits in the down-sampling path we used the actual weights from the pre-trained model as well? This is a useful way to understand how we can copy over weights: any part of a module, an nn.Module, is itself an nn.Module, and an nn.Module has a state dict, which is something you can then pass to load_state_dict to put it somewhere else. So this is going to fill in the whole res block called model.start with the whole res block which is the pre-trained model's block zero, and that's how we copy across that starting one, and then all the down blocks are going to get the rest of it. So this is basically going to copy into our model, so that rather than having random weights, we have all the weights from our pre-trained model.
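As a sketch of that weight copying, where pmodel stands for the pre-trained classifier and model.start / model.downs are assumed names for the unet's first res block and its down-sampling blocks:

```python
# Any sub-module's weights can be exported with state_dict() and loaded into a
# module of matching shape with load_state_dict().
model.start.load_state_dict(pmodel[0].state_dict())   # the very first res block

# The remaining pre-trained res blocks line up with the down-sampling path.
for down_block, pretrained_block in zip(model.downs, pmodel[1:]):
    down_block.load_state_dict(pretrained_block.state_dict())
```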
And since those weights are good at doing something, they're not good at super resolution, but they're good at something, why don't we assume they're good at super resolution too, and turn off requires_grad. What that means is that if we now train, it's not going to update any of the parameters in the down blocks. I guess I should have set requires_grad to False for model.start too, now that I think about it. This is the classic fine-tune approach from fastai, the library. We do one epoch of just the up-sampling path, and that gets us to a loss of 0.255. Now, our loss function hasn't changed, so that's totally comparable: previously our first epoch was 0.385, and in fact after one epoch with frozen weights for the down path we've already beaten that.
Now, this is in a sense totally cheating, and in a sense it's totally not. It's totally cheating because the thing we're trying to do, for the perceptual loss, is to generate intermediate-layer activations which are the same as this classifier's, and we're literally using that classifier to create intermediate-layer activations, so obviously that's going to work. But why is it okay to be cheating? Because that's actually what we want: to do super resolution we need something that can recognize that there's an eye here, and now it already has something that knows there's an eye there. In fact, interestingly, this thing trained a lot more quickly than that thing, and it turns out to be better at super resolution than that thing, even though it wasn't trained to do super resolution, and I think that's because the signal, which is just "what is this?", is a really simple signal to use. So we do that, and then we can basically go through and set requires_grad back to True. The basic idea is that when you've got a bunch of random weights, which is the whole up-sampling path, and a bunch of pre-trained weights, the down-sampling path, don't start by fine-tuning the whole thing, because at the start it's going to be crap; train the random weights for at least an epoch, then set everything to unfrozen and do the 20 epochs on the whole thing. And so we go from 0.255 to 0.249, 0.207, 0.198, so it's improved a lot.
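To recap that freeze-then-unfreeze schedule, a rough sketch (model.downs is an assumed attribute name, and the fit calls stand in for whatever training loop you're using):

```python
# 1. Freeze the copied-in down-sampling path (and arguably model.start too).
for p in model.downs.parameters():
    p.requires_grad_(False)

# 2. Train just the randomly initialised up-sampling path for one epoch.
# learn.fit(1)

# 3. Unfreeze everything.
for p in model.parameters():
    p.requires_grad_(True)

# 4. Train the whole model for the remaining epochs.
# learn.fit(20)
```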
So, just to verify: with using these copied weights and comparing that to the perceptual loss, the perceptual loss is looking at the up-sampled data, the super resolution images, whereas the weights we copied in are for the down-sampling path, so that's looking at, I guess, the original downgraded images, right? Right, although we are just adding them, so if you have zeros in the up-sampling path it's going to be the same, so it is very easy for it to get the correct activations in the up-sampling path. And then, yeah, it's kind of a bit weird, because it goes all the way back up to the top, creates the image, and then that goes into the C model, the classifier, again, but I think it's going to create basically the same activations. It's a bit confusing and weird, so it's not totally cheating, but it's certainly a much easier problem to solve.
Okay, so let's get our results again. There are our inputs, and that's looking pretty impressive: the kid definitely looks pretty reasonable now, the car looks pretty reasonable, we still don't have eyes for the koala, such is life, but the background textures definitely look way better, the candy store looks much better than it did, and the medicine looks a lot better than it did. So yeah, I really think it looks great.
Okay, so then we can get better still. This is not part of the original unet, but making better models is often about where you can squeeze in more computation and give it opportunities to do things, and there's nothing in particular that says this down-sampling output is exactly the right thing you need here, right? It's being used for two things, one is this conv and one is this conv, but those are two different purposes, so it's kind of having to learn to squeeze both purposes into one thing. So I had this idea, and I'm sure lots of other people have had it too, which is: why don't we put some res blocks in here, which I've called cross connections, or cross convs. I decided that a cross conv is going to be just a res block followed by a conv, and for the unet I just copied and pasted the previous one, but now as well as the downs I've also got crosses, and the crosses are cross convs, so rather than just adding the saved layer, I add the cross conv applied to the layer. I really should have added a cross conv for this one as well, now that I think about it; this is probably the one that wants it the most. Oh well, never mind, another time.
Okay, so now we can definitely compare loss functions, because everything else is the same: the previous one was 0.198, and since the down-sampling path is the same we can still copy in the state dict and set requires_grad, and this is better, 0.189, which is quite a lot better really, because these are hard-to-get improvements. Let's see if we can notice anything. Hey look, it's got an eye! How about that. At this point it's almost quite difficult to see whether it's an improvement or not, but I think there's a bit of an eye on the koala, which I think is encouraging.
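Here is a rough sketch of that cross conv idea; the ResBlock below is a minimal stand-in for the course's res block (the real one also handles normalization, activation and channel changes), so treat the details as assumptions:

```python
from torch import nn

class ResBlock(nn.Module):
    # Minimal stand-in for the course's res block, for illustration only.
    def __init__(self, ni, nf):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(ni, nf, 3, padding=1), nn.SiLU(),
            nn.Conv2d(nf, nf, 3, padding=1))

    def forward(self, x):
        return x + self.convs(x)

class CrossConv(nn.Module):
    # A "cross connection": a res block followed by a conv, applied to each
    # skip connection before it is added back in on the way up.
    def __init__(self, nf):
        super().__init__()
        self.res = ResBlock(nf, nf)
        self.conv = nn.Conv2d(nf, nf, 3, padding=1)

    def forward(self, x):
        return self.conv(self.res(x))

# In the unet's forward pass, instead of adding the saved activation directly,
# you would do something like:  x = x + self.crosses[i](saved.pop())
```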
So that's super res. Oh man, the bad news is we're out of time. Okay, we didn't promise to do the diffusion unet this lesson, but we built a unet, yes we did, and we did super resolution with it and it looks pretty good. I've got to admit I haven't thought about exercises for people to do. What would be useful things for people to try? Maybe they could learn about segmentation and create a unet for segmentation. Or, well, there were a couple of points where we said, oh, I should have tried this or I should have tried that; those are obviously good next steps. I was going to say that style transfer is a good thing to do with a unet, I think: you can set up a loss function so that you can create a unet which learns to create images that look like Van Gogh, for example. It's a totally different approach, and it's a tricky one I think. When I was playing with that, it almost helped to not have the skip connections at the highest resolutions, otherwise it just really wants to copy the input and modify it slightly. Interesting; it might be worth comparing which of those works better there too. Oh yes, that's a good point. Well, we'll put some stuff up on the website about ideas, and I'm sure some students, hopefully by the time you watch this, will have ideas on the forum, or things they've tried. Colorization is a nice one, because for colorization the transform is just to grayscale and back. Oh yes, and that's basically already there, actually.
Okay, so there's all kinds of decrapification you could do, isn't there? If you want to keep it a bit simpler, rather than doing these two lines of code you could just turn the image into black and white, that's a great point. Or you could delete the center of the image every time, to create something that learns how to fill things in, or maybe delete the left-hand side, so you'd have something where you could give it a photo and it would invent a little bit more to the left, and then you could keep running it to generate a panorama. Another one would be to save the image, in memory or something, as a really highly compressed JPEG, and then you would have something that learns to remove JPEG artifacts, so for your old photos that you saved with crappy JPEG compression, you could bring them back to life. You could probably also do something like drawing-to-painting, by taking some paintings, passing them through some edge detection, and using that as your starting point. Sounds interesting. Oh, what about watermark removal? You could use PIL or whatever to draw watermarks, text, whatever, over the top, which is quite useful because radiology images and things like that sometimes have personally identifiable information written on them, and you could just learn to delete it. Okay, so lots of things people can do, that's awesome, thanks for your ideas. Basically any image-to-image task. Or just make the super res better, or try it on full Imagenet if you like, if you've got lots of hard drive space. A couple of these degradations might look something like the little sketch below.
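For instance, here is a hypothetical sketch of two of those input degradations using PIL; the clean image would stay as the target and the degraded version would become the model input:

```python
import io
from PIL import Image

def to_grayscale(img: Image.Image) -> Image.Image:
    # Colorization task: grayscale input, full-colour target.
    return img.convert('L').convert('RGB')

def add_jpeg_artifacts(img: Image.Image, quality: int = 10) -> Image.Image:
    # JPEG-restoration task: re-encode in memory at a very low quality setting.
    buf = io.BytesIO()
    img.save(buf, format='JPEG', quality=quality)
    buf.seek(0)
    return Image.open(buf).convert('RGB')
```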
Thanks Jono, thanks Tanishk, see you next time. We'll see you in a minute. - Bye. - Bye. - See you.