
Live coding 11


Chapters

0:00 Recap on Paddy competition
4:30 Tips on getting votes for Kaggle notebooks
7:30 Gist uploading question
10:30 Weights and Biases sweep
14:40 Tracking GPU metrics
16:40 fastgpu
20:00 Using .git/config
21:00 Analysis notebook
26:00 Parallel coordinates chart on wandb
31:30 Brute-force hyperparameter optimisation vs a human approach
37:30 Learning rate finder
40:00 Debugging port issues with ps
42:00 Background sessions in tmux
46:20 Strategy for iterating between notebooks
49:00 Cell > All Output > Toggle for an overview
50:50 Final transform for ViT models
52:05 SwinV2 fixed-resolution models
53:00 Building an ensemble - appending predictions
55:50 Model stacking
57:00 Keeping track of submission notebooks

Transcript

I followed you up to lesson nine and I could get to number 10 — I've never been so close to you before, so that's amazing, thank you. Ah, okay. Yes, I saw you on the leaderboard, Serana — you were tenth in the Paddy competition, that's very cool. So to catch people up, the most recent news on the Paddy competition is that I did two more entries.

And I don't remember — I think I might have shown you one or both of these — but yeah, I ensembled the models that we had, and that improved the submission from 0.9876 to 0.988. And then the other thing I did was, since the ViT models are actually definitely better than the rest, I doubled their weights, and that got it from 0.9881 to 0.9884.

And let's see, Serana — you're down to 11th, you're going to have to put in another effort to get back into the top 10, my friend. Yep. Anybody else here on this leaderboard somewhere? I'm down at, I don't know, was it 37 last time I checked? 37, that's not bad — what's your username?

I think this is not you. Matt. I think it's 45 now — it keeps ticking further. You just can't stop for a moment with these things or somebody will jump in ahead. The 60s? Yeah, the 60s is pretty good. I've had problems with Paperspace so I couldn't train again — I haven't been successful.

Like, still just nothing after login — just an error. And I subscribe to the paid version, still. Sure, maybe they're restructuring something and it's an error. Oh well, feel free to share it on the forum if it's an error that we might be able to help you with. I think it's just a generic one — when you try to set up a machine it just says "error"; it was a Paperspace error.

That's annoying. They're quite receptive if you use their support email — I know I had an issue and they got right back to me. Another thing is, if the error is your fault — if you put something in pre_run.sh that breaks things — then just fire up a PyTorch instance rather than a fastai instance, because that doesn't run pre_run.

That doesn't run pre_run.sh, and so then you can fix it. I have to say thank you to Radek for setting up the competition to help us get started. He also shared in the forum how to set things up locally for us. Oh, yeah. Yeah, so thank you to him for getting me back onto Kaggle.

Yes. Awesome. So now, Radek's next job will be to become a Kaggle notebooks grandmaster — that's what I'm going to be watching out for. I think he's got what it takes, personally. Nice. Right, you've got a gold on Kaggle for notebooks? No — I'm not sure what I have for notebooks.

I haven't done that many notebooks ever. I think I have... What's your username on Kaggle? Let's find you. Radek one — radek1, with a number, not a word. Yeah, that's me. Two silvers. Okay. This one actually is on the way to being a gold — it's got so close.

You need 50 votes from regulars — I guess I don't know what counts as a regular, but that's how it works. So it's not in relative terms? No, it's just 50 votes, full stop. And, you know, I definitely noticed it makes a big difference to put notebooks in popular competitions, because that's where people are looking. So, like, this one got 400 votes, right, and I'm not sure it's necessarily my best notebook, but it was part of the Patent competition, which had a lot of people working on it.

So that's one trick. Yeah, things which are not actually attached to any competition are much harder to get votes for. Yeah, I'm getting pretty close to notebooks grandmaster actually, so I'm excited about that. What's yours — something to do with loving science, I'm guessing? It's actually — well, yeah, the link is slightly different actually: t-a-n-l-i-k-e-s-m-a-t-h.

Oh, math, not science. Okay, let's take a look. Oh, look at you — 74. Very nice — and you need two more golds; you've got these nine silvers. Well, that's the stuff — right now, let's see, I'm gonna go upvote. Oh, there we go. Yeah, let's channel our enthusiasm into getting Tanishq to notebooks grandmaster.

That would be cool. Yeah, so you just have to get those silver ones over the line. So, somebody asked about where the gist uploading thing is — let me take that up. Oh, and actually, while I do, what I might do here is connect to my server.

Someone asked about the gist uploading — the question was asked in the forum somewhere? Yeah, yeah, in the forum, exactly. And specifically, this is what it looks like when you're busy training a model using Weights and Biases. So you can see I've got three windows here. Can you always get rid of the dots?

Oh, that just means that I've got another tmux session running on a different computer which has a smaller screen than this one. And there is some way to get rid of it by disconnecting the other sessions — detaching other clients. So, this is the one I just created. So if I hit this — there we go.

So Shift-D, and then select the one to disconnect. Oh, nice. Okay, learned something new. Oh, we've got another new face today. Hello Sophie, I don't think you've joined us before, is that right? I've been here, just quietly in the background sometimes. Okay, thank you for joining. Whereabouts are you — you're joining us from Brisbane?

Oh, good on you. And do you work with AI stuff, or are you just getting started? My background is in psychology — I'm doing a postdoc in psych, and sort of, yeah, trying to move over into data science. Okay, cool. Have you done a lot of the statistical side of psychology?

Yeah, yeah, quite a bit, and quite a bit of coding in R, but I'm pretty new to Python. Okay, great — big learning curve. Well, you know, you're the target market, right, so if you have any questions along the way, just jump in — even things that you feel like everybody else must know.

I guarantee not everybody else does. So, yeah, definitely. These have been really helpful and really great to watch. Awesome, thanks for joining. Okay. So, you're training three models in parallel right now? Yeah, I've got three GPUs in this machine. And so, yeah, one nice thing with Weights and Biases is — basically, let me show you.

Okay, so here's Weights and Biases. I'm not going to use my Mac very much because nothing's logged in. Alright, so you can see it's running this thing called a sweep, right? There are going to be 477 runs. I don't know why it says created 31 seconds ago, because that's certainly not true.

It's going to be running. And it's coming from this git repo. I feel like there's a sweep view — because this is a particular run. This is a particular run, that's right. I'm terrible with their UI, to be honest. Okay, so let's go to the project.

Yes. And then a project has sweeps. And then — okay, this one here I can close. Okay. So basically you say, on the Linux side, wandb sweep or something like that. And then there are things all grouped under this sweep. Okay, so then, yeah, basically it runs lots of copies of your program, feeding in different configurations.
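As a rough sketch of what that sweep-plus-agents setup can look like with the wandb Python API (the session itself used the command-line wandb sweep / wandb agent workflow; the project name, parameters and train function below are placeholders):

```python
import os
import wandb

# A hypothetical training function; in the real setup this logic lives in fine_tune.py.
def train():
    run = wandb.init()          # the agent injects the chosen config into wandb.config
    cfg = wandb.config
    print(cfg.model_name, cfg.learning_rate)
    # ... build dataloaders, create a learner, fine-tune, log metrics ...
    run.finish()

sweep_config = {
    "method": "grid",
    "parameters": {
        "model_name":    {"values": ["convnext_tiny", "vit_small_patch16_224"]},
        "learning_rate": {"values": [0.008]},
    },
}

# Create the sweep (roughly equivalent to `wandb sweep sweep.yaml` on the command line).
sweep_id = wandb.sweep(sweep_config, project="fine-tune-timm")

# Each agent pulls configurations from the server and calls `train` with them.
# You can launch several agents, one per GPU, by pinning the visible device first.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
wandb.agent(sweep_id, function=train, count=5)
```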

And, yeah, you can run the client as many times as you like, so I've run it three times, and each time I've set it to a different CUDA device. You turned your models into Python scripts to be able to do this? Exactly. So this is fine_tune.py, so it's just calling —

So it calls parse_args, and that's going to go through and check what batch size etc. you asked for, right, and sticks them all into this args thing. And then it calls train, passing in those arguments. And so then train is going to initialise Weights and Biases for this particular project, for this particular entity — which is fastai — using the configuration that you requested.

And so then you can say, for example, okay, there's some particular dataset, some particular batch size and image size, etc. And then it creates a learner for some particular model name, some particular pooling type, fine-tunes it, and then at the end logs how much GPU memory it used, what model it was, and how long it took.
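Roughly, that train function has this shape — a hedged sketch rather than the actual fine_tune.py; the dataset, labelling function, and project/entity names are placeholders:

```python
import time
import torch
import wandb
from fastai.vision.all import (ImageDataLoaders, Resize, aug_transforms, get_image_files,
                               vision_learner, error_rate, untar_data, URLs)
from fastai.callback.wandb import WandbCallback

def train(model_name="convnext_tiny", batch_size=64, image_size=224, learning_rate=0.008, epochs=5):
    # Initialise the run; everything in config shows up in the W&B tables later.
    wandb.init(project="fine-tune-timm", entity="fastai",
               config=dict(model_name=model_name, batch_size=batch_size,
                           image_size=image_size, learning_rate=learning_rate))

    path = untar_data(URLs.PETS)/"images"   # placeholder dataset
    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), valid_pct=0.2, bs=batch_size,
        label_func=lambda f: f.name[0].isupper(),
        item_tfms=Resize(460), batch_tfms=aug_transforms(size=image_size))

    learn = vision_learner(dls, model_name, metrics=error_rate,
                           cbs=WandbCallback())   # the fastai integration logs most things for you
    start = time.time()
    learn.fine_tune(epochs, learning_rate)

    # Log the handful of extras the script cares about: peak GPU memory, model, wall time.
    wandb.log(dict(GPU_mem=torch.cuda.max_memory_allocated()/1e9,
                   model=model_name, fit_time=time.time()-start))
    wandb.finish()
```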

And you don't have to log much, because the fastai Weights and Biases integration automatically tracks everything in the learner. So you can see here there's all this stuff like learner.architecture, learner.loss_function, etc. So, out of curiosity, was this process of refactoring into a script painful?

Actually, I should probably tell you I didn't do this — Thomas Capelle did this. If I had done it, I would have used fastcore.script instead of this stuff, I guess. But, no, it wouldn't have been painful — I would have just chucked an nbdev export on the cell that I had in my notebook and that would have become —

Yeah, my script. So, it wouldn't have been painful. Hi Jeremy. Hi. I have a question — would it be interesting to track power consumption, for example? I mean, for some people it might be; not for me. As to how you would track power consumption, I have no idea — you'd have to have some kind of sensor connected to your power supply, I guess. They track a lot of system metrics in the runs.

So, like, if you look at a run, they will track GPU memory, CPU memory... Yeah, that's it. Like, yeah, if you click on the thing on the left that looks like a CPU chip — that thing, yeah — there's a lot there. So maybe there's power in here? I don't see how there can be, right, because —

Well, unless Nvidia... Ah, there you go: GPU power. So Nvidia tells you the GPU power usage, apparently. It won't tell you about your CPU etc. power. The thing that's useful about this, I think, is the memory — the graph. Yeah, well, I mean, the key thing is the maximum memory use, so we actually track that here in the script.

Yeah, we put it into GPU_mem. Okay, that's the GPU memory. So Thomas did that as well. I don't know why it's to the power of negative three — what's that about? Okay, curious. I'll have to ask him what that's doing. Thomas works at Weights and Biases, right? Is that right? Correct, correct, correct.

Yeah, so he — I had never used it before. So, probably most people have never heard of this, but fastai actually has a thing called fastgpu, which is what I've previously used for doing this kind of thing. In general, when you've got more than one GPU — or even if you've just got one GPU — and you've got a bunch of things you want to run...

It's helpful to have some way to say, okay, here are the things to run, and then set a script off to go and run them all and check the results. So, fastgpu is the thing I built to do that. And the way fastgpu works is that you have a whole directory of scripts in a folder, and it runs each script one at a time, and as it runs them it puts them into a separate directory.

You know, to say this one is completed — and it tracks the results, and you can do it on as many or as few GPUs as you like, and it'll go ahead and run them. And this is fine, but it's very basic, and I'd kind of been planning to make it a bit more sophisticated.
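Just to illustrate the pattern (this is not fastgpu's actual code or API — a minimal sketch of the idea, with made-up directory names):

```python
import os
import subprocess
from pathlib import Path

def run_queue(base=Path("fastgpu_queue"), gpu=0):
    """Run every script in to_run/ one at a time on one GPU, moving each to
    running/ then complete/ (or failed/) and saving its output alongside it."""
    to_run, running, complete, failed = [base/d for d in ("to_run", "running", "complete", "failed")]
    for d in (to_run, running, complete, failed): d.mkdir(parents=True, exist_ok=True)

    for script in sorted(to_run.glob("*.sh")):
        script = script.rename(running/script.name)          # mark as in progress
        result = subprocess.run(["bash", str(script)], capture_output=True, text=True,
                                env={**os.environ, "CUDA_VISIBLE_DEVICES": str(gpu)})
        dest = complete if result.returncode == 0 else failed
        script.rename(dest/script.name)
        (dest/f"{script.stem}.out").write_text(result.stdout + result.stderr)

# One of these loops per GPU gives you a very basic multi-GPU job queue.
# run_queue(gpu=0)
```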

And yeah, Weights and Biases takes it a lot further, you know. And I kind of want to redo, or add something on top of, fastgpu so it's fairly compatible with Weights and Biases but you can do everything locally. So the key thing — the thing it's actually doing with that config file is it goes through basically the Cartesian product of all the values in this YAML.

So it's going to do each of these two datasets, planet and pets, for this one learning rate, 0.008, for every one of these models, for every one of these poolings, for — okay, this is just the one resize method — and for every one of these experiment numbers. So, yeah.
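In other words, the sweep config is a grid over those fields, and wandb enumerates the Cartesian product of every values list — something along these lines, with illustrative values rather than the repo's actual YAML:

```python
# Equivalent Python form of a grid-search sweep config: wandb runs the
# Cartesian product of every "values" list below.
sweep_config = {
    "method": "grid",
    "program": "fine_tune.py",
    "parameters": {
        "dataset":       {"values": ["planet", "pets"]},
        "learning_rate": {"values": [0.008]},
        "model_name":    {"values": ["convnext_tiny", "vit_small_patch16_224", "swin_s3_tiny_224"]},
        "pool":          {"values": ["concat", "avg"]},
        "resize_method": {"values": ["crop"]},
        "experiment":    {"values": [0, 1, 2]},
    },
}
```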

So that's a little bit of a project at some point. The sweep allows you to run arbitrary programs — it doesn't have to be a script. So, potentially you could just stay in the notebook and use tinykernel or — sorry, that notebook-execution thing, whatever it's called. Yeah, exactly.

Yeah. Yeah, yeah, no, it'd be fun to work on this — to make the whole thing, you know, run with notebooks and stick stuff in a local SQLite database. Because all this stuff, all this web GUI stuff — honestly, I don't like it at all. But the nice thing is it actually doesn't matter, because I don't have to use it, because they provide an API.

So before I realised they have a nice API, I kept sending Thomas these messages saying how do I do this, how do I do that, why isn't this working, and he'd have to send me these pages of screenshots — click here, click there, turn this off, then you have to redo this three times — and I'm just like, oh, I hate this.

Yeah, then I found out they do have an API, and I looked at the API and it is so well documented — it's got examples. Yeah, it's really nice. So, I've put all the stuff I'm working on into this git repo. And here's a tip, by the way: if you're in a git repo — the cloned directory — the information about your git repo all lives in a file called .git/config.

So you can see here, this is the git repo. So if we now go to GitHub — one cool thing about these runs is it tracks your git commit, so from the run you can get back to which code version it was. Yeah, that is very cool, isn't it. Yeah. I mean, I do think we could pretty easily create a local-only version of this without all the fancy GUI, you know, which would also have benefits — and people who want the fancy GUI and to run stuff from multiple sites and things like that would use Weights and Biases, but, you know, you could also do stuff without Weights and Biases.

Anyway, here's our — yeah, so here's our repo. And this analysis.ipynb is the thing that I showed yesterday, if you want to check it out — I'll put that in the chat. Oh, by the way, I think something else which would be good is we should start keeping a really good list for every walkthrough of all the key resources — key links, key commands, examples we wrote, and stuff like that.

So I think to do that, what we should do is turn all of the walkthrough topics into wikis. I don't know if you folks have used wiki topics before, but basically a wiki topic simply means that everybody will end up with an edit button. So if I just click —

Okay, this one already is a wiki. Right, so everybody should find on walkthrough one that you can click edit. And so one thing we'd put in an edit, for example — often Daniel has these really nice full walkthrough listings, so we should have a link to his reply, which you can get, by the way, by — I think you click on this little date here.

Yes. And that gives you a link directly to the post, which is handy. What about this one? Okay, make that a wiki. Sorry, this is going to be a little bit boring for you guys to watch, but it's better if we do it while I'm here. And if anybody else has any questions or comments while I do that...

Yeah, Jeremy, you did fastgpu — is it possible to expand it to high performance computing, to use it on a node? Sorry, to do what? To apply it in high performance computing, in a distributed environment — is it possible to track it as well? I mean, I don't know. I mean, yeah, anything that's running in Python on a Linux computer should be fine.

I think some HPC things use their own weird job-scheduling systems and stuff. But yeah, as long as it's running normal Nvidia — it doesn't even have to be Nvidia, honestly — as long as it's running a normal Linux environment, it should be fine. It's pretty generic, you know, pretty general.

Okay, so they are now all wikis. And so something I did the other day, for example, was in walkthrough four I added something saying, oh, this is the one where we actually had a bug and you need to add cd at the end, you know, and I tried to create a little list of what was covered.

So for example, maybe Matt's fantastic timestamps — we could copy and paste his list items into here, for instance. Some of Radek's examples, maybe, or even just a link to them. Yeah, so for this walkthrough, we should certainly include this link to the analysis ipynb. Anyway, so you could see, yeah, with the API it was just so easy: you just go api.sweep(...).runs, it comes back as a list of dictionaries, which we can then chuck into a DataFrame.
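A minimal sketch of that API-to-DataFrame step (the entity/project/sweep path here is a placeholder):

```python
import pandas as pd
import wandb

api = wandb.Api()
# Placeholder path: "entity/project/sweep_id"
sweep = api.sweep("fastai/fine-tune-timm/abc123")

# Each run exposes its config (hyperparameters) and summary (logged metrics);
# flattening them into one dict per run gives a tidy DataFrame to analyse.
df = pd.DataFrame([{**run.config, **dict(run.summary)} for run in sweep.runs])
print(df.head())
```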

Okay, I'm rerunning the whole lot, by the way, because it turns out I made a mistake. At some point I thought Thomas had told me that squish was always better than crop for resizing, and he told me I had it exactly wrong — it's actually that crop is always better than squish — so I'm rerunning the whole lot.

It's annoying, but it shouldn't take too long. You found that analysing the sweep results like this was useful relative to what you can see in the UI? You can do so much better, Hamel — yes, so much. I mean, they've done a good job with that —

That UI is very sophisticated and clever and stuff, but I just never got to be friends with it. And as soon as I turned it into a DataFrame it was just like, okay, now I can get exactly what I want straight away — it was an absolute breath of fresh air, frankly.

I really like their parallel coordinates chart, and I find it very difficult to reproduce that in any visualisation library. Do you like — in a way, I don't like the parallel coordinates chart, but yeah, I mean, there must be parallel coordinates charts for Python out there. There is — there's like a Plotly one, but it's not that nice.

Okay — because I don't know whether you can, like, hover over it and stuff and see, you know... Did they write their own? I think so. Yeah, that's impressive. And they kind of wrote their own DataFrame kind of language, their own visualisation library — and, in a sense, it's like those Weights and Biases reports: they have their own syntax.

Okay. Isn't there one in Plotly or something? Yeah, there's one in Plotly for sure. Plotly things are normally interactive. So, have you tried that? Do you know if it's — yeah, it works. It's not as nice, but yeah, it works. Like, when you hover over it — there's at least a version that doesn't.

Yeah, that one, it's very fiddly — you might have to draw a box around it to highlight it. Oh, yeah. Okay, so you just drag over it. That's not terrible. Yeah, I mean it's okay — it's not the best UI. But, you know. Okay, thanks for telling me about this, it's cool.
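For reference, the Plotly chart mentioned here looks roughly like this — a sketch assuming the sweep results are already in a DataFrame df with these illustrative numeric column names:

```python
import plotly.express as px

# Parallel coordinates over the sweep results; each vertical axis is one
# hyperparameter or metric, and each line is one run, coloured by accuracy.
fig = px.parallel_coordinates(
    df,
    dimensions=["learning_rate", "batch_size", "image_size", "accuracy"],
    color="accuracy",
    color_continuous_scale=px.colors.sequential.Viridis,
)
fig.show()
```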

You don't like this that much — it's not that useful for you? I mean, I haven't managed to get much out of it. I know other people like it, so I don't doubt that it's useful for something; it's just apparently not useful for the things I've tried to use it for yet, somehow.

I mean, how do you — do you kind of drag over the end bit to see where they come from, or something? Yeah, I mean, it might be useful if you want to look at the Weights and Biases one, because I think it renders one by default for you for the runs.

Yeah, yeah, it does. It's easier to — let's check it out. I think it could be in the sweeps thing, most likely. Okay. And then, yeah, pick a sweep. That one has zero runs, but maybe that one. Okay, and then — yeah, okay, so here we go.

And then when you just hover over a section — see, I don't see how this is helping me. I guess it's saying there's not that much variance in the — well, I guess, what is the metric we're trying to optimise? It doesn't really seem like it's even on this chart.

Like, you know what, you probably have to tell it what your metric is, and we probably didn't. So the far right-hand thing is resize method rather than — so let's see, is there some way to tell it what we care about? There's an edit — there's like a little pencil. Let's see.

Okay, add the column for — add loss or something. That's the error. Wait — now let's do accuracy_multi. Okay. Okay, now we're talking. You probably want to get rid of pool and resize method since they don't have any variance — they're not adding any information. All right, there we go.

Now you can, like, hover. I actually want to do the thing — oh, here we go. Can I do this drag area? Yeah — that does mean this is definitely not gonna tell me more; the number of experiments isn't either. That's true, because that's just some arbitrary thing. Anyway, there's a thing.

Yeah, yeah — sometimes I learn something from that visualisation, sometimes I don't, you know. Not always. So let's Ctrl-B D to detach. Do you generally like to do the grid search thing or the Bayesian exploration? I'm still very new to all this, right, but, like, in general I don't do Bayesian hyperparameter stuff, ever.

And that's kind of funny, because I was actually the one that taught Weights and Biases about the method they use for hyperparameter optimisation. Which — actually, to tell you the truth, that's not quite true: I've used it once. I used it specifically for finding a good set of dropouts for an AWD-LSTM, because there are like five of them.

And I told Lukas about how I'd created a random forest that actually tries to, you know, predict how accurate something's going to be, and then used that random forest to target better sets of hyperparameters. And then, yeah, that's what they ended up using for Weights and Biases, which is really cool.

But I kind of like to use a much more human-driven approach: well, what's the hypothesis I'm trying to test, and how can I test that as fast as possible? Like, most hyperparameters are independent of most other hyperparameters. So, you know, you don't have to do a huge grid search or whatever, and you can figure things out. So for example, in this case it's like, okay, well, a learning rate of 0.008 was basically always the best.

Rather than try every learning rate for every model for every resize type, etc., just use that learning rate. Same thing for resize method, you know — crop was always better for the few things we tried it on, so you don't have to try every combination. And also, I feel like I learn a lot more about deep learning.

When I, you know, ask: well, what do I want to know about this thing? Is that thing independent of that other thing, or are they connected or not? And so in the end I come away feeling like, okay, I now know that for every model we tried, the optimal learning rate is basically the same; for every model we tried, the optimal resize method is basically the same. So I come away knowing that I don't have to try all these different things every time.

And so now, next time I do another project, I can leverage what I've learned, rather than do yet another huge hyperparameter sweep. I see — you are the Bayesian optimisation. Yeah, my brain is the thing that's learning, exactly. And I find that people at big companies who spend all their time doing these big hyperparameter optimisations — I always feel, talking to them, that they don't seem to know much about the practice of deep learning. They don't seem to know what generally works and what generally doesn't, because they never bother trying to figure out the answers to those questions.

Instead they just chuck a huge hyperparameter optimisation thing onto, you know, 1,000 TPUs. Yeah, that's something that's really interesting. I mean, do you feel like these hyperparameters generalise across different architectures, different models? Totally. Yeah, totally. In fact, that was a piece of analysis we did, gosh, I don't know, four or five years ago, along with the fellowship.ai folks and the platform.ai folks, which was trying lots of different sets of hyperparameters across as many different sets of datasets as possible.

And the same sets of hyperparameters were the best, or close enough to the best, for everything we tried. Yeah. With different architectures — I can somewhat imagine that, you know, maybe it's not that super important — but between transformers and CNNs? I mean, I'm not questioning this, because I don't have any experience to say that this is not correct; I think this is wonderful, and it is.

It is. It's amazing. So yeah, the fact that across 90 different models that we're testing, which couldn't be more different, they all had basically the same best learning rate, or close enough, you know. A very interesting aspect here is that tuning the learning rate is something you dump a lot of time into.

Usually when you start working on a project or on a competition, you'd be naturally inclined to go, hey, I'm using a different architecture, let me experiment to find the learning rate — but it's nice that you can skip that. Well, I should mention, this is true of computer vision.

But not necessarily for tabular, I suspect. All computer vision problems do look pretty similar, you know — the data for them looks pretty similar. I suspect it's also true specifically of object recognition. Yeah, I don't know. I mean, these are things nobody seems to bother testing, which I find a bit crazy, but we should do similar tests for segmentation and, you know, bounding boxes and so forth.

I'm pretty sure it'd be the same thing. You have the learning rate finder, though — so does that suggest maybe some different learning rates are good in different places? Well, the learning rate finder I built before I had done any of this research, right. Okay. Like, you might have noticed that I hardly ever use it nowadays in the course.

I don't even know if we've mentioned it yet in this course — maybe we did in the last lesson, I can't remember. Does anybody remember if we've done the learning rate finder yet in the 2022 course? Yeah, I think we did. You think we did? Yeah. I understand that. Well, really, you can sit there and play with parameters all you like, and spin your wheels and get nowhere.

And that's one of the things I'm really taking away from the course — the fact that you're talking about strategy, which goes back to Renato Coppi: in his 2002 paper he had a term called "strategy of analysis", and that's something that really stuck with me. And so that sort of transcends the idea of just mucking around with parameters.

Yep, exactly. I suppose the magic parameters — these are the defaults in fastai? Yeah, pretty much, although with learning rate — oh, that's weird — with learning rate, the default's a bit lower than the optimal, just because I didn't want to push it, you know. I'd rather it always worked pretty well, rather than be pretty much the best, you know.

Yeah. So I'm just going to go and disconnect my other computer, because it's connected to port 8888, which is going to mess things up. I'll be back in one tick. Okay. Okay, actually, now I think about it, I don't quite know why this is connecting on port 8889.

But part of this is to learn how to debug problems, right? So normally the Jupyter server uses port 8888, and I've only got my SSH connection set up to forward port 8888, so it's currently not working. So the fact that it's using a different port suggests it's already running somewhere.

So to find out where it's running, you can use ps, which lists all the processes running on your computer. And generally speaking, I find I get used to some standard set of options that I nearly always want, and then I forget what they mean. So I have no idea what w, a, u or x mean — I just know that they're the set of options I always use.

So that basically lists all your processes, which obviously is a bit too many, so we now want to filter for the ones that contain jupyter or notebook. Pipe is how you do that in Linux — that's going to send the output of this into the input of another program — and the program that just prints out a list of matching lines is called grep.

So we can grep for jupyter. Okay, there it is. So, I'm kind of wondering where — how that's running. I wonder if we've got multiple sessions of tmux running. We don't — tmux ls lists all your tmux sessions. Oh, I've got a stopped version in the background. Okay, that's why.

So I just have to foreground it. There we go — that was a bit weird. Okay, so now that should work. How do you foreground? Okay — bg to put it in the background, fg to put it in the foreground. And what do you press so that it actually stops it?

Right. You can put it in the background and have it keep running by — actually, I'll show you. So if I press Ctrl-Z and type jobs, that's stopped, right? So if I now try to refresh this window, it's going to sit there waiting forever and never finish.

Okay, because it's stopped in the background. If you type bg, optionally followed by a job number — which would be number one, and it defaults to the last thing that you put in the background — it will start running it in the background.

Even after you stopped it there? Yeah, so it's now running in the background. So if I type jobs — it's now running. And it's still attached to this console, so if I open this up, you'll see it's still printing out things, right, but I can also do other things.

And I don't do this very much, because normally if I want something running at the same time I would just chuck it in another tmux pane. I don't know — it's kind of nice to know this exists. Something else to point out is, once I typed bg, it added this ampersand after the job.

That's because if you run something with an ampersand at the end, it always runs it in the background. So if you want to fire off six processes to run in parallel, just put an ampersand at the end of each one, and they'll all run in the background.

So for example, here's a script that runs ls six times. And if I run it, you can see they're all interspersed with each other, because it ran all six at the same time. I see. And let's say you create a process like this in the background, without tmux.

And you want to kill it — what do you use? You could type fg to foreground it, and then press Ctrl-C. Yeah, something like that would be fine, or you can kill a single job. So in general, you'd probably want to search for "bash job control" to learn how to do these things.

And as it mentions here, one of the key things to know is that a job number has a percent sign at the start. So this would actually be percent-one, okay. Knowing what to Google is definitely — yes, knowing what to Google is the key thing.

Although often you can just put in a few examples. So, I'm guessing, if I type "ctrl-c bg fg jobs" — which are the things we just learned about — there we go, it gets us pretty close. Now we know they're called job control commands.

All right. Now, when I iterate through notebooks, what I tend to do is, once I've got something vaguely working, I generally duplicate it, and then I try to get something else vaguely working, and once that starts vaguely working, I rename it to the thing that it actually is.

So then from time to time I just clean up the duplicated versions that I didn't end up using, and I can tell which they are because I haven't renamed them yet. And this is how you make a copy — it looks like you're making copies of it — you can just click File,

Make a Copy. Yep. Or in here you can click it and click Duplicate. And so — what do you do after you duplicate it? You open up that duplicate? I'll open up that duplicate and I'll try something else — some different type of parameter, a different method, or whatever.

So in this case, I started out here in paddy, and kind of just experimented — okay, show_batch and so forth — and tried to get something running. And then, you know, after that I was like, okay, I've got something working, how do I make it better? And so I created paddy-small — but I would actually have made a copy, and it would have been called paddy copy.ipynb.

And I was like, oh, I wonder about different architectures. So I created this — I was like, okay, basically I want to try different item transforms, different batch transforms and different architectures. So I created a train function which takes those three things. It creates a set of ImageDataLoaders with those item transforms and those batch transforms, uses a fixed seed to get the same validation set each time, trains with that architecture, and then returns the TTA error rate.
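A hedged sketch of that kind of train function — not the exact notebook code; the training path, epochs and learning rate below are placeholders:

```python
from fastai.vision.all import (ImageDataLoaders, Resize, aug_transforms,
                               vision_learner, error_rate)

def train(arch, item, batch, path="train_images", epochs=5, lr=0.01, seed=42):
    "Train one architecture with one set of transforms and report its TTA error rate."
    dls = ImageDataLoaders.from_folder(
        path, valid_pct=0.2, seed=seed,        # fixed seed -> same validation set every run
        item_tfms=item, batch_tfms=batch)
    learn = vision_learner(dls, arch, metrics=error_rate).to_fp16()
    learn.fine_tune(epochs, lr)
    tta_preds, targs = learn.tta(dl=dls.valid)  # test-time augmentation on the validation set
    return error_rate(tta_preds, targs)

# e.g. a rectangular final size for a convnet vs a square one for ViT:
# train('convnext_small_in22k', Resize(480, method='squish'), aug_transforms(size=(288,224), min_scale=0.75))
# train('vit_small_patch16_224', Resize(480, method='squish'), aug_transforms(size=224, min_scale=0.75))
```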

And so then this is kind of like your Weights and Biases — this is how you keep your different experiment ideas. So, yeah, now you can see I've gone through and tried a few different sets of item and batch transforms for this architecture. And these are just small architectures, so they'll run reasonably quickly — these ran in about six minutes or so.

And this is very handy, right? If you go Cell > All Output > Toggle, you can quickly get an overview of what you're doing. And so from that I got a sense of which things seem to work pretty well for this one, and then I replicated that for a different architecture and found the things which — you know, these are very, very different:

one's transformer-based, one's convnet-based — you know, find the things which work pretty well consistently across very different architectures, and then try those on other ones, Swin V2 and Swin. And, yeah — so let's toggle the results back on. So I'm looking at two things.

The first is: what's the error rate at the end of training? The other is: what's the TTA error rate? So squish worked pretty well for both. Crop worked pretty well for both. This is all ConvNeXt. This 640x480, 288x224 one didn't work so well — I mean, it's not terrible, but it's definitely worse.

And 320x240 instead, you know. Can you talk a little bit about what you're looking for in the TTA versus the final? I mean, the main thing I care about is TTA, because that's what I'm going to end up using. Yeah, that's the main one, but, let's see — in this case, this one's not really any better or worse than our best ConvNeXt.

The TTA is way better. So that's very encouraging, which is interesting. So this is now for ViT, right? Now, ViT — we can't do the rectangular ones, because ViT has a fixed input size. So the final transformation has to be 224x224. So if you pass an int instead of a tuple, it's going to create square final images.
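Concretely, that int-versus-tuple distinction is just the size argument passed to the final batch transforms — a small sketch with illustrative values:

```python
from fastai.vision.all import Resize, aug_transforms

# Rectangular final images (fine for convnets that accept any input size):
rect_tfms = aug_transforms(size=(288, 224), min_scale=0.75)

# Square 224x224 final images, as required by fixed-input-size models like ViT:
square_tfms = aug_transforms(size=224, min_scale=0.75)

# Paired with an item transform that first resizes each image on the CPU:
item = Resize(480, method='squish')
```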

And, you know, on the other hand, this one looks crappy, right? So you definitely want to use squish for ViT. And then this one looks pretty good, you know — this was using padding. So for ViT, I probably wouldn't use crop. Last time I looked, TTA was not really a thing that's given to you in other modelling frameworks.

Is that still the case? No — as far as I know, that's still true. Yeah. You know, there are a lot of people — well, one group in particular — that has been copying without credit everything they can, and they might have done it; I won't mention their name. But yeah. So Swin V2, apparently, Tanishq told me, is what all the cool kids on Kaggle use nowadays.

That's a fixed resolution. And I found that for the larger sizes there was no 224 — you had the choice of 192 or 256. And at 256 it got so slow I couldn't bear it. But interestingly, even going down to 192, Swin's TTA is actually nearly as good as the best ViT.

So I thought that was pretty encouraging. This one, interestingly, like VIT, didn't do nearly as well for the crop. And again, like VIT, it did pretty well on the pad. And then this is SWIN V1, which does have a 224. And so here, this TTA is OK, but the final result's not great.

And so to me, I'm like, no, it's not fantastic. This one's — again, you know, it's interesting: with crop, none of them are going well except for ConvNeXt. This one's not great either, right? So Swin V1 is a little unimpressive. So basically, that's what I did next. And then I was like, okay, let's pick the ones that look good.

And I made a duplicate of paddy-small, and I just did a search and replace of small with large. So we've now got ConvNeXt large. And the other thing I did differently was I got rid of the fixed random seed — so there's no seed=42 here. And so that means we're going to have a different training set each time.

And so these are now not comparable, which is fine — you'll still see if one of them is totally crap, right? But they're not totally comparable. The point is now, as I train each of these, they're training with a different architecture, a different resizing method, and I append to a list.

So I start off with an empty list and I append the TTA predictions. And I deleted the cells from the duplicate that weren't very good in paddy-small, so you'll see there's no crop any more — just squish and pad for ViT, and the Swin V2. I probably shouldn't have kept both of the Swin V1s.

Actually, they weren't so good. And then what I did in the very last Kaggle entry was I took the two ViT ones, because they were the clear best, and I appended them to the list again, so they were there twice. So it's just a slightly clunky way of doing a weighted average, if you like.

Yes — stack them all together, take the mean of their predictions, find the argmax across the mean of their predictions to get the predicted classes, and then submit in the same way as before. So yeah, that was basically my process. It's not particularly thoughtful — it's pretty mechanical, which is what I like about it.
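A hedged sketch of that ensembling step, assuming tta_res is the list of per-model TTA probability tensors collected above and dls.vocab holds the class names (vit_indices is hypothetical):

```python
import torch

# tta_res holds one (n_items, n_classes) probability tensor per model;
# appending the ViT predictions a second time is a crude way to double their weight.
tta_res += [tta_res[i] for i in vit_indices]

all_preds = torch.stack(tta_res)     # (n_models, n_items, n_classes)
avg_preds = all_preds.mean(0)        # simple average across models
idxs = avg_preds.argmax(dim=1)       # predicted class index per item

# Map indices back to label strings for the submission file.
vocab = dls.vocab
labels = [vocab[i] for i in idxs]
```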

In fact, you could probably automate this whole thing. Sorry, did somebody want to say something? No, no — I was going to say, how critical is this model stacking in Kaggle? Just curious how you think about that. I mean, we should try, right? We should probably submit — in fact, we're kind of out of time.

How about next time? Let's submit just the ViT — the best ViT — and we'll see how it goes. And that will give us a sense of how much the ensembling matters. We kind of know ahead of time it's not going to matter hugely. I mean, you specifically said on Kaggle —

On Kaggle, it definitely matters, because on Kaggle you want to win. But in real life — my small ConvNeXt got 97-point-something, well, rounded up, that's 98%, and my ensemble got 98.8%. Now, in terms of error rate, that's nearly halving the error, so I guess that's actually pretty good. Really important question.

How do you keep track of what submissions are tied to which notebook? Oh, I just put a description to remind me, but a better approach would actually be to write the notebook name there, which is what I normally do. But in this case, I wasn't taking it particularly seriously, I guess.

So I was only planning to do these ones, and that was it. So I was basically like, okay, do one with a single small model, then do one with an ensemble of small models, and then do one with an ensemble of big models. And then it was after I submitted that that I thought, oh, I should probably weight the ViTs a bit higher, so I ended up with the fourth one.

So it's pretty easy for me, though — you've only done four significant submissions, so it's easy to track. But yeah, now that I know I'm actually doing a little bit more — because I did want to try one more thing — I think what I'll probably do is go back and (you can edit these) put the notebook name in each one.

And then I wouldn't go back and change those notebooks later — I probably never would; I would just duplicate them, make changes in the duplicate, and rename it to something sensible. And of course, this all ends up back in GitHub, so I can always see what's going on.

So this is like MLOps, Hamel, without — it's like every, quote, "run" is a notebook, in a way, to organise and keep track. Yeah, yeah, exactly. But I mean, the only reason I can do this is because I had already done lots of runs of models to find out which ones I could focus on, right, so I didn't have to try 100 architectures.

I mean, in a way, it forces you to really look at it closely. Yeah — versus if you just, you know, have this dashboard. My view is that with this approach you will actually become a better deep learning practitioner. And I also believe almost nobody takes this approach, and I feel like there are very few people I come across who are actually good deep learning practitioners — not many people seem to know what works and what doesn't.

So, yeah. All right. Well, that's it, I think. Thanks for joining again, and yeah. See you all next time. Bye. Thank you. Take care, everybody.