In November 2020, Apple released their first chips built on Apple Silicon, the M1 chips. Now, the M1 chips are incredibly powerful for what they are, but they were not particularly well supported for anyone doing deep learning. TensorFlow supported GPU acceleration on the new M1 chips pretty much straight out of the gate, but PyTorch has only just released its support.
Now, it's been a relatively long wait, and it feels even longer because today's deep learning models rely heavily on large model sizes and lots of data, and processing all of that on a CPU alone is incredibly slow. That has made deep learning on Macs very difficult, and practically no one using PyTorch was going to reach for a Mac for deep learning - until now.
That being said, for at least the first generation M1 chips, we're probably not going to be training any large models with them anytime soon, but we can perform inference on them. And I will also show you how we can run through training, but it's not going to be anything spectacular.
So, PyTorch's GPU support for Apple Silicon arrives in version 1.12. This version adds an integration between PyTorch and the lower-level Metal Performance Shaders, or MPS. It's Metal Performance Shaders that interact with the GPU, acting almost like a layer between PyTorch and the GPU. Another positive thing about this integration is that PyTorch collaborated directly with the Apple Metal team.
The Metal team deals with the new M1 chips, and the MPS layer has been fine-tuned for each particular M1 chip, or family of M1 chips, so performance should be pretty close to optimal for what it is doing. If we take a look at the release announcement over on PyTorch, we can come down to the accelerated training and evaluation chart - this is on an M1 Ultra, which I believe is the top M1 chip at the moment - and it shows a pretty incredible speed-up.
Now, I will say right now that this is not the same on my M1 chip, or it doesn't seem to be. So, this is probably the most optimistic that you're going to get, and of course this is in their release, so they're not going to show you the average results, they're just showing you the best they've ever seen.
So, either way, this is actually pretty good. So, the getting started on here is not that helpful, to be honest. This is not how you get started, so we need to go through a few steps to get everything organized and put together. With the new MPS-enabled PyTorch, there are two key prerequisites that are not really mentioned in the announcements.
The first is that you need to have macOS version 12.3 or higher. So, if you don't have that, you'll need to upgrade. And the other one is that we need to do everything via the ARM version of Python. Now, we can check all of this within our Python environment using this code here.
So, "import platform platform mac version", right? So, the first value that you get there is, in this case, you can see 12.4. That is the macOS version. That must be 12.3 or more. And then the last thing you see here is ARM64. That tells me, okay, I've got the right version of Python running here.
If you see x86 there instead, you need to install a new Python environment. If you're using Anaconda, that's great, because we're going to go through the installation of ARM Python with Anaconda. The first thing we're doing here is specifying that we need the ARM64 (osx-arm64) build of macOS packages.
So, we do that here. We then say, okay, we want to create a new conda environment, call that environment "ml", and we are going to use Python 3.9. And conda-forge is probably in your channel list anyway, but just in case, we're also specifying it here - it's just another repository we're going to pull all of these package versions from.
Okay, so once that has installed, we need to go into that environment. So, conda activate ml, and then we need to permanently set the CONDA_SUBDIR variable for this environment, to make sure it is always set to osx-arm64. This is to avoid a problem later on: when we start pip installing things in this environment, that variable may switch back to x86, which we don't want.
So, we need to make sure it stays on the ARM architecture. So, we add that in there. And you'll probably see a message saying, "Please reactivate your environment." To do that, we just run conda activate, which switches back to the base environment, and then we literally just activate ml again.
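As a rough sketch, the full sequence of commands described above looks something like this (the environment name "ml" is just what I'm using here):

```bash
# create an ARM64 (Apple Silicon) Python 3.9 environment from conda-forge
CONDA_SUBDIR=osx-arm64 conda create -n ml python=3.9 -c conda-forge

# enter the new environment
conda activate ml

# permanently pin the subdir so later installs keep pulling ARM64 builds
conda env config vars set CONDA_SUBDIR=osx-arm64

# reactivate so the variable takes effect
conda activate
conda activate ml
```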
Now, the next step is to actually pip install PyTorch. To do that, we run pip install with the --upgrade flag - you might not need it, but just in case - and the --pre flag, because as of this moment version 1.12 is only available in the nightly releases, which are basically more frequent but slightly less stable PyTorch releases. You'll be able to copy all of this from the notebook linked in the description below. Okay, so we run that. One thing to be aware of here: you'll know it's working and installing the correct build if you can see arm64 in the install output up here.
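The install command looks roughly like this - if the nightly index URL has changed by the time you read this, check the notebook linked below or pytorch.org for the current one:

```bash
# pull the pre-release (nightly) build of PyTorch from the nightly index
pip install --upgrade --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu
```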
Now, if you just want to use PyTorch with MPS, that's it - you're ready. You can go ahead and start using it, and I'll show you how in a moment. But for those of you who want to use Hugging Face transformers, there is an extra step. If we try to pip install transformers and datasets, this will probably come up with an error for most of you.
For me, because I have already dealt with the error, it's not popping up. It would say something like, "Error: failed building wheel for tokenizers." The reason is that the transformers library relies on fast tokenizers, which are faster than the pure-Python ones because they are implemented in Rust.
Now, Rust is not included with the ARM Python environment we have at the moment. So, from within our new environment, all we need to do is install Rust, like this. Once that's done, we can just go ahead and pip install transformers and datasets as before.
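A sketch of those two steps - the first command is the standard rustup installer, which is what I mean by installing Rust here:

```bash
# install Rust via rustup so the tokenizers package can compile
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# then install the Hugging Face libraries as before
pip install transformers datasets
```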
And with that, we are ready to move on to our code. For this example, all we need are the libraries we already installed: torch, transformers, and datasets. We can check that our PyTorch installation has MPS support using torch.has_mps - if you run that, you should see True.
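Something like this - on this nightly build torch.has_mps works, and newer builds also expose torch.backends.mps.is_available():

```python
import torch

# True if this PyTorch build was compiled with MPS support
print(torch.has_mps)
```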
That means you're okay and ready to go with the rest of the code. Here I'm just pulling some data - I'm not going to go into detail on this, and I'm not pulling much data because this is a pretty low-spec M1 chip and it can't handle much. Then I'm initializing a tokenizer and a model.
If that doesn't make sense to you, it doesn't really matter for understanding the PyTorch side here. I'm tokenizing my text, and this bit here is being run on the CPU - I have not moved anything to the MPS device, the GPU, yet. Running it all on CPU, we get 547 milliseconds as the average time for the model to process our data.
And this is a BERT model - bert-base-cased from the Hugging Face library. Now, that's CPU performance. If we want to test with MPS, there are a few things to do. We have to move everything - the tensors and the model - over to the MPS device. So, we set that.
So, torch.device('mps'), and then we just say model.to(device), and tokens - your tensors - .to(device) as well. That's all there is to the device switch. Now, if we rerun it, we see it's faster: we get 345 milliseconds. So, a little bit better - not a massive difference, but it is a little bit better.
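Roughly, the device switch looks like this - bert-base-cased and the example sentences are just stand-ins for the data used in the notebook:

```python
import torch
from transformers import BertTokenizer, BertModel

# the MPS device is what routes computation to the M1 GPU
device = torch.device('mps')

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = BertModel.from_pretrained('bert-base-cased').to(device)

# a hypothetical batch of 64 sentences
text = ["hello world, this is a test sentence"] * 64
tokens = tokenizer(text, padding=True, truncation=True, return_tensors='pt').to(device)

# inference only, so no gradients needed
with torch.no_grad():
    outputs = model(**tokens)
```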
Now, this is using a batch size of 64. I tested a few different batch sizes with this data set. And I found that when it comes to larger batch sizes, at least for your inference and I imagine it's the same for training, we get a more significant difference as we increase the number of values in each batch.
So, really at 64 there's very little difference, and you can kind of see that. But when we increase the batch size, the difference is definitely more pronounced. We're not using A100 GPUs or anything here - we're just using a first-generation M1 MacBook Pro, at almost base spec.
That's all we're using. So, it's not going to blow us away. But nonetheless, just for the sake of moving our model and tensors to a different device, I think this is pretty cool. Now, for those of you that are interested in using this with Hugging Face, I will quickly go through the setup for that because there's a few things to just be aware of.
So, as before, I'm doing the same thing: we've got our device, it's MPS, and we're using the same dataset. Nothing new here. And we go through everything, and in this case - this is training the entire BERT model - anything beyond a batch size of one just doesn't work.
You can see here I'm creating the data loader with a batch size of one. We have model.to(device) because we're using MPS, and when we're loading the different tensors for training in this training loop, I'm just moving them to the same MPS device.
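As a sketch of the loop I'm describing - dataset here is assumed to be whatever tokenized dataset you've already prepared, returning input_ids, attention_mask, and labels tensors:

```python
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import BertForSequenceClassification

device = torch.device('mps')
model = BertForSequenceClassification.from_pretrained('bert-base-cased').to(device)
optim = AdamW(model.parameters(), lr=5e-5)

# `dataset` is assumed to exist already (a tokenized torch Dataset);
# anything beyond batch_size=1 killed the kernel on this machine
loader = DataLoader(dataset, batch_size=1, shuffle=True)

model.train()
for batch in loader:
    optim.zero_grad()
    # move each tensor in the batch over to the MPS device
    input_ids = batch['input_ids'].to(device)
    attention_mask = batch['attention_mask'].to(device)
    labels = batch['labels'].to(device)
    outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
    outputs.loss.backward()
    optim.step()
```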
Anything beyond a batch size of one - I even tried just two - and my kernel just died. So, you just have to be wary of that. With a batch size of one, the time shown here is actually a little lower than what I got in my other test, but I tried it a few times.
I got around 90 minutes to train that. Now, that's the full BERT model, and we're probably not going to use a MacBook - or at least not this MacBook - for training that. Moving on to the way we would more realistically use this on a Mac: with these NLP models, we typically have two components.
We have the larger core of the model. So, that would be BERT itself that has been pre-trained by Google or Microsoft or someone else. That includes a lot of parameters. But then there's a smaller head on top of that, which is just like a couple of linear layers that does something with the output from that BERT model.
So, for example, it might classify the text for you. And, well, OK, we can't train a full BERT model, but we can train that head on top, which is called fine tuning. So, let's go ahead and have a look at how we might do that. So, this is a little more complex because to do this, we need to initialize our entire model.
So, the whole BERT model and the head, and then we need to freeze all of the BERT model parameters - an extra step, although it's nothing complicated. Here I'm initializing my model again, this time using BertForSequenceClassification. And this is all I'm doing: for each param in model.bert.parameters(), I'm setting param.requires_grad to False.
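Here's roughly what that looks like - num_labels=2 is just an assumption for this sketch:

```python
from transformers import BertForSequenceClassification

# num_labels=2 is a hypothetical choice for illustration
model = BertForSequenceClassification.from_pretrained('bert-base-cased', num_labels=2)

# freeze the whole BERT encoder; only the classifier head stays trainable
for param in model.bert.parameters():
    param.requires_grad = False
```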
Taking a look at the model printout from PyTorch, we can see there's the BERT part at the top, and then this classifier part at the bottom - those are all the parameters in our model. When we say model.bert.parameters(), we are accessing all the BERT parameters.
And we're leaving the classifier parameters at the end untouched, so they remain trainable. That's how that works. Then we just go down and run this again. Okay, so in this case we hit an error, and to get rid of it we actually need to downgrade to a slightly older version of the PyTorch nightly release, because this is just a bug that popped up in one of the more recent nightlies.
Hopefully, by the time you're watching this, it won't be a problem anymore, so you won't see it anyway. But if you do, this is how we fix it: we pip install with the upgrade flag and pin the specific older nightly version shown here. We don't actually need to include torchvision and torchaudio in the command - they're included anyway.
So, we just downgrade to that nightly release, and that fixes the problem. Then we can go back to our code and rerun everything. Okay. In the little GPU usage history, we can see a peak now that the model is training, and that's basically going to stay at the top for the next four or so minutes and drop back down once we finish training.
So, let's skip ahead to it finishing. Okay, that has just finished - you can see GPU usage ramps down straight away. It took pretty much one second shy of four minutes, so it's relatively quick. Nothing special, but given I'm just on a first-generation M1 in an almost base-level MacBook Pro, it's not bad.
I think, realistically, you're probably not going to do much training on the M1 chips unless you have maybe an M1 Ultra - maybe in that case you would, but even then I'm not sure the chips are really at that level yet. Nonetheless, I'm sure future iterations of MPS, and MPS-enabled PyTorch in particular, will improve things.
So, it's useful to be able to do this. Even just for a little bit of fine-tuning, or at least some inference here and there, I think this is pretty useful, and it's exciting to see where this will actually go. Maybe in the future, Macs will be for deep learning.
That would be interesting. So, yeah, it's pretty cool. I know the setup for this is kind of finicky, so I hope this has at least helped you figure it out, and that it's all been interesting. Thank you very much for watching, and I will see you again in the next one.
Bye.