Back to Index

Training BERT #5 - Training With BertForPretraining


Chapters

0:00 Introduction
1:07 Import Data
3:22 Prepare Data
5:14 Training Data
10:45 Mask Data

Transcript

Hi, welcome to the video. Here we're going to have a look at how we can pre-train BERT. So what I mean by pre-train is fine-tune BERT using the same approaches that are used to actually pre-train BERT itself. So we would use these when we want to teach BERT to better understand the style of language in our specific use cases.

So we'll jump straight into it, but what we're going to see is essentially two different methods applied together. When we're pre-training, we're using something called masked language modeling, or MLM, and also next sentence prediction, or NSP. Now, in a few previous videos I've covered both of these, so if you do want to go into a little more depth, I would definitely recommend having a look at those.

But in this video we're just going to go straight into actually training a BERT model using both of those methods, via the pre-training class. So first we need to import everything that we need. I'm going to import requests, because I'm going to use requests to download the data we're using, which is from here.

You'll find a link in the description for that. We also need to import our tokenizer and model classes from transformers. So from transformers we're going to import BertTokenizer and also BertForPreTraining. Now, like I said before, this BertForPreTraining class contains both an MLM head and an NSP head.

Once we have those, we also need to import torch as well, so let me import torch.
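As a rough sketch, the imports described so far look something like this:

```python
# Imports used throughout: requests for downloading the data, torch for tensors,
# and the tokenizer and pre-training model classes from transformers.
import requests
import torch
from transformers import BertTokenizer, BertForPreTraining
```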

With those imported we can initialize our tokenizer and model. We initialize the tokenizer with BertTokenizer and its from_pretrained method, and we're going to be using the bert-base-uncased model. Obviously you can use whichever BERT model you'd like. And for our model we have the BertForPreTraining class. You don't need to worry about the warning that appears; it's just telling us that we need to train the model before we use it for inference predictions. So that's our tokenizer and model; now let's get our data.
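Putting the setup together, a rough sketch so far (using bert-base-uncased, as in the video):

```python
# Initialize the tokenizer and the model; BertForPreTraining carries both the MLM and NSP heads.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForPreTraining.from_pretrained('bert-base-uncased')
```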

To get our data, we're going to pull it from here, so let me copy that. It's just requests.get with that URL pasted in, and we should see a 200 status code, which is good. Then we just extract the data using the text attribute. We also need to split it, because it's a set of paragraphs separated by a newline character, and we can see those in here.
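A rough sketch of that step; the URL below is just a placeholder standing in for the link from the description:

```python
# Placeholder URL; the real link to the text file is in the video description.
url = 'https://example.com/meditations/clean.txt'

response = requests.get(url)
print(response.status_code)  # expect 200

# One paragraph per line in the raw file.
text = response.text.split('\n')
```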

Now we need to prepare our data, both for NSP and for MLM, so we'll go with NSP first. To do that we need to create a set of random sentence pairs, sentence A and sentence B, where sentence B is not related to sentence A. We want roughly 50 percent of those, and for the other 50 percent we want sentence A to actually be followed by sentence B, so that those pairs are coherent.

So we're basically teaching BERT to distinguish between coherence and non-coherence between sentences, like long-term dependencies. We also want to be aware that within our text each paragraph has multiple sentences, so if we split on the period character we get those individual sentences. We need to create a list of all of the different sentences that we have, which we can pull from when we're creating our training data for NSP.

Now, to do that we're going to use a comprehension. What we do is take each sentence, for each paragraph in the text, with for sentence in para.split, and that's where we get our sentence variable from. We also want to be aware that, if we look at one of these paragraphs, we get an empty string at the end of the split, and we get that for all of our paragraphs, so we just don't want to include those; we say if sentence is not equal to that empty string. We're also going to need the length of that bag for later as well.

Now what we do is create our NSP training data. We want that 50/50 split, so we're going to use the random library to create that randomness. We initialize a list of sentence A's, a list of sentence B's, and also a list of labels. Then we loop through each paragraph in our text, so for paragraph in text, and we extract each sentence from the paragraph, similar to what we did above: sentences is going to be a list of all the sentences within each paragraph, so sentence for sentence in the paragraph split by a period character, again making sure we're not including the empty ones.

Once we're there, we want to get the number of sentences within the sentences variable, so we just take the length. The reason we do that is because we need to check it a couple of times in the next few lines of code, and the first check is that the number of sentences is greater than one. Because we're concatenating two sentences to create our training data, we can't work with a paragraph that contains just a single sentence; we need paragraphs with multiple sentences, so that we can select this sentence followed by that sentence. We can't do that across paragraphs, because there's no guarantee that one paragraph is talking about the same topic as the next, so we just avoid those cases.

Inside that check, the first thing we do is set our start sentence, which is where sentence A is going to come from, and we select it randomly. For example, in a paragraph with four sentences we'd want to select any of the first three, but not the last one, because if that were sentence A there would be no sentence B following it to extract. So we write random.randint from zero up to the number of sentences minus two. Now we can get our sentence A, which we append as sentences at the start index. For sentence B, 50% of the time we want to select a random sentence from the bag we built up above, and 50% of the time we want to select the genuine next sentence. So if random.random, which gives a random float between zero and one, is greater than 0.5, we make this our coherent version: sentence B is sentences at start plus one, and our label has to be zero, meaning these two sentences are coherent and sentence B does follow sentence A. Otherwise we select a random sentence for sentence B, appending the bag entry at a random index, again using random.randint from zero to the bag size minus one, and in that case the label is one. We can execute that and it will work. I go a little more into depth on this in the previous NSP video, so I'll leave a link to that in the description if you want to go through it.
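Pulling all of that together, the bag-of-sentences and NSP pair preparation looks roughly like this (a sketch following the steps described above):

```python
import random

# Bag of every individual sentence, used to sample unrelated sentence B's.
bag = [sentence for para in text for sentence in para.split('.') if sentence != '']
bag_size = len(bag)

sentence_a = []
sentence_b = []
label = []

for paragraph in text:
    # Split the paragraph into its sentences, dropping the empty strings.
    sentences = [sentence for sentence in paragraph.split('.') if sentence != '']
    num_sentences = len(sentences)
    if num_sentences > 1:
        # Sentence A can be any sentence except the last one in the paragraph.
        start = random.randint(0, num_sentences - 2)
        sentence_a.append(sentences[start])
        if random.random() > 0.5:
            # Coherent pair: B genuinely follows A (label 0).
            sentence_b.append(sentences[start + 1])
            label.append(0)
        else:
            # Incoherent pair: B is a random sentence from the bag (label 1).
            sentence_b.append(bag[random.randint(0, bag_size - 1)])
            label.append(1)
```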
Now what we can do is tokenize our data. To do that we just write inputs equals tokenizer, so this is just normal Hugging Face transformers usage, and we pass in sentence A and sentence B. Hugging Face transformers knows what we want to do here and deals with the pair formatting for us, which is pretty useful. We want to return PyTorch tensors, so return_tensors equals pt, and we need to set everything to a max length of 512 tokens, so max_length equals 512, truncation needs to be set to True, and we also need to set padding equal to max_length. That creates three different tensors for us: input IDs, token type IDs, and attention mask.

Now for the pre-training model we need two more tensors. We need our next sentence label tensor, so to create that we write inputs next_sentence_label, and that needs to be a long tensor containing the labels we created before, in the correct dimensionality, which is why we wrap the labels in a list and take the transpose. We can have a look at what that creates as well, say the first 10 values.

Now what we want to do is create our masked data. We need the labels for our masking first, so what we'll do is clone the input IDs tensor, use that clone as the labels tensor, and then go back to our input IDs and mask around 15% of the tokens in that tensor. So let's create that labels tensor: it's equal to inputs input IDs, detach and clone. Now we have all of the tensors we need, but we still need to mask around 15% of the input IDs before moving on to training our model.

To do that we'll create a random array using torch.rand, in the same shape as our input IDs, which gives us a big tensor of values between zero and one, and we mask around 15% of those by checking where the values are less than 0.15. That gives us our mask, but we also don't want to mask special tokens, which at the moment we are doing: we're masking our classification tokens and we're also masking padding tokens. So we need to add a little more logic. Let me assign this to a variable and add the condition that the input IDs are not equal to 101, which is our CLS token, and we can actually see the impact, because those positions now come out as False. We also want to do the same for our separator tokens, which are token ID 102, although we can't see any of those here, and for our padding tokens, which use token ID zero; you can see all of those go to False as well. So that's our masking array.

Now what we want to do is loop through each row of that array, extract the positions where the values are not False, so where we have the mask, and use those index values to mask our actual input IDs. To do that we go for i in range of the first dimension of the input IDs shape, which is like iterating through each row, and we get a selection, which is the set of indices where we have True values in the mask array. We do that using torch.flatten on the mask array at the given index where the values are non-zero, and we create a list from that. Let me show you what the selection looks like quickly: it's just a list of indices to mask. We then apply that to our input IDs at the current row, selecting those specific positions and setting them equal to 103, which is the mask token ID. So that's our masking done.
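A rough sketch of the tokenization and masking steps just described:

```python
# Tokenize the sentence pairs; the tokenizer handles the [CLS]/[SEP] formatting
# and the token_type_ids for us.
inputs = tokenizer(sentence_a, sentence_b, return_tensors='pt',
                   max_length=512, truncation=True, padding='max_length')

# NSP labels, shaped correctly via the list-wrap and transpose.
inputs['next_sentence_label'] = torch.LongTensor([label]).T

# MLM labels are a copy of input_ids taken before any masking is applied.
inputs['labels'] = inputs['input_ids'].detach().clone()

# Random values in [0, 1) with the same shape as input_ids; mask ~15% of tokens,
# excluding [CLS] (101), [SEP] (102) and padding (0).
rand = torch.rand(inputs['input_ids'].shape)
mask_arr = (rand < 0.15) * (inputs['input_ids'] != 101) * \
           (inputs['input_ids'] != 102) * (inputs['input_ids'] != 0)

# Replace the selected positions with the [MASK] token id (103).
for i in range(inputs['input_ids'].shape[0]):
    selection = torch.flatten(mask_arr[i].nonzero()).tolist()
    inputs['input_ids'][i, selection] = 103
```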
Now we need to take all of our data here and load it into a PyTorch data loader, and to do that we need to reformat our data into a PyTorch Dataset object, which we do here. The main thing to note is that we pass our data into the initialization, which assigns it to the self encodings attribute; then, given a certain index, we extract the tensors in a dictionary format for that index; and finally we return the length, which is how many samples we have in the full dataset. We run that, initialize our dataset using that class, so dataset equals MeditationsDataset with our inputs passed in, and with that we can create our data loader from torch utils data DataLoader, passing in the dataset. One thing I didn't do at first is set the batch size, so when we initialize the data loader we want to set batch_size equal to 16 and also shuffle the dataset as well.

Now we need to set up our training loop. The first thing to do is check whether we are on a GPU or not, and if we are, use it. We do that with device equals torch device cuda if torch cuda is available, else torch device cpu; that says use the GPU if we have a CUDA-enabled GPU, otherwise use the CPU. Then we move our model over to that device and activate the training mode of our model. Next we initialize our optimizer; I'm going to be using Adam with weight decay, so from transformers import AdamW, and initialize it with optim equals AdamW, passing in our model parameters and a learning rate of 5e-5.

Now we can create the training loop. We're going to use tqdm to create the progress bar, and we're going to go through two epochs, so for epoch in range 2. We initialize our loop by wrapping the data loader in tqdm with leave set to True so that we can see the progress bar, and then we loop through each batch within that loop. For each batch, we first zero the gradients on our optimizer, and then we load in each of our tensors, of which there are quite a few: we access the batch like a dictionary, so input IDs, token type IDs, attention mask, next sentence label, and labels, and we move each of those tensors to our device. Now we can process them through our model, passing all of those tensors in, so there's quite a lot going into the model. Then we extract the loss from the output, call backward on it to calculate the gradient for every parameter in our model, and take an update step with our optimizer. Finally we print the relevant info to the progress bar that we set up with tqdm and loop: we set the description to show the epoch we're currently on, and we set the postfix to show the loss from loss.item. We can run that, and you can see that our model is now training.
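A rough sketch of the dataset class, data loader, and training loop described above; note that in more recent versions of transformers you may need torch.optim.AdamW instead of the AdamW class imported from transformers:

```python
from torch.utils.data import DataLoader
from transformers import AdamW
from tqdm import tqdm


class MeditationsDataset(torch.utils.data.Dataset):
    def __init__(self, encodings):
        self.encodings = encodings

    def __getitem__(self, idx):
        # One sample as a dictionary of tensors.
        return {key: val[idx] for key, val in self.encodings.items()}

    def __len__(self):
        return len(self.encodings['input_ids'])


dataset = MeditationsDataset(inputs)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Use a CUDA-enabled GPU if we have one, otherwise fall back to the CPU.
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.to(device)
model.train()

optim = AdamW(model.parameters(), lr=5e-5)

for epoch in range(2):
    loop = tqdm(loader, leave=True)
    for batch in loop:
        optim.zero_grad()
        # Move every tensor in the batch to the training device.
        input_ids = batch['input_ids'].to(device)
        token_type_ids = batch['token_type_ids'].to(device)
        attention_mask = batch['attention_mask'].to(device)
        next_sentence_label = batch['next_sentence_label'].to(device)
        labels = batch['labels'].to(device)
        # Forward pass; the model returns a combined MLM + NSP loss.
        outputs = model(input_ids,
                        token_type_ids=token_type_ids,
                        attention_mask=attention_mask,
                        next_sentence_label=next_sentence_label,
                        labels=labels)
        loss = outputs.loss
        loss.backward()
        optim.step()
        # Report progress on the tqdm bar.
        loop.set_description(f'Epoch {epoch}')
        loop.set_postfix(loss=loss.item())
```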
So we're now training a model using both masked language modeling and next sentence prediction, and we haven't needed any structured data; we've just taken a book, pulled all the data out of it, and formatted it in the correct way for us to actually train a BERT model, which I think is really cool. So that's it for this video. I hope it's been useful, and I'll see you in the next one.