
Stanford CS224N: NLP with Deep Learning | Winter 2021 | Lecture 1 - Intro & Word Vectors


Chapters

0:00 Introduction
1:43 Goals
3:10 Human Language
10:07 Google Translate
10:43 GPT
14:13 Meaning
16:19 Wordnet
19:11 Word Relationships
20:27 Distributional Semantics
23:33 Word Embeddings
27:31 Word2Vec
37:55 How to minimize loss
39:55 Interactive whiteboard
41:10 Gradient
48:50 Chain Rule


00:00:00.000 | Hi, everybody.
00:00:07.000 | Welcome to Stanford CS224N, also known as Ling284, Natural Language Processing with Deep Learning.
00:00:16.000 | I'm Christopher Manning and I'm the main instructor for this class.
00:00:22.000 | So what we hope to do today is to dive right in.
00:00:27.000 | So I'm going to spend about 10 minutes talking about the course, and then we're going to get straight into content for reasons I'll explain in a minute.
00:00:36.000 | So we'll talk about human language and word meaning. I'll then introduce the ideas of the Word2Vec algorithm for learning word meaning.
00:00:45.000 | And then going from there, we'll kind of concretely work through how you can work out objective function gradients with respect to the Word2Vec algorithm
00:00:55.000 | and say a teeny bit about how optimization works. And then right at the end of the class,
00:01:00.000 | I then want to spend a little bit of time giving you a sense of how these word vectors work and what you can do with them.
00:01:08.000 | So really, the key learning for today is I want to give you a sense of how amazing deep learning word vectors are.
00:01:17.000 | So we have this really surprising result that word meaning can be represented not perfectly,
00:01:23.000 | but really rather well by a large vector of real numbers.
00:01:28.000 | And, you know, that's sort of in a way a commonplace of the last decade of deep learning,
00:01:33.000 | but it flies in the face of thousands of years of tradition and it's really rather an unexpected result to start focusing on.
00:01:43.000 | OK, so quickly, what do we hope to teach in this course? So we've got three primary goals.
00:01:50.000 | The first is to teach you the foundations, i.e. a good deep understanding of the effective modern methods for deep learning applied to NLP.
00:02:00.000 | So we are going to start with and go through the basics and then go on to key methods that are used in NLP,
00:02:07.000 | recurrent networks, attention, transformers and things like that.
00:02:12.000 | We want to do something more than just that.
00:02:15.000 | We'd also like to give you some sense of a big picture understanding of human languages
00:02:21.000 | and why they're actually quite difficult to understand and produce,
00:02:26.000 | even though humans seem to do it easily. Now, obviously, if you really want to learn a lot about this topic,
00:02:32.000 | you should enroll in and go and start doing some classes in the linguistics department.
00:02:37.000 | But nevertheless, for a lot of you, this is the only human language content you'll see during your master's degree or whatever.
00:02:44.000 | And so we do hope to spend a bit of time on that starting today.
00:02:49.000 | And then finally, we want to give you an understanding of an ability to build systems in PyTorch for some of the major problems in NLP.
00:02:58.000 | So we'll look at learning word meanings, dependency parsing, machine translation, question answering.
00:03:05.000 | Let's dive in to human language.
00:03:10.000 | Once upon a time, I had a lot longer introduction that gave lots of examples about human,
00:03:16.000 | how human languages can be misunderstood and complex.
00:03:20.000 | I'll show a few of those examples in later lectures.
00:05:25.000 | But since for today we're going to be focused on word meaning,
00:03:30.000 | I thought I'd just give one example, which comes from a very nice XKCD cartoon.
00:03:38.000 | And that isn't sort of about some of the sort of syntactic ambiguities of sentences,
00:03:45.000 | but instead it's really emphasizing the important point that language is a social system constructed and interpreted by people.
00:03:54.000 | And that's part of how it changes as people decide to adapt its construction.
00:04:02.000 | And that's part of the reason why human languages are great as an adaptive system for human beings,
00:04:08.000 | but difficult as a system for our computers to understand to this day.
00:04:15.000 | So in this conversation between the two women, one says, anyway, I could care less.
00:04:21.000 | And the other says, I think you mean you couldn't care less.
00:04:25.000 | Saying you could care less implies you care at least some amount.
00:04:29.000 | And the other one says, I don't know.
00:04:32.000 | We're these unbelievably complicated brains drifting through a void,
00:04:37.000 | trying in vain to connect with one another by blindly flinging words out into the darkness.
00:04:43.000 | Every choice of phrasing, spelling and tone and timing carries countless signals and contexts and subtexts and more.
00:04:53.000 | And every listener interprets those signals in their own way.
00:04:57.000 | Language isn't a formal system. Language is glorious chaos.
00:05:02.000 | You can never know for sure what any words will mean to anyone.
00:05:06.000 | All you can do is try to get better at guessing how your words affect people
00:05:11.000 | so you can have a chance of finding the ones that will make them feel something like what you want them to feel.
00:05:17.000 | Everything else is pointless.
00:05:19.000 | I assume you're giving me tips on how you interpret words because you want me to feel less alone.
00:05:25.000 | If so, then thank you. That means a lot.
00:05:29.000 | But if you're just running my sentences past some mental checklist so you can show off how well you know it, then I could care less.
00:05:38.000 | OK, so that's ultimately what our goal is: to do a better job at building computational systems
00:05:48.000 | that try to get better at guessing how their words will affect other people
00:05:54.000 | and what other people are meaning by the words that they choose to say.
00:06:01.000 | So an interesting thing about human language is it is a system that was constructed by human beings.
00:06:12.000 | And it's a system that was constructed, you know, relatively recently in some sense.
00:06:19.000 | So in discussions of artificial intelligence, a lot of the time people focus a lot on human brains and the neurons buzzing by.
00:06:30.000 | And this intelligence that's meant to be inside people's heads.
00:06:35.000 | But I just wanted to focus for a moment on the role of language.
00:06:40.000 | There's actually, you know, this is kind of controversial, but, you know,
00:06:45.000 | it's not necessarily the case that humans are much more intelligent than some of the higher apes like chimpanzees or bonobos.
00:06:53.000 | Right. So chimpanzees and bonobos have been shown to be able to use tools to make plans.
00:06:59.000 | And in fact, chimps have much better short term memory than human beings do.
00:07:05.000 | So relative to that, if you look through the history of life on Earth, human beings developed language really recently.
00:07:13.000 | How recently? We kind of actually don't know because, you know, there's no fossils that say, OK, here's a language speaker.
00:07:21.000 | But, you know, most people estimate that language arose for human beings sort of, you know, somewhere in the range of one hundred thousand to a million years ago.
00:07:33.000 | OK, that's a while ago. But compared to the process of evolution of life on Earth, that's kind of blinking an eyelid.
00:07:42.000 | But that power of communication between human beings quickly set off our ascendancy over other creatures.
00:07:51.000 | So it's kind of interesting that the ultimate power turned out not to be having poisonous fangs or being super fast or super big,
00:07:59.000 | but having the ability to communicate with other members of your tribe.
00:08:05.000 | It was much more recently, again, that humans developed writing, which allowed knowledge to be communicated across distances of time and space.
00:08:14.000 | And so that's only about five thousand years old, the power of writing.
00:08:19.000 | So in just a few thousand years, the ability to preserve and share knowledge took us from the Bronze Age to the smartphones and tablets of today.
00:08:30.000 | So a key question for artificial intelligence and human computer interaction is how to get computers to be able to understand the information conveyed in human languages.
00:08:41.000 | At the same time, artificial intelligence increasingly requires computers to have the knowledge of people.
00:08:47.000 | Fortunately, now our AI systems might be able to benefit from a virtuous cycle.
00:08:52.000 | We need knowledge to understand language and people well, but it's also the case that a lot of that knowledge is contained in language spread out across the books and Web pages of the world.
00:09:04.000 | And that's one of the things we're going to look at in this course: how we can sort of build on that virtuous cycle.
00:09:11.000 | A lot of progress has already been made, and I just want to very quickly give a sense of that.
00:09:18.000 | So in the last decade or so, and especially in the last few years with neural methods of machine translation, we're now in a space where machine translation really works moderately well.
00:09:33.000 | So, again, from the history of the world, this is just amazing, right?
00:09:37.000 | For thousands of years, learning other people's languages was a human task which required a lot of effort and concentration.
00:09:47.000 | But now we're in a world where you could just hop on your Web browser and think, oh, I wonder what the news is in Kenya today.
00:09:55.000 | And you can head off over to a Kenyan website and you can see something like this and you can go, huh, and you can then ask Google to translate it for you from Swahili.
00:10:06.000 | And, you know, the translation isn't quite perfect, but it's, you know, it's reasonably good.
00:10:12.000 | So the newspaper Tuco has been informed that local government minister, Linsan Belakanyama, and his transport counterpart, Siddig Meir, died within two separate hours.
00:10:23.000 | So, you know, within two separate hours is kind of awkward, but essentially we're doing pretty well at getting the information out of this page.
00:10:31.000 | And so that's quite amazing.
00:10:35.000 | The single biggest development in NLP for the last year, certainly in the popular media, was GPT-3, which was a huge new model that was released by OpenAI.
00:10:51.000 | What GPT-3 is about and why it's great is actually a bit subtle.
00:10:56.000 | And so I can't really go through all the details of this here, but it's exciting because it seems like it's the first step on the path to what we might call universal models, where you can train up one extremely large model on something like that library picture I showed before.
00:11:15.000 | And it just has knowledge of the world, knowledge of human languages, knowledge of how to do tasks, and then you can apply it to do all sorts of things.
00:11:25.000 | So no longer are we building a model to detect spam and then a model to detect pornography and then a model to detect whatever foreign language content and just building all these separate supervised classifiers for every different task.
00:11:41.000 | We've now just built up a model that understands.
00:11:45.000 | So exactly what it does is it just predicts following words.
00:11:51.000 | On the left, it's being told to write about Elon Musk in the style of Dr. Seuss, and it started off with some text and then it's generating more text.
00:12:07.000 | And the way it generates more text is literally by just predicting one word at a time, following words to complete its text.
00:12:17.000 | But this has a very powerful facility, because what you can do with GPT-3 is you can give it a couple of examples of what you'd like it to do.
00:12:29.000 | So I can give it some text and say: I broke the window; changed into a question: what did I break? I gracefully saved the day; changed into a question: what did I gracefully save?
00:12:42.000 | And then that text tells GPT-3 what I'm wanting it to do. And so then if I give it another statement, like I gave John flowers, I can then say GPT-3 predict what words come next, and it'll follow my prompt and produce who did I give flowers to.
00:13:01.000 | I can say I gave her a rose and a guitar, and it will follow the idea of the pattern and do: who did I give a rose and a guitar to? And actually this one model can then do an amazing range of things, including many that it's quite surprising it can do at all.
00:13:18.000 | So that's one example of that. Another thing that you can do is get it to translate human language sentences into SQL. So this can make it much easier to do CS145.
00:13:31.000 | So having given it a couple of examples of SQL translation of human language text, which I'm this time not showing because it won't fit on my slide, I can then give it a sentence like: how many users have signed up since the start of 2020? And it turns it into SQL. Or I can give it another query: what is the average number of influencers each user is subscribed to?
00:13:56.000 | And it then converts that into SQL. So GPT-3 knows a lot about the meaning of language and the meaning of other things like SQL and can fluently manipulate it.
00:14:13.000 | Okay, so that leads us straight into this topic of meaning, and how do we represent the meaning of a word? Well, what is meaning? Well, we can look up something like the Webster dictionary and say, okay, the idea that is represented by a word, the idea that a person wants to express by using words, signs, etc.
00:14:36.000 | So Webster's dictionary definition is really focused on the word idea somehow, but this is pretty close to the commonest way that linguists think about meaning.
00:14:46.000 | So that they think of word meaning as being a pairing between a word, which is a signifier or symbol, and the thing that it signifies, the signified thing, which is an idea or thing, so that the meaning of the word chair is a set of things that are chairs.
00:15:05.000 | This is referred to as denotational semantics, a term that's also used and similarly applied for the semantics of programming languages.
00:15:14.000 | This model isn't very deeply implementable: how do I go from the idea that, okay, chair means the set of chairs in the world, to something I can manipulate meaning with in my computer?
00:15:29.000 | So, traditionally, the way that meaning has normally been handled in natural language processing systems is to make use of resources like dictionaries and thesauri; a particularly popular one is WordNet, which organizes words and terms into both synonym sets,
00:15:50.000 | words that can mean the same thing, and hypernyms, which correspond to is-a relationships.
00:15:57.000 | And so for the is-a relationships, you know, we can kind of look at the hypernyms of panda: a panda is a kind of procyonid, whatever those are, I guess that's probably with red pandas, which is a kind of carnivore, which is a kind of placental, which is a kind of mammal, and you sort of head up this hypernym hierarchy.
00:16:19.000 | So WordNet has been a great resource for NLP, but it's also been highly deficient. It lacks a lot of nuance. So for example, in WordNet, proficient is listed as a synonym for good, but maybe that's only sometimes true; it seems like in a lot of contexts it's not true, and you mean something rather different when you say proficient versus good.
00:16:43.000 | So, it's limited as a human-constructed thesaurus. In particular, there are lots of words and lots of uses of words that just aren't there, including anything that is sort of more current terminology: wicked is there for the wicked witch, but not for more modern colloquial uses.
00:17:07.000 | The word ninja certainly isn't there for the kind of description some people make of programmers, and it's impossible to keep up to date, since it requires a lot of human labor. But even when you have that, it has a notion of synonyms, but doesn't really have a good sense of words that mean something similar.
00:17:29.000 | Fantastic and great mean something similar without really being synonyms. And so this idea of meaning similarity is something that'd be really useful to make progress on, and where deep learning models excel.
00:17:46.000 | Okay, so what's the problem with a lot of traditional NLP? Well, the problem is that words are regarded as discrete symbols. So we have symbols like hotel, conference, motel, our words, which in deep learning speak we refer to as a localist representation.
00:18:08.000 | And that's because if you, in statistical or machine learning systems, want to represent these symbols, each of them is a separate thing. So the standard way of representing them, and this is what you do in something like a statistical model if you're building a logistic regression
00:18:29.000 | model with words as features, is that you represent them as one-hot vectors, so you have a dimension for each different word. So, as in my example here, there are my representations as vectors for motel and hotel.
00:18:46.000 | And so that means that we have to have huge vectors corresponding to the number of words in our vocabulary. So if you had a high school English dictionary, it probably had about 250,000 words in it.
00:19:00.000 | But there are many, many more words in the language really so maybe we at least want to have a 500,000 dimensional vector to be able to cope with that.
00:19:11.000 | Um, but the even bigger problem with discrete symbols is that we don't have this notion of word relationships and similarity. So for example, in web search,
00:19:22.000 | If a user searches for Seattle motel, we'd also like to match on documents containing Seattle hotel. But our problem is we've got these one hot vectors for the different words.
00:19:34.000 | And so, in a formal mathematical sense, these two vectors are orthogonal, that there's no natural notion of similarity between them whatsoever.
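To make that orthogonality point concrete, here's a minimal NumPy sketch; the toy vocabulary and word indices are made up for illustration:

```python
import numpy as np

# Toy vocabulary with one dimension per word (indices are made up for illustration).
vocab = ["the", "a", "seattle", "motel", "hotel", "zebra"]

def one_hot(word, vocab):
    """Return the one-hot (localist) vector for a word."""
    vec = np.zeros(len(vocab))
    vec[vocab.index(word)] = 1.0
    return vec

motel = one_hot("motel", vocab)
hotel = one_hot("hotel", vocab)

# The dot product of any two distinct one-hot vectors is 0:
# they are orthogonal, so there is no natural notion of similarity.
print(motel @ hotel)  # 0.0
print(motel @ motel)  # 1.0 (only identical words "match")
```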
00:19:44.000 | Well, there are some things that we could try to do about that, and people did do things about that before 2010. We could say, hey, we could use WordNet synonyms and count things that are listed as synonyms as similar anyway. Or, hey, maybe we could
00:20:01.000 | build up representations of words that have meaning overlap. And people did all of those things, but they tended to fail badly from incompleteness.
00:20:12.000 | So instead, what I want to introduce today is the modern deep learning method of doing that, where we encode similarity in the real-valued vectors themselves.
00:20:25.000 | So how do we go about doing that?
00:20:28.000 | The way we do that is by exploiting this idea called distributional semantics.
00:20:35.000 | So the idea of distributional semantics is, again, something that when you first see it, maybe feels a little bit crazy.
00:20:45.000 | Because rather than having something like denotational semantics, what we're now going to do is say that a word's meaning is going to be given by the words that frequently appear close to it.
00:20:59.000 | J.R. Firth was a British linguist from the middle of the last century, and one of his pithy slogans that everyone quotes at this moment is: you shall know a word by the company it keeps.
00:21:13.000 | And so this idea that you can represent a sense of a word's meaning as a notion of what contexts it appears in has been a very successful idea.
00:21:26.000 | One of the most successful ideas that's used throughout statistical and deep learning NLP.
00:21:33.000 | It's actually an interesting idea, more philosophically. There are kind of interesting connections; for example, in Wittgenstein's later writings, he became enamored of a use theory of meaning.
00:21:47.000 | And this is, in some sense, a use theory of meaning. But whether, you know, it's the ultimate theory of semantics is actually still pretty controversial.
00:21:56.000 | But it has proved to be an extremely usable computational sense of semantics, which has just led to it being used everywhere very successfully in deep learning systems.
00:22:08.000 | So when a word appears in a text, it has a context, which are the set of words that appear nearby.
00:22:16.000 | So for a particular word, my example here is banking, we'll find a bunch of places where banking occurs in texts, and we'll collect the nearby words as context words, and we'll say that those words appearing in that kind of muddy brown color around
00:22:36.000 | the word banking, those context words, will in some sense represent the meaning of the word banking.
00:22:45.000 | While I'm here, let me just mention one distinction that will come up regularly. When we're talking about a word in our natural language processing class.
00:22:55.000 | There are two senses of word, which are referred to as types and tokens. So there's a particular instance of a word: in the first example, government debt problems turning into banking crises, there's banking there.
00:23:11.000 | That's a token of the word banking. But then I've collected a bunch of instances of, quote unquote, the word banking, and when I say the word banking with a bunch of examples of it,
00:23:24.000 | I'm treating banking as a type, which refers to, you know, the uses and meaning the word banking has across instances.
00:23:34.000 | So, what are we going to do with these distributional models of language? Well, based on looking at the words that occur in context, what we want to do is build up a dense real-valued vector for each word that in some sense represents
00:24:02.000 | the meaning of that word. And the way it will represent the meaning of that word is that this vector will be useful for predicting other words that occur in the context.
00:24:18.000 | So, in this example, to keep it manageable on the slide, the vectors are only eight-dimensional.
00:24:25.000 | But in reality we use considerably bigger vectors; a very common size is actually 300-dimensional vectors. Okay, so for each word, that's a word type, we're going to have a word vector.
00:24:39.000 | These are also used with other names: they're referred to as neural word representations, or, for a reason that'll become clear on the next slide, they're referred to as word embeddings.
00:24:51.000 | So these are now a distributed representation, not a localist representation, because the meaning of the word banking is spread over all 300 dimensions of the vector.
00:25:05.000 | And these are called word embeddings because effectively, when we have a whole bunch of words.
00:25:12.000 | These representations place them all in a high-dimensional vector space, and so they're embedded into that space. Now, unfortunately, human beings are very bad at looking at 300-dimensional vector spaces or even eight-dimensional vector spaces.
00:25:31.000 | The only thing that I can really display to you here is a two dimensional projection of that space. Now even that's useful.
00:25:39.000 | But it's also important to realize that when you're making a two-dimensional projection of a 300-dimensional space, you're losing almost all the information in that space, and a lot of things will be crushed together that don't actually deserve to be close together.
00:25:54.000 | So here's my word embeddings. Of course you can't see any of those at all. But if I zoom in, and then I zoom in further, what you'll already see is that the representations we've learned distributionally do a pretty good job at grouping together words with
00:26:17.000 | similar meanings. So in this sort of overall picture, I can zoom into one part of the space, which is actually the part that's up here in this view of it.
00:26:28.000 | And it's got words for countries so not only are countries generally grouped together, even the sort of particular sub groupings of countries, make a certain amount of sense and down here we then have nationality words.
00:26:44.000 | And then in another part of the space we can see different kinds of words. So here are verbs: we have ones like come and go, which are very similar, and saying and thinking words, say, think, expect, which are kind of similar and nearby.
00:27:01.000 | And then down here in the bottom right, we have sort of verbal auxiliaries and copulas, so have, had, has, forms of the verb to be; and certain contentful verbs are similar to copula verbs because they describe states, you know, he remained angry, he became angry.
00:27:19.000 | And they're actually then grouped close together to the verb to be. So there's a lot of interesting structure in this space that then represents the meaning of words. The algorithm I'm going to introduce now is one that's called Word2Vec, which was introduced
00:27:40.000 | by Mikolov and colleagues in 2013 as a framework for learning word vectors, and it's sort of a simple and easy to understand place to start.
00:27:49.000 | So the idea is we have a lot of text from somewhere, which we commonly refer to as a corpus of text; corpus is just the Latin word for body. So it's a body of text.
00:28:03.000 | And we choose a fixed vocabulary, which will typically be large, but nevertheless truncated, so we get rid of some of the really rare words. So we might say a vocabulary size of 400,000, and we then create for ourselves a vector for each word.
00:28:22.000 | Okay, so then what we do is we want to work out what's a good vector for each word. And the really interesting thing is that we can learn these word vectors from just a big pile of text by doing this distributional similarity task of being able to predict
00:28:44.000 | what words occur in the context of other words. So in particular, we're going to iterate through the text. And so at any moment we have a center word, C and context words outside of it, which we'll call O.
00:29:02.000 | And then, based on the current word vectors, we're going to calculate the probability of a context word occurring, given the center word, according to our current model.
00:29:17.000 | And we know that certain words did actually occur in the context of that center word. And so what we want to do is then keep adjusting the word vectors to maximize the probability that's assigned to words that actually occur in the context of the center word, as we proceed
00:29:36.000 | through these texts.
00:29:38.000 | So to start to make that a bit more concrete. This is what we're doing.
00:29:43.000 | So we have a piece of text, we choose our center word, which here is into. And then we say, well,
00:29:53.000 | we have a model for predicting the probability of context words given the center word. This model we'll come to in a minute, but it's defined in terms of our word vectors.
00:30:04.000 | And so let's see what probability it gives to the words that actually occurred in the context of this word. Huh. It gives them some probability, but maybe it'd be nice if the probability assigned was higher.
00:30:19.000 | So how can we change our word vectors to raise those probabilities? And so we'll do some calculations with into being the center word, and then we'll just go on to the next word, and then we'll do the same kind of calculations and keep on chugging.
00:30:36.000 | So the big question then is, well, how are we working out the probability of a word occurring in the context of the center word? And so that's the centerpiece of what we develop as the Word2Vec model.
00:30:55.000 | And then we have an overall model that we want to use. So, for each position in our corpus, our body of text, we want to predict context words within a window of fixed size, given the center word w_t, and we want to become good at doing that, so we want to give high probability
00:31:16.000 | to words occurring in the context.
00:31:19.000 | And so what we're going to do is we're going to work out what's formally the data likelihood as to how good a job we do at predicting words in the context of other words.
00:31:30.000 | And so formally that likelihood is going to be defined in terms of our word vectors, so they're the parameters of our model, and it's going to be calculated as taking a product over using each word as the center word, and then a product over each word in a window
00:31:47.000 | around that, of the probability of predicting that context word given the center word.
00:31:55.000 | And so in this model, we're going to have an objective function, sometimes also called a cost or a loss, that we want to optimize. And essentially what we want to do is maximize the likelihood of the context we see around center words. But following standard practice,
00:32:14.000 | we fiddle that a bit, because rather than dealing with products it's easier to deal with sums. And so we work with the log likelihood, and once we take the log likelihood, all of our products turn into sums.
00:32:28.000 | And then we also work with the average log likelihood so we've got a one on T term here for the number of words in the corpus. And finally, for no particular reason, we like to minimize our objective function, rather than maximizing it so we stick a minus sign in there.
00:32:46.000 | But then by minimizing this objective function J of theta, we're maximizing our predictive accuracy.
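For reference, the likelihood and the objective just described, written out in the standard notation for this model:

```latex
L(\theta) = \prod_{t=1}^{T} \prod_{\substack{-m \le j \le m \\ j \ne 0}} P(w_{t+j} \mid w_t;\ \theta)

J(\theta) = -\frac{1}{T} \log L(\theta)
          = -\frac{1}{T} \sum_{t=1}^{T} \sum_{\substack{-m \le j \le m \\ j \ne 0}} \log P(w_{t+j} \mid w_t;\ \theta)
```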
00:32:57.000 | Okay, so that's the setup, but we still haven't made any progress in how do we calculate the probability of a word occurring in the context, given the center word.
00:33:10.000 | So, the way we're actually going to do that is we have vector representations for each word, and we're going to work out the probability, simply in terms of the word vectors.
00:33:24.000 | Now at this point there's a little technical point, we're actually going to give to each word, two word vectors, one word vector for when it's used as the center word, and a different word vector when it's used as a context word.
00:33:40.000 | And this is done because it just simplifies the math and the optimization. So it seems a little bit ugly, but actually makes building word vectors a lot easier, and really, we can come back to that and discuss it later.
00:33:56.000 | But that's what it is. And so then once we have these word vectors, the equation that we're going to use for giving the probability of a context word appearing given the center word is that we're going to calculate it using the expression in the middle bottom of my slide.
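That slide expression, reconstructed (with u_o the outside vector for the context word o and v_c the center vector for the center word c):

```latex
P(o \mid c) = \frac{\exp(u_o^{\top} v_c)}{\sum_{w=1}^{V} \exp(u_w^{\top} v_c)}
```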
00:34:16.000 | So, let's sort of pull that apart, just a little bit more.
00:34:24.000 | So, what we have here with this expression is: for a particular center word C and a particular context word O, we're going to look up the vector representation of each word, so they're u_o and v_c.
00:34:41.000 | And so then we're simply going to take the dot product of those two vectors. The dot product is a natural measure of similarity between words, because in any particular dimension where both components are positive, you'll get a component that adds to the dot product sum.
00:35:00.000 | If both are negative, it'll also add to the dot product sum. If one's positive and one's negative, it'll subtract from the similarity measure. If either of them is zero, it won't change the similarity.
00:35:12.000 | So, it sort of seems a sort of plausible idea to just take a dot product and thinking, well, if two words have a larger dot product, that means they're more similar.
00:35:24.000 | So then, after that, we're really doing nothing more than saying: okay, we want to use dot products to represent word similarity, so now let's do the dumbest thing that we know of to turn this into a probability distribution.
00:35:41.000 | So, what do we do? Well, firstly, taking a dot product of two vectors might come out as positive or negative, but we want to have probabilities, and we can't have negative probabilities.
00:35:55.000 | So, one way to avoid negative probabilities is to exponentiate them, because then we know everything is positive. And so, then we are always getting a positive number in the numerator.
00:36:07.000 | But for probabilities, we also want to have the numbers add up to one. So, we have a probability distribution. So, we're just normalizing in the obvious way where we divide through by the sum of the numerator quantity for each different word in the vocabulary.
00:36:24.000 | So, then necessarily that gives us a probability distribution.
00:36:29.000 | So, all the rest of that that I was just talking through, what we're using there, is what's called the softmax function. So the softmax function will take any R^n vector and turn its values into numbers between zero and one.
00:36:46.000 | And so, we can take numbers and put them through this softmax and turn them into a probability distribution, right? So, the name comes from the fact that it's sort of like a max.
00:36:58.000 | So, because of the fact that we exponentiate, that really emphasizes the largest values in the different dimensions when calculating similarity. So, most of the probability goes to the most similar things.
00:37:15.000 | And it's called soft because, well, it doesn't do that absolutely. It'll still give some probability to everything that's in the slightest bit similar.
00:37:26.000 | I mean, on the other hand, it's a slightly weird name because, you know, max normally takes a set of things and just returns one, the biggest of them, whereas the softmax is taking a set of numbers and is scaling them, but is returning the whole probability distribution.
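A minimal NumPy sketch of the softmax just described; the max-subtraction is a standard numerical-stability trick, not something mentioned in the lecture, and it doesn't change the output:

```python
import numpy as np

def softmax(scores):
    """Turn an R^n vector of scores into a probability distribution.

    Exponentiating makes everything positive; dividing by the sum
    makes the values add up to one.
    """
    exp_scores = np.exp(scores - np.max(scores))  # stability trick
    return exp_scores / exp_scores.sum()

# Big scores grab most of the probability ("max"), but everything
# at least slightly similar still gets some ("soft").
print(softmax(np.array([2.0, 1.0, 0.1])))  # ~[0.66, 0.24, 0.10]
```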
00:37:45.000 | Okay, so now we have all the pieces of our model. And so, how do we make our word vectors? Well, the idea of what we want to do is we want to fiddle our word vectors in such a way that we minimize our loss, i.e. that we maximize the probability of the words that we actually saw in the context of the center word.
00:38:12.000 | And so, the theta represents all of our model parameters in one very long vector. So, for our model here, the only parameters are our word vectors.
00:38:25.000 | So, we have for each word, two vectors, its context vector and its center vector. And each of those is a d dimensional vector where d might be 300. And we have v many words.
00:38:40.000 | So, we end up with this big huge vector, which is 2dV long, which if you have a 500,000-word vocab times 300-dimensional vectors times two is more math than I can do in my head, but it's got millions and millions of parameters: 300 million, in fact.
00:38:57.000 | So, we've got millions and millions of parameters. And we somehow want to fiddle them all to maximize the prediction of context words.
00:39:08.000 | And so, the way we're going to do that then is we use calculus. So, what we want to do is take that math that we've seen previously and say, huh, well, with this objective function, we can work out derivatives.
00:39:26.000 | And so, we can work out where the gradient is. So, how we can walk downhill to minimize loss. So, we're at some point and we can figure out what is downhill and we can then progressively walk downhill and improve our model.
00:39:45.000 | And so, what our job is going to be is to compute all of those vector gradients.
00:39:52.000 | Okay.
00:39:55.000 | So, at this point, I then want to kind of show a little bit more as to how we can actually do that.
00:40:06.000 | A couple more slides here, but maybe I'll just try and jigger things again and move to my interactive whiteboard. What we wanted to do, right, is we had our overall J of theta that we were wanting to minimize, our average negative log likelihood.
00:40:32.000 | So, that was the minus one on T times the sum over t equals one to big T, which was our text length. And then we were going through the words in each context.
00:40:44.000 | So, we were doing j between minus m and m words on each side, excluding the center word itself. And then what we wanted to do inside there was work out the log probability of the context word at position t plus j, given the word that's in the center position t.
00:41:09.000 | And so, then we converted that into our word vectors by saying that the probability of O given C is going to be expressed as the softmax of the dot product.
00:41:38.000 | Okay.
00:41:45.000 | So, now what we want to do is work out the gradient, the direction of downhill, for this loss. And so the way we're doing that is we're working out the partial derivative of this expression with respect to every parameter in the model.
00:42:12.000 | And all the parameters in the model are the components, the dimensions of the word vectors of every word. And so, we have the center word vectors and the outside word vectors.
00:42:26.000 | So, here, I'm just going to do the center word vectors.
00:42:33.000 | But on a future homework, assignment two, the outside word vectors will show up and they're kind of similar.
00:42:41.000 | So, what we're doing is working out the partial derivative with respect to our center word vector v_c, which is, you know, maybe a 300-dimensional word vector, of this probability of O given C.
00:43:00.000 | And since we're using log probabilities, it's the log of this probability of O given C, that is, the log of this exp of u_o transpose v_c over... my writing will get worse and worse, sorry.
00:43:14.000 | I've already made a mistake, haven't I? Over the sum, over w equals one to the vocabulary size V, of the exp of u_w transpose v_c.
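Cleaned up, the quantity being worked out on the whiteboard is:

```latex
\frac{\partial}{\partial v_c} \log P(o \mid c)
  = \frac{\partial}{\partial v_c} \log \frac{\exp(u_o^{\top} v_c)}{\sum_{w=1}^{V} \exp(u_w^{\top} v_c)}
```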
00:43:25.000 | Okay.
00:43:27.000 | Well, at this point, things start off pretty easy. So, what we have here is something that's log of A over B, so that's easy. We can turn this into log A minus log B.
00:43:41.000 | But before I go further, I'll just make a comment at this point.
00:43:46.000 | You know, so at this point, my audience divides on in two, right? There are some people in the audience for which maybe a lot of people in the audience, this is really elementary math.
00:44:02.000 | I've seen this a million times before and he isn't even explaining it very well.
00:44:07.000 | So if you're in that group, well, feel free to look at your email or the newspaper or whatever else is best suited to you. But I think there are also other people in the class who, oh, the last time I saw calculus was when I was in high school, for which that's not the case.
00:44:26.000 | So I wanted to spend a few minutes going through this a bit concretely so that to try and get over the idea that, you know, even though most of deep learning and even word vector learning seems like magic, that it's not really magic.
00:44:46.000 | It's really just doing math. And one of the things we hope is that you do actually understand this math that's being done.
00:44:54.000 | So that you can keep along and do a bit more of it. Okay, so then what we have is this way of writing the log.
00:45:04.000 | We can say that that expression above equals the partial derivative with respect to v_c of the log of the numerator, log exp of u_o transpose v_c, minus
00:45:24.000 | the partial derivative
00:45:29.000 | of the log of the denominator. So that's then the sum over w equals one to V of the exp of u_w transpose v_c.
00:45:46.000 | Okay, so at that point, I have
00:45:51.000 | my numerator here, and my former denominator there.
00:45:57.000 | So at that point, the first part is the numerator part. And the numerator part is really, really easy. So, we have here
00:46:11.000 | log and exp, which are just inverses of each other. So they just go away. So that becomes the derivative
00:46:21.000 | with respect to v_c of
00:46:27.000 | just what's left behind, which is u_o dot producted with v_c. Okay.
00:46:35.000 | And so the thing to be aware of is, you know, we're still doing multivariate calculus here. So what we have is calculus with respect to a vector, like hopefully you saw some of in Math 51 or some other place, not high school
00:46:52.000 | single variable calculus. On the other hand, you know, to the extent you half remember some of this stuff, most of the time you can do perfectly well by thinking about what happens
00:47:07.000 | with one dimension at a time, and it generalizes to multivariable calculus. So if about all that you remember of calculus is that d dx of ax equals a, really, it's the same thing that we're going to be using here. Here we have
00:47:34.000 | the outside word dot producted with v_c. Well, at the end of the day, that's going to have terms of sort of u_o component one times the center word component one, plus u_o component two times
00:47:59.000 | the center word component two, and so on. And so we're sort of using this bit over here, and what we're going to be getting out is the u_o1 and the u_o2. So this first term will be all that is left when we take the derivative with respect to v_c1, and this term will be
00:48:20.000 | everything left when we take the derivative with respect to the variable v_c2. So the end result of taking the vector derivative of u_o dot producted with v_c is simply going to be u_o.
00:48:41.000 | Okay, great. So that's progress. So then at that point, we go on and we say, oh damn, we still have the denominator, and that's slightly more complex, but not so bad. So then we want to take the partial derivatives with respect to VC of the log of the denominator.
00:49:10.000 | Okay.
00:49:19.000 | And so then at this point, the one tool that we need to know and remember is how to use the chain rule. So the chain rule is when you're wanting to work out
00:49:35.000 | a way of having derivatives of compositions of functions. So we have f of g of x, whatever, but here it's going to be v_c. And so we want to say, okay, what we have here is we're working out a composition of functions.
00:49:54.000 | So here's our f, and here is our x, which is g of v_c.
00:50:05.000 | Actually, maybe I shouldn't call it x.
00:50:09.000 | Oops.
00:50:11.000 | Maybe I should probably better to call it z or something.
00:50:17.000 | Okay, so when we then want to work out the chain rule, well, what do we do?
00:50:26.000 | We take the derivative of f at the point z. And so at that point, we have to actually remember something. We have to remember that the derivative of log is the one on x function.
00:50:39.000 | So this is going to be equal to one on x, evaluated at z.
00:50:48.000 | So that's then going to be one over the sum over w equals one to V of exp of u_w transpose v_c, multiplied by the derivative of the inner function.
00:51:07.000 | So the derivative of the part that is remaining, I hope I'm getting this right, the sum of, oh, and there's one trick here.
00:51:21.000 | At this point, we do want to have a change of index. So we want to say the sum over x equals one to V of exp of u_x transpose v_c,
00:51:33.000 | Since we can get into trouble if we don't change that variable to be using a different one.
00:51:43.000 | Okay, so at that point, we're making some progress, but we still want to work out the derivative of this.
00:51:51.000 | And so what we want to do is apply the chain rule once more. So now here's our f, and in here is our new z, which equals g of v_c.
00:52:04.000 | And so we then sort of repeat over. So we can move the derivative inside a sum always.
00:52:16.000 | So we're then taking the derivative of this.
00:52:27.000 | And so then the derivative of exp is itself. So we're going to just have exp of u_x transpose v_c, inside the sum over x equals one to V, times the derivative of u_x transpose v_c.
00:52:53.000 | And so then this is what we've worked out before.
00:52:59.000 | We can just rewrite it as u_x.
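Putting the two chain-rule steps together, the denominator part works out to:

```latex
\frac{\partial}{\partial v_c} \log \sum_{w=1}^{V} \exp(u_w^{\top} v_c)
  = \frac{1}{\sum_{w=1}^{V} \exp(u_w^{\top} v_c)} \sum_{x=1}^{V} \exp(u_x^{\top} v_c)\, u_x
```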
00:53:02.000 | Okay, so we're now making progress.
00:53:06.000 | So if we start putting all of that together, what we have is the partial derivative with respect to v_c of this log probability.
00:53:23.000 | Right, we have the numerator part, which was just u_o, minus, we then had from the denominator the sum over x equals one to V of exp of u_x transpose v_c, times u_x.
00:53:43.000 | And then that was multiplied by our first term that came from the one on x, which gives you one over the sum over w equals one to V of the exp of u_w transpose v_c.
00:54:01.000 | And the fact that we changed the variables became important. And so by just sort of rewriting that a little, we can get that that equals u_o minus the sum over x equals
00:54:24.000 | one to V of this exp of u_x transpose v_c over the sum over w equals one to V of exp of u_w transpose v_c, times u_x.
00:54:42.000 | And so at that point, this sort of interesting thing has happened that we've ended up getting straight back exactly the softmax formula probability that we saw when we started.
00:54:57.000 | We can just rewrite that more conveniently by saying this equals u_o minus the sum over x equals one to V of the probability of x given c, times u_x.
00:55:14.000 | And so what we have at that moment is this thing here is an expectation.
00:55:21.000 | And so this is an average over all the context vectors weighted by their probability according to the model.
00:55:30.000 | And so it's always the case with these softmax style models that what you get out for the derivatives is you get the observed minus the expected.
00:55:43.000 | So our model is good if our model on average predicts exactly the word vector that we actually see.
00:55:53.000 | And so we're going to try and adjust the parameters of our model so it does that as much as possible.
00:56:02.000 | Now, I mean, we try and make it do it as much as possible. I mean, of course, as you'll find, you can never get close, right?
00:56:11.000 | You know, if I just say to you, okay, the word is croissant, which words are going to occur in the context of croissant?
00:56:20.000 | I mean, you can't answer that. There are all sorts of sentences that you could say that involve the word croissant.
00:56:25.000 | So actually, our particular probability estimates are going to be kind of small, but nevertheless, we want to sort of fiddle our word vectors to try and make those estimates as high as we possibly can.
00:56:41.000 | So I've gone on about this stuff a bit, but haven't actually sort of shown you any of what actually happens.
00:56:52.000 | So I just want to quickly show you a bit of that as to what actually happens with word vectors.
00:57:02.000 | So here's a simple little iPython notebook, which is also sort of what you'll be using for assignment one.
00:57:09.000 | So in the first cell, I import a bunch of stuff.
00:57:14.000 | So we've got NumPy for our vectors, matplotlib for plotting, scikit-learn, kind of your machine learning Swiss Army knife.
00:57:25.000 | GenSim is a package that you may well not have seen before. It's a package that's often used for word vectors.
00:57:31.000 | It's not really used for deep learning. So this is the only time you'll see it in the class.
00:57:36.000 | But if you just want a good package for working with word vectors and some other application, it's a good one to know about.
00:57:44.000 | Okay, so then in my second cell here, I'm loading a particular set of word vectors.
00:57:51.000 | So these are our GloVe word vectors that we made at Stanford in 2014.
00:57:57.000 | And I'm loading 100 dimensional word vectors so that things are a little bit quicker for me while I'm doing things here.
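A minimal sketch of a setup like this one, assuming the vectors are fetched through gensim's downloader; the actual course notebook may load the GloVe file differently:

```python
import gensim.downloader as api

# Download/load 100-dimensional GloVe vectors (Wikipedia + Gigaword).
# (Assumption: the downloader dataset name; the course notebook's
# loading path may differ.)
model = api.load("glove-wiki-gigaword-100")

print(model["bread"][:5])      # first few components of the vector for "bread"
print(model["croissant"][:5])  # compare against "croissant"
```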
00:58:06.000 | I sort of ask this model for bread and croissant. Well, what I've just got here is word vectors.
00:58:14.000 | So I just wanted to sort of show you that there are word vectors.
00:58:25.000 | Well, maybe I should have loaded those word vectors in advance.
00:58:33.000 | Let's see.
00:58:42.000 | Oh, okay. Well, I'm in business.
00:58:46.000 | Okay, so right. So here are my word vectors for bread and croissant.
00:58:53.000 | And while I'm seeing that maybe these two words are a bit similar, so both of them are negative in the first dimension, positive in the second, negative in the third, positive in the fourth, negative in the fifth.
00:59:05.000 | So it sort of looks like they might have a fair bit of dot product, which is kind of what we want because bread and croissant are kind of similar.
00:59:12.000 | But what we can do is actually ask the model, and these are gensim functions now, you know: what are the most similar words? So I can ask for croissant.
00:59:23.000 | What are the most similar words to that, and it will tell me it's things like brioche, baguette, focaccia. So that's pretty good.
00:59:32.000 | Pudding is perhaps a little bit more questionable. We can say, most similar to the USA and it says Canada, America, USA with periods, United States, that's pretty good.
00:59:45.000 | Most similar to banana.
00:59:48.000 | I get out coconut, mangoes, bananas, sort of fairly tropical fruit. Great.
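In gensim, those queries are one-liners; a sketch, continuing with the model loaded above:

```python
# Nearest neighbors by cosine similarity between word vectors.
print(model.most_similar("croissant", topn=5))
print(model.most_similar("usa", topn=5))
print(model.most_similar("banana", topn=5))
```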
00:59:56.000 | Before finishing though, I want to show you something slightly more than just similarity, which is one of the amazing things that people observed with these word vectors, and that was to say, you can actually sort of do arithmetic in this vector space that makes sense.
01:00:13.000 | And so in particular people suggested this analogy task. And so the idea of the analogy task is you should be able to start with a word like king, and you should be able to subtract out a male component from it, add back in a woman component, and then you should be able to ask,
01:00:31.000 | well what word is over here, and what you'd like is that the word over there is queen.
01:00:41.000 | And so, we're going to do that with this same most_similar function, which is actually more general: as well as giving positive words, you can ask for most similar with negative words, and you might wonder what's most negatively similar to a banana,
01:01:04.000 | and you might be thinking, oh, it's, I don't know, some kind of meat or something.
01:01:11.000 | Actually, that by itself isn't very useful, because when you just ask for what's most negatively similar to things, you tend to get crazy strings that were found in the data set, and you don't know what they mean, if anything.
01:01:24.000 | But if we put the two together, we can use the most similar function with positives and negatives to do analogies. So, we're going to say we want a positive king, we want to subtract out negatively man, we want to then add in positively woman and find out what's most similar
01:01:43.000 | to this point in the space. So my analogy function does precisely that, by taking a couple of most-similar positive words and then subtracting out the negative one. And so we can try out this analogy function, so I can do the analogy I show in the picture: man is to king
01:02:06.000 | as woman is to what.
01:02:09.000 | I'm not saying this right. Yeah, man is to king as woman is to.
01:02:15.000 | Sorry, I haven't run my cells.
01:02:22.000 | Okay, man is to king as woman is to queen, so that's great. And that works well. And you can do it the sort of other way around: king is to man as queen is to woman.
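The analogy helper is roughly the following sketch; the exact function in the course notebook may differ:

```python
def analogy(x1, y1, x2):
    """x1 is to y1 as x2 is to ? (e.g., man : king :: woman : ?)."""
    result = model.most_similar(positive=[y1, x2], negative=[x1], topn=1)
    return result[0][0]

print(analogy("man", "king", "woman"))  # queen
```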
01:02:36.000 | If this only worked for that one freakish example, you maybe wouldn't be very impressed, but you know, it actually turns out, like, it's not perfect, but you can do all sorts of fun analogies with this, and they actually work. So, you know, I could ask for something
01:02:53.000 | like an analogy.
01:02:58.000 | Oh, here's a good one.
01:03:00.000 | Australia is to beer as France is to what, and you can think about what you think the answer to that one should be, and it comes out as champagne, which is pretty good.
01:03:17.000 | Or I could ask for something like analogy pencil is to sketching as camera is to what, and it says photographing.
01:03:35.000 | You can also do the analogies with people.
01:03:38.000 | At this point I have to point out that this data, and the model, was built in 2014, so you can't ask anything about Donald Trump in it; while, you know, Trump is in there, he's not there as president. But I could ask something like: analogy, Obama
01:03:56.000 | is to Clinton as Reagan is to what, and you can think of what you think is the right analogy there.
01:04:12.000 | The analogy it returns is Nixon. So I guess that depends on what you think of Bill Clinton, as to whether you think that was a good analogy or not. You can also do sort of linguistic analogies with it, so you can do something like: analogy, tall is to tallest as long
01:04:35.000 | is to what, and it says longest. So it really just sort of knows a lot about the meaning and behavior of words. And you know, I think when these methods were first developed, and hopefully still for you, people were just gobsmacked about how well this actually worked at capturing
01:04:56.000 | the meaning of words. And so these word vectors then went everywhere as a new representation that was so powerful for working out word meaning. And so that's our starting point for this class, and we'll say a bit more about them next time.
01:05:11.000 | And they're also the basis of what you're looking at for the first assignment. Can I ask a quick question about the distinction between the two vectors per word?
01:05:22.000 | My understanding is that there can be several context words per word in the vocabulary, but then there's only two vectors. I kind of thought the distinction between the two is that one is like the actual word and one's like the context word, but there are multiple context words.
01:05:38.000 | Right. How do you, how do you pick just two then? Well, so we're doing every one of them. Right. So, like, maybe I won't turn back on the screen share, but you know, we were doing...
01:05:51.000 | In the objective function there was a double sum. So you've got, you know, this big corpus of text, right, so you're taking a sum over every word, with it appearing as the center word, and then inside that there's a second sum, which is over each word
01:06:10.000 | in its context, so you are going to count each word as a context word. And so then for one particular term of that objective function, you've got a particular context word and a particular center word, but you're then sort of summing over different context words for each
01:06:29.000 | center word, and then you're summing over all of the choices of different center words. And, to say just a sentence more about having two vectors: I mean, you know, in some sense it's an ugly detail, but it was done to make things sort of simple and fast.
01:06:47.000 | So, you know, if you look at the math carefully if you sort of treated this two vectors as the same so if you use the same vectors for center and context.
01:07:03.000 | You can say, okay, let's work out the derivatives. Things get uglier, and the reason that they get uglier is: it's okay, but when I'm iterating over all the choices of context word, oh my god, sometimes the context word is going to be the same as the center word, and so that messes
01:07:25.000 | up my derivatives, whereas by taking them as separate vectors that never happens so it's easy.
01:07:34.000 | But the kind of interesting thing is, you know, having these two different representations sort of just ends up really doing no harm, and my wave-my-hands argument for that is, you know, since we're kind of moving through each position of the corpus
01:07:53.000 | one by one, you know, a word that is the center word at one moment is going to be a context word at the next moment, and the word that was the context word is going to have become the center word.
01:08:07.000 | So you're sort of doing the computation both ways in each case, and so you should be able to convince yourself that the two representations for the word end up being very similar. And they're not identical, for technical reasons to do with the ends of documents and things
01:08:26.000 | like that, but very, very similar.
01:08:30.000 | So we tend to get two very similar representations for each word, and we just average them and call that the word vector. And so when we use word vectors we just have one vector for each word.
01:08:42.000 | That makes sense. Thank you.
01:08:45.000 | I have a question purely out of curiosity. So we saw when we projected the vectors, the word vectors onto the 2D surface we saw like little clusters of words that are similar to each other and then later on we saw that with the analogies thing, we kind of
01:08:59.000 | see that there's these directional vectors that sort of indicate, like, the ruler of, or the CEO of, something like that. And so I'm wondering, are there relationships between those relational vectors themselves? Such as, like, is the ruler-of vector sort of similar
01:09:15.000 | to the CEO-of vector, which is very different from, like, the makes-a-good-sandwich-with vector?
01:09:23.000 | Is there any research on that?
01:09:26.000 | That's a good question.
01:09:30.000 | Wow, you've stumped me already in the first lecture.
01:09:38.000 | I mean,
01:09:39.000 | I can't actually think of a piece of research on that, so I'm not sure I have a confident answer. But it seems like a really easy thing to check:
01:09:51.000 | once you have one of these sets of word vectors, then for any relationship that is represented well enough by a vector offset, you should be able to see whether it comes out similar to other relation vectors.
01:10:08.000 | I'm not sure; we could look and see.
01:10:12.000 | That's totally okay, just curious.
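As the answer suggests, this is easy to check yourself. Here is a minimal sketch (not from the lecture), assuming gensim and its downloadable pre-trained GloVe vectors are available, that compares the offset vectors for two instances of the same relation against an unrelated offset:

```python
import gensim.downloader as api
import numpy as np

model = api.load("glove-wiki-gigaword-100")  # any pre-trained vectors would do

def offset(a, b):
    # Vector offset representing the relation a -> b
    return model[b] - model[a]

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

r1 = offset("france", "paris")   # country -> capital
r2 = offset("japan", "tokyo")    # same relation, different pair
r3 = offset("good", "sandwich")  # an unrelated offset

print(cosine(r1, r2))  # expected to be comparatively high
print(cosine(r1, r3))  # expected to be comparatively low
```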
01:10:18.000 | I want to go back to the last little bit of your answer to the first question: when you want to collapse the two vectors for the same word, do you usually take the average? Different people have done different things, but the most common practice is, well, there's still a bit
01:10:34.000 | left to cover about running Word2Vec that we didn't really get through today, so I've still got a bit more work to do on Thursday. But once you run your Word2Vec algorithm, your output is two vectors for each word: one for
01:10:51.000 | when it's the center and one for when it's the context. And typically people just average those two vectors and say, okay, that's the representation of the word "croissant", and that's what appears in a word vectors file like the one I loaded.
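A minimal sketch of that final averaging step (not the course's code; the shapes and names are illustrative), assuming training produced center vectors V and context vectors U as numpy arrays:

```python
import numpy as np

vocab_size, dim = 10000, 100
V = np.random.rand(vocab_size, dim)  # stand-in for trained center vectors
U = np.random.rand(vocab_size, dim)  # stand-in for trained context vectors

# One final vector per word: the average of its two learned vectors.
word_vectors = (V + U) / 2.0
```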
01:11:12.000 | So my question is, if a word has two different meanings, or multiple different meanings, can we still represent it as a single vector?
01:11:22.000 | Yes, that's a very good question.
01:11:25.000 | Actually, there is some content on that in Thursday's lecture, so I can say more about it then.
01:11:44.000 | But the short version is that many words do have lots of meanings. If you have a word like "star", it can be an astronomical object, or it can be a film star, a Hollywood star, or it can be something like the gold stars that you got in elementary school, and we're just taking all those uses
01:12:04.000 | of the word "star" and collapsing them together into one word vector.
01:12:11.000 | That sounds really crazy and bad, but it actually turns out to work rather well.
01:12:18.000 | Maybe I won't go through all of that right now, because there is actually material on it in Thursday's lecture.
01:12:26.000 | Oh, I see.
01:12:28.000 | I'm going to save my slides for next time.
01:12:47.000 | My next question is: in this course, do we look at how to implement the stack of something like Alexa, going from speech to actions, or is it primarily just understanding?
01:13:08.000 | Yeah, so this is an unusual quarter.
01:13:13.000 | But for this quarter, there's a very clear answer.
01:13:21.000 | There's a class being taught this quarter, CS224S, a speech class taught by Andrew Maas. It's a class that hasn't been offered all that regularly;
01:13:33.000 | sometimes it's only been offered every third year, but it's being offered right now. So, if what you want to do is learn about speech recognition and about methods for building dialogue systems,
01:13:48.000 | take CS224S.
01:13:51.000 | So, for this class in general,
01:13:55.000 | the vast bulk of it is working with text and doing various kinds of text analysis and understanding. So we do tasks like some of the ones I've mentioned: we do machine translation,
01:14:11.000 | we do question answering,
01:14:15.000 | we look at how to parse the structure of sentences, and things like that. In other years I sometimes say a little bit about speech,
01:14:24.000 | but since this quarter there's a whole different class that's focused on speech, that seemed a little bit silly.
01:14:30.000 | So that sounds good.
01:14:48.000 | I'm now getting a bad echo; I'm not sure if that's my fault or yours, but anyway.
01:14:55.000 | Yeah, so the speech class does a mix of stuff. The pure speech problems classically have been speech recognition, going from a speech signal to text, and text-to-speech, going from text to a speech signal. Both of those
01:15:18.000 | are now normally done using neural networks, including by the cell phone that sits in your pocket, so the class covers both of those. But in between, the class covers quite a bit more.
01:15:32.000 | In particular, it starts off by looking at building dialogue systems, so something like Alexa, Google Assistant, or Siri. Assuming you have a speech recognition and a text-to-speech system,
01:15:50.000 | you have text in and text out: what are the kinds of ways that people go about building
01:15:59.000 | dialogue systems like the ones that I just mentioned?
01:16:05.000 | I have a question. I think there were some people in the chat noticing that opposites were really near to each other, which was kind of odd. But I was also wondering: what about positive and negative valence, or affect? Is that captured
01:16:22.000 | in this type of model, or is it not captured well, like with the opposites?
01:16:30.000 | So it's the same short answer for both of those. This is a good question and a good observation, and the short answer is no: both of those are captured really, really badly.
01:16:41.000 | I mean, to be precise,
01:16:45.000 | when I say really, really badly, what I mean is that if that's what you want to focus on,
01:16:54.000 | you've got problems. It's not that the algorithm doesn't work; precisely what you find is that antonyms generally occur in very similar contexts, because whether it's saying "John is really tall" versus "John is really short", or "that
01:17:14.000 | movie was fantastic" versus "that movie was terrible", you get antonyms occurring in the same contexts. Because of that, their vectors are very similar, and similarly for affect- and sentiment-based words, like my "great" and "terrible" example:
01:17:33.000 | their contexts are similar. Therefore, if you're just learning these kinds of predict-words-in-context models, no, that's not captured. Now, that's not the end of the story.
01:17:48.000 | Absolutely, people have wanted to use neural networks for sentiment and other kinds of connotation and affect, and there are very good ways of doing that, but somehow you have to do something more than simply predicting words in context, because that's not sufficient
01:18:06.000 | to capture that dimension.
01:18:09.000 | More on that later.
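That antonym behavior is easy to observe yourself. A minimal sketch (not from the lecture), again assuming gensim's downloadable pre-trained GloVe vectors:

```python
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")

# Antonyms share contexts, so their vectors come out close...
print(model.similarity("good", "bad"))        # typically quite high
print(model.similarity("tall", "short"))      # also high
# ...while a topically unrelated pair is much farther apart.
print(model.similarity("good", "astronomy"))  # typically low
```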
01:18:12.000 | Yeah, I was also wondering about very basic words, like "so" and "not", because those would appear in all kinds of similar contexts.
01:18:22.000 | What was your first example before "not"? "So", as in "this is so cool"? So that's actually a good question as well. Yeah, there are these very common words that are commonly referred to as function words by linguists, which includes ones like those,
01:18:40.000 | and other ones like "and", and prepositions like "to" and "on".
01:18:47.000 | You might suspect that the word vectors for those don't work out very well, because they occur in all kinds of different contexts and they're not very distinct from each other in many cases. To a first approximation I think that's true, and that's part of why I didn't use
01:19:07.000 | those examples in my slides. Yeah.
01:19:11.000 | But at the end of the day, we do build up vector representations of those words too. And you'll see in a few lectures' time, when we start building what we call language models, that actually they do a great job on those words as well.
01:19:27.000 | Another feature of the Word2Vec model is that it actually ignores the position of words: it says, I'm going to predict every word around the center word, but I'm predicting each of them in the same way.
01:19:48.000 | Whether I'm predicting the word before me versus the word after me, or the word two away in either direction, they're all just predicted the same way, by that one probability function.
01:20:00.000 | So if that's all you've got, that sort of destroys your ability to do a good job at capturing these common, more grammatical words like "so", "not", and "and". But we'll build slightly different models that are more sensitive to the structure of sentences, and then we start
01:20:19.000 | doing a good job on those too.
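In symbols: one and the same distribution is used at every offset in the window, so the model cannot distinguish, say, the word immediately before the center from the word two positions after it:

$$P(w_{t+j} = o \mid w_t = c) = P(o \mid c) \quad \text{for every } j \in \{-m, \dots, -1, 1, \dots, m\}$$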
01:20:22.000 | Okay, thank you.
01:20:26.000 | I have a question about the characterization of Word2Vec,
01:20:31.000 | because I read one of the readings, and it seems to characterize it as an architecture as well,
01:20:37.000 | so it's slightly different from how it's presented in the book.
01:20:56.000 | So Word2Vec is kind of a framework for building word vectors, and there are several variant precise algorithms within the framework.
01:21:08.000 | One choice is whether you're predicting the context words or whether you're predicting the center word.
01:21:17.000 | The model I showed was predicting the context words, so it was the skip-gram model. But then there's a detail of how in particular you do the optimization, and what I presented was the easiest way to do it, which is naive optimization
01:21:40.000 | with the softmax equation for word vectors.
01:21:45.000 | And it turns out that that naive optimization is needlessly expensive, and people have come up with faster ways of doing it. In particular, the commonest thing you see is what's called skip-gram with negative sampling; negative sampling is
01:22:05.000 | a much more efficient way to estimate things, and I'll mention it on Thursday.
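For reference ahead of Thursday, the skip-gram negative-sampling loss from the original Word2Vec paper replaces the full softmax for one (center, outside) pair with a handful of binary logistic terms, where $\sigma$ is the sigmoid and the $w_k$ are $K$ randomly sampled "negative" words:

$$J = -\log \sigma(u_o^\top v_c) - \sum_{k=1}^{K} \log \sigma(-u_{w_k}^\top v_c)$$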
01:22:11.000 | Right. Okay.
01:22:13.000 | So, someone's asking for more information about how word vectors are constructed, beyond the summary of random initialization and then gradient-based iterative optimization.
01:22:28.000 | Yeah.
01:22:29.000 | So, I will do a bit more connecting this together
01:22:33.000 | in the Thursday lecture; I guess only so much fun can fit in the first class.
01:22:39.000 | But the picture
01:22:41.000 | is essentially the picture I showed the pieces of. To learn word vectors, you start off by having a vector for each word type, both for when it's the center word and when it's a context word, and you initialize those vectors randomly.
01:23:04.000 | That is, you just put small, randomly generated numbers in each vector component, and that's your starting point. From there on, you're using an iterative algorithm where you're progressively updating those word vectors, so they do a better
01:23:24.000 | job at predicting which words appear in the context of other words. And the way that we're going to do that is by using the gradients that I was starting to show how to calculate. Once you have a gradient, you can walk in the opposite direction
01:23:44.000 | of that gradient, and you're then walking downhill, i.e., minimizing your loss, and we're going to do lots of that until our word vectors get as good as possible.
01:23:56.000 | So it's really all math, but in some sense word vector learning is miraculous, since you literally just start off with completely random word vectors and run this algorithm of predicting words for a long time, and out of nothing emerges
01:24:20.000 | a set of word vectors that represent meaning well.
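To make the loop just described concrete, here is a minimal sketch (not the course's assignment code; the toy corpus, dimensions, and learning rate are illustrative) of naive-softmax skip-gram training with plain gradient descent:

```python
import numpy as np

corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

dim, window, lr, epochs = 8, 2, 0.05, 200
rng = np.random.default_rng(0)
V = rng.normal(scale=0.01, size=(len(vocab), dim))  # center vectors, random init
U = rng.normal(scale=0.01, size=(len(vocab), dim))  # context ("outside") vectors

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for _ in range(epochs):
    for t, center in enumerate(corpus):       # outer sum: each center position
        c = idx[center]
        for j in range(-window, window + 1):  # inner sum: each context offset
            if j == 0 or not (0 <= t + j < len(corpus)):
                continue
            o = idx[corpus[t + j]]
            p = softmax(U @ V[c])              # P(w | center) for every word w
            # Gradients of -log p[o] with respect to V[c] and all of U:
            grad_v = U.T @ p - U[o]
            grad_U = np.outer(p, V[c])
            grad_U[o] -= V[c]
            # Walk opposite the gradient, i.e. downhill on the loss.
            V[c] -= lr * grad_v
            U -= lr * grad_U

word_vectors = (V + U) / 2  # average the two vectors per word, as discussed above
```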