(upbeat music) - Hi, we're Isaac and Peter from Roboflow. And we're gonna talk about the best papers of 2024 in computer vision. So for us, we define best as what made the biggest shifts in the space. And to determine that we looked at what are some major trends that happened and what papers most contributed to those trends.
So I'm gonna talk about a couple of trends. Peter's gonna talk about a trend and then we're gonna hand it off to Moondream. So the trends that I'm interested in talking about are a major transition from models that run on a per-image basis to models that run using the same basic ideas on video.
And then also how DETRs are starting to take over the real-time object detection scene from the YOLOs, which have been dominant for years. So as a highlight, we're gonna talk about Sora, which from my perspective is the biggest paper of 2024, even though it came out in February. Yeah, yeah.
So Sora is just a blog post. So I'm going to fill it in with details from replication efforts, including Open-Sora and related work such as Stable Video Diffusion. And then we're also gonna talk about SAM2, which applies the SAM strategy to video. And then the improvements in 2024 to DETRs that are making them a Pareto improvement over YOLO-based models.
So to start this off, we're gonna talk about the state-of-the-art of video generation at the end of 2023. MAGVIT is a discrete-token video tokenizer akin to VQ-GAN, but applied to video sequences. It actually outperforms state-of-the-art handcrafted video compression frameworks in terms of bit rate versus human preference for quality, and videos are generated by autoregressing on these discrete tokens.
It generates some pretty nice stuff, but only up to about five seconds long and, you know, not super detailed. And then suddenly a few months later, we have this, which when I saw it was totally mind-blowing to me. 1080p, a whole minute long. We've got light reflecting in puddles; that reflectivity reminds me of those RTX demonstrations for next-generation video games, such as Cyberpunk, but with better graphics.
You can see some issues in the background if you look closely, but, as with a lot of these models, the issues tend to be things that people aren't going to pay attention to unless they're looking for them. In the same way that, like, six fingers on a hand is a giveaway you're not going to notice unless you're looking for it.
So yeah, as we said, Sora does not have a paper. So we're going to be filling it in with context from the rest of the computer vision scene attempting to replicate these efforts. So the first step: you have an LLM caption a huge amount of videos. This is a trick that was introduced in DALL-E 3, where they train an image captioning model to generate very high quality captions for a huge corpus and then train a diffusion model on that.
Sora and the replication efforts also show a bunch of other steps that are necessary for good video generation, including filtering by aesthetic score and filtering to make sure the videos have enough motion, so the generator doesn't just learn to generate static frames.
So then we encode our video into a series of space-time latents. Once again, the post was very sparse on details. In the replication work, Open-Sora actually uses a MAGVIT-v2 itself to do this, but swaps out the discretization step for a classic VAE autoencoder framework. They show that there's a lot of benefit from getting temporal compression, which makes a lot of sense, as sequential frames in a video contain mostly redundant information.
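To make the shape bookkeeping concrete, here's a minimal sketch of what a space-time (3D) VAE encoder does to a clip. The strides, channel counts, and module layout are illustrative assumptions, not Open-Sora's or MAGVIT-v2's actual configuration; the point is just that the latent is compressed in time as well as in space.

```python
import torch
import torch.nn as nn

class SpaceTimeEncoder(nn.Module):
    """Illustrative 3D VAE encoder: compresses 4x in time and 8x in space."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            # compress 2x in time, 4x in height/width
            nn.Conv3d(3, 64, kernel_size=3, stride=(2, 4, 4), padding=1),
            nn.SiLU(),
            # compress a further 2x in time, 2x in space
            nn.Conv3d(64, 128, kernel_size=3, stride=(2, 2, 2), padding=1),
            nn.SiLU(),
        )
        self.to_mu = nn.Conv3d(128, latent_dim, kernel_size=1)      # VAE mean
        self.to_logvar = nn.Conv3d(128, latent_dim, kernel_size=1)  # VAE log-variance

    def forward(self, video):                    # video: (B, 3, T, H, W)
        h = self.net(video)
        return self.to_mu(h), self.to_logvar(h)

clip = torch.randn(1, 3, 16, 256, 256)           # 16 frames at 256x256
mu, logvar = SpaceTimeEncoder()(clip)
print(mu.shape)                                  # torch.Size([1, 16, 4, 32, 32])
```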
So by compressing in the temporal dimension, you allow the latent to hold a lot more semantic information while avoiding that duplication. So we've got our space-time latents, possibly via some 3D VAE, presumably a MAGVIT-v2. And then you throw them into a diffusion transformer. I think it's personally interesting to note that Open-Sora is using a MAGVIT-v2, which originally used an autoregressive transformer decoder to model the latent space, but is now using a diffusion transformer.
So there's still a transformer happening; the question is just whether it's parameterizing the stochastic differential equation or parameterizing a conditional distribution via autoregression. It's also worth noting that most diffusion models today, the very high performance ones, are switching away from the classic DDPM (denoising diffusion probabilistic models) framework to rectified flows.
Rectified flows have a very interesting property that as they converge, they actually get closer to being able to be sampled with a single step, which means that in practice you can generate high quality samples much faster. A major problem of DDPM and related models for the past four years has been that they require many, many steps to generate high quality samples.
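As a rough illustration of why rectified flows sample faster, here's a minimal sketch of the training objective and an Euler sampler. The `model(x, t)` signature and the shapes are assumptions; the idea is just that the network regresses the constant velocity of the straight path between noise and data, and a nearly straight flow can be integrated in very few steps.

```python
import torch

def rectified_flow_loss(model, x1):
    """One illustrative training step; x1 is a batch of clean latents (B, ...)."""
    x0 = torch.randn_like(x1)                              # noise endpoint
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)))   # per-sample time in [0, 1]
    xt = (1 - t) * x0 + t * x1                              # point on the straight path
    v_target = x1 - x0                                      # constant velocity of that path
    v_pred = model(xt, t)                                   # network predicts velocity
    return ((v_pred - v_target) ** 2).mean()

@torch.no_grad()
def sample(model, shape, steps=4):
    """Euler integration of the learned ODE; as the flow gets straighter,
    fewer steps (even one) give good samples."""
    x = torch.randn(shape)
    for i in range(steps):
        t = torch.full((shape[0], *([1] * (len(shape) - 1))), i / steps)
        x = x + model(x, t) / steps
    return x
```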
And naturally the third step is throwing lots of compute at the problem. I never managed to get this video to loop, but we see very little compute, medium compute, lots of compute. This is so interesting because the original diffusion transformer paper from Facebook actually showed that the specific hyperparameters of the transformer didn't really matter that much.
What mattered was that you were just increasing the amount of compute that the model had. So I love how in the, once again, little blog post, they don't even talk about the specific hyperparameters. They say, we're using a diffusion transformer, we're just throwing more compute at it, and this is what happens.
Open-Sora shows similar results. The primary issue, I think, is that no one else has a 32x compute budget. So we end up in the middle of the domain in most of the related work, which is still super, super cool; it's just a little disappointing considering the context.
So I think this is a beautiful extension of the framework that was introduced in '22 and '23 for very high quality per-image generation, extended to videos. It's awesome. And it's GA as of Monday, except no one can seem to get access to it because they keep shutting down the login.
The next paper I wanted to talk about is SAM. We at Roboflow allow users to label data and train models on that data. SAM has saved our users 75 years of labeling time. We are, to the best of my knowledge, the largest SAM API that exists.
SAM also allows our users to train just pure bounding-box regression models and use those to generate high quality masks, which has the great side effect of requiring less training data to reach meaningful convergence. Most people are data-limited in the real world.
So anything that requires less data to get to a useful result is super useful. Many of our users, maybe not most, actually run their per-frame object detectors on every frame in a video. SAM2 falls into this category of taking something that really, really works and applying it to video, which has the wonderful benefit of being plug-and-play with many of our users' use cases.
We're still building out a sufficiently mature pipeline to take advantage of that, but it's in the works. So here we've got a great example. We can click on cells and then follow them. You even notice the cell goes away and comes back and we can still keep track of it, which is very challenging for existing object trackers.
Here's a high-level overview of how SAM2 works. There's a simple pipeline where we provide some type of prompt and it fills out the likely masks for that object throughout the rest of the video. Here the prompt is a bounding box in the first frame, a set of positive/negative points, or even just a simple mask.
I'm going to assume people are somewhat familiar with SAM, so I'm just going to give a high-level overview of how SAM works. You have an image encoder that runs on every frame. SAM2 can be used on a single image, in which case the only difference between SAM2 and SAM is the image encoder: SAM used a standard ViT.
SAM2 replaced that with Hiera, a hierarchical encoder, which gets approximately the same results but leads to roughly six times faster inference, which is excellent, especially considering that a trend of 2023 was replacing the ViT with more efficient backbones. In the case where you're doing video segmentation, the difference is that you actually create a memory bank and you cross-attend the features from the image encoder against the memory bank.
The feature set that is created is essentially, well, I'll go more into it in a couple of slides, but we take the features from the past couple of frames plus a set of object pointers and the set of prompts, and use those to generate our new masks. We then fuse the new masks for this frame with the image features and add that to the memory bank.
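A minimal sketch of that per-frame loop is below. Every callable here (image_encoder, memory_attention, mask_decoder, memory_encoder) is a stand-in for the corresponding SAM2 module, and the FIFO size is an arbitrary choice; this is the control flow of the idea, not the real implementation.

```python
from collections import deque

def segment_video(frames, prompts, image_encoder, memory_attention,
                  mask_decoder, memory_encoder, num_memories=6):
    """Illustrative SAM2-style video segmentation loop.
    prompts: dict mapping frame index -> user prompt (box / points / mask)."""
    memory_bank = deque(maxlen=num_memories)   # FIFO: only the last few frames are kept
    object_pointers = []                       # lightweight per-object summary vectors
    masks_per_frame = []

    for t, frame in enumerate(frames):
        features = image_encoder(frame)        # runs on every frame

        if memory_bank:
            # condition current features on recent memories plus object pointers
            features = memory_attention(features, list(memory_bank), object_pointers)

        masks, object_pointers = mask_decoder(features, prompts.get(t))
        masks_per_frame.append(masks)

        # fuse this frame's masks with its features and push into the FIFO memory
        memory_bank.append(memory_encoder(features, masks))

    return masks_per_frame
```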
I'll say more in a minute. Just like SAM, SAM2 actually uses a data engine to create its dataset: they assembled a huge amount of reference data, used people to label some of it and trained the model, used the model to label more of it, and asked people to refine the predictions of the model.
And then ultimately the dataset is just created from the final output of the model on the reference data. This paradigm is so interesting to me because it unifies a model and a dataset in a way that is very unique. It seems unlikely that another model could come in and have such a tight relationship with the training set.
Yeah, so here's a brief overview of how the memory bank works. The paper did not have a great visual, so I'm going to fill in a bit more. We take the last couple of frames from our video and attend to them, along with the set of prompts that we provided (they could come from the future, they could come from anywhere in the video) as well as reference object pointers saying, by the way, here's what we've found so far. Attending to only the last few frames has the interesting benefit of letting the model handle complex object motion while, by limiting the number of frames you attend to, keeping the model running in real time.
This is such an interesting topic for me because one would assume that attending to all of the frames, or having some type of summarization of all the frames, is essential for high performance, but we see in their later ablation that that actually is not the case.
So here, just to make sure that there is some benchmarking happening, we compare to some of the stuff that came out prior, and indeed the SAM2 strategy does improve on the state of the art. This ablation deep in their appendices was super interesting to me: see section C, the number of memories.
One would assume that increasing the count of memories would meaningfully increase performance. We see that it has some impact, but not the kind you'd expect, and that it meaningfully decreases speed, which justifies, in my mind, just having this FIFO queue of memories. Although in the future, I'm super interested to see a more dedicated summarization of the whole video so far, not just a stacking of the last frames.
So that's another extension of beautiful per-frame work into the video domain. The next trend I'm interested in talking about: at Roboflow, we're super interested in training real-time object detectors. Those are our bread and butter. And so we're doing a lot to keep track of what is actually happening in that space.
We are finally starting to see something change. For years, YOLOs have been the dominant way of doing real-time object detection, and we can see here that they've essentially stagnated. The performance between v10 and v11 is not meaningfully different, at least in this type of high-level chart, and even across the last couple of series there's not a major change.
So YOLOs have hit a plateau. DETRs have not. We can look here and see the YOLO series has this plateau, and then RT-DETR, LW-DETR, and D-FINE have meaningfully shifted that plateau, so that, in fact, the best D-FINE models are +4.6 AP on COCO at the same latency.
So, three major steps to accomplish this. The first is RT-DETR, which is technically a 2023 preprint but was published officially in '24, so I'm going to include it; I hope that's okay. RT-DETR showed that we could actually match or out-speed YOLOs. Then LW-DETR showed that pre-training is hugely effective on DETRs, and much less so on YOLOs.
And then D-FINE added the types of bells and whistles that we expect from this arena. The major improvement that RT-DETR showed was taking the multi-scale features that DETRs typically pass into their encoder and decoupling them into a much more efficient transformer encoder. The transformer is, of course, quadratic in complexity, so decreasing the amount of stuff that you pass in at once is super helpful for increasing your throughput.
That change basically brought us up to YOLO speed, and then they do a hardcore analysis on benchmarking YOLOs, including the NMS step. Once you include NMS in the latency calculation, you see that, in fact, these DETRs were outperforming, at least at that time, the YOLOs that existed.
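As a rough sketch of what "including NMS in the latency calculation" means in practice (the timing approach, thresholds, and model output format are assumptions, not the paper's exact protocol):

```python
import time
import torch
from torchvision.ops import nms

def end_to_end_latency_ms(model, image, needs_nms, iou_thresh=0.7, runs=100):
    """Latency including post-processing. A YOLO-style head needs NMS to
    deduplicate overlapping boxes; a DETR-style head emits a fixed set of
    queries and skips it, so its latency is already end to end.
    `model` is assumed to return (N, 4) boxes and (N,) scores."""
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(runs):
        boxes, scores = model(image)
        if needs_nms:
            keep = nms(boxes, scores, iou_thresh)
            boxes, scores = boxes[keep], scores[keep]
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs * 1000
```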
Then LW-DETR goes in and suggests that, in fact, the huge boost in this figure is from pre-training. So this is the D-FINE line, and this is the D-FINE line without pre-training. It's within range, it's still an improvement over the YOLOs, but the really huge boost comes from pre-training.
When YOLOX came out in 2021, they showed that they got much better results by having a much, much longer training time, but they found that when they did that, they actually did not benefit from pre-training. You can see in this graph from LW-DETR that YOLOs do initially have a real benefit from pre-training, but it goes away as training time increases.
The DETRs, on the other hand, converge much faster: LW-DETR trains for only 50 epochs, RT-DETR for 60 epochs. So one could assume that the extra gain from pre-training comes from not destroying your pre-trained weights with a long training cycle.
LW-DETR also shows superior performance on our favorite dataset, Roboflow 100, which means they do better on the real world, not just on COCO. Then D-FINE throws all the bells and whistles at it. YOLO models tend to have a lot of very specific, complicated loss functions.
D-FINE brings that into the DETR world and shows consistent improvement on a variety of DETR-based frameworks. Bring these all together, and suddenly we have almost 60 AP on COCO while running in like 10 milliseconds. Huge, huge stuff. We're spending a lot of time trying to build models that work better with less data, and DETRs are clearly becoming a promising step in that direction.
What we're interested in seeing from the DETRs next: Co-DETR and the models currently sitting at the top of the leaderboard for large-scale inference scale really well as you switch out the backbone. We're very interested in seeing someone publish a paper, potentially us, on what happens if you take these real-time ones and then throw a Swin-G at them.
Like, do we have a Pareto curve that extends from the real-time domain all the way up to the super, super slow but high-performance domain? We also wanna see people benchmarking on RF100 more, because that type of data is what's relevant for most users. And we wanna see more pre-training, because pre-training works now.
It's super cool. - All right, so, yeah. In that theme, one of the big things that we're focusing on is how do we get more out of our pre-trained models? And one of the lenses to look at this through is sort of this new requirement for, like, fine-grained visual details in the representations that are extracted from your foundation model.
So, as sort of a hook for this... oh yeah, this is just a list of all the papers that I'm gonna mention; I just wanted to make sure I listed the actual papers so you can find them later. Yeah, so, sort of the big hook here is that I make the claim that LLMs can't see.
If you go to Claude or ChatGPT and ask it to look at this watch and tell me what time it is, it fails, right? This is, like, a very classic test of an LLM, but you could say, okay, maybe this image is too zoomed out and it'll do better if we increase the resolution so it has an easier time finding these fine-grained features, like where the watch hands are pointing.
No dice. And you could say, okay, well, maybe the model just doesn't know how to tell time from knowing the position of the hands, but if you actually prompt it textually, it's very easy for it to tell the time. So, this, to me, is proof that these LLMs literally cannot see the position of the watch hands and it can't see those details.
So, the question is, sort of, why? And for you Anthropic heads out there, Claude fails too. So, my first pick for best paper of 2024 in vision is this MMVP paper, which tries to investigate why LLMs don't have the ability to see fine-grained details. For instance, it comes up with a lot of images like this, where you ask a question that seems very visually apparent to us, like, which way is the school bus facing?
And it gets it wrong. And then, of course, it makes up details to support its wrong claim. The process by which the paper finds these images is, sort of, contained in its hypothesis for why the models can't see these details. It hypothesizes that models that have been initialized with CLIP as their vision encoder don't have fine-grained details in the features extracted using CLIP, because CLIP, sort of, doesn't need to find these fine-grained details to do its job correctly, which is just to match captions and images, right?
And, sort of, at a high level, even if ChatGPT's vision encoder wasn't initialized with CLIP and wasn't trained contrastively at all, in order to do its job of captioning the image, it could do a pretty good job without actually finding the exact position of all the objects and visual features in the image, right?
So, this paper finds a set of difficult images for these types of models. The way it does it is it looks for embeddings that are similar in CLIP space but far apart in DINOv2 space. DINOv2 is a foundation model that was trained self-supervised purely on image data; it uses, like, some complex student-teacher framework, but essentially it masks out or crops certain areas of the image and tries to make sure that those have consistent representations, which is a way for it to learn very fine-grained visual features.
And so, if you take things that are very close in CLIP space and very far in DINOv2 space, you get pairs of images that are hard for ChatGPT and other big vision language models to distinguish. So, if you then ask questions about these images, well, as you can see from this chart, it's going to answer the same way for both images, right?
Because, from the perspective of the vision encoder, they're the same image. And so, if you ask a question like "How many eyes does this animal have?" it answers the same for both. And all these other models, including LLaVA, do the same thing, right? So this is the benchmark they create: finding these CLIP-blind pairs, which are pairs of images that are similar in CLIP space, and creating a dataset of multiple-choice questions based off of those.
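A minimal sketch of that mining step, assuming you've already computed image embeddings with both encoders; the similarity thresholds are illustrative, not the paper's exact values.

```python
import torch
import torch.nn.functional as F

def find_clip_blind_pairs(clip_embs, dino_embs, clip_thresh=0.95, dino_thresh=0.6):
    """clip_embs, dino_embs: (N, D) embeddings of the same N images from CLIP and
    DINOv2. Returns index pairs that CLIP sees as near-duplicates but DINOv2
    considers clearly different."""
    clip_sim = F.normalize(clip_embs, dim=-1) @ F.normalize(clip_embs, dim=-1).T
    dino_sim = F.normalize(dino_embs, dim=-1) @ F.normalize(dino_embs, dim=-1).T

    candidate = (clip_sim > clip_thresh) & (dino_sim < dino_thresh)
    candidate.fill_diagonal_(False)            # ignore self-pairs

    i, j = torch.where(candidate)
    return [(a, b) for a, b in zip(i.tolist(), j.tolist()) if a < b]  # each pair once
```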
And so, how do these models do? Well, really badly. ChatGPT and Gemini do a little bit better than random guessing, but, like, half the performance of humans, who find these problems very easy. LLaVA is, interestingly, extremely negatively correlated with this dataset.
It does much, much, much, much worse than random guessing, which means that this process has done a very good job of identifying hard images for LLaVA specifically. And that's because LLaVA is basically not trained for very long and is initialized from CLIP, so you would expect it to do poorly on this dataset.
So, one of the solutions this paper attempts is basically saying, "Okay, well, if CLIP features aren't enough, what if we train the visual encoder of the language model also on DINOv2 features?" It proposes two different ways of doing this. One is additive, which is basically interpolating between the two features.
The other is interleaving, which is kind of like training on the combination of both sets of features. There's this really interesting trend when you do the additive mixture of features, where zero is all CLIP features and one is all DINOv2 features. I think it's helpful to look at the rightmost chart first: as you increase the proportion of DINOv2 features, your model does worse and worse on the actual language modeling task.
And that's because DINOv2 features were trained in a completely self-supervised manner, entirely in image space. It knows nothing about text. These features aren't really compatible with these text models, and so you can train an adapter all you want, but it seems it's in such an alien language that it's a very hard optimization problem for these models to solve.
And so, that kind of supports what's happening on the left, which is that, yeah, it gets better at answering these questions as you include more DINOv2 features up to a point, but then when you oversaturate, it completely loses its ability to do language tasks. With interleaving, they essentially double the number of tokens going into the model and just train on both.
And it still doesn't really solve the MMVP task. It gets LLaVA-1.5 above random guessing by a little bit, but it's still not close to ChatGPT or human performance, obviously. So, clearly, this proposed solution of just using DINOv2 features directly isn't gonna work, and basically what that means is that, as a vision foundation model, DINOv2 is gonna be insufficient for language tasks, right?
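For concreteness, here's roughly what the additive mixture-of-features variant looks like before the visual tokens go into the language model; the projection dimensions and the single mixing weight alpha are illustrative assumptions, not the paper's exact adapter.

```python
import torch
import torch.nn as nn

class AdditiveMixtureOfFeatures(nn.Module):
    """Additive mixing of CLIP and DINOv2 patch features.
    alpha=0.0 is all CLIP, alpha=1.0 is all DINOv2 (the sweep in the chart)."""
    def __init__(self, clip_dim=1024, dino_dim=1536, llm_dim=4096, alpha=0.25):
        super().__init__()
        self.alpha = alpha
        self.proj_clip = nn.Linear(clip_dim, llm_dim)   # adapter into LLM token space
        self.proj_dino = nn.Linear(dino_dim, llm_dim)

    def forward(self, clip_feats, dino_feats):
        # clip_feats: (B, N, clip_dim), dino_feats: (B, N, dino_dim)
        return (1 - self.alpha) * self.proj_clip(clip_feats) \
               + self.alpha * self.proj_dino(dino_feats)   # (B, N, llm_dim) visual tokens
```

The interleaved variant would instead concatenate the two projected token sequences along the sequence dimension, which is why it doubles the number of visual tokens.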
So, my next pick for best paper of 2024 would be Florence-2, which tries to solve this problem by incorporating not only this dimension of spatial hierarchy, which is to say pixel-level understanding, but also making sure to include what they call semantic granularity. The goal is basically to have features that are sufficient for finding objects in the image.
So, they have enough pixel information, but also can be talked about and can be reasoned about. And that's on the semantic granularity axis. So, here's an example of basically three different paradigms of labeling that they do. So, they create a big data set. One is text, which is just captioning.
You would expect a model that's trained only on captioning to perform similarly to ChatGPT and not have spatial hierarchy, not have features that are meaningful at the pixel level. So they add another type, region-text pairs, which is essentially either classifying a region, doing object detection, doing instance segmentation on that region, or captioning that region.
And then they have text phrase region annotations, which is essentially a triple. And basically, not only do you have a region that you've described, you also find its place in a descriptive paragraph about the image, which is basically trying to introduce even more semantic understanding of these regions. And so, for instance, if you're saying a woman riding on the road, you have to know what a woman is and what the road is and that she's on top of it.
And that's basically composing a bunch of objects in this visual space, but also thinking about it semantically. Right? And so, the way that they do this is they basically just dump features from a vision encoder straight into an encoder-decoder transformer, and then they train a bunch of different tasks, like object detection and so on, as language tasks.
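As a sketch of what "object detection as a language task" means in this family of models: box coordinates get quantized into a small vocabulary of location tokens, and the detection target becomes plain text the decoder can generate. The `<loc_*>` naming and the 1000-bin quantization below are illustrative assumptions, not Florence-2's exact format.

```python
def box_to_location_tokens(box, image_w, image_h, num_bins=1000):
    """Quantize a pixel-space (x1, y1, x2, y2) box into discrete location tokens."""
    def bin_coord(v, size):
        return min(num_bins - 1, int(v / size * num_bins))
    x1, y1, x2, y2 = box
    return "".join(
        f"<loc_{bin_coord(v, s)}>"
        for v, s in [(x1, image_w), (y1, image_h), (x2, image_w), (y2, image_h)]
    )

# A detection training example then becomes ordinary sequence-to-sequence text:
prompt = "<OD>"   # task prompt for object detection
target = "car" + box_to_location_tokens((48, 120, 220, 310), 640, 480) \
       + "dog" + box_to_location_tokens((300, 200, 420, 460), 640, 480)
print(target)     # car<loc_75><loc_250><loc_343><loc_645>dog<loc_468>...
```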
And I think that's one of the big things we saw in 2024: vision language models operating on pixel space linguistically. They introduce a bunch of new tokens to point to locations in pixel space. So, how does it actually do? If you look at the graph on the right, which is using the DINO detection framework, the pre-trained Florence-2 models transfer very, very well.
They get 60 mAP on COCO, which is approaching state-of-the-art, and they train much more efficiently. They converge a lot faster, and both of these things point to the fact that they're actually leveraging their pre-trained weights effectively.
So, where does it fall short? These models, I forgot to mention, Florence-2 comes in 0.2 billion and 0.7 billion parameter counts, so they're very, very small in terms of being a language model. And with this framework, you can see saturation. What this graph is showing is that if you train a Florence-2 model purely on the image-level and region-level annotations, not including the pixel-level annotations like segmentation, it actually performs better as an object detector.
And what that means is that it's not able to actually learn all the visual tasks that it's trying to learn because it doesn't have enough capacity. So, I'd like to see this paper explore larger model sizes, which brings us to our next big paper of 2024, or two papers.
So, PaliGemma came out earlier this year. PaliGemma 2 was released, I think, like a week or two ago. Oh, I forgot to mention, you can actually label text datasets on Roboflow and train a Florence-2 model, and you can actually train a PaliGemma 2 model on Roboflow, which we got into the platform within like 14 hours of release, which I was really excited about.
So, anyway, PaliGemma is essentially doing the same thing, but instead of an encoder-decoder, it just dumps everything into a decoder-only transformer model. It also uses the concept of location tokens to point to objects in pixel space. PaliGemma uses Gemma as the language model, specifically Gemma 2B.
PaliGemma 2 introduces multiple different sizes of the language model. The way they sort of get around having to do an encoder-decoder is a prefix-loss setup: when the model is generating tokens autoregressively, all the tokens in the prefix, which is the image it's looking at plus a description of the task it's trying to do, attend to each other with full, bidirectional attention.
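A minimal sketch of that prefix-LM attention mask (sizes are arbitrary; this is just the masking idea, not PaliGemma's implementation): image and task-prompt tokens attend to each other bidirectionally, while the generated suffix stays causal.

```python
import torch

def prefix_lm_mask(prefix_len, total_len):
    """Boolean attention mask (True = may attend). Prefix tokens (image + task
    prompt) get full bidirectional attention; suffix tokens attend causally
    and to the whole prefix."""
    mask = torch.tril(torch.ones(total_len, total_len, dtype=torch.bool))  # causal base
    mask[:prefix_len, :prefix_len] = True   # full attention over the prefix block
    return mask

# e.g. 4 image/prompt tokens followed by 3 generated tokens
print(prefix_lm_mask(prefix_len=4, total_len=7).int())
```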
This makes it easier for the prefix to color the output of the suffix and to just find features easily. So, this is an example of one of the tasks it was trained on: you describe the task in English, and then...
You're asking it to segment these two classes of objects, and it finds their locations using these location tokens and finds their masks using some encoding of the masks into tokens. And yeah, one of my critiques, I guess, of PaliGemma 1, at least, is that performance as a pre-trained model saturates after only 300 million examples seen.
What this graph is representing: each blue dot is performance on some downstream task. You can see that after seeing 300 million examples, it does about equally well on all of the downstream tasks they tried it on, which was a lot, as it does after one billion examples, which to me also kind of suggests a lack of capacity for this model.
For PaliGemma 2, you can see the results on object detection; these were transferred to COCO. This also points to an increase in capacity being helpful to the model: as both the resolution and the parameter count of the language model increase, performance increases.
Resolution makes sense: obviously it helps to find small objects in the image, but it also helps for another reason, which is that it kind of gives the model a thinking register; it gives it more tokens to process when making its predictions. But yeah, you could say, oh, 43.6, that's not that great.
Like, Florence-2 got 60, but this is not training a DINO or a DETR head on top of the image encoder. It's doing the raw language modeling task on COCO. So it doesn't have any of the bells and whistles; it doesn't have any of the fancy losses.
It doesn't even have bipartite graph matching or anything like that. Okay, the big result, and one of the reasons that I was really excited about this paper, is that they blow everything else away on MMVP. I mean, 47.3, sure, that's nowhere near human accuracy, which again is 94%, but for a 2 billion parameter language model to beat ChatGPT, that's quite the achievement.
And that sort of brings us to our final pick for paper of the year, which is AIMv2. AIMv2 sort of says, okay, maybe coming up with all these specific annotations to get features with high fidelity in pixel space isn't actually necessary, and we can come up with an even simpler, more beautiful idea for combining image tokens and text tokens in a way that's interfaceable for language tasks.
And this is nice because it can scale. You can come up with lots more data if you don't have to come up with all these annotations, right? The way it works is it does something very, very similar to PaliGemma, where you have a vision encoder that dumps image tokens into a decoder-only transformer.
But the interesting thing is that it also autoregressively reconstructs the image tokens with a mean squared error loss. So, instead of having to come up with fancy object detection or segmentation labels, you can just try to reconstruct the image and have it learn fine-grained features that way. And it does this in, I think, a beautiful way that's compatible with the PaliGemma line of thinking, which is randomly sampling a prefix length and using only that number of image tokens as the prefix.
So it's doing a similar thing with the causal mask: the prefix attention mask is the one on the right, doing full block attention over some randomly sampled number of image tokens, then reconstructing the rest of the image and the downstream caption for that image autoregressively. And so, this is the dataset they train on.
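A rough sketch of that combined objective is below. The module names, the `prefix_len` argument on the decoder, the patch size, and the loss weighting are all assumptions for illustration, not the paper's exact setup; the shape of the idea is full attention over a randomly sized image prefix, then autoregressive MSE on the remaining patches plus next-token loss on the caption.

```python
import random
import torch
import torch.nn.functional as F

def patchify(images, patch=14):
    """Split (B, 3, H, W) images into flattened (B, N, 3*patch*patch) pixel patches."""
    B, C, H, W = images.shape
    x = images.unfold(2, patch, patch).unfold(3, patch, patch)
    return x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch * patch)

def aimv2_style_step(vision_encoder, decoder, pixel_head, text_head,
                     images, caption_ids):
    """One illustrative training step; assumes one vision token per image patch."""
    img_tokens = vision_encoder(images)            # (B, N, D)
    targets = patchify(images)                     # (B, N, 3*p*p) raw pixel patches
    B, N, _ = img_tokens.shape
    prefix_len = random.randint(1, N - 1)          # randomly sampled prefix length

    txt_tokens = decoder.embed_text(caption_ids)   # (B, T, D)
    hidden = decoder(torch.cat([img_tokens, txt_tokens], dim=1),
                     prefix_len=prefix_len)        # full attention inside prefix, causal after

    # predict the *next* pixel patch for every position after the prefix
    patch_pred = pixel_head(hidden[:, prefix_len - 1 : N - 1])
    patch_loss = F.mse_loss(patch_pred, targets[:, prefix_len:])

    # standard next-token prediction for the caption
    T = caption_ids.shape[1]
    text_logits = text_head(hidden[:, N - 1 : N + T - 1])
    text_loss = F.cross_entropy(text_logits.reshape(-1, text_logits.shape[-1]),
                                caption_ids.reshape(-1))
    return patch_loss + text_loss
```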
The training data is internet-scale, very high-quality data created essentially by the Data Filtering Networks paper, which is maybe the best CLIP-style data that exists. And we can see that this is finally a model that doesn't saturate: even at the highest parameter count, it appears to keep improving in performance with more and more samples seen.
And so, you can sort of think that, you know, if we just keep bumping the parameter count and increasing the examples seen, which is the line of thinking for language models, then it'll keep getting better. Oh, and it also improves with resolution, which you would expect.
This is ImageNet classification accuracy, and yeah, it does better as you increase the resolution, which means it's actually leveraging and finding fine-grained visual features. And so, how does it actually do compared to CLIP on COCO? Well, you can see that if you slap a transformer detection head on it and train it on COCO, it gets to 60.2, which is also within spitting distance of SOTA, which means it does a very good job of finding visual features.
But you could say, okay, well, wait a second, CLIP got to 59.1, so, like, how does this prove your claim at all? Because doesn't that mean, like, CLIP, which is known to be CLIP-blind and do badly on MMVP, it's able to achieve a very high performance on this fine-grained visual features task of object detection?
Well, they train on, like, tons of data: Objects365, COCO, Flickr, and everything else. So I think this benchmark doesn't do a great job of selling how good of a pre-trained model AIMv2 is, and we would like to see performance with fewer data examples, not trained to convergence on object detection.
So, seeing it in the real world on a dataset like Roboflow 100, I think, would be quite interesting. And our final, final pick for paper of 2024 would be Moondream. So, introducing Vik to talk about that. - But overall, that was exactly what I was looking for.
Like, best of 2024, amazing job. Does anyone have questions, like vision stuff, while Vik gets set up? Yeah, go ahead. - Hi, over here. While we're getting set up, thanks for the really awesome talk. One of the things that's been weird and surprising is that the foundation model companies and even these VLMs are just, like, worse than RT-DETR at detection still.
Like, if you wanted to pay a bunch of money to auto-label your detection dataset, giving it to OpenAI or Claude would be, like, a big waste. Even PaliGemma 2 is worse. So I'm curious to hear your thoughts on how come nobody's cracked the code on a generalist that really, you know, beats a specialist model in computer vision like they have in LLM land?
- I can, can you hear me? - Yeah, you gotta press the speak button. - Okay. - Oh, yeah. (laughing) - It's a very, very interesting question. I think it depends on the specific domain. For image classification, it's basically there. AIMv2 showed that a simple attentional probe on the pre-trained features gets, like, 90%, which is about as well as anyone does.
The bigger question is why it isn't transferring to object detection, especially real-time object detection. In my mind, there are two answers. One is that object detection architectures are super domain-specific; we see all these super, super complicated things, and it's not easy to build something that just transfers naturally, whereas for image classification, you know, CLIP pre-training transfers super, super easily.
And the other thing is, until recently, the real-time object detectors didn't even really benefit from pre-training. You see the YOLOs that are essentially saturated, showing very little difference from pre-training improvements, or from using a pre-trained model at all, so it's not surprising, necessarily, that people aren't looking at the effects of better and better pre-training on real-time detection.
Maybe that'll change in the next year. Does that answer your question? - Cool. Can you guys hear me? One thing I want to add, just to summarize, is that until 2024 we hadn't really seen a combination of transformer-based object detectors and fancy losses, and PaliGemma suffers from the same problem, which is basically to say that the ResNets, or the convolutional models, have all these extreme optimizations for doing object detection, but I think it's kind of been shown now that convolutional models just don't benefit from pre-training and just don't have the level of intelligence of transformer models.
- Awesome. Balundri. - Hi, can you hear me? - Cool. - I can hear you, see you. Are you sharing your screen? - I might have forgotten to do that. Let me do that. - Sorry, you should've done that. - Okay. - Here's your screen. - Uh-oh, classic. You might have to quit Zoom and restart.
- What? - It's fine. Yeah, it's like, we have a capture of your screen. I'll just make sure it's visible. So let's get to your screen. - Okay. Easy enough. - How do you make it, like, wait for you? - Quit Zoom. No. - Yeah, yeah, there you go.
Perfect. - All right. Hi, everyone. My name is Vik. I've been working on Moondream for almost a year now, like Sean mentioned. I just went and looked, and it turns out the first version, I released December 29, 2023. It's been a fascinating journey. So Moondream started off as a tiny vision language model.
Since then, we've extended scope a little bit to also try and build some tooling, client libraries, et cetera, to help people really deploy it. Unlike traditional large models that are focused on assistant-type use cases, we're laser-focused on building capabilities that developers can use to build vision applications that can run anywhere.
So in a lot of cases for vision more so than for text, you really care about being able to run on the edge, run in real time, et cetera. So that's really important. We have different output modalities that we support. There's query where you can ask general English questions about an image and get back human-like answers.
There's captioning, which a lot of our users use for generating synthetic datasets to then train diffusion models and whatnot. We've done a lot of work to minimize hallucinations there, so that's used a lot. We have open vocabulary object detection built in, similar to a couple of more recent models like PaliGemma, et cetera, where rather than having to train a dedicated model, you can just say, "Show me soccer balls in this image," or, "Show me if there are any deer in this image," and it'll detect them.
More recently, earlier this month, we released pointing capability, where if all you're interested in is the center of an object, you can just ask it to point out where that is. This is very useful when you're doing UI automation-type stuff. Let's see. We have two models out right now.
There's a general-purpose 2B param model, which runs fine if you're running on a server. It's good for our local LLaMA desktop friends, and it can run on flagship mobile phones, but it never really fulfilled the promise of being able to run anywhere. Last week, we released a new 0.5B param model, which should be seen less as a general-purpose model and more as a distillation target.
It's very good if you're running on older mobile phones or edge devices. It uses less memory, even with our not-yet-fully-optimized inference client. The way we built our 0.5B model was to start with the 2B parameter model and prune it while doing continual training to retain performance. Our objective during the pruning was to preserve accuracy across a broad set of benchmarks.
So the way we went about it was to estimate the importance of different components of the model, like attention heads, channels, MLP rows and whatnot, using basically a technique based on the gradient. I'm not sure how much people want to know details. We'll be writing a paper about this, but feel free to grab me if you have more questions.
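Since the details aren't published yet, here is only a generic sketch of the kind of gradient-based importance scoring being described, a common first-order criterion, and explicitly not Moondream's actual method; `loss_fn` and the choice of which parameters to score are assumptions.

```python
import torch

def importance_scores(model, loss_fn, batches, param_names):
    """Generic first-order importance: accumulate |weight * gradient| over a few
    calibration batches. Structures whose removal would change the loss most
    score highest. Purely illustrative."""
    params = dict(model.named_parameters())
    scores = {name: torch.zeros_like(params[name]) for name in param_names}
    for batch in batches:
        model.zero_grad()
        loss_fn(model, batch).backward()
        for name in param_names:
            grad = params[name].grad
            if grad is not None:
                scores[name] += (params[name].detach() * grad.detach()).abs()
    return scores

# Iterative schedule: prune a small chunk of the lowest-scoring structures
# (heads, channels, MLP rows), retrain briefly to recover, then repeat.
```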
Then we iteratively prune a small chunk that minimizes the loss in performance, retrain the model to recover that performance, and repeat. The 0.5B we released is more of a proof of concept that this is possible. I think the thing that's really exciting about this is that it makes it possible for developers to build using the 2B param model, just explore and build their application, and then once they're ready to deploy, figure out what exactly they need out of the model and prune those capabilities into a smaller form factor that makes sense for their deployment target. So yeah, very excited about that.
Let me talk to you folks a little bit about another problem I've been working on recently, which is similar to the clocks example we've been talking about. We had a customer reach out who had a bunch of gauges out in the field. This is very common in manufacturing and oil and gas, where you have a bunch of analog devices that you need to monitor.
It's expensive to have humans look at that and monitor stuff and make sure that the system gets shut down when the temperature goes over 80 or something. So I was like, yeah, this seems easy enough. Happy to help you distill that. Let's get it going. Turns out our model couldn't do it at all.
I went and looked at other open source models to see if I could just generate a bunch of data and learn from that. That did not work either. So I was like, let's look at what the folks with hundreds of billions of dollars in market cap have to offer.
And yeah, that doesn't work either. My hypothesis is that these models are trained using a large amount of image-text data scraped from the internet, and that can be biased. In the case of gauges, most gauge images online aren't gauges in the wild; they're product detail images like these, where the needle is always set to zero.
It's paired with an alt text that says something like G-I-V-T-O pressure sensor, PSI zero to 30 or something. And so the models are fairly good at picking up those details. It'll tell you that it's a pressure gauge. It'll tell you what the brand is, but it doesn't really learn to pay attention to the needle over there.
And so, yeah, that's a gap we need to address. So naturally my mind goes to, like, let's use synthetic data to solve this problem. That works, but it's problematic, because it turned out we needed millions of synthetic gauge images to get to reasonable performance. And thinking about it, reading a gauge is not a single-shot process in our minds, right?
Like, if you had to tell me the reading in Celsius for this real-world gauge, there are two dials on there. So first you have to figure out which one you should be paying attention to, the inner one or the outer one. You look at the tip of the needle, you look at what labels it's between, and then you count how many ticks and do some math to figure out what the reading probably is.
So what happens if we just add that as chain of thought, to allow the model to better learn the subtasks it needs to perform to accomplish this goal? You can see in this example, which was actually generated by the latest version of our model.
It's like, okay, Celsius is the inner scale. It's between 50 and 60. There's 10 ticks. It's at the second tick. It's a little debatable here. Like there's a weird shadow situation going on. The dial is off. So I don't know what the ground truth is, but it works okay.
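To spell out the arithmetic that chain of thought is doing (the tick spacing and label values below are assumptions inferred from the examples, not ground truth from the slides):

```python
def gauge_reading(lower_label, upper_label, ticks_between, tick_index):
    """Reading = lower label + ticks counted * (label span / ticks between labels)."""
    return lower_label + tick_index * (upper_label - lower_label) / ticks_between

# The example above: inner Celsius scale, needle between 50 and 60,
# 10 ticks between the labels, needle at the second tick.
print(gauge_reading(50, 60, 10, 2))                                 # 52.0

# The later failure case (assumed 2 degrees per tick counting from 40):
# the 7th tick gives the correct 54, mis-counting it as the 8th gives 56.
print(gauge_reading(40, 60, 10, 7), gauge_reading(40, 60, 10, 8))   # 54.0 56.0
```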
The points over there are actually grounded. I don't know if this is easy to see, but when I click on them, there's a little red dot that moves around on the image; the model actually has to predict where those points are. I was originally trying to do this with bounding boxes, but then Molmo came out with pointing capabilities, and pointing is a much better paradigm to represent this.
We see pretty good results. This one's actually for clock reading; I couldn't find our chart for gauge reading at the last minute. The light blue line is with our grounded chain of thought. We built a clock reading benchmark of about 500 images, and this measures accuracy on that.
You can see it's a lot more sample efficient when you're using the chain of thought to help the model. Yep. Another big benefit of this approach is you can kind of understand how the model is doing it and how it's failing. So in this example, the actual correct reading is 54 Celsius; the model output 56.
Not too bad, but you can actually go and see where it messed up. It got a lot of these right, except instead of saying it was on the seventh tick, it predicted it was the eighth tick, and that's why it went with 56. So now that you know it's failing in this way, you can adjust how you're doing the chain of thought, to maybe say, like, actually count out each tick from 40 instead of just trying to say it's the eighth tick.
Or you might say, like, okay, I see that there's that middle mark, I'll count from there instead of all the way from 40. So it helps a ton. The other thing I'm excited about is few-shot prompting or test-time training with this. If a customer has a specific gauge that we're seeing minor errors on, they can give us a couple of examples: if it's misdetecting the needle, they can go in and correct that in the chain of thought, and hopefully it works the next time.
Now, it's an exciting approach, but we've only applied it to clocks and gauges. The real question is, is it going to generalize? Probably; there's some evidence from text models that when you train on a broad number of tasks, it does generalize, and I'm seeing some signs of that with our model as well.
So in addition to the image-based chain of thought stuff, I also added some spelling-based chain of thought to help it better understand OCR, I guess. I don't understand why everyone doesn't do this, by the way; it's a trivial benchmark question that's very, very easy to nail. But I also wanted to support stuff like license plate partial matching, like, hey, does any license plate in this image start with WHA or whatever?
So yeah, that sort of worked. All right, that ends my story about the gauges. If you think about what's going on over here, it's interesting that LLMs are showing enormous progress in reasoning, especially with the latest set of models that we've seen. But I have a feeling that VLMs are lagging behind, as we can see with these tasks that should be very simple for a human to do, but are very easy to find VLMs failing at.
My hypothesis on why this is the case is because on the internet, there's a ton of data that talks about how to reason. There's books about how to solve problems. There's books critiquing the books about how to solve problems. But humans are just so good at perception that we never really talk about it.
Maybe in art books, where it's like, hey, to show that that mountain is further away, you need to desaturate it a bit or whatever, but the actual data on how to look at images isn't really present. Also, the data we do have is kind of sketchy. The best source of data we have is image alt-text pairs on the internet, and that's pretty low quality.
So yeah, I think our solution here is really just, we need to teach them how to operate on individual tasks and figure out how to scale that out. All right, yep. So conclusion, at Moondream we're trying to build amazing VLMs that run everywhere. Very hard problem, much work ahead, but we're making a ton of progress that I'm really excited about.
If anyone wants to chat about more technical details about how we're doing this, or is interested in collaborating, please hit me up. - Yeah, when people say multi-modality, I always think about vision as the first among equals of all the modalities. So I really appreciate having the experts.
- This is the year that vision language models became mainstream, with every model from GPT-4o to o1 to Claude 3 to Gemini 1 and 2 to Llama 3.2 to Mistral's Pixtral to AI2's Pixmo going multi-modal. We asked Peter and Isaac to highlight the best work in computer vision for 2024.
And they blew us away with a complete overview. As a special bonus, we also got a talk from Vik Korrapati at Moondream, who gave an incredible talk at this year's AI Engineer World's Fair on his tiny 0.5 billion parameter pruned vision language model that absolutely slaps. As always, don't forget to check the show notes for the YouTube link to their talk, as well as their slides.
Watch out and take care.