
AI Won't Be AGI, Until It Can At Least Do This (plus 6 key ways LLMs are being upgraded)



00:00:00.000 | Can you spot the following pattern? If you look at this grid here and see how it's been transformed
00:00:06.320 | into this grid, and likewise how this grid on the left has been transformed into this grid,
00:00:12.240 | can you spot that pattern? Could you, in other words, predict what would happen to
00:00:17.040 | this grid here in the test example? Well, it might shock you to learn that language models like GPT-4
00:00:24.880 | can't do this. They are terrible at noticing that the little squares are being filled in with a
00:00:31.920 | darker shade of blue. This specific abstract reasoning challenge was not in the training
00:00:37.200 | data set of GPT-4 and that model cannot generalize from what it has seen to solve the challenge. It's
00:00:43.920 | not generally intelligent, it's not an artificial general intelligence. Now, you might think that's
00:00:49.440 | a minor quibble, but it gets to the heart of why current generation AI is not AGI and frankly isn't
00:00:57.040 | even close. And no, neither will this problem be solved by a simple naive scaling up of our models.
00:01:04.080 | But this video isn't just about picking out one flaw, albeit a critical one, in our current LLMs.
00:01:10.400 | It's more my attempt to address this swirling debate about whether AI is overhyped or underhyped.
00:01:17.600 | For many, AI is nothing but hype and is a giant bubble, while for others, AGI has already arrived
00:01:25.200 | or is just months away. But you don't need an opinion, what does the evidence show? This video
00:01:30.800 | will draw upon dozens of papers and reports to give you the best snapshot I can on LLMs and AI
00:01:37.120 | more broadly. I'm going to start, I'm afraid, with so much of what's wrong with the current
00:01:42.480 | landscape, from delayed releases, overpromises, and my biggest current concern, a tragedy of the
00:01:49.360 | commons from AI slop. But then I will caution those who might throw the baby out with the
00:01:55.680 | bathwater, giving six detailed evidence-based pathways, from the LLMs we have today, to
00:02:02.160 | substantially more powerful and useful models. Including systems that, yes, perform decently,
00:02:07.840 | even on that abstract reasoning challenge I showed you earlier. But let's start with the
00:02:12.880 | dodgy stuff in "AI". And no, I'm not referring to the fact that the creators and funders of this
00:02:19.280 | ARC-AGI challenge are so confident that current generation LLMs cannot succeed,
00:02:25.600 | that they've put up a prize pool of over a million dollars. If GPT-4o were a pure
00:02:31.200 | reasoning engine, well then why is its performance negligible in this challenge?
00:02:36.080 | But no, I'm actually referring to the landscape of over-promising and under-delivering.
00:02:42.640 | You might remember Demis Hassabis referring to the original Gemini model when it was launched
00:02:47.120 | as being "as good as human experts". But Google has had to roll back its LLM-powered AI overview
00:02:53.600 | feature because there were simply far too many mistakes. If Gemini was as good as human experts,
00:02:59.280 | as some benchmarks were claiming to show, then why wouldn't it be better than a random Google search?
00:03:06.000 | But surely the newly announced Apple Intelligence will be far better though?
00:03:09.840 | Well, aside from the fact that we can't actually test it, no, Tim Cook admitted that it still
00:03:14.480 | hallucinates all the time. Now to some, as we'll see, that's actually the point of LLMs,
00:03:19.200 | we want them to be creative. But to others, it smells more like BS.
00:03:24.080 | This particular paper, aside from being quite funny, gave one clear take-home message.
00:03:28.880 | Language models aren't actually designed to be correct. They aren't designed to transmit
00:03:34.880 | information. So don't be surprised when their assertions turn out to be false.
00:03:39.520 | Of course, people are working on that and the paper even makes reference to the
00:03:43.920 | Let's Verify Step-by-Step paper, more on that later, but the point remains.
00:03:48.480 | So we have the hallucinations in AI and then the hallucinations in the marketing about AI.
00:03:54.640 | And the fact that we have AI-powered toothbrushes isn't even my primary concern.
00:03:59.760 | And nor is it actually that we occasionally have those wildly overhyped products like the Rabbit
00:04:05.200 | R1 and the Humane AI Pin. Then there's the possible breaches in privacy that features like
00:04:11.360 | Microsoft's Recall seem to inevitably invite. I don't know about you, but it was never appealing
00:04:16.080 | to me to have an LLM analyse screenshots taken of my desktop every few seconds.
00:04:21.280 | Imagine the poor LLM trying to sift through all of the creations that
00:04:25.760 | some are likely to come up with using OpenAI's tools.
00:04:29.680 | You might think I've reached the end of my dodgy list, but I'm not actually even close.
00:04:34.480 | What about the increase in academics using LLMs to write or polish papers?
00:04:40.000 | You can see the recent and dramatic increase in the use of the word 'delve' in papers on PubMed.
00:04:47.520 | For me, as soon as I suspect an article I'm reading is LLM generated, I just discount it heavily.
00:04:53.200 | Then we get the delayed releases. Now this one
00:04:56.240 | is arguably a bit more forgivable, but we were promised GPT-4o within a few weeks.
00:05:02.640 | I think we would all prefer a tradition where features are announced the moment
00:05:07.840 | they're actually available. But I will come back to GPT-4o in a moment.
00:05:12.640 | Now you might be different, but for me the number one concern at the moment
00:05:17.520 | is just AI generated slop.
00:05:20.160 | Take this tool where on LinkedIn you can imitate the writing of someone in your field or industry.
00:05:26.800 | Are they going viral while you get little to no engagement?
00:05:30.480 | Copy their tone and address the same topic.
00:05:32.880 | And I would call this a kind of tragedy of the commons.
00:05:36.000 | For the individuals using this, and I'm not meaning to pick on one individual tool,
00:05:40.320 | but for the individuals using this, it's probably pretty helpful.
00:05:43.360 | It probably does boost engagement and help sort out any language issues.
00:05:47.360 | But as we've seen of late on Facebook, it just leads to this general AI generated miasma.
00:05:53.520 | Bots engaging with bots. Gullible people drawn in and fooled.
00:05:58.160 | A landscape where increasingly you can't trust what you see or even hear.
00:06:02.640 | And that's before even getting to the topic of deepfakes,
00:06:05.200 | which of course can acutely affect the individuals concerned.
00:06:09.040 | You can, of course, let me know if you agree.
00:06:11.200 | But for me, this is at the moment the number one concern
00:06:15.200 | and it's hard to see how it would be stopped.
00:06:17.920 | At this point, you're probably thinking, damn, this is pretty negative on AI.
00:06:22.080 | But as you can see, the video has plenty of time to go.
00:06:25.680 | But at this point, my rejoinder to some is that
00:06:29.040 | it's all too easy to fall into diametrically opposed camps.
00:06:33.360 | As we saw last year with accelerationists and doomers,
00:06:36.960 | we're seeing this year with those who say AI is nothing but hype
00:06:40.640 | and those who say AGI is imminent.
00:06:42.880 | But I think the world is just so much more complex than that.
00:06:46.320 | I reckon that if I was blind, I would forgive the occasional hallucination with GPT-4o
00:06:51.600 | and be grateful to have a model that can tell me about the world around me.
00:06:56.000 | Oh, and interactively and in real time.
00:06:58.400 | Try and tell me exactly what they're doing right now, please.
00:07:01.840 | Um, right now, the ducks are gently gliding across the water.
00:07:06.560 | They're moving in a fairly relaxed manner, not in a hurry.
00:07:10.000 | Occasionally, one of them will dip its head under the water,
00:07:13.680 | probably looking for food, and then pop back up.
00:07:16.880 | I need to know when a taxi is coming with its orange light on.
00:07:22.080 | I think I'll hail it to get home.
00:07:23.440 | Yes, I spotted one just now.
00:07:27.840 | It's heading your way on the left side of the road.
00:07:31.120 | Get ready to wave it down.
00:07:32.560 | Great job hailing that taxi.
00:07:39.280 | It looks like you're all set to go.
00:07:40.800 | That's a good dog right there, leading the way into the taxi.
00:07:43.760 | Safe travels.
00:07:45.360 | And remember, too, that as Project Astra from Google demonstrated,
00:07:50.160 | models are getting better at ingesting more and more tokens, more and more context.
00:07:56.080 | They are increasingly able to help users of any kind locate things not just in text,
00:08:02.160 | but in the real world.
00:08:03.840 | Now, we'll get back to LLMs in a moment,
00:08:05.680 | but of course, it's worth remembering that there is far more to neural networks than just LLMs.
00:08:11.120 | A new study in Nature showed how you could use GANs, Generative Adversarial Networks,
00:08:14.800 | to predict the effects of untested chemicals on mice.
00:08:18.800 | Gen AI, in this case, was able to simulate a virtual animal experiment
00:08:24.480 | to generate profiles similar to those obtained from traditional animal studies.
00:08:28.720 | The TL;DR is not just that the Gen AI predictions had less error,
00:08:32.720 | but they were produced much more quickly.
00:08:34.880 | And as the BBC reported, this is at least one tentative step toward an end to animal testing.
00:08:41.040 | And yes, that study was different to this one released a few days ago with
00:08:45.040 | Harvard and Google DeepMind, where they built a virtual rodent powered by AI.
00:08:50.000 | Again, a highly realistic simulation,
00:08:52.160 | but this time down to the level of neural activity inside those real rats.
00:08:57.280 | Then of course, we have the good old convolutional neural networks for image analysis.
00:09:02.160 | Their use in the Brainomix eStroke system is, as we speak, helping stroke victims in the NHS.
00:09:09.360 | Essentially, it has enabled diagnoses to be made by clinicians much more quickly,
00:09:13.600 | which in the case of strokes is super important, of course,
00:09:16.160 | and that has tripled the number of patients recovering.
00:09:18.880 | Now, the only tiny, tiny complaint I would make is that, as ever,
00:09:22.800 | the title just uses the phrase AI.
00:09:25.200 | It took me a disturbing amount of digging to track down the actual techniques used.
00:09:30.080 | And even then, of course, for commercial reasons, they don't say everything.
00:09:33.920 | But now I want to return to large language models, the central focus of this channel.
00:09:38.640 | Now, I have discussed in the past on this channel some of the reasoning gaps
00:09:43.040 | that you can find in current generation large language models.
00:09:46.560 | But the ARC Abstract Reasoning Challenge and Prize publicized this week by the
00:09:52.800 | legendary Francois Chollet is a great opportunity to clarify what exactly our current models miss
00:10:00.880 | and what is being done to rectify that gap.
00:10:03.920 | Hopefully, at least, the following will explain in part
00:10:07.920 | why our models can sometimes be shockingly dumb and shockingly smart.
00:10:13.280 | I'm going to leave in the background for a minute
00:10:15.280 | an example of the kind of challenge that models like GPT-4 fail at consistently.
00:10:20.640 | So here is, in 60 seconds, what the current issue is.
00:10:24.560 | If language models haven't seen a solution to something in their training data,
00:10:30.240 | they won't be able to give you a solution when you test them.
00:10:33.920 | That's why models fail at this challenge.
00:10:36.480 | They simply haven't seen these tests before.
00:10:39.040 | Moreover, the models aren't generally intelligent.
00:10:42.240 | You can train them on millions of these kind of examples, and people have tried,
00:10:47.440 | and they'll still fail on a new, fresh one.
00:10:50.880 | Again, if that new, fresh example isn't in the training dataset, they will fail.
00:10:56.080 | If "the mother of Gabriel Macht" is in the dataset, they will output the correct answer.
00:11:02.400 | If, however, "the son of Suzanne Victoria Pullia" is not in the dataset, it will not know.
00:11:09.200 | It doesn't reason its way to the answer based on other parts of its training data.
00:11:14.240 | So how can models do so well on certain benchmarks, like the MATH benchmark?
00:11:18.560 | Well, they can "recall" from their training dataset certain reasoning chains that they've seen before.
00:11:25.280 | That's enough in certain circumstances to get the answer right.
00:11:28.240 | So hang on a second, they can recall certain reasoning procedures,
00:11:32.320 | let's call them programs, but they can't create them.
00:11:35.520 | Yes, it's a good news, bad news kind of situation.
00:11:39.440 | But for the remainder of this video, indeed for as long as we're using LLMs, it will be important to remember that distinction.
00:11:46.480 | Recalling reasoning procedures or programs versus doing fresh reasoning itself.
00:11:53.040 | If it's seen it before, great. If it hasn't seen it before, not so great.
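To make that distinction concrete, here is a minimal sketch (my own illustration, not code from ARC or any of the papers mentioned) of what "retrieving a program" looks like versus needing to synthesize one. The task names and transformations are hypothetical stand-ins for ARC-style rules.

```python
# Toy illustration: solving by retrieval only works if the needed "program" was stored in advance.
from typing import Callable, Dict, List

Grid = List[List[int]]

def recolor(grid: Grid, old: int, new: int) -> Grid:
    """Replace every cell of one colour with another."""
    return [[new if cell == old else cell for cell in row] for row in grid]

def transpose(grid: Grid) -> Grid:
    """Mirror the grid across its diagonal."""
    return [list(row) for row in zip(*grid)]

# A "memorized" library of programs, indexed by task.
PROGRAM_LIBRARY: Dict[str, Callable[[Grid], Grid]] = {
    "fill-light-blue-with-darker-blue": lambda g: recolor(g, old=1, new=2),
    "mirror-grid": transpose,
}

def solve_by_retrieval(task_id: str, test_input: Grid) -> Grid:
    """Pure recall: succeeds only when the task has been seen (stored) before."""
    if task_id not in PROGRAM_LIBRARY:
        raise KeyError(f"no stored program for '{task_id}' - fresh reasoning would be needed")
    return PROGRAM_LIBRARY[task_id](test_input)

if __name__ == "__main__":
    grid = [[0, 1], [1, 0]]
    print(solve_by_retrieval("fill-light-blue-with-darker-blue", grid))  # retrieval works: [[0, 2], [2, 0]]
    try:
        solve_by_retrieval("rotate-then-recolor", grid)                  # never seen before
    except KeyError as e:
        print(e)
```

Retrieval is cheap when the program is already there; the whole point of ARC is that, for a genuinely fresh task, it isn't.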
00:11:58.160 | Okay, so we're almost ready for those six ways which might drag LLMs closer to that artificial general intelligence.
00:12:06.320 | But first, I want to quickly focus on one thought you may have had in reaction to this framework.
00:12:11.920 | Why not train language models on every reasoning procedure out there?
00:12:16.480 | Feed it data on any scenario it might encounter and wouldn't that be AGI?
00:12:21.760 | Well, here's Francois Chollet, the author of the ARC challenge.
00:12:25.360 | He'll explain why memorization isn't enough because we don't see the whole map.
00:12:30.640 | If the world, if your life were a static distribution, then sure you could just brute force the space of possible behaviors.
00:12:38.640 | You can think of intelligence as a pathfinding algorithm in future situation space.
00:12:44.080 | Like, I don't know if you're familiar with game development, like RTS game development, but you have a map, right?
00:12:50.480 | And you have, it's like a 2D map and you have partial information about it.
00:12:55.840 | There is some fog of war on your map, there are areas that you haven't explored yet, you know nothing about them.
00:13:01.680 | And then there are areas that you've explored, but you only know how they were like in the past.
00:13:06.480 | And now instead of thinking about a 2D map, think about the space of possible future situations that you might encounter and how they're connected to each other.
00:13:14.880 | Intelligence is a pathfinding algorithm.
00:13:16.720 | So once you set a goal, it will tell you how to get there optimally.
00:13:22.880 | But of course, it's constrained by the information you have.
00:13:26.400 | It cannot pathfind in an area that you know nothing about.
00:13:30.400 | If you had complete information about the map, then you could solve the pathfinding problem by simply memorizing every possible path, every mapping from point A to point B.
00:13:41.120 | Solve the problem with pure memory.
00:13:42.960 | But the reason you cannot do that in real life is because you don't actually know what's going to happen in the future.
00:13:49.280 | So if AI will be encountering novel situations, it will need to adapt on the fly.
00:13:55.520 | What would make me change my mind about that is basically if I start seeing a critical mass of cases where you show the model with something it has not seen before.
00:14:05.920 | A task that's actually novel from the perspective of its training data, something that's not in training data.
00:14:11.680 | And if it can actually adapt on the fly.
00:14:14.400 | And that might be more possible than you think, as Noam Brown of OpenAI thinks.
00:14:19.760 | He's optimistic that LLMs will crack it.
00:14:22.160 | But again, it won't be just a naive scaling up of data that alone gets us there.
00:14:27.600 | Without examples or zero shot, as we've seen, models don't generalize from what they've seen to what they haven't.
00:14:34.720 | This paper demonstrates that in the visual domain, as we've already seen in the text domain.
00:14:39.680 | No matter what neural network architecture or parameter scale they tested, models were data hungry.
00:14:45.920 | Unlike a human child, they didn't learn in a sample efficient manner.
00:14:49.840 | Remember that a child might be shown one image of a camel with the caption camel and retain that term for life.
00:14:57.280 | However, midway through the paper, there was some tentative evidence that with enough scale, you can get decent results on rarely found concepts.
00:15:06.320 | Those were represented by the "let it wag" dataset referring to the "long tail" of a distribution.
00:15:12.800 | Anyway, as you can see, models that performed at over 80% on this traditional, well-known ImageNet accuracy test,
00:15:20.000 | did perform fairly well on this "long tail" dataset.
00:15:24.160 | As they note, the gap in performance does seem to be decreasing for higher capacity models.
00:15:29.280 | In other words, more data definitely helps, especially exponentially more data, but it definitely won't be enough.
00:15:36.240 | But even this paper pointed to some ways that this challenge might be overcome.
00:15:41.280 | They note possibilities for not only retrieval mechanisms, but compositional generalization.
00:15:46.560 | In other words, piecing together concepts that have been found in the training dataset to be able to recognize more complex ones.
00:15:53.120 | And if you think that's impossible, I've got news for you in a moment.
00:15:56.400 | Just before I do though, I want to get to one other approach that I don't think will work.
00:16:01.760 | You may or may not have heard on the Twittersphere about the "Situational Awareness" 165-page report put out by a former OpenAI researcher.
00:16:11.760 | I've done a full 45-minute breakdown on my Patreon, AI Insiders, but one key takeaway is this.
00:16:18.640 | He thinks the straight march to AGI will be achieved by scaling up the parameters and data of our current LLMs.
00:16:27.360 | But again, I think that's far too simplistic.
00:16:30.000 | Just throwing on more parameters and more data wouldn't resolve the kind of issues you've seen today.
00:16:35.920 | Leopold Aschenbrenner also made some other somewhat crazier claims, but that's for another video.
00:16:41.360 | So it's time to get to the first of those six methods that I mentioned that might drag LLMs closer to something we might call a general intelligence.
00:16:50.640 | And this paper published late last year in Nature is about that compositionality I mentioned just a moment ago.
00:16:57.920 | Perhaps models can't reason, but if they can better compose reasoning blocks into something more complex, might that be enough?
00:17:05.600 | Well, the authors prove that point, in principle at least, on just a 1.4 million parameter transformer-based model.
00:17:12.640 | Boldly, they claim, "Our results show how a standard neural network architecture, optimized for its compositional skills, can mimic human systematic generalization in a head-to-head comparison."
00:17:24.880 | And as the lead author of the paper retweeted, "The answer to better AI is probably not just more training data, but rather diversifying training strategies."
00:17:36.320 | TL;DR: send the robot to algebra class every so often.
00:17:40.720 | And the challenge, as you can see in the bottom left, was this.
00:17:44.160 | It might even remind you a little bit of the ARC challenge.
00:17:47.280 | The challenge was to work out what this made-up language fragment actually meant, based on these rules.
00:17:53.440 | Given enough time, humans tend to do fairly well at this challenge, like the ARC challenge, but models like GPT-4, as we'll see, flop hard.
00:18:01.680 | Remember, GPT-4 reportedly has around 1.8 trillion parameters, versus the model they trained at 1.4 million.
00:18:09.120 | Anyway, with enough time in your case, and training in the case of their transformer model, it worked out that, for example, "Hu" means "two", whereas "Sa" means "green" and "Ri" means "light blue".
00:18:22.640 | There are other rules to work out, of course, and then those rules have to be composed together to work out the question-answer.
00:18:29.440 | Now, remember, when they were tested, they were tested on a new configuration of words.
00:18:34.480 | They had to "understand" the made-up rules of a new language and apply them to phrases they hadn't been trained on.
00:18:42.480 | It was able to do so, showing, in other words, the first tentative hints of true reasoning.
00:18:48.480 | It's only a very small step towards AGI, though, because when they tested these flickers of compositional reasoning on a new task, this tiny model failed.
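To show what "composing rules" amounts to once they have been worked out, here is a minimal hand-written interpreter for a made-up language fragment. The word meanings follow the examples above (hu, sa, ri); the extra word "zo" and the simplified grammar are my own additions, and the point of the Nature paper is that a network can learn this composing behaviour from examples rather than having it hard-coded as I do here.

```python
# Toy interpreter: a colour rule and a number rule combine to give meaning to unseen phrases.
from typing import List

PRIMITIVES = {"sa": "green", "ri": "light-blue"}   # colour words (from the video's examples)
MODIFIERS  = {"hu": 2, "zo": 3}                    # number words ("zo" is a hypothetical extra)

def interpret(phrase: str) -> List[str]:
    """'sa hu' -> ['green', 'green']; 'ri' -> ['light-blue']"""
    tokens = phrase.split()
    colour = PRIMITIVES[tokens[0]]
    count = MODIFIERS[tokens[1]] if len(tokens) > 1 else 1
    return [colour] * count

if __name__ == "__main__":
    # Composition: phrases never listed explicitly still get a meaning,
    # because the colour rule and the number rule combine systematically.
    print(interpret("sa hu"))   # ['green', 'green']
    print(interpret("ri zo"))   # ['light-blue', 'light-blue', 'light-blue']
```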
00:18:57.440 | Okay, time for the next approach.
00:18:59.760 | If, as we've seen, language models have these reasoning chains or programs within them to solve challenges, they're just hard to find.
00:19:07.520 | What about methods that improve our ability to find those programs within language models?
00:19:12.640 | That's what, in part, verifiers are about.
00:19:15.360 | A string of papers came out this week about using verifiers and Monte Carlo tree search to improve the mathematical reasoning of language models.
00:19:23.920 | Of course, covering all of them would be a video in and of itself, but in a nutshell, it's about this.
00:19:29.360 | You can train a model to recognize faulty steps in a reasoning chain to pick out the bad programs.
00:19:36.480 | With Let's Verify Step-by-Step, which I've talked about many times before on this channel, that required human annotation,
00:19:42.160 | but Google DeepMind came up with an approach which was done in an automated fashion.
00:19:46.560 | By automatically collecting results which led to a correct answer, as well as contrasting them with outputs that led to an incorrect answer,
00:19:54.960 | they trained a process reward model.
00:19:57.440 | Think of it like a supervisor analyzing each step of a language model's outputs.
00:20:03.040 | Or, to use the analogy we've been using throughout this video, the process reward model could alert the language model the moment it's calling a faulty or inappropriate program, in theory.
00:20:13.360 | But at least in this paper, there is a limit to this approach.
00:20:17.920 | Even when analyzing and deciding amongst over a hundred solutions, the performance started to plateau.
00:20:24.320 | For sure, it's still a huge boost on the MATH benchmark, from 50% to around 70%, but there's a limit.
00:20:31.360 | Is that because it's not using human annotation like with Let's Verify?
00:20:35.840 | Or is it a fundamental limitation with the approach?
00:20:38.480 | Perhaps if a language model doesn't have the requisite program in its training dataset to solve a math problem, it can't do so no matter what supervision it's given.
00:20:47.840 | Nevertheless, the principle is clearly established.
00:20:50.960 | We don't have to rely on the language model itself locating the correct and necessary program to solve a challenge.
00:20:58.720 | We can at least help it along the way.
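Here is a hedged sketch of what that verifier-guided selection can look like in practice: sample several candidate reasoning chains, score every step with a process reward model, and keep the chain whose weakest step scores best. The generator and the PRM below are random stubs for illustration, not OpenAI's or DeepMind's actual models.

```python
# Best-of-N selection with a (stubbed) process reward model scoring each step.
import random
from typing import List

def generate_candidate_chain(question: str) -> List[str]:
    """Stand-in for an LLM sampling one step-by-step solution (hallucinations included)."""
    return [f"step {i + 1} for: {question}" for i in range(3)]

def prm_score(step: str) -> float:
    """Stand-in for a trained process reward model scoring one step in [0, 1]."""
    return random.random()

def best_of_n(question: str, n: int = 16) -> List[str]:
    candidates = [generate_candidate_chain(question) for _ in range(n)]
    # Rank chains by their weakest step: one faulty step should sink the whole chain.
    return max(candidates, key=lambda chain: min(prm_score(s) for s in chain))

if __name__ == "__main__":
    print(best_of_n("What is 37 * 24?"))
```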
00:21:01.040 | And we know that there are even better verifiers out there, namely simulations and the real world.
00:21:08.240 | As I discussed with Jason Ma, the lead author of the Dr. Eureka paper.
00:21:13.600 | It's a feature because that means the LLM can sample, let's say, a hundred different solutions.
00:21:18.480 | And your simulator in this case serves as an external verifier to see which ones are good.
00:21:22.560 | If your LLM doesn't have the ability to hallucinate, it's always deterministic,
00:21:26.400 | then the method will actually not work because you would only be able to generate one candidate per iteration, right?
00:21:32.080 | So it becomes very slow.
00:21:33.520 | Every time the model generates any response, it's technically hallucinating.
00:21:37.680 | It's a sample from the probability distribution, right?
00:21:40.640 | And it's only a hallucination if it doesn't agree with what you think it should output, right?
00:21:44.880 | But in my case, I don't care if it agrees with what I think is a good reward.
00:21:48.480 | I have something external to verify.
00:21:50.240 | So it's great the model can output a hundred different things.
00:21:52.720 | That makes the iterative evolutionary process much faster.
00:21:55.840 | So I think that's actually a blessing that I think if you only think about applications like chatbot,
00:22:01.120 | agents, you may underappreciate.
00:22:02.960 | But if you think about the use case for large language models or any foundation model for,
00:22:07.680 | I think, discovery tasks or scientific discoveries,
00:22:10.880 | what you want is the model to be able to propose 10 different solutions.
00:22:14.480 | Is it possible that we can turn hallucinations from a weakness to a strength?
00:22:20.000 | NVIDIA and others are, of course, working hard to find out.
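A rough sketch of the loop Jason Ma is describing, under my own simplifying assumptions: the LLM's only job is to propose many candidates per round, and an external verifier (here a stub standing in for a physics simulator) decides which one seeds the next round. Nothing below is DrEureka's actual code.

```python
# Sampling-plus-external-verification: "hallucinations" become candidate generation.
import random
from typing import List

def propose_candidates(seed: str, n: int) -> List[str]:
    """Stand-in for an LLM sampling n different candidate solutions from one prompt."""
    return [f"{seed} / variant {random.randint(0, 9999)}" for _ in range(n)]

def external_verifier(candidate: str) -> float:
    """Stand-in for a simulator or other ground-truth check returning a fitness score."""
    return random.random()

def evolutionary_search(prompt: str, rounds: int = 3, samples_per_round: int = 8) -> str:
    best = prompt
    for _ in range(rounds):
        candidates = propose_candidates(best, samples_per_round)
        best = max(candidates, key=external_verifier)   # the verifier, not the LLM, decides
    return best

if __name__ == "__main__":
    print(evolutionary_search("initial reward function sketch"))
```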
00:22:23.680 | That second approach then can be summed up as using verifiers and
00:22:27.840 | other approaches to better locate the requisite programs within a large language model.
00:22:33.520 | And I will quickly mention another method for locating that latent knowledge.
00:22:38.080 | Many-shot prompting.
00:22:38.880 | Give models tons and tons of examples of the kind of task you want them to achieve,
00:22:43.760 | and they can better learn how to do so.
00:22:46.000 | It seems obvious, but it can lead to significant performance gains.
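A many-shot prompt is conceptually trivial to build; the gains come from long-context models tolerating hundreds of worked examples rather than a handful. A minimal sketch, with placeholder example pairs of my own:

```python
# Packing many worked examples into the context before the real query.
from typing import List, Tuple

def build_many_shot_prompt(examples: List[Tuple[str, str]], query: str) -> str:
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\n\nInput: {query}\nOutput:"

if __name__ == "__main__":
    # With long-context models, 'examples' might hold hundreds of pairs rather than three.
    examples = [("2 + 2", "4"), ("10 - 3", "7"), ("6 * 7", "42")]
    print(build_many_shot_prompt(examples, "9 * 9"))
```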
00:22:49.680 | But what about teaching language models new programs on the fly?
00:22:53.440 | This is what Francois Chollet calls active inference,
00:22:56.240 | and it's responsible for the current state-of-the-art score of 34% on the ARC AGI prize.
00:23:02.800 | There are many facets to this approach, but I'm going to summarize heavily.
00:23:06.160 | The key insight is to use test-time fine-tuning.
00:23:09.520 | When the model sees three examples like those on the left,
00:23:12.400 | that isn't enough to teach it the way to solve the fourth one.
00:23:15.760 | It's too minuscule a signal amongst all its many parameters.
00:23:19.920 | But one of the things that Jack Cole and co did is augment these three examples
00:23:25.040 | with many, many synthetic examples that mimic the style.
00:23:28.640 | They then fine-tune the model on those augmented examples.
00:23:32.240 | I think of it as prioritizing, as humans do, the thing right in front of its face.
00:23:37.040 | You can almost think of it like a language model concentrating.
00:23:41.600 | Its parameters are adjusted to focus on the task at hand.
00:23:45.440 | A bit like a human getting into a state of flow.
00:23:47.680 | By the way, they also used GPT-4 to generate many of the synthetic riddles to train their system.
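Here is a hedged sketch of that test-time fine-tuning recipe as I understand it: expand the task's few demonstration pairs with cheap synthetic variants (rotations, recolourings), then briefly fine-tune a copy of the model on that augmented set before answering. The specific augmentations and the fine_tune_copy placeholder are my own simplifications, not Jack Cole's actual pipeline.

```python
# Test-time fine-tuning sketch: augment the task's demos, then adapt a copy of the model.
import random
from typing import List, Tuple

Grid = List[List[int]]
Pair = Tuple[Grid, Grid]

def rotate90(g: Grid) -> Grid:
    """Rotate a grid 90 degrees clockwise."""
    return [list(row) for row in zip(*g[::-1])]

def permute_colours(g: Grid, mapping: dict) -> Grid:
    """Toy recolouring: remap some colour values."""
    return [[mapping.get(c, c) for c in row] for row in g]

def augment(pairs: List[Pair], n_variants: int = 50) -> List[Pair]:
    out = list(pairs)
    for _ in range(n_variants):
        x, y = random.choice(pairs)
        mapping = {1: random.randint(2, 9)}            # randomly recolour colour 1
        for _ in range(random.randint(0, 3)):          # random number of rotations
            x, y = rotate90(x), rotate90(y)
        out.append((permute_colours(x, mapping), permute_colours(y, mapping)))
    return out

def fine_tune_copy(base_model, dataset: List[Pair]):
    """Placeholder: in practice, a short fine-tune of a copy of the model on the dataset."""
    return base_model

if __name__ == "__main__":
    demos: List[Pair] = [([[0, 1], [1, 0]], [[0, 2], [2, 0]])]
    dataset = augment(demos)
    task_specific_model = fine_tune_copy(base_model=None, dataset=dataset)
    print(f"fine-tuned a task-specific copy on {len(dataset)} augmented pairs")
```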
00:23:54.320 | Here's how Francois Chollet describes the approach.
00:23:57.120 | Most of the time when you're using an LLM, it's just doing static inference.
00:24:01.440 | The model is frozen and you're just prompting it and then you're getting an answer.
00:24:06.000 | So the model is not actually learning anything on the fly.
00:24:09.120 | Its state is not adapting to the task at hand.
00:24:12.720 | And what Jack Cole is actually doing is that for every test problem,
00:24:17.440 | he's on the fly, he's fine-tuning a version of the LLM for that task.
00:24:23.920 | And that's really what's unlocking performance.
00:24:25.840 | If you don't do that, you get like 1%, 2%.
00:24:28.560 | So basically something completely negligible.
00:24:31.840 | And if you do test time fine-tuning and you add a bunch of tricks on top,
00:24:35.760 | then you end up with interesting performance numbers.
00:24:37.920 | So I think what he's doing is trying to address one of the key limitations of LLMs today,
00:24:43.360 | which is the lack of active inference.
00:24:45.600 | It's actually adding active inference to LLMs and that's working extremely well, actually.
00:24:50.240 | So that's fascinating to me.
00:24:51.520 | In short, even on the Achilles heel of large language models, abstract reasoning,
00:24:55.920 | Jack Cole says this,
00:24:57.200 | "What is clear from our work and from others is that the upper limits of the capabilities of LLMs,
00:25:02.480 | even small ones, have not yet been discovered."
00:25:04.960 | By the way, his model is just 240 million parameters.
00:25:08.720 | "There is clear evidence that more generally capable models are possible."
00:25:12.640 | Again, sticking with the program metaphor, as Francois Chollet says,
00:25:16.560 | "This is like doing program synthesis."
00:25:19.040 | Another definition we can use is reasoning is the ability to,
00:25:22.640 | when you're faced with a puzzle,
00:25:24.800 | given that you don't have already a program in memory to solve it,
00:25:29.040 | you must synthesize on the fly a new program
00:25:33.280 | based on bits and pieces of existing programs that you have.
00:25:36.400 | You have to do on the fly program synthesis.
00:25:38.960 | And it's actually dramatically harder than just
00:25:41.680 | fetching the right memorized program and reapplying it.
00:25:44.640 | Now, at this point in the video, as we get to the fourth approach,
00:25:48.240 | I'm going to make a confession.
00:25:49.600 | I have about 20 tabs left on my screen going over other relevant papers
00:25:54.000 | on how they improve LLMs,
00:25:55.680 | but I am beginning to worry that this video might be getting a little bit too long.
00:25:59.840 | So for the last few approaches, I'm going to be much more brief.
00:26:03.520 | I hope I'm still conveying that central message though,
00:26:06.320 | that LLMs currently suck at abstract reasoning,
00:26:09.840 | but that need not be a death sentence.
00:26:12.480 | Nothing in the literature indicates that AGI is at all imminent,
00:26:16.640 | but neither is AI all hype.
00:26:18.960 | Here is a paper from the arch LLM skeptic Professor Rao,
00:26:23.440 | who I interviewed almost a year ago.
00:26:25.360 | It's a position paper from this week.
00:26:27.360 | LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks.
00:26:32.960 | We had a great discussion almost a year ago,
00:26:35.600 | which I'll hopefully get to in a different video,
00:26:38.560 | but the summary of this paper is this.
00:26:41.360 | Earlier work by Professor Rao and co had shown that even models like GPT-4
00:26:46.080 | can't come up with coherent plans.
00:26:48.080 | They fail in this domain of blocks world.
00:26:50.960 | Essentially blocks world is like some of the other reasoning challenges
00:26:54.160 | you've seen today in that you have to come up with a coherent plan,
00:26:57.600 | unstacking blocks and restacking them to meet the required objective.
00:27:01.600 | Definitely not a memorization test as the MMLU can sometimes be.
00:27:05.840 | Indeed, if you throw off the model and use mysterious words
00:27:09.600 | instead of household objects,
00:27:11.200 | the models perform even worse.
00:27:13.120 | Zero-shot GPT-4 gets one out of 600 challenges.
00:27:17.280 | However, this fourth approach says why do we have to use LLMs alone?
00:27:21.840 | Why can't we use them with traditional symbolic systems?
00:27:25.520 | Maybe that combination of neural networks
00:27:27.760 | and traditional symbolic hard-coded programmatic systems
00:27:31.440 | is better than either alone.
00:27:33.200 | Professor Rao is a friend of Yann LeCun, a famed LLM skeptic,
00:27:37.680 | but the paper that he led said this.
00:27:40.480 | There is unwarranted pessimism about the roles LLMs can play
00:27:45.760 | in planning/reasoning tasks.
00:27:48.320 | The key insight is that LLMs can act as idea generators.
00:27:53.120 | Those grounded symbolic systems can then check those plans.
00:27:56.320 | LLMs as the ideas man,
00:27:58.640 | with symbolic systems as the kind of accountants.
00:28:01.520 | LLMs are great at guessing candidate plans
00:28:04.720 | to solve blocks world challenges.
00:28:06.800 | And those ideas, or you could say retrieve programs,
00:28:10.000 | aren't all bad.
00:28:10.960 | Even after three or four rounds of feedback
00:28:13.680 | from the symbolic system,
00:28:15.280 | 50% of the final plan retains the elements
00:28:18.640 | of the initial large language model plan.
00:28:20.800 | With that feedback from the symbolic system,
00:28:23.200 | you back-prompt the LLM and it comes up
00:28:25.520 | with hopefully a better plan.
00:28:27.200 | Not even hopefully though, the results are clear.
00:28:29.600 | GPT-4 can score 82% with this approach.
00:28:33.120 | Still struggles with mysterious languages,
00:28:35.280 | but can solve five of them.
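The generate-verify-backprompt loop at the heart of the LLM-Modulo idea fits in a few lines. Both functions below are stubs standing in for a real LLM and a real, sound plan validator; only the shape of the loop is the point, not anyone's actual implementation.

```python
# LLM as idea generator, symbolic checker as the accountant, feedback as the back-prompt.
from typing import List, Optional, Tuple

def llm_propose_plan(prompt: str) -> List[str]:
    """Stand-in for the LLM guessing a candidate plan."""
    return ["unstack B from A", "put down B", "stack A on C"]

def symbolic_verifier(plan: List[str]) -> Tuple[bool, str]:
    """Stand-in for a grounded checker that can actually certify a plan."""
    if "put down B" in plan:
        return True, ""
    return False, "precondition violated: B is still being held"

def llm_modulo_plan(goal: str, max_rounds: int = 4) -> Optional[List[str]]:
    prompt = f"Goal: {goal}"
    for _ in range(max_rounds):
        plan = llm_propose_plan(prompt)
        ok, feedback = symbolic_verifier(plan)
        if ok:
            return plan
        prompt += f"\nYour last plan failed: {feedback}. Try again."   # back-prompt
    return None

if __name__ == "__main__":
    print(llm_modulo_plan("A on C, B on table"))
```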
00:28:37.040 | This all reminded me of AlphaGeometry,
00:28:39.520 | which I have done a separate video on,
00:28:41.440 | so I won't focus on today,
00:28:43.040 | but it's that same combination of a neural network
00:28:46.000 | and symbolic system.
00:28:47.200 | For geometry problems specifically,
00:28:49.360 | in the International Math Olympiad,
00:28:51.440 | it performed at almost gold-medal level.
00:28:53.600 | Those are incredibly hard reasoning challenges.
00:28:56.800 | Very quickly now, the fifth approach.
00:28:58.880 | Instead of calling a separate system,
00:29:00.800 | how about jointly training on its knowledge?
00:29:03.360 | For time purposes, here is a very quick summary.
00:29:06.160 | They trained a separate neural network,
00:29:08.320 | not a language model, in this case,
00:29:09.760 | a graph neural network.
00:29:11.120 | It learnt specialized algorithms.
00:29:13.520 | Then they embedded that fixed optimized know-how
00:29:16.800 | and had a language model train
00:29:18.800 | with access to those embeddings.
00:29:20.640 | In other words, a language model
00:29:22.160 | fluent in the language of text and algorithms.
00:29:25.920 | Programs that you might need, for example, for sorting.
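A very rough sketch of that fifth approach, with everything stubbed out: a frozen, separately trained algorithmic encoder produces embeddings, and the language model is trained to condition on them alongside its ordinary text tokens. In the paper the encoder is a graph neural network and the fusion is done inside the model; here I only gesture at the interface, with placeholder functions of my own.

```python
# Hybrid sketch: a frozen algorithmic encoder feeding embeddings to a language model.
from typing import List

Vector = List[float]

def frozen_gnn_encode(problem_instance: List[int]) -> Vector:
    """Stand-in for the pre-trained graph network that has learnt e.g. sorting routines."""
    return [float(x) / 10.0 for x in problem_instance]

def embed_text(prompt: str) -> List[Vector]:
    """Stand-in for the LLM's ordinary token embeddings."""
    return [[float(ord(c) % 7)] for c in prompt[:8]]

def hybrid_forward(prompt: str, problem_instance: List[int]) -> str:
    text_embeddings = embed_text(prompt)
    algo_embedding = frozen_gnn_encode(problem_instance)
    # In the real model the two streams are fused (e.g. with cross-attention);
    # here we only note that the LLM conditions on both.
    fused = text_embeddings + [algo_embedding]
    return f"answer conditioned on {len(fused)} embedding vectors"

if __name__ == "__main__":
    print(hybrid_forward("sort this list:", [5, 3, 9, 1]))
```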
00:29:28.640 | Sixth, and finally for this video,
00:29:30.880 | there's tacit data.
00:29:32.400 | So much of what humans do and how humans reason
00:29:35.680 | is not written down.
00:29:36.800 | Let's hear from Terence Tao,
00:29:38.800 | arguably the smartest man on the planet.
00:29:41.680 | He said, "So much knowledge is somehow trapped
00:29:44.720 | in the head of individual mathematicians,
00:29:47.120 | and only a tiny fraction is made explicit.
00:29:49.600 | A lot of the intuition of mathematicians
00:29:52.000 | is not captured in the printed papers in journals,
00:29:54.640 | but in conversations among mathematicians,
00:29:57.280 | in lectures, and in the way we advise students.
00:30:00.160 | People only publish the success stories.
00:30:03.040 | The data that are really precious
00:30:05.200 | are from when someone tries something
00:30:07.200 | and it doesn't quite work,
00:30:08.640 | but they know how to fix it.
00:30:10.080 | But they only publish the successful thing,
00:30:12.080 | not the process."
00:30:13.200 | And all of this, he says,
00:30:14.160 | simultaneously points to a dramatic way to improve AI,
00:30:18.000 | but also why we shouldn't expect
00:30:20.000 | an intelligence explosion imminently.
00:30:22.160 | Training on that tacit data
00:30:23.920 | would unlock notable progress,
00:30:26.320 | but human mathematicians, he says,
00:30:28.320 | would just, in his view,
00:30:29.760 | move to a higher type of mathematics.
00:30:32.400 | As we speak, open AI,
00:30:33.840 | and I'm sure many others,
00:30:35.040 | are trying to make as explicit as possible
00:30:37.760 | that tacit knowledge.
00:30:39.280 | I'm sure hundreds of PhDs
00:30:41.120 | are writing down their methodologies
00:30:43.040 | as they solve problems.
00:30:44.320 | Millions, if not billions,
00:30:46.080 | of YouTube hours of video
00:30:48.000 | are being ingested
00:30:49.360 | in the hopes that AI models
00:30:51.200 | pick up some of that implicit reasoning.
00:30:53.680 | While you may see this
00:30:54.720 | as the most promising approach
00:30:56.400 | of the six I've so far mentioned,
00:30:58.720 | it wouldn't yield immediate explosive results.
00:31:02.000 | It would be reliant on us
00:31:04.000 | and other human experts
00:31:05.760 | to write down with fidelity our reasoning.
00:31:08.400 | Less a remorseless, faceless shoggoth
00:31:11.200 | solving the universe,
00:31:12.400 | and more a student imitating its teachers.
00:31:15.360 | Of course, you will have likely seen
00:31:17.200 | the somewhat ambiguous clip
00:31:18.720 | from Mira Murati, the CTO of OpenAI,
00:31:21.920 | saying they don't have any giant breakthrough
00:31:24.480 | behind the scenes.
00:31:25.280 | Inside the labs,
00:31:26.880 | we have these capable models,
00:31:29.920 | and, you know,
00:31:32.400 | they're not that far ahead
00:31:34.400 | from what the public has access to for free.
00:31:37.920 | And that's a completely different trajectory
00:31:40.960 | for bringing technology into the world
00:31:44.000 | than what we've seen historically.
00:31:45.920 | But as I've hopefully shown you today,
00:31:47.680 | it doesn't have to be an all or nothing.
00:31:50.000 | AGI imminently or all hype.
00:31:52.720 | As even François Chollet says,
00:31:54.960 | it could be a combination of approaches
00:31:57.760 | that solves, for example, ARC.
00:31:59.440 | That indeed might be the path to AGI.
00:32:02.960 | People who are going to be
00:32:03.760 | winning the ARC competition
00:32:06.560 | and who are going to be
00:32:07.520 | making the most progress
00:32:08.640 | towards near-term AGI
00:32:10.160 | are going to be those that manage
00:32:11.360 | to merge the deep learning paradigm
00:32:13.760 | and the discrete program search paradigm
00:32:15.680 | into one elegant way.
00:32:17.680 | So much more to get to
00:32:19.200 | that I couldn't get to today,
00:32:20.800 | but I hope this video has helped you
00:32:22.960 | to navigate the current AI landscape.
00:32:25.360 | As ever, the world is more complex
00:32:27.520 | than it seems.
00:32:28.320 | Thank you so much, as ever,
00:32:30.080 | for watching and have a wonderful day.