
A Taxonomy for Next-gen Reasoning — Nathan Lambert, Allen Institute (AI2) & Interconnects.ai


Chapters

0:00 The Current State of Reasoning in AI Models
1:06 Unlocking New Language Model Applications
3:48 The Need for Advanced Planning in AI
4:29 A Proposed Taxonomy for Next-Generation Reasoning
6:16 Reinforcement Learning with Verifiable Rewards
8:23 Current Challenges and Future Directions
12:07 The Effort Required to Build New Capabilities
16:20 A Research Plan for Training Reasoning Models
17:36 The Shift in Compute Allocation from Pre-training to Post-training

Transcript

I really came to this trying to reflect on where we are six months into reinforcement learning with verifiable rewards, post-o1 and post-DeepSeek. A lot of this stuff is somewhat boring now because everybody has a reasoning model: we all know the basics, you can scale RL at training time and the numbers will go up, and that's deeply correlated with being able to do inference-time scaling. A lot of people are up to speed, but the crucial question is where things are going to go and how you skate to where the puck is going. So a lot of this talk is really me trying to process where this is headed beyond getting high benchmark scores while using 10,000 tokens per answer: what do we need to do to actually train these models, and what are the things that OpenAI and others are probably already doing, even though it's increasingly hard to get that signal out of them.

If we look at it, reasoning is also unlocking genuinely new language model applications. This is a search query I need to run all the time as an RL researcher: I forget that the example is called CoastRunners, and I normally have to Google "RL over-optimization" twenty times to find it, but I tried asking o3 and it literally gave me the download link directly, so I didn't have to do anything. That's a very unusual use case to just pop out of this reasoning training, where math and code were the real starting point. o3 is great; it's the model I use the most for finding information, and that's the signal I have that a lot of new, interesting things are coming down the pipe.

It's starting to unlock a lot of new language model applications, and I use some of these. This is a screenshot of Deep Research, which is great; you can use it in really creative ways, like prompting it to look at your website and find typos, or to look only at the material on your website, and it's actually more steerable than you may expect. Claude Code, which I'd describe as just having very good vibes, is fun; I'm not a serious software engineer, so I don't use it on hard things, but I use it for fun things because I can put the company API key in and mess around, like having it help me build the website for the book I wrote online. And then there are the really serious things like Codex and these fully autonomous agents that are starting to come. If you play with it, it's obvious the form factor is going to work, and I'm sure there are people getting a lot of value out of it right now. For ML tasks, there are no GPUs in it right now, and if you're dealing with open models, they only just added internet access, so it couldn't go back and forth and look at Hugging Face configs and all the other headaches they don't want to deal with. But within six months all of these things are going to be stuff you should be using on a day-to-day basis, and it's all downstream of this step change in performance from reasoning models.

Then there's another plot that's been talked about a lot. Through 2024, with models like GPT-4o, a lot of things really were saturating, and then the new Sonnet models and o1 helped push out the frontier in time horizon. The y-axis is roughly how long a task the models can complete, measured in human time, which is kind of a weird way to measure it because the models will get faster, but the trend is going to keep going.
Reasoning models are the technique that was unlocked to figure out how to keep pushing those limits. When you look at plots like this, it's not that we're on some predetermined path and more gains will just arrive; we have to think about what the models need to be able to do in order to keep pushing out these frontiers. There's a lot of human effort that goes into continuing the trends of AI progress, so gains aren't free, and I think a lot of planning, and thinking about training in a somewhat different way beyond just reasoning skills, is what will push this forward and let these language model applications and products, which are still in their early stages, really shine.

So this is the core question I'm thinking about: what do I have to do to come up with a research plan to train reasoning models that can work autonomously, and to have meaningful ideas for what planning would be? I came up with a taxonomy that has a few different traits within it. The first is skills, which we've pretty much already done: skills are things like getting really good at math and code, and inference-time scaling was useful for getting there, though they could become more research-flavored over time. For products, I think calibration is going to be crucial: these models overthink like crazy, so they need some calibration of how many output tokens they use relative to the difficulty of the problem, and that becomes more important as we spend more on each task. The last two are subsets of planning, and I'm happy to take feedback on this taxonomy. Strategy is just going in the right direction and knowing the different things you can try, because it's really hard for these language models to truly change course; they can backtrack a little, but restarting their plan is hard. And then, as tasks become very hard, we need abstraction, where the model has to choose on its own how to break a problem down into pieces it can actually do. Right now humans would often do this, but if we want language models to do very hard things, they have to make a plan with subtasks that are actually tractable, or call in a bigger model to do that for them. These are things the models aren't going to do natively; solving math problems doesn't have a clear abstraction of "this task I can do, and this one needs an additional tool," so this is a new thing we're going to have to add. To summarize: we have skills, there's work starting on calibration, and planning is a new frontier that people are talking about, where we really need to think about how we will actually put it into the models.

To put it up on the slide, what we call reinforcement learning with verifiable rewards looks very simple. A lot of RL on language models, especially before you get into the multi-turn setting, has been: you take prompts, the agent creates a completion for each prompt, you score the completions, and with those scored completions you update the weights of the model. It's been single-turn and very simple. I'll have to update this diagram for multi-turn and tools, which makes it a little more complex, but the core of it is just a language model generating completions and getting feedback on them.
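To make that loop concrete, here is a minimal sketch of the single-turn setup in code; `policy`, `verify_answer`, and `update_policy` are hypothetical stand-ins for your model, your verifier, and your RL update of choice (GRPO, PPO, etc.), not any particular library's API.

```python
# Minimal sketch of the single-turn RLVR loop: sample completions for prompts,
# score them with a verifier, and use the scored completions to update the policy.

def rlvr_step(policy, update_policy, prompts, reference_answers, samples_per_prompt=8):
    """One RLVR update: generate, verify, then update the weights."""
    completions, rewards = [], []
    for prompt, reference in zip(prompts, reference_answers):
        for _ in range(samples_per_prompt):
            completion = policy.generate(prompt)                # the agent answers the prompt
            reward = 1.0 if verify_answer(completion, reference) else 0.0
            completions.append((prompt, completion))
            rewards.append(reward)
    update_policy(policy, completions, rewards)                 # e.g. a GRPO/PPO step
    return sum(rewards) / len(rewards)                          # the number that "goes up"

def verify_answer(completion, reference):
    # Toy verifier: exact match on the final line of the completion. Real
    # verifiers parse math expressions, run code against unit tests, etc.
    lines = completion.strip().splitlines()
    return bool(lines) and lines[-1].strip() == str(reference)
```

The key property is that the reward comes from a check against a verified answer rather than from a learned reward model, which is why data with reliable answers matters so much later in the talk.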
It's good to take time to look at these skills. This is a collection of evals; we can look at where GPT-4o was, and these were the hardest evals that existed, the ones called the frontier of AI. If we look at the o1 improvements and then the o3 improvements in quick succession, these are really incredible eval gains that come mostly just from adding this new type of training. The core of my argument is that we need to do something similar if we want planning to work. I would say a lot of planning tasks today look roughly the way Humanity's Last Exam and AIME looked around the time this reasoning skill was being added, and we need to figure out what other kinds of things these models are going to be able to do. This list of low-level reasoning abilities is going to keep growing; the most recent addition, if you look at recent DeepSeek or Qwen models, is tool use, and that's going to produce more models like o3. Using o3 just feels very different because it's this combination of tool use with reasoning, and it's obviously good at math and code, but I think we're going to keep getting more of these low-level skills from reasoning training as we figure out what's useful. An abstraction for agenticness on top of tool use would be very nice, but it's hard to measure; people mostly say Claude is the best at it, but it's not yet well established how we measure it or communicate it across different models.

Then we get into the fun, interesting things. Right now calibration is hard because it's passed to the user: we have all sorts of model selectors if you're a ChatGPT user, Claude has reasoning on or off with extended thinking, Gemini has something similar, and there are reasoning-effort selectors in the API. This is really rough on the user side of things. Making it so the model knows this would make it much easier to find the right model for the job, and the tokens you overspend for no reason would go down a lot. It's kind of obvious to want, and it becomes a bigger problem the longer we don't have it. Some examples from when overthinking was first identified as a problem: on the left half, you can ask a language model "what is 2 plus 3" and see reasoning models use hundreds to a thousand tokens on something that could realistically be a one-token answer. On the right is a comparison of sequence lengths from a standard, non-RL-trained instruct model versus the QwQ thinking model, and you really can see a 10 to 100x increase in token spend when you shift to a reasoning model. If you do that in a wasteful way, it's just going to load up your infrastructure and cost, and as a user I don't want to wait minutes for an easy question, and I don't want to have to switch models or providers to deal with that.
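If you wanted to reproduce that kind of comparison yourself, here is a rough sketch that counts output tokens for the same trivial question from an instruct model and a thinking model via Hugging Face transformers; the model names are only illustrative examples, not the exact pair from the slide, and you would swap in whatever pair you care about.

```python
# Rough sketch: compare how many tokens an instruct model vs. a reasoning model
# spends answering a trivially easy question. Model names are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

def count_output_tokens(model_name: str, question: str, max_new_tokens: int = 4096) -> int:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
    messages = [{"role": "user", "content": question}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    return outputs.shape[-1] - inputs.shape[-1]  # tokens generated beyond the prompt

question = "What is 2 + 3?"
instruct_tokens = count_output_tokens("Qwen/Qwen2.5-7B-Instruct", question)
reasoning_tokens = count_output_tokens("Qwen/QwQ-32B", question)
print(f"instruct: {instruct_tokens} tokens, reasoning: {reasoning_tokens} tokens "
      f"(~{reasoning_tokens / max(instruct_tokens, 1):.0f}x)")
```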
One of the things I think about once we have calibration is this idea of strategy. On the right, I went to the Epoch AI website, took one of their example questions from FrontierMath, and asked: does this new DeepSeek R1-0528 model do any semblance of planning when it starts? You give it a math problem and it just goes, "okay, the first thing I'm going to do is construct a polynomial." It goes right in; it doesn't do anything like sketching the problem before it thinks. This is probably going to output 10,000 to 40,000 tokens, and if it needs another 10x on top of that, and it's all in the wrong direction, that's multiple dollars of spend and a lot of latency that's totally useless. Most of these applications are set up to expect a latency between one and thirty minutes, so there's effectively a timeout they're fighting: either going in the wrong direction or thinking way too hard about a subproblem is going to make the user leave.

Right now, as I said, these models do very little planning on their own, but if we look at these applications, they're very likely prompted to plan; that's the beginning of Deep Research and Claude Code. We have to make that model-native rather than something we do manually. And once we look at this plan, there are all these implementation details across something like Deep Research or Codex, like how to manage memory: Claude Code compresses its memory when it fills up its context window, and we don't know whether that's the optimal approach for every application. We want to avoid repeating the same mistakes; Greg was talking about playing Pokemon earlier, which is a great example of that. We want to have tractable parts, and we want to offload thinking if we hit a really challenging part. I'll talk about parallel compute a little later; it's a way to boost through harder things, and really we want language models to call multiple other models in parallel. Right now people are spinning up tmux and launching Claude Code in ten windows to do this themselves, but there's no reason a language model can't do that; it just needs to know the right way to approach it.

As I started with, we need to put in effort to add new capabilities to language models. When I think about the story of Q* that became Strawberry that became o1, the reason it was in the news for so long and was such a big deal is that it was a major effort for OpenAI, spending something like 12 to 18 months building the initial reasoning traces they could train an initial model on so it had some of these behaviors. It took a lot of human data to get things like backtracking and verification to be reliable in their models, and we need to go through a similar arc with planning. But with planning, the kinds of outputs we're going to train on are much more intuitive than something like reasoning: if I asked you to sit down and write a 10,000-token reasoning trace with backtracking, you can't really do that, but a lot of expert people can write a very good five-to-ten-step plan, or check the work of Gemini or OpenAI when asked to write an initial plan. So I'm a lot more optimistic about being able to hill-climb on this. Then it goes through the same path: once you have initial data you can do some SFT, and the hard question is whether RL on even bigger tasks can reinforce these planning styles. On the right I added a hypothetical: we already have thinking tokens before answer tokens, and there's no reason we can't apply more structure to our models to really make them plan out their answer before they think.
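As a purely hypothetical illustration of that extra structure, the sketch below puts an explicit plan section ahead of the usual thinking and answer sections, plus a toy format check a trainer could reward; the tags and template are illustrative only, not a format any current model is actually trained on.

```python
# Hypothetical output structure with an explicit plan ahead of the usual
# thinking and answer sections. Tags and layout are illustrative placeholders.
PLANNED_RESPONSE_TEMPLATE = """\
<plan>
1. Restate the problem and the quantities involved.
2. List two or three candidate approaches and pick one.
3. Break the chosen approach into checkable subtasks.
</plan>
<think>
... token-by-token reasoning that follows the plan, backtracking if a subtask fails ...
</think>
<answer>
... final answer ...
</answer>"""

def has_plan_before_thinking(response: str) -> bool:
    """Toy format check a trainer could reward: the plan must come before the thinking."""
    plan, think = response.find("<plan>"), response.find("<think>")
    return plan != -1 and think != -1 and plan < think
```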
To give a bit more depth on this idea of skill versus planning, if we go back to the earlier example, I would say o3 is extremely skilled at search. Being able to find a niche piece of information that researchers in a field know of but can't quite remember the exact search terms for is an incredible skill. But when you put this into something like Deep Research, the lack of planning means that sometimes you get a masterpiece and sometimes you get a dud. As these models get better at planning, they'll be more thorough and reliable and get the kind of coverage you want. It's crazy that we have models that can do this search, yet if you ask one to recommend some sort of electronics purchase, it's really hard to trust, because it doesn't know how to pull in the right information or how hard it should try to get all that coverage.

To summarize, these are the four traits I presented. You can obviously add more; you could call a mix of strategy and abstraction something like context management in many ways. But I really just want categories like this so you can break down the training problem and think about data acquisition or new algorithmic methods for each of these tasks. I mentioned parallel compute because I think it's an interesting one: if you use o1 pro, it's still been one of the best and most robust models for quite some time, and I've been very excited for o3 pro, but it doesn't solve problems the same way as traditional inference-time scaling. Inference-time scaling made a bunch of things that didn't work go from zero to one, whereas parallel compute makes things more robust; it just makes them nicer. It seems like this kind of RL training can encourage exploration, and then applying more compute in parallel feels more like exploiting and getting a really well-crafted answer. There's a time when you want that, but it doesn't solve every problem.

To transition into the end of this talk: there have been a lot of talks today about the things you can do with RL, and there's obviously a lot of talk on the ground about what's called continual learning, where we just continually use very long-horizon RL tasks to update a model and diminish the need for pre-training, and there are a lot of data points suggesting we're closer to that in many ways. I think continual learning has a big algorithmic bottleneck, but simply scaling up RL further is very tractable and is something that's happening. So if people ask what I'm working on at AI2 and what I'm thinking about, this is my rough summary of what a research plan to train a reasoning model looks like, without all the between-the-lines details. Step one: you get a lot of questions that have verified answers across a wide variety of domains; most of these will be math and code, because that's what's out there. Step two: if you look at all these recipe papers, they have a step where they filter the questions based on difficulty with respect to your base model. If a question is solved zero out of 100 times by your base model, or 100 out of 100, you don't want questions that look like that, because you're not only wasting compute but also making the gradients in your RL updates noisier.
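Here is a minimal sketch of that filtering step, assuming you already have hypothetical helpers `sample_answer` and `is_correct` for your own stack: estimate a pass rate per question under the base model and drop anything it always or never solves.

```python
# Sketch of difficulty filtering for RLVR data: estimate each question's pass
# rate under the base model and drop questions it always or never solves.
# `sample_answer` and `is_correct` are hypothetical helpers for your own stack.

def filter_by_difficulty(questions, base_model, n_samples=100,
                         min_pass_rate=0.01, max_pass_rate=0.99):
    kept = []
    for question, reference in questions:
        solved = sum(
            is_correct(sample_answer(base_model, question), reference)
            for _ in range(n_samples)
        )
        pass_rate = solved / n_samples
        # 0/100 and 100/100 questions waste compute and add noise to the RL
        # gradients, so keep only the questions in between.
        if min_pass_rate <= pass_rate <= max_pass_rate:
            kept.append((question, reference))
    return kept
```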
Once you've done that, you just want a stable RL run that goes through all these questions and keeps the numbers going up. That's the core of it: stable infrastructure and data. Then you can tap into all the research papers that tell you to use methods like overlong filtering, different clipping, or resetting the reference model, and that'll give you a few percentage points on top, but really it's just data and stable infrastructure.

This leads to the provocation: what if we rename post-training as training? If, for OpenAI's o1, post-training was something like one percent of compute relative to pre-training, they've already said that o3 increased it by 10x. So if the numbers started at one percent, you very quickly get to what you might see as parity in compute, in terms of GPU hours, between pre-training and post-training, which would have seemed pretty unfathomable if you took anybody back a year, to before o1. One of the fun data points for this is the DeepSeek V3 paper, where you can watch DeepSeek's transition to becoming more serious about post-training: in the original V3 paper they used 0.18 percent of compute on post-training in GPU hours, and they said their pre-training takes about two months. There was a deleted tweet from one of their RL researchers saying the R1 training took a few weeks, so if you make a few very strong, probably not completely accurate assumptions, like the RL running on the whole cluster, then a few weeks of RL against roughly eight or nine weeks of pre-training would already be 10 to 20 percent of their compute. There are DeepSeek-specific caveats, like their pre-training efficiency probably being way better than their RL code, but scaling RL is a very real thing if you look at the frontier labs and the types of tasks people want to solve with these long-term plans. It's good to embrace what you think these models will be able to do, and to have them break down tasks on their own and solve some of them. So thanks for having me, and let me know what you think.