
LangGraph 101: it's better than LangChain


Chapters

0:00 Intro to LangGraph
0:52 Graphs in LangGraph
3:00 More Complex LangGraph Agent
8:12 LangGraph Graph State
14:00 LangGraph Agent Node
17:08 Forcing a Specific LLM Output
20:00 Building the Graph
23:23 Using our Agent Graph
28:32 LangGraph vs LangChain

Transcript

Today we are going to be taking a look at LangGraph. LangGraph is another library within the LangChain ecosystem, and what it allows us to do is build highly custom and flexible agents, which I believe are the short-term future of AI. Now, you can build agents with the core LangChain library, and I've spoken about that a lot before, but having experimented with both LangChain and LangGraph, and while I'm still forming my opinion, I think LangGraph is simply a much more powerful solution for building agents.

Let's take a quick look at the general concept behind LangGraph. As you may have guessed from the name, LangGraph puts a big emphasis on graphs. Whereas LangChain thought of agents as objects that you attach tools and prompts to, LangGraph instead thinks of an agent as a graph. You have your initial starting point, which is some sort of runnable function: that could be an agent, a chain, or some other runnable.

Now, from here, rather than just finishing with that agent, we can control what happens next. Down here we might have something like a search tool, so we could do RAG this way: this is our search tool, and it is going to get us information based on what the agent or chain above has decided to do. From there we could continue, and more likely than not we would finish with another LLM. That LLM may have a prompt where it consumes the context that came from the search tool and the initial query that came from the user, and then it outputs an answer, which goes to the end node. So that is a very simple example. While we're looking at this, I just want to point out a few things: this circle is a node, this arrow is an edge, and everything in our graph consists of nodes and edges in some combination.

Now, that is an incredibly simple agent, so let's look at something a little more realistic, although I'll still keep it simple. We could start with an agent up here, and this agent has a few tools attached to it, but it will use them via function calling, so it won't actually execute a tool itself; it will just output the input that should be passed to whichever tool we should use. I'm going to connect two tools to this agent: the schema for a final answer, because I want a slightly different final answer than usual, one where you get an output from your agent plus a citation (so it's a JSON output in a particular format, and we want to enforce that format), and also the search tool from before. The agent is going to output which of these tools we should use and the parameters we should pass to it.

Because there are two tools here, we have two alternative paths we could go down, and this brings in another component of LangGraph graphs called a conditional edge. A conditional edge is what it sounds like: an edge, or a couple of edges, that depend on some condition being satisfied. The condition here is whether our agent has decided to go with the search tool or the final answer tool. To implement a conditional edge we need something called a router, and that router is basically an if/else statement. Based on it, we go one of two ways: to our search node over here, or to our final answer, which then goes to the end node.

With the search path we perform a search, and what does that output look like? We output some context. That isn't an answer by itself; it's just some text we've pulled from somewhere, and it hasn't been formatted into a natural-language answer. So we now need to pass it through another LLM. I say LLM rather than agent because an agent is nice when we have multiple tools; in this case we just have one tool we want to use, the final answer tool, and all the final answer tool does is create that format for us: we tell it that it needs to provide an answer and a citation. From there we have our final answer, and that goes to the end node.

Again, this is a very simple graph; there's nothing particularly complex about it, but it is far more flexible and far easier to work with than what I'm used to in LangChain, at least.
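To make the node/edge idea concrete, here is roughly what that first simple graph (start, search, LLM, end) could look like as LangGraph code. This is a minimal illustrative sketch of my own, not the video's notebook: the state fields and node functions are placeholders standing in for a real retriever and a real LLM call.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    query: str      # the user's question
    context: str    # text retrieved by the search node
    answer: str     # final natural-language answer

def search(state: State):
    # stand-in for a retrieval step (a real node would query a vector DB)
    return {"context": f"some retrieved text about: {state['query']}"}

def respond(state: State):
    # stand-in for an LLM call that consumes the retrieved context plus the query
    return {"answer": f"answer to '{state['query']}' using: {state['context']}"}

graph = StateGraph(State)
graph.add_node("search", search)      # a node
graph.add_node("respond", respond)    # another node
graph.set_entry_point("search")       # the starting point
graph.add_edge("search", "respond")   # an edge
graph.add_edge("respond", END)        # finish at the end node

app = graph.compile()
print(app.invoke({"query": "what is AI?"}))
```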
I haven't been using it for very long, but I think there's a lot of potential in learning LangGraph, and when it comes to building agents this is by far my preferred library so far. That is the quick overview; now let's look at how we actually implement all of this in code.

We're going to go into this notebook; there'll be a link in both the description and the comments so you can open the same notebook and follow along. There are a few additional things we need to do here, and the first is to install pygraphviz. pygraphviz is not needed to use LangGraph; it is only needed to visualize graphs built in LangGraph. So if you're just developing with the library you don't need it, but for this example, where I'm walking you through things, it's very nice to be able to visualize what we're building. We also have a few other libraries we need, all the LangChain ones, and we are going to use OpenAI here, so we initialize that.

Then we move on to our graph state. There are multiple ways of doing this: you can create your own graph state, which is what we're doing here, or you can use one of the built-in states (I think it's called the message state) that come with the library. It depends on what you want to do; I prefer this method because you define exactly what is in the state and it's just easier to understand. This has taken some inspiration from a very good video by Sam Witteveen on the same topic, on LangGraph; he did a really good intro, and I would recommend that as well.

So what we're doing here is defining this agent state, and as we pass through each node of the graph, that agent state travels with us. All of the new information, for example whatever we retrieve from our search tool, will be stored in there. You'll recognize intermediate steps if you've used LangChain agents before; it's very typical. Intermediate steps are the steps between the user asking a question and the output you get back; as we saw in that graph, there can be many of them, and their information is stored here. Another field we have is the agent output, which is just the output from an agent, nothing fancy. The other field is the input, which is the input from the user. We could also have chat history in here, but I'm not including it because I want to strip things down to be as simple as possible. That being said, we're going to build what I think is a very cool agent using LangGraph pretty soon (I'm working on it); it will be far more complex and will show a bit more of what this library can actually be useful for, but this is a good intro.
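As a rough sketch of that state object, following the fields described above (input, agent output, intermediate steps), it could be defined like this; the exact type annotations are my assumption rather than a copy of the notebook:

```python
import operator
from typing import Annotated, TypedDict, Union

from langchain_core.agents import AgentAction, AgentFinish

class AgentState(TypedDict):
    # the user's input query
    input: str
    # whatever the most recent agent / LLM node produced: tool-call actions
    # from the query agent, or the final structured answer at the end
    agent_out: Union[list[AgentAction], AgentFinish, dict, None]
    # (action, observation) pairs collected as we move through the graph;
    # operator.add tells LangGraph to append to this list rather than overwrite it
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]
```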
The first thing I want to do for the graph is define the different nodes. The two tool nodes I want to create are the search tool and the final answer tool. We would usually implement a RAG pipeline here, but I'm just going to emulate it, so I'll say: this is the information we're going to retrieve from our emulated RAG pipeline. It's from an arXiv paper on some embeddings; in there we have the title of the paper, the summary, the authors, and the source. It will be up to our LLM, either the final answer LLM or the initial agent, to decide how to build the citation (and of course the answer) when it gets this information. So that is the information for our emulated search tool.

Then we define the search tool itself. When we define a tool we use the tool decorator: we name the tool, we specify what needs to go into it (we just have a query here, that's the schema), and we give it a description. That description is for us, but it's also for the LLM, i.e. for our agent: it will read it and decide which tool to use, and how to use it, based on what we put there. Inside the function you would normally put your RAG logic, but I'm just going to return the emulated content.

We also have the final answer tool. The final answer tool doesn't do anything; as you can see, it just returns an empty string. The reason it doesn't do anything is that it doesn't need to: all I'm using it for is to tell my LLM or agent to use this structure when it's outputting a final answer. The LLM is set up to use it as a tool, but when it "uses" the tool, all it does is generate what should be the input to that tool; it doesn't actually execute it. That's all we need; we just need this format. We give it a little description explaining what we want in both of those fields, and the LLM will produce that for us, giving us the final answer in the structure we need.

Both of these will be executed via OpenAI tools, which is the latest version of OpenAI's function calling, and you can see how that works here. For the search tool, we pass this information to our LLM, so it sees that to use the search tool it needs to output the word "search"; then we have the description, the function, and the inputs to that function, and it tells the model that this searches for information on the topic of AI, and so on. It's just what I wrote above.

Then we come down a little further and initialize our first agent. For that first agent I'm actually going to use core LangChain to implement a very simple, typical OpenAI tools agent. All it has is a prompt, which I'm pulling from the LangChain Hub, our LLM, which is a chat model from OpenAI (GPT-3.5; I'm not actually sure what the default is now, but I assume it's still gpt-3.5-turbo), and our tools, i.e. the final answer tool and the search tool. You'll need to pass in your API key here, and then you can run it.

We can test that very quickly to confirm it works, so I'm going to ask it "what are EHI embeddings?". We run that, and you can see it outputs this ToolAgentAction, which is what we're going to use in our router for those conditional edges later: we'll look at the tool item there, and then take the tool input as keyword arguments for our function, so the query keyword will be "EHI embeddings". Of course, we're just emulating here, so it's not actually going to do anything other than return that text for us, but that is what we would actually need.
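Pieced together from that description, the tool definitions and the agent setup look roughly like the following. Treat it as a sketch, not the notebook's exact code: the docstrings, the `information` placeholder, and the particular hub prompt are my best guesses at what is being used.

```python
from langchain import hub
from langchain.agents import create_openai_tools_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

# hard-coded stand-in for what a real RAG pipeline would retrieve
information = (
    "Title: ... (an arXiv paper on EHI embeddings)\n"
    "Summary: ...\nAuthors: ...\nSource: https://arxiv.org/abs/..."
)

@tool("search")
def search_tool(query: str):
    """Searches for information on the topic of artificial intelligence (AI).
    Cannot be used to research any other topic."""
    # emulated RAG: always return the same context
    return information

@tool("final_answer")
def final_answer_tool(answer: str, source: str):
    """Returns a natural language response to the user in `answer`, and a
    `source` citing where the information came from."""
    # intentionally a no-op: we only want the LLM to fill in this schema
    return ""

llm = ChatOpenAI(model="gpt-3.5-turbo", openai_api_key="YOUR_API_KEY", temperature=0)
prompt = hub.pull("hwchase17/openai-functions-agent")  # a standard agent prompt
query_agent_runnable = create_openai_tools_agent(
    llm=llm,
    tools=[final_answer_tool, search_tool],
    prompt=prompt,
)

# quick test: the agent should emit a tool call rather than execute anything
out = query_agent_runnable.invoke({
    "input": "what are EHI embeddings?",
    "intermediate_steps": [],
})
print(out)
```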
Moving on, I'm just showing you here what we would actually be outputting there: we take the function, and we have the arguments and the name.

Next, we're going to define the nodes for our graph. We've defined the tools and the agent, and now we just need to define the functions that will run as nodes within the graph. We have the run query agent function, which consumes our state and runs the query agent (the one I defined earlier that decides between the final answer and search tools). Then we have the function that executes our search; this would be a function for RAG, but again it's emulated. And we have our router, which is what the conditional edge uses to decide which direction to go in. That covers the first component of our graph, but we also have the connections after the search node, and for those we need to define the final answer LLM.

For the final answer LLM, something quite useful we can do is create our LLM, bind a tool to it (the final answer tool), and then say that the LLM must call that tool. This helps reduce the likelihood of it hallucinating and doing something other than using the tool, because we want to enforce that it outputs the structure we'd like. It's pretty cool, and I'll show you how: we take our LLM, the chat LLM we defined earlier, we bind a single tool to it, the final answer tool, and then we enforce the tool it has to use by saying it must use the final answer tool. That's the LLM; we also need a function to handle it. That's our RAG final answer function: it takes the user input from our state, takes the context from the previous intermediate steps, feeds both into a very simple prompt containing just the context and the question, and invokes the LLM on that prompt. That outputs a function call to our final answer tool, and we just return it.

The final node we're going to add, which I didn't visualize before, is the handle error node. The issue with the current query agent is that it's not forced to make a function call, so sometimes, when you say something very short and conversational like "hi" or "hello", the agent will want to respond without using any tools. I'm sure you could prompt this to be very rare, but it will still happen sometimes. To handle it we create another function, which still uses the final answer LLM: the router looks at the output from our agent, and it should see either the search or the final answer tool being used, but if it doesn't see either, we assume there's an error and enforce the use of that final answer. It's pretty straightforward. So with that, we've built all of those nodes and all of that logic.
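Continuing from the sketch above, here is roughly what those node functions, the router, and the forced final-answer LLM could look like. The way the tool call is unpacked from the model's message varies between LangChain versions, so the parsing and the prompt wording below are my assumptions rather than the notebook's code:

```python
import json

# force the final answer structure: bind only the final_answer tool and
# require the model to call it
final_answer_llm = llm.bind_tools([final_answer_tool], tool_choice="final_answer")

def run_query_agent(state: AgentState):
    # run the OpenAI-tools agent; it returns a list of tool-call actions
    agent_out = query_agent_runnable.invoke(state)
    return {"agent_out": agent_out}

def execute_search(state: AgentState):
    # take the tool input chosen by the agent and run our (emulated) search
    action = state["agent_out"][-1]
    out = search_tool.invoke(action.tool_input)
    return {"intermediate_steps": [(action, str(out))]}

def rag_final_answer(state: AgentState):
    # combine the retrieved context and the original question, then force a
    # final_answer tool call so we get the {"answer", "source"} structure
    query = state["input"]
    context = state["intermediate_steps"][-1][1]
    out = final_answer_llm.invoke(f"CONTEXT: {context}\n\nQUESTION: {query}")
    args = out.additional_kwargs["tool_calls"][-1]["function"]["arguments"]
    return {"agent_out": json.loads(args)}

def handle_error(state: AgentState):
    # the agent answered conversationally instead of calling a tool; still
    # force the final_answer format on the raw user input
    out = final_answer_llm.invoke(state["input"])
    args = out.additional_kwargs["tool_calls"][-1]["function"]["arguments"]
    return {"agent_out": json.loads(args)}

def router(state: AgentState):
    # conditional-edge logic: return a label naming where to go next
    if isinstance(state["agent_out"], list) and state["agent_out"]:
        return state["agent_out"][-1].tool  # "search" or "final_answer"
    return "error"  # no tool call was made
```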
Now it's time to put it all together and construct the graph. We initialize the graph first; we do that with the StateGraph object, passing in the agent state we defined earlier. Then we add our nodes: the query agent, the search tool, the error handling node, and our RAG final answer node, which is basically a formatter: it takes the original user query and the output from the RAG tool, puts them together, and produces a final answer for us. We also define where in the graph we begin, which is with our query agent, so we set that as the entry point, and we run that.

Then we need to define our edges. To define an edge we add an edge (I think it's literally add_edge) and we say where we're coming from, for example the agent, and where we're going to, in this case the router. Because we defined the nodes in our graph using strings, we refer to most of them by those strings: the agent, for example, we defined as "query_agent", and that string is exactly what we pass in. The one exception is the end node: the end is a function, or rather an object, with no string value linked to it, so we just pass in the actual END object, which we'll see in a moment. So let's do that, and you can see some examples of what I was talking about: we add an edge between the search node and the RAG final answer node, and another between the RAG final answer node and the end node. I had some repetition here, so let's remove that. We import that END node and we add our edges; these are the single edges that will always be taken: if you go to the search node you will next go to the RAG final answer node, if you go to the error node you will go to the end node, and the same here.

The one other thing we have is the conditional edge, which is what I mentioned before: with a conditional edge you can go in different directions depending on a particular condition. The starting point for that is the query agent, which then goes to our router, and the router decides which direction we go. The router outputs a string, which will be either "search", "error", or "final_answer", and based on that output we either go to the search node, the error node, or straight to the end node. It's pretty simple. Once we've done all that, we can compile our graph, so we run that.

Then the library we installed earlier, pygraphviz, lets us visualize what we've built, and you can see it here. We have our entry point, which is just the query agent; we go to our router, which decides where we should go; if there's an error we go to the error node, which forces that structured output; we can also go to the final answer tool, or we can go to the search tool; if we go to the search tool we do our emulated RAG in that node, then we pass the state, which includes both the context and our original query, into the final answer LLM; and then we end. That's it, it's pretty simple.
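Assembled from that description, the graph construction would look something like the sketch below; the node names and the router-to-node mapping follow what was just described, and the draw_png call is what the earlier pygraphviz install is for:

```python
from langgraph.graph import StateGraph, END

graph = StateGraph(AgentState)

# nodes
graph.add_node("query_agent", run_query_agent)
graph.add_node("search", execute_search)
graph.add_node("error", handle_error)
graph.add_node("rag_final_answer", rag_final_answer)

# where the graph starts
graph.set_entry_point("query_agent")

# conditional edge: the router's string output picks the next node
graph.add_conditional_edges(
    "query_agent",
    router,
    {
        "search": "search",
        "error": "error",
        "final_answer": END,
    },
)

# plain edges that are always followed
graph.add_edge("search", "rag_final_answer")
graph.add_edge("error", END)
graph.add_edge("rag_final_answer", END)

runnable = graph.compile()

# visualize the compiled graph (requires pygraphviz), then run it
from IPython.display import Image
Image(runnable.get_graph().draw_png())

out = runnable.invoke({"input": "what is AI?", "intermediate_steps": []})
print(out["agent_out"])
```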
Now let's see how it runs. We can see the path it decided to take here. We asked "what is AI?", and our search tool is defined as something to use when someone is asking about AI. So we have our starting point, the query agent, which goes through to the router, as it always will, and our router decides that we should use the search tool. It's deciding that based on the output from our query agent: all it's doing is taking the query agent's output and deciding which direction to go. We then execute our search, then we go to our final answer LLM, and we finish. You can see what we get: we have the answer, some answer, and we also have our source.

Now, the source it's using here is not actually our EHI embeddings source, because while that paper is about embeddings, in the same way we use RAG in AI to do retrieval, it doesn't describe what AI is. So the LLM is not actually using our information, even though we've given it that information; instead it's using something it remembers, which is the Wikipedia page on artificial intelligence, and that link should actually work (they usually do). It always surprises me how much LLMs remember, like random blog sites or websites; obviously this one is a less impressive example, but they come up with some crazy links sometimes.

Cool, so we have that. Now let's try something else. I'm going to ask "what are EHI embeddings?", and we can test the citation ability of our LLM. Let's see what it gives us. You can see it's using the context we provided; this is basically just information from that context, from the emulated RAG pipeline, and then we have the source here, and that source will work as well: that's the paper. It's a relatively new paper, and we're using an older GPT-3.5 model, so I don't believe it's in the training data of the current turbo model, although they do change the models, so it might be; but in any case you can also use the older models and it will do the same thing. That's cool. I can ask more questions, like "tell me about these embeddings", and it will answer and give me a citation again. I suppose for the citation here I would have liked it to actually generate proper citations, so putting together the authors and whatever else, and you could prompt it to do so, but in this use case that would be a bit silly, because in reality you would probably just pull in the source from the document that was used, and since we're only returning one document that's pretty easy. Anyway, we have that, and it is kind of useful when the model is returning things from its own memory, because you can actually see what it was trained on in order to remember that information, which is kind of cool.

Then there's the case where it would usually break if we didn't have that error handler. You can see we're handling the error here: without it, the agent would just try to output a normal sentence, but here we see it's forced to use that answer/source format, which is pretty cool. I can literally beg this agent not to use the format and it still will, which is useful, depending on what you're building of course, but it's a nice little thing to have.

So that is LangGraph. This is a very basic example, and there is a lot more you can do with LangGraph; I've built far more complicated agents than what we've done here,
but at the same time you can also build these simple agents, and you can make them pretty custom, like we did with the RAG emulator, the final answer, and the error handling. We built all of that and it is not that complicated, and you can add many nodes and many different edges and build something quite sophisticated without too much difficulty.

One thing a lot of people say about LangChain, and I understand it to some degree, is that the code is very convoluted; there are a million ways to do one single thing. I'm not saying it is perfect here, but I feel that with LangGraph it is much more refined. As we've just shown, to build a graph you add the nodes and you add the edges, and yes, there are many different ways of building those nodes, but the logic is intuitive, easy to follow, and very extensible. I would use basically the same functions whether I'm building this very simple agent or some huge research agent with a million different sources of information; we'd be using roughly the same functions without too much difference, we would just be putting a lot more into them, and that is something I quite like so far about LangGraph. The other thing is the ability to really control what your agent is doing. With LangChain everything is kind of hidden behind abstractions, and there are still abstractions here, I won't lie, but they feel far more useful and far less frustrating than the LangChain abstractions, which I appreciate. And although this can be slightly complex to get started with, after a few hours I think it becomes quite intuitive, and that is something I just like about this library.

So for now I am building agents with LangGraph rather than core LangChain. Of course, there are still a lot of LangChain components in here, and I'm sure I will continue using them for a long time into the future, but this is the way I'm building the logic, or the paths, within agents, and I think it works pretty well. That is it for this introduction to LangGraph; as I mentioned, there will be more LangGraph videos where we build some more complicated stuff, but that is it for this introduction. I hope this has been a useful and interesting video. Thank you very much for watching, and I will see you again in the next one. Bye!