Today we're going to be taking a look at how we can use LangGraph to build more advanced agents. Specifically, we're going to be focusing on the LangGraph that is compatible with v2 of LangChain, so it's all the most recent methods and ways of doing things in the library, and we're going to be building a research agent, which gives us a bit more complexity in the agent graph that we're going to be building.
Now, when I say research agent, what I am referring to is essentially these multi-step AI agents, similar to, you know, a conversational ReAct agent, but rather than aiming to provide fast back-and-forth conversation, they aim to provide more in-depth, detailed responses to a user. So in most cases what we find is that people don't mind a research agent taking a little bit longer to respond if that means that it is going to reference multiple sources and provide all of this well-researched information back to us.
So we can afford to wait a few extra seconds, and because we have less time pressure with these research agents, that also means we can design them to work through multiple research steps where they reference these different sources, and they can also go through those different sources multiple times: you know, it can do a Google search, it can reference arXiv papers, and it can keep doing that, and so on and so on.
Now, at the same time, we might also want to make our research agent conversational, so we would still, in many cases, want to allow a user to chat with the research agent. So as part of the graph that we're going to build for the research agent, we also want, you know, just something simple where it can respond.
Okay, so let's start by taking a quick look at the graph that we're going to be building. So we have the start of the graph over here, which is where we start, and this goes down into what we call the Oracle down here. Now, the Oracle is like our decision maker: it's an LLM with a prompt and with access to each of these different nodes that you can see.
So the RAG search with filter, RAG search, fetch arXiv, web search, and final answer. So the Oracle can decide: okay, given the user's query, which comes in from up here, what should I do? So for example, if I say something like "hello, how are you?", hopefully what it will do is go over to the final answer over here, and then we're going to just provide an answer and end the execution of the graph.
But if I ask something that requires a little more detail, for example if I ask about a particular LLM, what we might want to do first is a web search to find out about the LLM. That will return information to the Oracle, and the Oracle will decide: okay, do I have enough information here, or is there anything in these results that is particularly interesting, like, for example, is there an arXiv paper mentioned in these Google results?
If so, it might just decide to give us an answer straight away, or it might decide, oh, okay, let's go and refer to our arXiv papers database, which we do have within the RAG components, so one of these two, and we can also fetch the arXiv paper directly, but that gives us just a summary.
So it could refer to any of these. Maybe it wants to have a look at the summary, so it has a look at the summary, sees that it's relevant, cool, and then what it does is refer to the RAG search with a filter. The filter that it has here allows it to filter for a specific paper, and that might return all of the information it needs, and at that point it would come over to our final answer and basically build us this little research report and submit everything to us.
So that's what we're building. There are a fair few steps here and, yeah, a fair few nodes that we need to build in order for all this to work, but before we do jump into actually building that graph, I want to talk a little bit more about graphs for agents and just LangGraph at a higher level.
So using a graph-based approach to building agents is relatively new, or at least I haven't seen it around for very long, and what we had before was just more like, okay, we have this way of executing agents or we have this way, and the most popular of those agent execution frameworks that we've seen is called ReAct.
Now, what ReAct does is encourage an LLM to break down its generation steps into these iterative reasoning and action steps. The reasoning step encourages the LLM to outline the steps it should take in order to fulfill some objective, and that is what you can see with the thought steps here, and then the action, or acting, steps are where the LLM will call a particular tool or function, as we can see with the act steps here.
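To make that thought/action/observation loop concrete, here's a minimal, framework-free sketch in plain Python. The `fake_llm` and `search` functions are scripted stand-ins for illustration only; a real implementation would call an actual LLM and real tools.

```python
# Minimal framework-free sketch of a ReAct-style loop.
# "fake_llm" and "search" are scripted stand-ins, not real components.

def search(query: str) -> str:
    """Toy tool: pretend to look something up."""
    return f"result for {query!r}"

def fake_llm(scratchpad: str) -> dict:
    # A real LLM reads the scratchpad (thoughts, actions, observations
    # so far) and decides the next step; here we script that decision.
    if "Observation:" not in scratchpad:
        return {"thought": "I should search first.",
                "action": "search", "input": "Apple Remote"}
    return {"thought": "I have enough information now.",
            "action": "final_answer", "input": "The Apple Remote ..."}

def react_loop(question: str, max_steps: int = 5) -> str:
    scratchpad = f"Question: {question}"
    for _ in range(max_steps):
        step = fake_llm(scratchpad)                      # reason
        scratchpad += f"\nThought: {step['thought']}"
        if step["action"] == "final_answer":
            return step["input"]
        observation = search(step["input"])              # act, observe
        scratchpad += (f"\nAction: {step['action']}[{step['input']}]"
                       f"\nObservation: {observation}")
    return "gave up"
```

The key point is that each observation is appended back into the scratchpad, so the next "reason" step sees everything that has happened so far.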
So when that tool is called, for example here we have this "search Apple Remote", we of course return some observation from that, which is what we're getting here, and that is fed back into the LLM, so it now has more information, and that is very typical. That sort of ReAct framework has been around for a fairly long time and has by and large been the most popular of agent types, and this agent type, in some form or another, found its way into most of the popular libraries: LangChain, LlamaIndex, Haystack, they all ended up with ReAct or ReAct-like agents.
Now, the way that most of those frameworks implemented ReAct is with this object-oriented approach, which is nice because it just kind of works very easily: you plug in a few parameters, like your system prompt and some tools, and it can just go. But it doesn't leave much in the way of flexibility, and it also makes it hard for us to understand what is actually going on. You know, we're not really doing anything; we're just defining an object in one of these frameworks, and then the rest is done for us, so we miss out on a lot of the logic that is going on behind the scenes, and it becomes very hard to adjust that for our own particular use cases. An interesting solution to that problem is, rather than building agents in this object-oriented way, to instead view agents as graphs. Even the ReAct agent can be represented as a graph, which is what we're doing here. So in this ReAct-like agent graph, we have our input from the user up at the top, and that goes into our LLM. Then we are asking the LLM to kind of reason and act in the same step here; it's ReAct-like, not necessarily ReAct. What it is doing is saying: okay, I have this input, and I have these tools available to me, which are tool A, tool B, or a final answer output; which of those should I use in order to produce an answer? So maybe it needs to use tool A, so it would go down, generate the action to use tool A, and we would return the observation back to the LLM. Then it can go ahead and maybe use another tool again; it could use tool B, get more information, and then it can say, okay, I'm done now, I will go through to my final answer and return the output to the user. So, you know, that's just a ReAct-like agent built as a graph, and when you really look at it, it's not all that different from the research agent that we're building. Again, it's similar in some ways to a ReAct agent, but we just have far more flexibility in what we're doing here. So, you know, if we really wanted, we
could add another step, or we could modify this graph. We could say: okay, after doing a web search you must always do a RAG search, and only after doing that can you come back to the Oracle. We could do things to modify that graph, make it far more specific to our use case, and we can also just see what is going on. So for that reason, I really like the graph-based approach to building agents; I think it is probably the way to go when you need something more custom, you need transparency, and you want that degree of control over what you're doing. So that brings us over to LangGraph. LangGraph is from LangChain, and the whole point of LangGraph is to allow you to build agents as graphs. As far as I'm aware, it's the most popular of the frameworks built for graph-based agents, and it allows us to get far more fine-grained control over what we are building. So let's just, at a very high level, have a quick look at a few different components that we would find in LangGraph. Now, to understand LangGraph, and of course to build our agent as well, I'm going to be going through this notebook here; you can find a link to this notebook in either the video description or I will leave a link in the comments below as well. So getting started, we will just need to install a few prerequisites, so I'm going to go ahead and do that. We have this install of graphviz and libgraphviz; basically, all of the things that we need here are just so we can visualize the graph that we're going to be building. You don't need to install these if you're just building stuff with LangGraph; it's purely if you want to see the graph that you're building, which I think is, to be fair, very useful when you are developing something, just so you can understand what you've actually built, because sometimes it's not that clear from the code, so visualizing things helps a ton. Now, there are quite a few Python libraries that we're going to be using here, because we're doing quite
a lot of stuff, so we're going to go ahead and install those. We have Hugging Face datasets, obviously for the data that we're going to be putting into the RAG components; we have Pinecone for the RAG components; OpenAI for the LLM, of course; LangChain in general, and note that we are using v2 of LangChain here; we have LangGraph; Semantic Router for the encoders; SerpAPI for the Google search API, same here; and then pygraphviz is for visualizing the agent graph. Now, before we start building the different components, I will, just at a very high level again, go through what each of these components is going to do. So we have the arXiv paper fetch component here: what this is going to do is, given an arXiv paper ID, return the abstract of that paper to the LLM. Web search over here is going to provide our LLM with access to Google search for more general-purpose queries. We have the RAG search component here: we're going to be constructing a knowledge base containing AI arXiv papers, and this tool is the access route for our LLM to this information. Also very similar is the RAG search with filter component, and this is exactly the same, it is accessing the same knowledge base, but it also adds a filter parameter, so that, for example, if in the web search the agent finds an arXiv ID that it would like to filter for, it can do so with this tool. Then finally, we also have the final answer component over here. This is basically a custom format for our final answer, and that custom format looks like this: it's going to output an introduction, research steps, report, conclusion, and any sources that the agent used, and it will output all this in a JSON format, which we then simply reformat into this sort of plain-text format after the fact. So we're going to set up the knowledge base first. To do that, we're going to use Pinecone and also this AI arXiv2 chunks dataset, which is basically a pre-built, pre-chunked dataset of AI arXiv papers. It'll take a moment to
download, and what you're going to see in there is stuff like this. So this is, I think, the authors and the abstract of the Mixture of Experts paper. This has all been constructed using semantic chunking, so the chunks that you'll see vary in size, but for the most part they should be relatively concise in what they are talking about, which is ideally what we want with chunks. So the first thing we'll need to do is build a knowledge base, and for that we need the embedding model. We're going to be using OpenAI's text-embedding-3-small, so you will need an OpenAI API key, which you can get from here, and we enter it; then we also enter a Pinecone API key as well. Okay, so we've entered that. Now we are going to be connecting to, I think, us-east-1, which is the free region, so we should use that, and what we're going to do is just create a new index. To create an index, we do need the dimensionality of our encoder, so we get that, 1536, and then we create this index with that dimensionality and an index name, which you can make whatever you like. The metric should be cosine or dot product; I think you might also be able to use Euclidean with the embedding-3 models. Then we're just specifying that we'd like to use serverless here. So we run all of that. I already have this index created, as you can see here; the vectors are already in there, 10,000 of them, and, yeah, that's already great. So the 10,000 I have here comes from here: you can index the full dataset, but I think it's 200,000 records, which will just take a little bit of time; again, it's up to you, but there's the time and also the cost of embedding, so it's up to you, and 10,000 is fine for this example. So you just need to run this; I'm not going to rerun it, because I already have my records in there, and with that our knowledge base is ready and we can just move on to all of the graph stuff. So the first thing is the graph state. Now, the core of a
graph in LangGraph is the agent state. The state is a mutable object that we use to track the current state of the agent execution as we pass through the graph. We can include different parameters within this agent state, but in our example I want to be very minimal and just include what we really need. So in here we have the input, which is the actual input from the user; we have the chat history, because, you know, we do want this to be more of a conversational research agent, so we have the chat history in there; and intermediate steps, which is where I'm going to be tracking what is going on within the graph. So we have all of that. Probably the main confusing thing here would be this operator.add and how we are constructing intermediate steps. Essentially, this operator.add tells LangGraph that when we are passing things back to the intermediate steps parameter of our state, we don't replace intermediate steps with the new information, we actually add that information to the list. That's probably the main slightly different thing to be aware of there. Now we're going to go ahead and start building our custom tools. As I mentioned before, we have those different components, and the first of those is relatively straightforward, so we'll build that first: the arXiv paper fetch. The fetch arXiv tool is where, given an arXiv ID, we're going to return the abstract for that paper. All we do is import requests, and we're going to test it with the Mixture of Experts paper here, and all we really need is this: we're just sending a GET request to the arXiv export site, and we pass in the arXiv ID, and when we do that, we should get something like this. Now, this isn't everything; let me expand this. It's relatively big, you see, there's quite a lot going on there. So what we need to do, within this mess, is extract what we actually want, which is the abstract, which is
somewhere in here, I'm not entirely sure where, but we can extract it with this regex here. So we use that, and there we go, that's our abstract. So it's a relatively straightforward tool: it just takes in the arXiv ID, and we get back the abstract for a paper. Now, the way that LangGraph works is that it expects all this to be built into a tool, which is essentially a function like this, which consumes a particular value, the arXiv ID in this case, and is decorated with the tool decorator from LangChain. We specify the name of our tool here; I'm keeping things simple and using the same name for the tool as for the function. So I'm going to run that, and with that our tool is ready and we can test it. To run any tool, we call invoke, then we pass input, and to input we pass a dictionary which must align with the parameters that we've set for that particular function. So we do that. The next component that we want to be using is web search. For web search we're going to be using SerpAPI, and for that, again, we do actually need an API key. To get that API key, I don't remember exactly where it is, so I'll give you the link: you'll need to create an account and go to serpapi.com/manage-api-key; then you'll get an API key, and you just enter it here. So basically we do a search like this: we have a query, we're querying for coffee here, and we'll get all of our results in a dictionary. Now, what we do want to do here is restructure this into a string that is a bit more readable, so that's what I'm doing here, and let's see what we get. Using that, we're going to get something like this. Cool, so we have that, and again we're going to put all of it into a tool. One thing that I actually didn't mention before is that we also use a docstring in our tool to provide a description to our LLM of, you know, when to use the tool, how to use the tool, and so on. So, same thing, we initialize that.
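Backing up to the fetch-arXiv step for a second: the regex extraction can be sketched offline like this. The HTML snippet and the pattern here are illustrative stand-ins (assumptions on my part), not the exact page or regex from the notebook.

```python
import re

# Illustrative stand-in for the (much larger) HTML that the arXiv
# export page returns; the regex below is one plausible pattern,
# not necessarily the exact one used in the notebook.
SAMPLE_HTML = """
<html><body>
<blockquote class="abstract mathjax">
<span class="descriptor">Abstract:</span>
Mixture-of-experts models scale language models by routing tokens
to specialised sub-networks.
</blockquote>
</body></html>
"""

ABSTRACT_PATTERN = re.compile(
    r'<blockquote class="abstract mathjax">\s*'
    r'<span class="descriptor">Abstract:</span>\s*(.*?)\s*</blockquote>',
    re.DOTALL,
)

def extract_abstract(html: str) -> str:
    # Pull out just the abstract text; return "" if nothing matches.
    match = ABSTRACT_PATTERN.search(html)
    return match.group(1).strip() if match else ""
```

In the notebook, the same idea is wrapped in a function that first fetches the page with requests and then applies the regex to the response text.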
For the RAG tools, we have two of them: the plain RAG search, and the RAG search with a filter. So let's go through those; we'll create the tools first. To create the tool, I have a query, and this is the filter one, so we have the query, which is a natural-language string, and we also have the arXiv ID that we'd like to filter for. All we do is encode the query to create a query vector, we query, and we include our filter, so we have that here, and then from that we use this format_rag_contexts function to format the responses into, again, another string. So you see that here: title, content, arXiv ID, and any related papers. We also have the same with rag_search, and it does the exact same thing, it just doesn't have the arXiv ID filter, and it also returns a smaller top_k here. I'm not actually sure why I did that, but I'm going to keep it; you can adjust those if you'd like to return more. Then finally we have the final answer, which, the way that I've done it here, is basically another tool, and the reason I've set it up as a tool is because we have this particular structure that I want our LLM to output everything in. We're also using the docstring here to describe a little of what we want for all of this. So we construct that, and actually this bit here isn't even used; you can just remove it, it doesn't matter. I'm returning nothing here, and the reason I'm returning nothing is because we're actually going to return what the LLM generates to use this tool out of our graph, and then we have another function, which you'll see in a moment, that will take that structure and reformat it as we would like it. We could also do that restructuring in here and just return a string, but I've left it outside the graph for now. Okay, so that is all of our components, which we have now covered.
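The final-answer structure and the after-the-fact reformatting can be sketched like this. The field names follow the format described above (introduction, research steps, report, conclusion, sources), but the exact keys and layout in the notebook may differ, and the example content is made up.

```python
# Sketch of the final-answer dict and a build_report-style formatter
# that turns the JSON structure into readable plain text.

def build_report(output: dict) -> str:
    research_steps = "\n".join(f"- {s}" for s in output["research_steps"])
    sources = "\n".join(f"- {s}" for s in output["sources"])
    return (
        f"INTRODUCTION\n------------\n{output['introduction']}\n\n"
        f"RESEARCH STEPS\n--------------\n{research_steps}\n\n"
        f"REPORT\n------\n{output['report']}\n\n"
        f"CONCLUSION\n----------\n{output['conclusion']}\n\n"
        f"SOURCES\n-------\n{sources}"
    )

# Made-up example of what the LLM might emit through the tool call.
example = {
    "introduction": "RAG combines retrieval with generation.",
    "research_steps": ["Queried the arXiv knowledge base",
                       "Ran a web search for recent coverage"],
    "report": "Retrieval-augmented generation has two components ...",
    "conclusion": "RAG grounds LLM answers in external sources.",
    "sources": ["arXiv:2005.11401", "https://example.com/rag-intro"],
}
```

Because the LLM is forced to call this "tool", its arguments arrive as structured JSON, and the formatter above is just plain string work applied after the graph has finished.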
Basically, all of these are done, so next let's take a look at the Oracle, which is the main decision maker in our graph. The Oracle is built from a few main components: we have, obviously, the LLM; the prompt, which you can see here; and, of course, binding that LLM to all of our tools and putting it all together. So there are a few parts we'll go through quickly here. The first is the prompt. We are using LangChain's prompting system here, so using the ChatPromptTemplate, and we also include this MessagesPlaceholder, because that's where we're going to put in our chat history, and we're also going to add in a few other variables as well. So we have the system prompt and the chat history, followed by our most recent user input, which comes in here, and then we follow that up with the scratchpad. Now, the scratchpad is essentially where we're going to be putting all the intermediate steps for our agent. So you can imagine: you have your system prompt, you have the chat history, you have the most recent interaction from the user, and then following that we're going to be adding assistant messages saying, okay, I'm going to go do this search, I've got this information, so on and so on. So that is our prompt. Then we want to set up our LLM. For the LLM we're just using GPT-4o here; there's nothing special going on there, but the one thing that is important, if I come down to it (I need to import something quickly, so we run that), is that we have this tool_choice option here. What tool_choice="any" is doing is essentially telling our LLM that it has to use a tool, because otherwise the LLM may use a tool, it may use one of the components, or it may just generate some text, and we don't want it generating text, we want it to always be using a tool. The reason for that is that even if it does just want to respond, it has to go through the final answer tool to respond. So we're just forcing it to
always use a tool; that is what our tool_choice="any" does here. We can go through the rest of this quickly as well. So we have the LLM and the tools, and the scratchpad here is basically looking at all of the intermediate steps that we've been taking and reformatting them into a string: we see the name of the tool that was used, the input to that tool, and the result, i.e. the output, from that tool, and we put all of that together into a string which goes into that agent scratchpad section, this bit here. Then here we're using the LangChain Expression Language to basically put everything together. So our Oracle consists of these input parameters: we have input, chat_history, and scratchpad, and these are all fed into our prompt. If I come to the prompt here, you can see that we have chat_history, input, and scratchpad, the exact same parameters that we're using here, so they need to align. So, yep, they populate our prompt, and then the prompt is passed over to our LLM, which also has access to the tools that we have defined here. And that's it; that is our Oracle. Now we can test it very quickly just to confirm that it works. So we run this, and we basically get a ton of output here, and this is the output that we're getting from our model. The AI message is just empty, because what we really want to see here is what the Oracle is deciding to use, so we can go here, and you can see the name is rag_search, so it's deciding to use the RAG search tool. Now, we haven't designed this to give us facts about dogs, so it's not perfect usage, but anyway, we have the rag_search tool and the query: the input query is "interesting facts about dogs", and it's going to go and search for that. Okay, there we go. You can keep rerunning that and seeing what it comes out with; it will probably vary every now and again, because there is, of course, some degree of randomness in there. Now, our agent, the Oracle, is going to output a decision to use a tool.
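That scratchpad construction can be sketched framework-free. The AgentAction container here is a stand-in for LangChain's class of the same name, and the "TBD" placeholder for not-yet-run steps is an assumption about the bookkeeping, not necessarily the notebook's exact convention.

```python
from dataclasses import dataclass

# Stand-in for LangChain's AgentAction: which tool ran, with what
# input, and what came back (the "log", i.e. the observation).
@dataclass
class AgentAction:
    tool: str
    tool_input: dict
    log: str

def create_scratchpad(intermediate_steps: list) -> str:
    # Flatten the history of executed steps into one string that gets
    # slotted into the agent_scratchpad section of the prompt.
    lines = []
    for action in intermediate_steps:
        if action.log != "TBD":       # "TBD" = decided but not yet run
            lines.append(f"Tool: {action.tool}, "
                         f"input: {action.tool_input}\n"
                         f"Output: {action.log}")
    return "\n---\n".join(lines)
```

On each Oracle call, this string gives the LLM the full tool history, which is what lets it decide whether it has gathered enough information to move to the final answer.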
So when the Oracle outputs a decision to use a tool, we want to look at what came out and say: okay, it wants to use the RAG search tool, let's go and execute the RAG search tool; it wants to use the web search tool, let's go and execute that. That is what our router will be doing, and that is what we can see down here. The router is literally consuming the state; it's looking at the most recent intermediate step, which will have been output by our run_oracle function here, and you can see that run_oracle returns the intermediate steps, which is the action output, and we're just going to return the name of the tool that we got from the Oracle. So we run that; we don't need these extra cells. With that, the only remaining thing that we need to turn into a function, which we will add to our graph in a moment, is the run_tool function. Now, for the run_tool function, we could split all of our tools into multiple functions, kind of like how we have our own run_oracle function for the Oracle, but all of these tools can be executed using the same bit of code, so there's no real reason to do that. Instead, I just define a single function that will handle all of those tools. Again, that's taking the state and looking at the most recent intermediate step; we look at the tool name and the tool input, which has been generated by our Oracle. I'm going to print this out so that we can see what is actually happening when this is running, and then here, tool_str_to_func is just this dictionary, which basically takes the tool name and maps it to a function, and then we invoke it with the input that we have from here. So we run all of that, then we create this AgentAction object, which is basically just a way of describing what happened: what tool we used, what the inputs to that tool were, and the observation, i.e. the log, that we got back from it.
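Here's a framework-free sketch of the router plus the shared run_tool dispatcher. The toy rag_search and final_answer functions are placeholders of my own, standing in for the real components built earlier.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    tool: str
    tool_input: dict
    log: str

# Toy placeholder tools; the real ones are the components built above.
def rag_search(query: str) -> str:
    return f"chunks matching {query!r}"

def final_answer(**kwargs) -> str:
    return ""

tool_str_to_func = {"rag_search": rag_search, "final_answer": final_answer}

def router(state: dict) -> str:
    # The Oracle's decision is the most recent intermediate step.
    steps = state["intermediate_steps"]
    if steps and isinstance(steps[-1], AgentAction):
        return steps[-1].tool
    return "final_answer"             # fall back to ending the run

def run_tool(state: dict) -> dict:
    # One function handles every tool: look up the function by name,
    # invoke it with the Oracle-generated input, record the result.
    action = state["intermediate_steps"][-1]
    out = tool_str_to_func[action.tool](**action.tool_input)
    done = AgentAction(action.tool, action.tool_input, str(out))
    # In LangGraph this return value gets *added* to the existing
    # list (operator.add) rather than replacing it.
    return {"intermediate_steps": [done]}
```

The fallback branch in the router is just defensive: if the last step somehow isn't a tool decision, we route to the final answer rather than crash.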
Then, after that is done, we pass all of that information back to our intermediate steps. So we run that, and now we can define our graph. We have all the components ready, so now it's time to define the graph. For the graph, we already have the agent state that we've defined, and we actually need that agent state to initialize our graph, so we use a StateGraph object. Once we have our initialized graph, that graph is empty; it doesn't know about all the stuff we've just done or the components we've defined, so we need to tell it which components we have defined and start adding all of those. We do that using the graph's add_node method, and what we do is take our string and map it to the function that runs that tool, or that component. So for "oracle" we would be hitting the run_oracle function, and for these ones here, rag_search_filter, rag_search, and so on, as I mentioned before, they can all be executed using the exact same bit of code, so that's exactly what we do here: we just pass in the run_tool function. So we do that, and we define all of our nodes there. Then the next step is to define which one of those nodes is our entry point, you know, where does the graph start? That is, of course, our Oracle; we saw the graph earlier, and we always start at the start node, where our query comes in, and that goes directly to the Oracle, so in reality the Oracle is our entry point. Then, following our Oracle, we don't just have one direction to go, we have many different directions, and the way that we set that up is with our router and what are called conditional edges, which are basically these dotted lines here. So we add those conditional edges; the source, where we're starting from, is our Oracle, and the thing that decides which path we should take is our router. So our Oracle
outputs the tool name, and our router basically parses that tool name and then directs us in whichever direction we should go. Then we need to add edges from our tools back to the Oracle. So if you look here, we have these full lines; that's basically saying, okay, if we are at one of these tools, we need to go back to the Oracle, it can't go anywhere else. So it's not a conditional edge, it isn't dotted, it is a normal edge, i.e. when you're at this component, this is where you're going next. All of these components, apart from final answer, go back to the Oracle, and that is what we have defined here: we say, for each tool object in tools, if it is not the final answer, we add an edge from the tool object's name back to the Oracle. Then finally we have the final edge, which goes from our final answer to the end node, which is exactly what you can see from here all the way over to our end node there. And that is the definition of our graph. We can just confirm that it looks about right, which we do here. Okay, actually the Oracle has a conditional edge over to the end, and I'm not sure why that is, but for the most part this is what we're looking for, so we're going to stick with that. Everything is compiled, we can see the graph looks about right, and we can just go ahead and try it out now. So let's see what that looks like. We'll go with the first question, again not on topic for our AI researcher, but we'll just try it: "tell me something interesting about dogs". Let's run that, and we can see all the steps as they are processed, because I have put a load of print statements throughout the code. So we can see that it's hitting the Oracle, it's going to rag_search, and it's invoking the query "interesting facts about dogs". It's probably not finding much, so then, well, actually, sorry, it goes back to run_oracle, and then it goes to web search.
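As an aside before the demo continues: the wiring just defined, a conditional edge out of the Oracle, fixed edges from each tool back to the Oracle, and final answer leading to an END node, can be sketched framework-free with a toy run loop. The scripted oracle function here is a stand-in for the LLM call, and the node and edge tables are my own simplification of what StateGraph manages for us.

```python
# Framework-free sketch of the graph wiring and execution loop.
END = "__end__"

def oracle(state):
    # Scripted decision: search once, then answer.
    # A real Oracle is an LLM call over the prompt + scratchpad.
    if any(s[0] == "web_search" for s in state["intermediate_steps"]):
        return {"intermediate_steps": [("final_answer", "done")]}
    return {"intermediate_steps": [("web_search", "pending")]}

def web_search(state):
    return {"intermediate_steps": [("web_search", "search results")]}

def final_answer(state):
    return {"intermediate_steps": [("final_answer", "report text")]}

nodes = {"oracle": oracle, "web_search": web_search,
         "final_answer": final_answer}
# Fixed edges: every tool goes straight back to the Oracle,
# except final_answer, which goes to END.
edges = {"web_search": "oracle", "final_answer": END}

def router(state):
    # Conditional edge: follow whichever tool the Oracle last chose.
    return state["intermediate_steps"][-1][0]

def run_graph(state, entry="oracle", max_steps=10):
    current = entry
    for _ in range(max_steps):
        update = nodes[current](state)
        state["intermediate_steps"] += update["intermediate_steps"]
        # Conditional edge out of the Oracle, fixed edges elsewhere.
        current = router(state) if current == "oracle" else edges[current]
        if current == END:
            break
    return state
```

A run starting from an empty state visits oracle, web_search, oracle again, and then final_answer before hitting END, which mirrors the execution trace printed by the real graph.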
The web search performs similarly, and at this point it probably has some interesting information, so we go back to the Oracle, and the Oracle says, okay, we have the information, let's go and invoke the final answer. Then in the final answer, as you can see, we have this introduction, and we'll go through, so on and so on, we have the research steps, and basically we have that full format that I mentioned before. With that full format, I'm going to define a function that will build the report and format it into a nicer, more easy-to-read layout. That is within this build_report function: it consumes the output from our graph and restructures everything into this here. So let's see what it came up with for the first question. Okay, we can see there's quite a bit in there, given the question as well. So, introduction: dogs are fascinating creatures that have been companions of humans for thousands of years, so on and so on; you know, it's a real introduction. Then the research steps that were performed: it actually says it went to arXiv for academic papers or research related to interesting facts about dogs, then it performed a web search to gather general knowledge and fun facts about dogs from various reputable sources. Then it gives you the little report here, which I think looks pretty good, and then we have a little conclusion, and finally we have our sources. From the sources we can actually see that it didn't really rely so much on the arXiv papers, which is not surprising given that they are AI arXiv papers, but we can see that most of these, well, I assume all of these, are actually coming from the web search tool. So, yeah, that is our first little research report. Let's try something a little more on topic, although still quite general and broad: we're going to ask it to tell us about AI. So let's run this. It goes with RAG search first, the query is just "AI", then we go back to the Oracle, then we have web search, then back to the Oracle, and again we have RAG search with a
filter, so, you know, it's going for it here. It's looking at this paper, which I assume has come from the references of the previous step, probably from the web search step, or maybe even the RAG search step as well. So, yep, it's going for this arXiv paper, then we go for another arXiv paper here, and finally we have our final answer. So there's a lot of information coming in from here; let's see what we get. All right, nice, so we have a nice little introduction, and the three steps it performed: a specialist search on AI using the arXiv database, a web search to gather general information, and then filtering for specific arXiv papers to extract detailed insights on AI-related topics. Cool. Now we have the report; I'm not going to go through all of it, but just at a high level it looks kind of relevant, and, nice, it has some recent stuff in here: GPT-4, ChatGPT, and Phi-3 are in there, so it's getting some relatively recent information. Then we have the sources, and this is probably the most interesting bit to me. So what have we got here? "An In-depth Survey of Large Language Model-based AI Agents" seems pretty relevant, and "Cognitive Architectures for Language Agents", another interesting-sounding paper; we have the Wikipedia page for AI, and Google Cloud's "What is AI?". So yeah, I think some good sources there. Let's try one more, and we'll get a little more specific on this one: I want to ask, what is RAG? Let's see what we get. So we run the Oracle, it goes to rag_search asking about RAG, goes back to the Oracle, then we have web search, again asking about RAG, goes back to the Oracle again, and from there it's like, okay, final answer; it seems to have had enough with that. Let's see what we get. Nice, okay, so RAG is an advanced technique in the field of AI, integrating external knowledge sources. Cool. Research steps: a specialized search using the rag_search tool, then a web search to obtain general knowledge and
additional perspectives on RAG from various sources, then it compiled and synthesized the information to provide a comprehensive understanding of RAG. We have the little report here, which looks reasonable: two main components, retriever and generator, nice, and, yes, addressing the limitations of traditional LLMs, so on and so on. Then we have all of these sources here: we have ones from AutoGen, SimplyRetrieve, AWS, NVIDIA, Google Cloud, and IBM. So the AutoGen one here, oh, and both of these, sorry, both of these seem to be the arXiv papers that it found, and then the rest of these are, I assume, from the web search. So it looks pretty good. You can already see that, well, LangGraph was pretty nice in allowing us to build a relatively complex agent structure, and it was, at least in my opinion, far more transparent in what you're building: we know where the prompts are, we know what tools we're using, and we can modify the order and the structure of when and where we are using those tools, which is nice to have and something that is kind of hidden away when you're using the more typical, or the past, approach of ReAct agents as objects or classes in Python. And, yeah, we've seen how that all goes together quite nicely to build a research agent, which, given we didn't really work much on tweaking it, worked pretty well: it was going to different sources, it was doing the web search and the RAG search, and then going through and filtering for specific papers to get more information, and that was with just a few quick tries. So it's hard to say how it would perform with, you know, a broader set of questions, but just from those quick experiments, it seems pretty cool in my opinion. Now, I think this approach of pairing graphs and agents is just nice for all the reasons I mentioned; it works well, and it's not just LangGraph that does it. I mean, LangGraph is probably the best-known library, but I'm also
aware that Haystack have something in there, at least, and I believe LlamaIndex either have something or might be putting something together, I'm not sure, but I've heard something about LlamaIndex in that space as well. So those are probably things I'll look into in the future, but for now I'm going to leave it with LangGraph and this research agent. So, yeah, I hope all this has been useful and interesting. Thank you very much for watching, and I will see you again in the next one. Bye.