
LangChain Agent Executor Deep Dive


Chapters

0:00 LangChain v0.3 Agent Executor
9:26 Creating an Agent with LCEL
13:53 Executing Tool Calls
16:58 Agentic Final Answers
25:58 Building a Custom Agent Executor
32:47 Executing Multiple Tool Calls

Whisper Transcript

00:00:00.000 | in this chapter we're going to be taking a deep dive into agents with LangChain and we're
00:00:06.880 | going to be covering what an agent is we're going to talk a little bit conceptually about agents the
00:00:15.200 | ReAct agent and the type of agent that we're going to be building and based on that knowledge we are
00:00:20.560 | actually going to build out our own agent execution logic which we refer to as the agent executor so
00:00:29.920 | in comparison to the previous video on agents in LangChain which is more of an introduction
00:00:36.160 | this is far more detailed we'll be getting into the weeds a lot more with both what agents are and
00:00:43.680 | also agents within LangChain now when we talk about agents a significant part of the agent is actually
00:00:53.680 | relatively simple code logic that iteratively runs llm calls and processes their outputs potentially
00:01:04.400 | running or executing tools the exact logic for each approach to building an agent will actually vary
00:01:12.720 | pretty significantly but we'll focus on one of those which is the ReAct agent now ReAct is a very
00:01:22.160 | common pattern and although relatively old now most of the tool agents that we see used by OpenAI
00:01:30.080 | and essentially every llm company all use a very similar pattern now the ReAct agent follows a pattern
00:01:38.480 | like this okay so we would have our user input up here okay so our input here is a question all right
00:01:46.800 | aside from the Apple Remote what other device can control the program Apple Remote was originally
00:01:51.600 | designed to interact with now probably most llms would actually be able to answer this directly now
00:01:56.800 | this is from the paper which was a few years back now in this scenario assuming our llm didn't already know
00:02:04.560 | the answer there are multiple steps that an llm or an agent might take in order to find out the answer
00:02:11.520 | okay so the first of those is we say our question here is what other device can control the program
00:02:18.400 | Apple Remote was originally designed to interact with so the first thing is okay what was the program that
00:02:24.320 | the Apple Remote was originally designed to interact with that's the first question we have here so
00:02:30.080 | what we do is i need to search Apple Remote and find the program it was originally designed to interact with this is a reasoning
00:02:35.840 | step so the llm is reasoning about what it needs to do i need to search for that and find the program
00:02:42.000 | so we are taking an action this is a tool call here okay so we're going to use the search
00:02:48.720 | tool and our query will be Apple Remote and the observation is the response we get from executing
00:02:54.320 | that tool okay so the response here would be the Apple Remote is designed to control the Front Row media
00:03:00.240 | center so now we know the program the Apple Remote was originally designed to interact with
00:03:04.880 | now we're going to go through another iteration okay so this is one iteration
00:03:12.320 | of our reasoning action and observation so when we're talking about ReAct here although again
00:03:19.920 | this sort of pattern is very common across many agents when we're talking about ReAct the name actually
00:03:26.800 | is reasoning so the first two characters 're' come from reasoning followed by action okay so that's where the ReAct
00:03:35.520 | comes from so this is one of our ReAct agent loop's iterations we're going to go and do another one
00:03:42.000 | so next step we have this information the llm is now provided with this information now we want to do a
00:03:48.720 | search for Front Row okay so we do that this is the reasoning step we perform the action search Front Row
00:03:56.720 | okay tool search query Front Row observation this is the response Front Row is controlled by an Apple Remote
00:04:04.240 | or keyboard function keys all right cool so we know keyboard function keys are the other device that we
00:04:12.560 | were asking about up here so now we have all the information we need we can provide an answer to our
00:04:20.240 | user so we go through another iteration here reasoning and action our reasoning is i can now provide the answer
00:04:29.200 | of keyboard function keys to the user okay great so then we use the answer tool
00:04:36.080 | it's like final answer in more common tool agent use and the answer would be keyboard function keys which we then
00:04:46.560 | output to our user okay so that is the ReAct loop okay so looking at this
00:04:56.800 | how and where are we actually calling an llm and in what way are we actually calling an llm so
00:05:03.440 | we have our reasoning step our llm is generating the text here right so the llm is generating okay what should i do
00:05:12.240 | then our llm is going to generate input parameters to our action step here
00:05:20.640 | then those input parameters and the tool being used will be taken by our code logic our agent executor
00:05:27.760 | logic and they will be used to execute some code in which we will get an output that output might be
00:05:34.560 | taken directly to our observation or our llm might take that output and then generate an observation based on
00:05:41.040 | that it depends on how you've implemented everything so our llm could potentially be being used at
00:05:50.480 | every single step there and of course that will repeat through every iteration so we have further
00:05:58.080 | iterations down here so you're potentially using the llm multiple times throughout this whole process which of
00:06:04.560 | course in terms of latency and token costs does mean that you're going to be paying more for an agent
00:06:11.040 | than you are with just a standard llm but that is of course expected because you have all of these
00:06:16.720 | different things going on but the idea is that what you can get out of an agent is of course much
00:06:23.040 | better than what you can get out of an llm alone so when we're looking at all of this
00:06:28.640 | all of this iterative
00:06:30.720 | chain of thought and tool use
00:06:33.360 | all this needs to be controlled by what we call the agent executor
00:06:42.640 | okay which is our code logic which is hitting our llm processing its outputs
00:06:42.640 | and repeating that process until we get to our answer so
00:06:46.240 | breaking that part down what does it actually look like
00:06:50.400 | it looks kind of like this so we have our user input which goes into our llm
00:06:54.880 | okay and then we move on to the reasoning and action steps
00:06:59.680 | then we check is the action the answer if it is the answer so as we saw
00:07:05.120 | here where the action is the answer so true we would just go straight to our outputs
00:07:13.440 | otherwise we're going to use our select tool agent executor is going to handle all this it's going to
00:07:19.280 | execute our tool and then from that we get our you know reasoning action observation inputs and
00:07:26.560 | outputs and then we're feeding all that information back into our llm okay in which case we go back
00:07:33.280 | through that loop so we could be looping for a little while until we get to that final output
00:07:38.240 | okay so let's go across to the code when we're going to the agent executor notebook we'll open that up in
00:07:45.600 | colab and we'll go ahead and just install our prerequisites
00:07:49.680 | nothing different here just LangChain and LangSmith optionally as before
00:07:56.240 | again optionally the LangChain API key if you do want to use LangSmith okay and then we'll come down to our
00:08:04.880 | first section where it's going to define a few quick tools i'm not necessarily going to go through
00:08:11.440 | these because we've already covered them in the agent introduction but very quickly
00:08:17.360 | from LangChain core tools we're just importing this tool decorator which
00:08:21.280 | transforms each of our functions here into what we would call a structured tool object this thing here
00:08:29.680 | okay which we can see i'm just having a quick look here and then if we wanted to we can extract all of the sort of key information from that structured tool using these parameters here
00:08:41.280 | or attributes so name description
00:08:46.080 | which gives us essentially how the llm
00:08:49.040 | should use our function okay so i'm going to keep pushing through that
00:08:55.840 | now very quickly again we did cover this in the intro video so i don't want to
00:09:02.880 | necessarily go over again in too much detail but our agent executor logic is going to need this part
00:09:09.920 | so we're going to be getting a string from our llm we're going to be loading that into
00:09:15.520 | a dictionary object and we're going to be using that to actually execute our tool as we do here using keyword arguments
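As a quick reference, a minimal sketch of that pattern, assuming an `add` tool like the one from the intro chapter (the notebook's tool bodies may differ):

```python
import json
from langchain_core.tools import tool

@tool
def add(x: float, y: float) -> float:
    """Add 'x' and 'y' together."""
    return x + y

# the decorator turns the function into a StructuredTool with an LLM-readable schema
print(add.name)         # "add"
print(add.description)  # "Add 'x' and 'y' together."

# the LLM generates the arguments as a JSON string, so we parse them then execute
args = json.loads('{"x": 10, "y": 10}')
print(add.func(**args))  # 20.0
```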
00:09:24.480 | okay so with the tools out of the way let's take a look at how we create our agent so
00:09:31.760 | when i say agent here i'm specifically talking about the part that is generating our reasoning step
00:09:39.920 | then generating which tool and what the input parameters to that tool will be
00:09:47.200 | then the rest of that is not actually covered by the agent okay the rest of that would be covered by
00:09:52.000 | agent execution logic which would be taking the tool to be used the parameters executing the tool
00:09:59.040 | getting the response aka the observation and then iterating through that until the llm is satisfied and
00:10:05.840 | we have enough information to answer a question so looking at that our agent will look something like this
00:10:13.440 | it's pretty simple so we have our input parameters including the chat history and the user query
00:10:19.040 | and actually we would also have any intermediate
00:10:24.800 | steps that have happened in here as well we have our prompt template and then we have our llm
00:10:30.320 | bound with tools so let's see how all this would look starting with defining our prompt template
00:10:38.160 | something like this we have our system message you're a helpful assistant when answering the user's
00:10:44.320 | question you should use one of the tools provided after using a tool the tool output will be provided in the scratch
00:10:48.640 | pad below okay which we're naming here if you have an answer in the scratchpad you should not use any more
00:10:55.280 | tools and instead send the answer directly to the user okay so we have that as our system message we could obviously
00:11:01.760 | modify that based on what we're actually doing then following our system message we're going to have our
00:11:07.920 | chat history so any previous interactions between user and the ai then we have our current message
00:11:14.720 | from the user okay which should be fed into the input field there and then following this we have our
00:11:20.720 | agent scratchpad or the intermediate thoughts so this is where things like the llm deciding okay this
00:11:27.760 | is what i need to do this is how i'm going to do it aka the tool call and this is the observation that's
00:11:34.000 | where all of that information will be going all right so each of those you want to pass in as a message
00:11:40.160 | okay and the way that will look is that any
00:11:42.960 | tool call generation from the llm so when the llm is saying use this tool please
00:11:48.880 | that will be an assistant message and then the responses from our tool so the observations
00:11:57.200 | they will be returned as tool messages great so we'll run that to define our prompt template
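Roughly, the prompt just described can be assembled like this; the system prompt wording here is paraphrased from the video rather than copied from the notebook:

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages([
    ("system", (
        "You're a helpful assistant. When answering a user's question you should "
        "use one of the tools provided. After using a tool the tool output will be "
        "provided in the 'scratchpad' below. If you have an answer in the scratchpad "
        "you should not use any more tools and instead answer directly to the user."
    )),
    MessagesPlaceholder(variable_name="chat_history"),     # previous user/AI turns
    ("human", "{input}"),                                   # the current user query
    MessagesPlaceholder(variable_name="agent_scratchpad"),  # tool calls + observations
])
```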
00:12:04.800 | next we're going to define our llm so we're going to be using gpt-4o-mini with a temperature of zero because we want
00:12:13.040 | less creativity here particularly when we're doing tool calling there's just no need for us to use
00:12:18.640 | a high temperature here so we need to enter our OpenAI API key which we would get from platform
00:12:23.200 | openai.com you enter this then we're going to continue and we're just going to add tools
00:12:30.320 | to our llm here okay and we're going to bind them here then we have tool choice any so tool choice
00:12:41.120 | any we'll see in a moment i'll go through this a little bit more in a second but that's going to
00:12:46.400 | essentially force a tool call now you can also put required which is actually a bit clearer
00:12:53.200 | but i'm using any here so i'll stick with it so these are our tools we're going through we have our
00:12:59.040 | inputs into the agent runnable we have our prompt template and then that will get fed into our llm
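A sketch of that agent runnable, assuming the `prompt` above, a `tools` list containing the tools defined earlier, and an `OPENAI_API_KEY` in the environment:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.0)

tools = [add]  # plus whatever other tools were defined earlier

agent = (
    {
        "input": lambda x: x["input"],
        "chat_history": lambda x: x["chat_history"],
        # default to an empty list if no intermediate steps have been passed in yet
        "agent_scratchpad": lambda x: x.get("agent_scratchpad", []),
    }
    | prompt
    | llm.bind_tools(tools, tool_choice="any")  # "any" forces a tool call
)

out = agent.invoke({"input": "what is 10 + 10", "chat_history": []})
print(out.tool_calls)  # e.g. [{"name": "add", "args": {"x": 10, "y": 10}, "id": "call_..."}]
```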
00:13:06.720 | so let's run that now we would invoke the agent part of everything here with this okay so let's see
00:13:14.080 | what it outputs this is important so i'm asking what is 10 plus 10 obviously that should use the
00:13:19.200 | addition tool and we can actually see that happening so the agent message content is actually empty here
00:13:26.080 | this is where you'd usually get an answer but if we go and have a look we have additional keyword args
00:13:32.400 | in there we have tool calls and then we have function arguments okay so we're calling a function and the
00:13:39.040 | arguments for that function are this okay so we can see this is a string again the way that we would
00:13:45.360 | parse that is we do json loads and that becomes a dictionary and then we can see which function is
00:13:50.640 | being called and it is the add function and that is all we need in order to actually execute our
00:13:56.720 | function or our tool okay we can see a lot more detail here now what do we do from here we're going to
00:14:06.720 | map the tool name to the tool function and then we're just going to execute the tool function with
00:14:10.960 | the generated args i.e those i'll also just point out quickly that here we are getting the dictionary
00:14:18.400 | directly which i think is coming from somewhere else probably from here okay so
00:14:25.280 | even that step here where we're parsing this out we don't necessarily need to do that because i think
00:14:31.280 | on the LangChain side they're doing it for us so we're already getting that so json loads we don't
00:14:38.400 | necessarily need here okay so we're just creating this tool name to function mapping dictionary here
00:14:45.440 | so we're taking the tool names and we're just mapping those back to our tool functions and this
00:14:51.120 | is coming from our tools list so that tools list that we defined here okay and we can even just see
00:14:58.240 | quickly that it will include each of the tools we defined there okay that's all it is
00:15:04.240 | now we're going to execute using our name to tool mapping okay so this here will get us the function
00:15:13.600 | so it will go as this function and then to that function we're going to pass the arguments that we
00:15:20.800 | generated okay let's see what that looks like all right so the response so the observation is 20.
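That mapping-and-execution step looks roughly like this, reusing the names from the sketches above:

```python
# map each tool's name back to its underlying function
name2tool = {t.name: t.func for t in tools}

# LangChain has already parsed the arguments into a dict for us here
tool_call = out.tool_calls[0]
tool_out = name2tool[tool_call["name"]](**tool_call["args"])
print(tool_out)  # 20.0 -- this becomes our observation
```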
00:15:32.080 | now we are going to feed that back into our llm using the tool message and we're actually going to put a little bit
00:15:39.120 | of text around this to make it a little bit nicer so we don't necessarily need to do this to be completely
00:15:44.160 | honest we could just return the answer directly and i don't even think there would
00:15:52.240 | really be any difference so we could do either in some cases that could be very useful in other cases
00:15:59.200 | like here it doesn't really make too much difference particularly because we have this tool call id and
00:16:04.320 | what this tool call id is doing is it's being used by OpenAI being read by the llm so that the llm knows
00:16:12.080 | that the response we got here is actually mapped back to the tool execution that it identified here
00:16:21.840 | because you see that we have this id right we have an id here the llm is going to see the id it's going
00:16:28.480 | to see the id that we pass back in here and it's going to see those two are connected so it can see
00:16:33.760 | okay this is the tool i called and this is the response i got from it because of that you don't necessarily
00:16:39.120 | need to say which tool you used here but you can it depends on what you're doing
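Feeding the observation back in as a tool message, keyed by the tool call id, looks something like this (a sketch, continuing from the snippets above):

```python
from langchain_core.messages import ToolMessage

tool_msg = ToolMessage(
    content=f"The {tool_call['name']} tool returned {tool_out}",  # optional wrapper text
    tool_call_id=tool_call["id"],  # lets the LLM match this observation to its tool call
)

# the AI's tool-call message plus the observation become the agent scratchpad
out2 = agent.invoke({
    "input": "what is 10 + 10",
    "chat_history": [],
    "agent_scratchpad": [out, tool_msg],
})
```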
00:16:46.960 | okay so what do we get here we have okay just running everything again we've added our tool call
00:16:54.640 | so that's the original ai message that includes okay use the add tool and then we have the tool execution
00:17:00.080 | tool message which is the observation we map those to the agent scratchpad and then what do we get we
00:17:07.200 | have an ai message but the content is empty again which is interesting because we said to our llm up here
00:17:15.760 | if you have an answer in the scratchpad you should not use any more tools and send the
00:17:19.920 | answer directly to the user so why is our llm not answering well the reason for that is down here we
00:17:31.920 | specify tool choice equals any which again is the same as tool choice required which is telling the llm that
00:17:41.840 | it cannot actually answer directly it has to use a tool and i usually do this all right i would usually
00:17:48.720 | put tool choice equals any or required and force the llm to use a tool every single time so then the
00:17:56.800 | question is if it has to use a tool every time how does it answer our user well we'll see in a moment
00:18:05.600 | first i just want to show you the two options essentially that we have the second is what i
00:18:11.360 | would usually use but let's start with the first so the first option is that we set tool choice
00:18:17.040 | equal to auto and this tells the llm that it can either use a tool or answer the user directly
00:18:24.320 | using the final answer or using that content field so if we run that we're specifying tool choice as
00:18:32.000 | auto we run that let's invoke okay initially you see ah wait there's still no content that's because we
00:18:40.320 | didn't add anything into the agent scratchpad here there's no information right it's all empty
00:18:46.320 | actually so here you have the chat history that's empty we didn't specify
00:18:53.440 | the agent scratchpad and the reason that we can do that is because we're using if you look here we're
00:18:59.200 | using get so essentially it's saying try and get agent scratchpad from this dictionary but if it hasn't
00:19:05.120 | been provided we're just going to give an empty list so that's what that's why we don't need to
00:19:10.640 | specify it here but that means that okay the agent doesn't actually know anything here it hasn't
00:19:17.440 | used the tool yet so we're going to just go through our iteration again right so we're going to get our
00:19:24.240 | tool output we're going to use that to create the tool message and then we're going to add our tool
00:19:30.240 | call from the ai and the observation we're going to pass those to the agent scratchpad and this time
00:19:36.880 | we'll see when we run that okay now we get the content okay so now it's not calling a tool you see here there's
00:19:44.720 | no tool call or anything going on we just get content so that is the standard way of building a tool calling agent
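Option one is just a change to how the tools are bound, for example:

```python
# option 1: tool_choice="auto" lets the LLM either call a tool or answer
# the user directly in the content field
agent = (
    {
        "input": lambda x: x["input"],
        "chat_history": lambda x: x["chat_history"],
        "agent_scratchpad": lambda x: x.get("agent_scratchpad", []),
    }
    | prompt
    | llm.bind_tools(tools, tool_choice="auto")
)
```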
00:19:54.880 | now the other option which as i mentioned is what i would usually go
00:19:59.840 | with so number two here i would usually create a final answer tool so why would we even do that
00:20:10.080 | why would we create a final answer tool rather than just using this method which actually works
00:20:15.200 | perfectly well so why would we not just use this there are a few reasons the main ones are that
00:20:24.000 | with option two where we're forcing tool calling this removes the possibility of an agent using that
00:20:30.720 | content field directly and the reason at least the reason i found this good when building agents in the
00:20:37.360 | past is that occasionally when you do want it to use a tool it's actually going to go with the content field
00:20:43.680 | and it can get quite annoying and use the content field quite frequently when you actually do want
00:20:48.880 | it to be using one of the tools and this is particularly noticeable with smaller models
00:20:56.880 | with bigger models it's not as common although it does still happen now the second thing that i quite
00:21:04.400 | like about using a tool as your final answer is that you can enforce a structured output in your answers so this is something we saw in i think the first yes the first LangChain
00:21:18.000 | example where we were using the structured output feature of LangChain and what that actually
00:21:25.680 | is the structured outputs feature of LangChain is actually just a tool call right so it's forcing
00:21:31.280 | a tool call from your llm it's just abstracted away so you don't realize that that's what it's doing but that is what it's doing
00:21:37.840 | so i find that structured outputs are very useful particularly when you have a lot of code around
00:21:46.000 | your agent so when that output needs to go downstream into some logic that can be very useful because you
00:21:55.120 | have a reliable output format that you know is going to be produced and it's also incredibly useful if
00:22:03.200 | you have multiple outputs or multiple fields that you need to generate so those can be very useful
00:22:10.880 | now to implement option two we need to create a final answer tool and as with our
00:22:19.760 | other tools we're actually going to provide a description now you don't have to do much inside this tool you can
00:22:25.920 | just return none and actually just use the generated action
00:22:33.520 | as essentially what you're going to send out of your agent execution logic or you can actually just
00:22:41.360 | execute the tool and just pass that information directly through perhaps in some cases you might
00:22:46.720 | have some additional post processing for your final answer maybe you do some checks to make sure it hasn't
00:22:52.080 | said anything weird you could add that in this tool here but yeah in this case we're just going to pass
00:22:59.360 | those through directly
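A sketch of such a final answer tool, including the extra tools used field that comes up in a moment; the exact field names and description in the notebook may differ:

```python
@tool
def final_answer(answer: str, tools_used: list[str]) -> str:
    """Use this to provide your final answer to the user. 'answer' is the
    natural-language answer and 'tools_used' lists the names of the tools
    you used to produce it."""
    # nothing to compute here: the executor just passes these fields through
    return f"{answer}\n\nTools used: {tools_used}"

tools = [add, final_answer]
name2tool = {t.name: t.func for t in tools}
```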
00:23:09.280 | so let's run this we've added the final answer tool to our name to tool mapping so our agent can now use it we redefine our agent setting tool choice to any
00:23:16.880 | because we're forcing the tool choice here and let's go with what is 10 plus 10 see what happens
00:23:24.080 | okay we get this right and one nice thing here is that we don't need to check is our answer
00:23:30.240 | in the content field or is it in the tool calls field we know it's going to be in the tool calls
00:23:34.640 | field because we're forcing that tool use which is quite nice so okay we know we're using the add tool
00:23:40.480 | and these are the arguments great we go through our process again we're going to create our
00:23:46.960 | tool message and then we're going to add those messages into our scratchpad or intermediate steps
00:23:53.040 | and then we can see again ah okay the content field is empty that is expected we're forcing tool
00:24:00.240 | use so there's no way that this can have anything inside it but then if we come down here
00:24:07.120 | to our tool calls nice final answer args answer 10 plus 10 equals 20. all right we also have this
00:24:17.280 | tools used field where's tools used coming from okay well i mentioned before that you can add additional things
00:24:23.760 | or outputs when you're using this tool for your final answer so if you just come up here
00:24:32.320 | you can see that i asked the llm to use that tools used field which i defined here it's a list of strings
00:24:39.920 | use this to tell me what tools you used in your answer all right so i'm getting the normal answer
00:24:45.840 | but i'm also getting this information as well which is kind of nice so that's where that is coming from
00:24:50.320 | see that okay so we have our actual answer here and then we just have some additional information
00:24:56.240 | okay and we've also defined the type here it's just a list of strings which is really nice it's giving us a
00:25:01.440 | lot of control over what we're outputting which is perfect that's you know when you're building with
00:25:06.880 | agents the biggest problem in most cases is control of your llm so here we're getting an honestly pretty
00:25:18.720 | unbelievable amount of control over what our llm is going to be doing which is perfect for when you're
00:25:25.440 | building in the real world so this is everything that we need this is our answer and we would of
00:25:32.960 | course be passing that downstream into whatever logic our ai application would be using okay so maybe
00:25:42.160 | that goes directly to a front end and we're displaying this as our answer and we're maybe providing some
00:25:48.720 | information about okay where did this answer come from or maybe there's some additional steps downstream
00:25:54.800 | where we're actually doing some more processing or transformations but yeah we have that that's
00:25:59.840 | great now everything we've just done here we've been executing everything one by one
00:26:05.520 | and that's to help us understand what process we go through when we're building an agent executor
00:26:14.320 | but we're not going to want to do that all the time are we most of the time we probably want to abstract
00:26:22.480 | all this away and that's what we're going to do now so we're going to take essentially everything we've
00:26:29.120 | just done and abstract it away into a custom agent executor class
00:26:36.000 | so let's have a quick look at what we're doing here although it's literally just what we just did
00:26:42.400 | okay so custom agent executor we initialize it we set this max iterations i'll talk about this in a moment
00:26:50.480 | we initialize it that is going to set our chat history to just being empty okay because it's a new
00:26:57.600 | agent and there should be no chat history in this case then we actually define our agent right so that
00:27:02.960 | part of the logic that is going to be taking our inputs and generating what to do next aka what tool call to do
00:27:10.400 | okay and we set everything as attributes of our class and then we're going to define an invoke method
00:27:17.520 | this invoke method is going to take an input which is just a string so it's going to be our message from the user
00:27:24.000 | and what it's going to do is it's going to iterate through essentially everything we just did
00:27:33.040 | okay until we hit the final answer tool okay so well what does that mean we have our tool call
00:27:41.360 | right where we're just invoking our agent all right so it's going to generate what tool to use and
00:27:46.720 | what parameters should go into that okay and that's an ai message so we would append that to our agent
00:27:55.040 | scratch pad and then we're going to use the information from our tool call so the name of the tool and the args and also the id
00:28:02.000 | we're going to use all that information to execute our tool and then provide the observation back to our lm
00:28:11.440 | okay so execute our tool here we then format the tool output into a tool message see here that i'm just using
00:28:19.840 | the output directly i'm not adding that additional information there we do need to always pass in the
00:28:27.760 | tool call id so that our llm knows which output is mapped to which tool i didn't mention this before
00:28:34.960 | in this video at least but that is important when we have multiple tool calls happening in parallel
00:28:40.240 | because that can happen when we have multiple tool calls happening in parallel let's say we have
00:28:44.800 | 10 tool calls all those responses might come back at different times so then the order of those can get
00:28:50.880 | messed up so we wouldn't necessarily always see an ai message containing a tool call followed by
00:29:00.080 | the answer to that tool call instead it might be an ai message followed by like 10 different tool call
00:29:06.880 | responses so you need to have those ids in there okay so then we pass our tool output back to our agent
00:29:16.320 | scratchpad or intermediate steps i'm adding a print in here so that we can see what's happening whilst
00:29:22.000 | everything is running then we increment this count number we'll talk about that in a moment so coming
00:29:27.840 | past that we say okay if the tool name here is final answer that means we should stop okay so once we get
00:29:37.440 | to final answer that means we can actually extract our final answer from the final tool call okay and
00:29:44.400 | in this case i'm going to say that we're going to extract the answer from the tool call or the observation
00:29:53.360 | we're going to extract the answer that was generated we're going to pass that into our chat history so
00:29:59.120 | we're going to have our user message this is the one the user came up with followed by our answer which
00:30:05.360 | is just the natural answer field and that's going to be an ai message but then we're actually going to be
00:30:11.120 | including all of the information so this is the natural language answer and also the tools
00:30:18.800 | used output we're going to be feeding all of that out to some downstream process as preferred so we have
00:30:26.720 | that now one thing that can happen if we're not careful is that our agent executor might run many
00:30:36.720 | many times and particularly if we've done something wrong in our logic as we're building
00:30:41.840 | these things it can happen that maybe we've not connected the observation back up into our agent
00:30:50.080 | executor logic and in that case what we might see is our agent executor runs again and again and again
00:30:55.360 | and i mean that's fine we're going to stop it but if we don't realize straight away and we're doing a lot of
00:31:02.320 | llm calls that can get quite expensive quite quickly so what we can do is we can set a limit all right
00:31:08.480 | so that's what we've done up here with this max iterations we say okay if we go past three max
00:31:13.120 | iterations by default i'm going to say stop all right so that's why we have the count here while
00:31:20.000 | count is less than the max iterations we're going to keep going once we hit the number of max iterations
00:31:26.640 | we stop okay so the while loop will just stop looping okay so it just protects us in case of that
00:31:33.600 | and it also potentially maybe at some point your agent might be doing too much to answer a question
00:31:40.320 | so this will force it to stop and just provide an answer although if that does happen i just realized
00:31:46.000 | there's a bit of a fault in the logic here because in that case we wouldn't necessarily have the answer
00:31:52.160 | here right so we would probably want to handle that more gracefully but in this scenario a very simple use case we're not going to see that happening
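Pulled together, a sketch of the custom agent executor class being described here, assuming the `prompt`, `llm`, `tools`, and `name2tool` objects from the sketches above; the notebook's version will differ in some details:

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

class CustomAgentExecutor:
    def __init__(self, max_iterations: int = 3):
        self.chat_history: list = []   # new agent, so no history yet
        self.max_iterations = max_iterations
        self.agent = (
            {
                "input": lambda x: x["input"],
                "chat_history": lambda x: x["chat_history"],
                "agent_scratchpad": lambda x: x.get("agent_scratchpad", []),
            }
            | prompt
            | llm.bind_tools(tools, tool_choice="any")  # always force a tool call
        )

    def invoke(self, input: str) -> dict:
        count = 0
        agent_scratchpad = []
        while count < self.max_iterations:
            # reasoning + action: which tool, with which arguments?
            out = self.agent.invoke({
                "input": input,
                "chat_history": self.chat_history,
                "agent_scratchpad": agent_scratchpad,
            })
            agent_scratchpad.append(out)
            tool_call = out.tool_calls[0]  # assumes one tool call per step
            tool_out = name2tool[tool_call["name"]](**tool_call["args"])
            # the observation goes back to the LLM, keyed by the tool call id
            agent_scratchpad.append(
                ToolMessage(content=str(tool_out), tool_call_id=tool_call["id"])
            )
            print(f"{count}: {tool_call['name']}({tool_call['args']})")
            count += 1
            if tool_call["name"] == "final_answer":
                # record the exchange in chat history and return all generated fields
                self.chat_history.extend([
                    HumanMessage(content=input),
                    AIMessage(content=tool_call["args"]["answer"]),
                ])
                return tool_call["args"]
        return {"answer": "stopped: max iterations reached", "tools_used": []}

agent_executor = CustomAgentExecutor()
print(agent_executor.invoke("what is 10 + 10"))
```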
00:31:59.440 | so we initialize our custom agent executor and then we invoke it
00:32:07.200 | okay and let's see what happens all right there we go so that just wrapped everything
00:32:15.440 | into a single invoke so everything is handled for us we could say okay what is 10 plus 10 or we can
00:32:25.200 | modify that and say 7.4 for example and that will go through and use the multiply tool instead and then
00:32:33.840 | we'll come back to the final answer again okay so we can see that with this custom agent executor we've built
00:32:41.840 | an agent and we have a lot more control over everything that is going on in here one thing
00:32:48.480 | that we would probably need to add in this scenario is right now i'm assuming that only one tool call
00:32:56.080 | will happen at once and it's also why i'm not asking a complicated question here because i
00:33:00.960 | don't want it to go and try and execute multiple tool calls at once which can happen so let's just try
00:33:08.720 | this okay so this is actually completely fine it did just execute them one after the other
00:33:15.120 | so you can see that when asking this more complicated question it first did the exponentiate tool followed
00:33:24.000 | by the add tool and then it actually gave us our final answer which is cool it also told us we used both
00:33:29.840 | those tools which it did but one thing that we should just be aware of is that OpenAI can
00:33:38.480 | actually execute multiple tool calls in parallel so by specifying that we're just using index zero here
00:33:46.240 | we're actually assuming that we're only ever going to be calling one tool at any one time which is not
00:33:51.680 | always going to be the case so you would probably need to add a little bit of extra logic there if you're building an agent that
00:33:59.680 | is likely to be running parallel tool calls but yeah you can see here actually
00:34:04.480 | it's completely fine it's running one after the other
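If you did expect parallel tool calls, the main change in the executor sketch above would be to loop over every entry in tool_calls rather than only index zero, roughly:

```python
# handle every tool call the LLM made in this step, not just tool_calls[0]
for tool_call in out.tool_calls:
    tool_out = name2tool[tool_call["name"]](**tool_call["args"])
    agent_scratchpad.append(
        ToolMessage(content=str(tool_out), tool_call_id=tool_call["id"])
    )
```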
00:34:09.200 | okay so with that we've built our agent executor i know there was a lot to that and of course you can just use the more
00:34:16.320 | abstract agent executor in LangChain but i think it's very good to understand what is actually going on by building our own agent executor and it sets you up nicely for building more complicated
00:34:28.560 | or use case specific agent logic as well so that is it for this chapter