
End-to-end AI Agent Project with LangChain | Full Walkthrough


Chapters

0:00 End-to-End LangChain Agent
1:39 Setup the AI App
4:37 API Setup
11:28 API Token Generator
15:57 Agent Executor in API
34:03 Async SerpAPI Tool
40:08 Running the App

Transcript

Now we're on to the final capstone chapter. We're going to be taking everything that we've learned so far and using it to build an actual chat application. Now, the chat application is what you can see right now, and we can go into this and ask some pretty interesting questions.

And because it's an agent with access to these tools, it will be able to answer them for us. So we'll see inside our application that we can ask questions that require tool use, such as this. And because of the streaming that we've implemented, we can see all this information in real time.

So we can see that the SerpAPI tool is being used, and that these are the queries. We saw all that was happening in parallel as well, so each one of those tools was being used in parallel. We've modified the code a little bit to enable that. And we see that we have the answer.

We can also see the structured output being used here. So we can see our answer followed by the tools used here. And then we can ask follow-up questions as well, because this is conversational. So we say, how is the weather in each of those cities? Okay. That's pretty cool.

So this is what we're going to be building. We are, of course, going to be focusing on the API, the back end. I'm not a front end engineer, so I can't take you through that, but the code is there. So for those of you that do want to go through the front end code, you can, of course, go and do that.

But we'll be focusing on how we build the API that powers all of this using, of course, everything that we've learned so far. So let's jump into it. The first thing I'm going to want to do is clone this repo. So we'll copy this URL. This is the Aurelio Labs langchain-course repo.

And you just clone the repo like so. I've already done this, so I'm not going to do it again. Instead, I'll just navigate to the langchain-course repo. Now, there are a few setup things that you do need to do, and all of those can be found in the README.

So we just open a new tab here and I'll open the README. Okay, so this explains everything we need. If you've been running this locally already, you will have seen this or will have already done all of it. But for those of you that haven't, we'll go through it quickly now.

So you will need to install the uv library. This is how we manage our Python environment and our packages: we use uv. On Mac, you would install it like so. If you're on Windows or Linux, just double-check how you'd install it over here. Once you have installed this, you would then go ahead and install Python.

So uv python install. Then we want to create our venv, our virtual environment, using that version of Python, so uv venv here. Then, as we can see here, we need to activate that virtual environment, which I did miss from here, so let me quickly add that. So you just run that.

For me, I'm using fish, so I just add fish onto the end there. But if you're using bash or zsh, I think you can just run that directly. And then, finally, we need to sync, i.e. install all of our packages, using uv sync. And you'll see that will install everything for you.

Great. So we have that, and we can go ahead and actually open Cursor or VS Code. And then we should find ourselves within Cursor or VS Code. So in here, you'll find a few things that we will need. First is environment variables. So we can come over to here and we have the OpenAI API key, LangChain API key, and SerpAPI API key.

Create a copy of this and make it your .env file. Or if you want to run it with source, you can; I like to use a mac.env when I'm on Mac, and I just add export onto the start there and then enter my API keys. Now, I actually already have these in this local mac.env file, which, over in my terminal, I would just activate with source again like that.

Now, we'll need that when we are running our API and application later. But for now, let's just focus on understanding what the API actually looks like. So navigating into the 09 capstone chapter, we'll find a few things. What we're going to focus on is the API here. And we have a couple of notebooks that help us just understand, okay, what are we actually doing here?

So let me give you a quick overview of the API first. So the API, we're using FastAPI for this. We have a few functions in here. The one we'll start with is this. Okay, so this is our post endpoint for invoke. And this essentially sends something to our LLM and begins a streaming response.

So we can go ahead and actually start the API and we can just see what this looks like. So we'll go into chapters, 09 capstone, api, after setting our environment variables here. And we just want to do uv run uvicorn main:app --reload. We don't need --reload, but if we're modifying the code, that can be useful.

Okay, and we can see that our API is now running on localhost port 8000. And if we go to our browser, we can actually open the docs for our API. So we go to 8000/docs. Again, we just see that we have that single invoke method; it takes the content.

And it gives us a small amount of information there. Now, we could try it out here. So if we say, say hello, we can run that, and we'll see that we get a response. We get this. Okay. Now, the thing that we're missing here is that this is actually being streamed back to us.

Okay. So this is not just a direct response; this is a stream. To see that, we're going to navigate over to this streaming test notebook, and we'll run it. So we are using requests here. We are not just doing the standard post request, because we want to stream the output and print the tokens as we are receiving them.

Okay. So that's why this looks a little more complicated than just a typical requests.post or requests.get. What we're doing here is starting our session, which is our post request, and then we're just iterating through the content as we receive it from that request. When we receive a token, because sometimes this might be None, we print it.
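As a rough sketch of what that notebook cell is doing (assuming the API is running locally on port 8000 and the /invoke endpoint takes the user content as a query parameter; adjust if the repo sends it differently):

```python
import requests

def stream(query: str):
    # open the POST request as a stream instead of waiting for the full body
    with requests.post(
        "http://localhost:8000/invoke",
        params={"content": query},
        stream=True,
    ) as response:
        # iterate over chunks as they arrive from the API
        for token in response.iter_content(decode_unicode=True):
            if token:  # some chunks can be empty / None
                print(token, end="", flush=True)

stream("What is five plus five?")
```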

Okay. And we have that flush=True, as we have used in the past. So let's define that, and then let's just ask a simple question: what is five plus five? Okay. And we saw that that was pretty quick. So it generated this response first, and then it went ahead and actually continued streaming with all of this.

Okay. And we can see that there are these special tokens being provided. These are to help the front end decide what should go where. So here, where we're showing these multiple steps of tool use and the parameters, the way the front end decides how to display those is that it's being provided a single stream, but that stream contains step tokens: a step start token and a step name.

Then it has the parameters, followed by the ending of the step token, and it's looking at each one of these. The one step name that it treats differently is the final answer step name. When it sees the final answer step name, rather than displaying this tool use interface, it instead begins streaming the tokens directly, like a typical chat interface.

And if we look at what we actually get in our final answer, it's not just the answer itself, right? So we have the answer here, which is streamed into that typical chat output, but then we also have tools used. And then this is added into the little boxes that we have below the chat here.

So it's quite a lot going on just within this little stream. Now we can try with some other questions here. So we're going to say, okay, tell me about the latest news in the world. You can see that there's a little bit of a wait here whilst it's waiting to get the response.

And then yeah, that's streaming a lot of stuff quite quickly. So there's a lot coming through here. Then we can ask other questions like this one here: how cold is it in Oslo right now, and what is five multiplied by five? So these two are going to be executed in parallel, and then, after it has the answers for those, the agent will use the multiply tool to multiply those two values together, and all of that will get streamed.

Okay. And then, as we saw earlier, we have the question: what is the current date and time in these places? Same thing. There are three questions here: what is the current date and time in Dubai? What's the current date and time in Tokyo? And what's the current date and time in Berlin?

Those three questions get executed in parallel against the SerpAPI search tool, and then all the answers get returned within that final answer. Okay. So that is how our API is working. Now let's dive a little bit into the code and understand how it works. There are a lot of important things here.

There's some complexity, but at the same time, we've tried to make this as simple as possible as well. So this is just FastAPI syntax here with the @app.post("/invoke") decorator. To start, the invoke endpoint consumes some content, which is a string. And then, if you remember from the agent executor deep dive, which is what we've implemented here, or a modified version of it, we have to initialize our asyncio queue and our streamer, which is the queue callback handler. I believe it's exactly the same as what we defined in that earlier chapter.

There are no differences there. So we define that, and then we return this streaming response object. Again, this is a FastAPI thing; this is so that you are streaming a response. That streaming response has a few attributes here, which again are FastAPI things or just generic API things.

So some headers giving instructions on how the response should be handled, and then the media type here, which is text/event-stream. You can also use, I think, text/plain possibly as well, but I believe the standard here would be to use event-stream. And then the more important part for us is this token generator.

Okay. So what is this token generator? Well, it is this function that we defined up here. Now, again, if you remember that earlier chapter, at the end of it we saw a for loop where we were printing out different tokens in various formats, kind of post-processing them before deciding how to display them.

That's exactly what we're doing here. So in this block here, we're looping through every token that we're receiving from our streamer, and we're just saying, okay, if this is the end of a step, we're going to yield this end-of-step token, which we saw here.

Okay. So it's this end-of-step token there. Otherwise, if this is a tool call: so again, we've got that walrus operator here. What we're doing is saying, okay, get the tool calls out from our current message, and if there is something there, so if this is not None, we're going to execute what's inside here.

And what is being executed inside here is we're checking for the tool name. If we have the tool name, we return this. Okay. So we have the start of step token, the start of the step name token, the tool name or step name, whichever of those you want to call it.

And then the end of the step name token. Okay. And then this, of course, comes through to the front end like that. That's what we have there. Otherwise, we should only be seeing the tool name returned as part of the first token for every step. After that, it should just be tool arguments.

So in this case, we say, okay, if we have those tool or function arguments, we're going to just return them directly. That is the part that would stream all of this here. These would be individual tokens, right? For example, we might have the open curly bracket followed by query as one token.

Latest could be a token. World could be a token. News could be a token, et cetera. So that is what is happening there. This last branch should not get executed, but we handle it just in case we have any issues with tokens being returned there.

We're just going to print the error and continue with the streaming, but that should not really be happening. Cool. So that is our token streaming loop.
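Putting those branches together, a rough sketch of the token generator (the marker strings like <step> and the attribute access into each token are placeholders and assumptions; the repo's main.py has the real version):

```python
import asyncio

async def token_generator(content: str, streamer: QueueCallbackHandler):
    # schedule the agent run; it pushes tokens onto the queue via the streamer
    task = asyncio.create_task(
        agent_executor.invoke(content, streamer, verbose=True)
    )
    async for token in streamer:
        try:
            if token == "<<DONE>>":            # placeholder "all finished" sentinel
                break
            elif token == "<<STEP_END>>":      # placeholder end-of-step marker
                yield "</step>"
            elif tool_calls := token.message.additional_kwargs.get("tool_calls"):
                if tool_name := tool_calls[0]["function"]["name"]:
                    # the first token of a step carries the tool (step) name
                    yield f"<step><step_name>{tool_name}</step_name>"
                if tool_args := tool_calls[0]["function"]["arguments"]:
                    # subsequent tokens stream the tool arguments themselves
                    yield tool_args
        except Exception as e:
            # shouldn't happen, but don't kill the stream over one odd token
            print(f"Error streaming token: {e}")
            continue
    await task
```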

Now, the way that we are picking up tokens from our streamer here is, of course, through our agent execution logic, which is happening in parallel. All of this is asynchronous; we have this async definition here. So what has happened here is we have created a task, which is the agent executor invoke. We're passing our content, and we're passing that streamer, which we're going to be pulling tokens from.

And we also set verbose to true. We can actually remove that, but that would just allow us to see additional output in our terminal window if we want it. I don't think there's anything particularly interesting to look at in there, but particularly if you are debugging, that can be useful.

So we create our task here, but this does not begin the task. It's asyncio.create_task, and it doesn't actually start doing anything until we hand control back to the event loop, which happens when we await it down here. So what is happening is essentially this code here is still being run, we're in an asynchronous loop here, but then we await this task.

As soon as we await this task, tokens will start being placed within our queue, which then get picked up by the streamer object here. So then this begins receiving tokens. I know async code is always a little bit more confusing given the strange order of things, but that is essentially what is happening.

You can imagine all of this is essentially being executed all at the same time. So we have that. Is there anything else to go through here? I don't think so. It's all sort of boilerplate stuff for FastAPI rather than the actual AI code itself. So we have that as our streaming function.
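Putting the endpoint itself together, a minimal sketch of the FastAPI side (header values here are illustrative; the repo's main.py is the source of truth):

```python
import asyncio

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.post("/invoke")
async def invoke(content: str):
    # one queue + streamer per request; the agent pushes tokens in,
    # the token generator pulls them out and yields them to the client
    queue: asyncio.Queue = asyncio.Queue()
    streamer = QueueCallbackHandler(queue)
    return StreamingResponse(
        token_generator(content, streamer),
        media_type="text/event-stream",
        headers={"Cache-Control": "no-cache", "Connection": "keep-alive"},
    )
```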

Now let's have a look at the agent code itself. Okay. So agent code, where would that be? So we're using this agent executor invoke and we're importing this from the agent file. So we can have a look in here for this. Now you can see straight away, we're pulling in our API keys here.

Just make sure that you do have those. Now, most of this is what we've seen before in that agent executor deep dive chapter. This is all practically the same. So we have our LLM. We've set those configurable fields as we did in the earlier chapters. That configurable field is for our callbacks.

We have our prompt. This has been modified a little bit. Essentially we're just telling it, okay, make sure you use the tools provided. We say you must use the final answer tool to provide a final answer to the user. And there's one thing that I added because of something I noticed every now and again.

So I have explicitly said: use tools to answer the user's current question, not previous questions. I found that with this setup, if I have a little bit of small talk with the agent, and beforehand I was asking questions about, say, what the weather was in this place or that place, the agent will occasionally hang on to those previous questions and try to use a tool again to answer them.

And that is just something that you can more or less prompt out of it. Okay. So we have that. This is all exactly the same as before. We have our chat history to make this conversational. We have our human message and then our agent scratchpad, so that our agent can think through multiple tool use messages.

Great. So we also have the Article class. This is to process results from SerpAPI. We have our SerpAPI function here. I will talk about that a little more in a moment, because this is also a little bit different to what we covered before. What we covered before with SerpAPI, if you remember, was synchronous, because we were using the SerpAPI client directly, or the SerpAPI tool directly from LangChain.

And because we want everything to be asynchronous, we have had to recreate that tool in an asynchronous fashion, which we'll talk about a little bit later. But for now, let's move on from that. We can see our final answer being used here. This is, I think, the exact same thing we defined before, probably in that deep dive chapter again, where we have just the answer and the tools that have been used.

Great. So we have that. One thing that is a little different here is when we are defining our name2tool mapping. This takes a tool name and maps it to a tool function. When we have synchronous tools, we would actually use tool.func here. So rather than tool.coroutine, it would be tool.func.

However, we are using asynchronous tools, and so this is actually tool.coroutine. And this is why, if you come up here, I've made every single tool asynchronous. Now, that is not really necessary for a tool like final answer, because there are no API calls happening.

An API call is a very typical scenario where you do want to use async. Because if you make an API call with a synchronous function, your code is just going to be waiting for the response from the API while the API is processing and doing whatever it's doing. So that is an ideal scenario where you would want to use async, because rather than your code just waiting for the response from the API, it can instead go and do something else whilst it's waiting.

All right. So that's an ideal scenario where you'd use async, which is why we would use it, for example, with the SerpAPI tool here. But for final answer and for all of these calculator tools that we've built, there's actually no need to have these as async, because our code is just running straight through, executing this code.

There's no waiting involved, so it doesn't necessarily make sense to have these asynchronous. However, by making them asynchronous, it means that I can use tool.coroutine for all of them, rather than saying, oh, if this tool is synchronous, use tool.func, whereas if this one is async, use tool.coroutine. So it just simplifies the code for us a lot more.

But yeah, not strictly necessary, but it does help us write cleaner code here. This is also true later on, because we actually have to await our tool calls, which we can see over here. We have to await those tool calls, and that would get messier if we were using a mix of some sync tools and some async tools.
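A sketch of that mapping, assuming all of the tools (the calculator tools, serpapi, and final_answer) were defined as async functions and wrapped with LangChain's tool decorator; the exact tool list lives in the repo:

```python
tools = [add, subtract, multiply, serpapi, final_answer]  # illustrative list

# every tool here is async, so .coroutine is populated (and .func is None);
# if any were synchronous we'd have to fall back to tool.func for those
name2tool = {tool.name: tool.coroutine for tool in tools}
```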

So we have that. We have our queue callback handler. This is, again, the same as before, so I'm not going to go through it; we covered that in the earlier deep dive chapter. We have our execute tool function here. Again, that is asynchronous.

This just helps us clean up the code a little bit. I think in the deep dive chapter we had this placed directly within our agent executor function, and you can do that, it's fine. It's just a bit cleaner to pull this out.

And we can also add more type annotations here, which I like. So execute_tool expects us to provide an AI message, which includes a tool call within it, and it will return us a tool message.
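As a rough sketch of what that signature looks like (the attribute access into the tool call follows the deep-dive chapter and is an assumption; check the repo for the exact implementation):

```python
from langchain_core.messages import AIMessage, ToolMessage

async def execute_tool(tool_call: AIMessage) -> ToolMessage:
    # the AIMessage carries a single tool call: its name, args, and id
    call = tool_call.tool_calls[0]
    # look up the async tool and await it with the generated arguments
    tool_output = await name2tool[call["name"]](**call["args"])
    # wrap the observation in a ToolMessage tied back to the original call id
    return ToolMessage(content=f"{tool_output}", tool_call_id=call["id"])
```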

Okay, on to the agent executor. This is all the same as before, and we're actually not even using verbose here, so we could fully remove it, but I will leave it. Of course, if you would like to use that, you can just add verbose and then log or print some stuff where you need it. Okay. So what do we have in here? We have our streaming function. This is what actually calls our agent, right?

So we have a query. This will call our agent just here. And we could even make this a little clearer; for example, this could be called configured_agent, because this is not the response, it is a configured agent. So I think that is maybe a little clearer.

So we are configuring our agent with our callbacks, which is just our streamer. Then we're iterating through the tokens returned by our agent using astream here. And as we are iterating through this, because we pass our streamer to the callbacks here, every single token that our agent returns is going to get processed through our queue callback handler.

Okay. So these on_llm_new_token methods are going to get executed, and all of those tokens, you can see here, are passed to our queue. Then we come up here and we have this aiter method. This aiter method is used by the token generator over in our API to pick up from the queue the tokens that have been put there by these other methods.

Okay. So it's putting tokens into the queue and pulling them out with this, and that is all happening in parallel while this code is running here.
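For reference, a rough reconstruction of that handler, assuming it subclasses LangChain's AsyncCallbackHandler and uses placeholder sentinel strings; the real implementation lives in the earlier chapter and in the repo:

```python
import asyncio

from langchain_core.callbacks import AsyncCallbackHandler

class QueueCallbackHandler(AsyncCallbackHandler):
    def __init__(self, queue: asyncio.Queue):
        self.queue = queue

    async def __aiter__(self):
        # the API's token generator iterates over the handler itself,
        # pulling whatever the callbacks below have pushed onto the queue
        while True:
            token = await self.queue.get()
            if token == "<<DONE>>":  # placeholder "stream finished" sentinel
                return
            yield token

    async def on_llm_new_token(self, *args, **kwargs) -> None:
        # every streamed chunk from the LLM gets pushed onto the queue
        if chunk := kwargs.get("chunk"):
            self.queue.put_nowait(chunk)

    async def on_llm_end(self, *args, **kwargs) -> None:
        # the real handler decides here whether this was the end of a tool
        # step ("<<STEP_END>>") or the end of the final answer ("<<DONE>>");
        # this sketch just marks the end of a step
        self.queue.put_nowait("<<STEP_END>>")
```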

Back in the streaming function, the reason that we extract the tokens out here is that we want to pull them out and append them all to our outputs. Those outputs become a list of AI messages, which are essentially the AI telling us what tool to use and what parameters to pass to each one of those tools. This is very similar to what we covered in that deep dive chapter, but the one thing that I have modified here is that I've enabled us to use parallel tool calls.

So that is what we see here with these four lines of code. We're saying, okay, if our tool call includes an ID, that means we have a new tool call or a new AI message. So what we do is append that AI message, which is the AI message chunk, to our outputs.

Then, following that, if we don't get an ID, that means we're getting the tool arguments, so we're just adding our AI message chunk to the most recent AI message chunk in our outputs. What that will do is create that list of AI messages: we'd have AI message one, and this will just append everything to that AI message one.

Then we'll get our next AI message chunk, and this will append everything to that one until we get a complete AI message, and so on. So here we've collected all our AI message chunk objects. Then, finally, what we do is just transform all of those AI message chunk objects into actual AI message objects, and then return them from our function.
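A sketch of that accumulation logic; the input keys and the chunk attribute shapes here are assumptions based on the deep-dive chapter, not the repo's exact code:

```python
from langchain_core.messages import AIMessage, AIMessageChunk

async def collect_tool_calls(configured_agent, query, chat_history, agent_scratchpad):
    outputs: list[AIMessageChunk] = []
    async for token in configured_agent.astream({
        "input": query,
        "chat_history": chat_history,
        "agent_scratchpad": agent_scratchpad,
    }):
        tool_calls = token.additional_kwargs.get("tool_calls", [])
        if tool_calls and tool_calls[0].get("id"):
            # an "id" marks the start of a new (possibly parallel) tool call
            outputs.append(token)
        elif outputs:
            # no id: these chunks carry argument tokens for the current call
            outputs[-1] += token
    # turn the accumulated chunks into complete AIMessage objects
    return [AIMessage(content=x.content, tool_calls=x.tool_calls) for x in outputs]
```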

We then receive those over here, in the tool calls variable. Now, this is very similar to the deep dive chapter. Again, we're going through that count loop, where we have a max iterations value, at which point we will just stop. But until then, we continue iterating through, making more tool calls, executing those tool calls, and so on.

So what is going on here? Let's see. So we got our tool calls. This is going to be a list of AI message objects. Then, what we do with those AI message objects is we pass them to this execute tool function. If you remember, what is that? That is this function here.

So we pass each AI message individually to this function, and that will execute the tool for us and then return us the observation from the tool. That is what you see happening here. But this is an async method, so typically what you'd have to do is await execute_tool, and we could do that.

Let me make this a little bigger for us. What we could do, for example, which might be a bit clearer, is set tool_obs equal to an empty list, and then say: for tool_call in tool_calls, append execute_tool(tool_call) to the tool observations, which would have to be awaited.

So we'd actually put the await in there. And what this would do is actually the exact same thing as what we're doing here, the difference being that we're doing this tool by tool. So we're executing async here, but we're doing them sequentially. Whereas what we can do, which is better, is use asyncio.gather.

So what this does is gather all those coroutines, and then we await them all at the same time to run them all asynchronously. They all begin at the same time, or almost exactly at the same time, and we get those responses roughly in parallel. Of course it's async, so it's not fully in parallel, but practically in parallel.
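As a sketch of the difference (execute_tool is the async helper described above):

```python
import asyncio

async def run_tools_sequentially(tool_calls):
    # still async, but each tool call waits for the previous one to finish
    tool_obs = []
    for tool_call in tool_calls:
        tool_obs.append(await execute_tool(tool_call))
    return tool_obs

async def run_tools_concurrently(tool_calls):
    # gather all the coroutines and await them together, so independent
    # API calls overlap rather than queueing up one after another
    return await asyncio.gather(*[execute_tool(tc) for tc in tool_calls])
```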

Cool. So we have that: we get all of our tool observations from that, which are all of our tool messages. Now, one interesting thing here. Let's say we have all of our AI messages with all of our tool calls, and we just append all of those to our agent scratchpad.

So let's say here we just do agent_scratchpad.extend with our tool calls, and then agent_scratchpad.extend with the tool observations. What is happening here is this would essentially give us something that looks like this.

So we'd have our AI message; I'm just going to put tool call IDs in here to simplify it a little bit. This would be tool call ID A. Then we would have another AI message with tool call ID B. Then we'd have a tool message with tool call ID A. Let's just remove this content field.

I don't want that. And then a tool message with tool call ID B. So it would look something like this. The problem with this order is that each tool message is not directly following its AI message. You might think, okay, we have the tool call IDs, that's probably fine. But actually, when we're running this, if you add these to your agent scratchpad in this order, what you'll see is that your response just hangs; nothing happens when you come through to your second iteration of the agent call.

So actually, what you need to do is sort these so that they are actually in order. And it doesn't necessarily matter which order in terms of A or B or C or whatever you use. So you could have this order: AI message, tool message, AI message, tool message, just as long as the matching tool call IDs are kept together.

Or you could invert this, for example, and that will work as well. Essentially, as long as you have your AI message followed by its tool message, and both of those share the same tool call ID, you're fine. You need to make sure you have that order.

So that, of course, would not happen if we do the two extends. Instead, what we need to do is something like this. Let me make this a little easier to read. We're taking the tool call ID and pointing it to the tool observation, and we're doing that for every tool call and tool observation within a zip of those.

Then what we're saying is: for each tool call within our tool calls, we extend our agent scratchpad with that tool call followed by the tool observation message, which is the tool message. So this is the AI message, and that is the tool message down there.
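A sketch of that ordering logic (the attribute access into the tool calls is an assumption; the repo is the source of truth):

```python
def extend_scratchpad(agent_scratchpad, tool_calls, tool_obs):
    # map each tool_call_id to its ToolMessage observation
    id2tool_obs = {
        call.tool_calls[0]["id"]: obs for call, obs in zip(tool_calls, tool_obs)
    }
    # interleave them so every AIMessage is immediately followed by its ToolMessage
    for tool_call in tool_calls:
        agent_scratchpad.extend([
            tool_call,                                   # AIMessage with the tool call
            id2tool_obs[tool_call.tool_calls[0]["id"]],  # its matching ToolMessage
        ])
```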

So that is what is happening, and that is how we get the correct order, which will run; otherwise, things will not run, so that's important to be aware of. Okay. Now we're almost done; I know we've just been through quite a lot. So we continue: we increment our count as we were doing before.

And then we need to check for the final answer tool. Because we're running these tools in parallel, because we're allowing multiple tool calls in one step, we can't just look at the most recent tool and check whether it has the name final answer. Instead, we need to iterate through all of our tool calls and check if any of them have the name final answer.

If they do, we extract that final answer call. We extract the final answer as well, so this is the direct text content. And we say, okay, we have found the final answer, so this will be set to true. That should happen every time, but if our agent gets stuck in a loop of calling multiple tools, it might not happen before we break based on the max iterations here.

So we might end up breaking based on max iterations rather than because we found a final answer. That can happen. So anyway, if we find that final answer, we break out of this for loop here. And then, of course, we do need to break out of our while loop, which is here.

So we say: if we found the final answer, break. Cool. So we have that. Finally, after all of that, we've executed our tools, and our agent steps and iterations have been processed. We've been through those. Finally, we come down to here, where we say, okay, we're going to add that final output to our chat history.

So this is just going to be the text content, this here, the direct answer. But then what we do is return the full final answer call. The full final answer call is basically this here: the answer and tools used, but of course populated.

So we're saying here that if we have a final answer, okay, if we have that, we're going to return the final answer call, which was generated by our LLM. Otherwise, we're going to return this one. So this is in the scenario that maybe the agent got caught in the loop and just kept iterating.

If that happens, it will come back with, okay, no answer found, and it will just return that we didn't use any tools, which is not technically true, but this is like an exception-handling event. It ideally shouldn't happen, and if it does, it's not really a big deal in my opinion to say there were no tools used.
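A sketch of that final-answer check and fallback, assuming the tool name and argument keys are final_answer, answer, and tools_used, as described above:

```python
from langchain_core.messages import AIMessage

def find_final_answer(tool_calls: list[AIMessage]) -> dict | None:
    # with parallel tool calls we can't just check the last call,
    # so scan all of them for the final_answer tool
    for msg in tool_calls:
        for call in msg.tool_calls:
            if call["name"] == "final_answer":
                return call
    return None

# inside the while loop we'd break once this returns a call, and after the
# loop we'd fall back if max_iterations was hit before a final answer:
# return final_answer_call["args"] if final_answer_call else {
#     "answer": "No answer found",
#     "tools_used": [],
# }
```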

Anyway, cool. So we have all of that, and then we just initialize our agent executor, and that is our agent execution code. The one last thing we want to go through is the SerpAPI tool, which we will do in a moment. Okay. So SerpAPI: let's see how we build our SerpAPI tool.

Okay. So we'll start with the synchronous SerpAPI. The reason we're starting with this is that it's just a bit simpler, so I'll show you it quickly before we move on to the async implementation, which is what we're using within our app. So we want to get our SerpAPI API key.

So I'll run that and we just enter it at the top there, and this will run. We're going to use the SerpAPI SDK first. We're importing GoogleSearch, and these are the input parameters. So we have our API key, we say we want to use Google as the engine, and our question is our query.

So q for query. We're searching for the latest news in the world, and it will return quite a lot of stuff. You can see there's a ton of stuff in there, right? Now, what we want is contained within this organic results key. So we can run that and we'll see, okay, it's talking about various things.

Pretty recent stuff at the moment, so we can tell that it is in fact working. Now, this is quite messy, so what I would like to do first is just clean that up a little bit. So we define this Article base model, which is Pydantic, and we're saying, okay, from a set of results.

Okay. So we're going to iterate through each of these. We're going to extract the title, source, link, and the snippet. So you can see title, source, link, and snippet here. Okay. So that's all useful. We'll run that. And what we do is we go through each of the results in organic results.

And we just load them into our Article using this class method here. Then we can have a look at what those look like, and it's much nicer. We get this nicely formatted object here. Cool. That's great.
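A sketch of that model; the field names follow the title, source, link, and snippet keys mentioned above, and the classmethod name is an assumption:

```python
from pydantic import BaseModel

class Article(BaseModel):
    title: str
    source: str
    link: str
    snippet: str

    @classmethod
    def from_serpapi_result(cls, result: dict) -> "Article":
        # keep only the fields we care about from one organic result
        return cls(
            title=result["title"],
            source=result["source"],
            link=result["link"],
            snippet=result["snippet"],
        )

# articles = [Article.from_serpapi_result(r) for r in results["organic_results"]]
```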

Now, all of this, what we just did here, was using SerpAPI's SDK, which is great, super easy to use. The problem is that they don't offer an async SDK, which is a shame, but it's not that hard for us to set up ourselves. So typically with asynchronous requests, what we can use is the aiohttp library.

You can see what we're doing here: this is equivalent to requests.get. That's essentially what we're doing here; this is the equivalent of the requests call we were running, but using async code. So we're using aiohttp's ClientSession and then session.get, with this async with here.

And then we just await our response. So this is what we do, rather than that, to make our code async. It's really simple. And the output that we get is exactly the same; we still get this exact same output. That means, of course, that we can use that articles method in the exact same way.

And we get the same result. There's no need to make this Article from SerpAPI result method async because, again, this bit of code here is fully local; it's just our Python running everything, so it does not need to be async. And we can see that we get literally the exact same result there.
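A sketch of that async request, assuming SerpAPI's standard GET endpoint and an API key held in a SERPAPI_API_KEY variable:

```python
import aiohttp

async def async_serpapi_search(query: str) -> dict:
    params = {"api_key": SERPAPI_API_KEY, "engine": "google", "q": query}
    # ClientSession + session.get is the async equivalent of requests.get
    async with aiohttp.ClientSession() as session:
        async with session.get("https://serpapi.com/search", params=params) as resp:
            return await resp.json()
```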

So with that, we have everything that we would need to build a fully asynchronous SerpAPI tool, which is exactly what we do here for LangChain. So we import those tools. And is there anything different here? No, this is exactly what we just saw. But I will run this because I would like to show you something very quickly.

Okay. So this is how we were initially calling our tools in previous chapters, because we were mostly okay with using the synchronous tools. However, you can see that the func attribute here is just empty. If I check its type, it's just NoneType. That is because this is an async function.

It's an async tool, sorry. So it was defined with async here, and what happens when you do that is you get this coroutine object instead. So rather than func, which isn't there, you get that coroutine. If we then modified this, okay, let's just remove all the asyncs here and the await.

If we modify that like so, and then we look at the SerpAPI structured tool, we see that we now get that func. So that is just the difference between an async structured tool and a sync structured tool, and ours is of course async.

Switching it back, we now have the coroutine again, so it's important to be aware of that. And of course, we run using the SerpAPI coroutine. So that is how we build the SerpAPI tool, and that is exactly what we did over in the agent code, so I don't think we need to go through it any further.
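A sketch of that difference; the tool body and names here are illustrative, reusing the hypothetical async_serpapi_search helper and Article model from above:

```python
from langchain_core.tools import tool

@tool
async def serpapi(query: str) -> str:
    """Search the web for up-to-date information."""
    results = await async_serpapi_search(query)
    articles = [Article.from_serpapi_result(r) for r in results["organic_results"]]
    return "\n---\n".join(a.model_dump_json() for a in articles)

print(serpapi.func)       # None, because there is no synchronous implementation
print(serpapi.coroutine)  # the async function, which we await at execution time
```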

So yeah, I think that is basically all of our code behind this API. With all of that, we can then go ahead. We have our API running already, so let's go ahead and also run our front end. So we're going to go to Documents, Aurelio, langchain-course.

And then we want to go to chapters, 09 capstone, app, and you will need to have npm installed. So to do that, what do we do? We can take a look at this answer, for example; this is probably what I would recommend. So I would run brew install node followed by brew install npm.

That's if you're on Mac; of course it's different if you're on Linux or Windows. Once you have those, you can do npm install, and this will just install all of the node packages that we need. And then we can just run npm run dev.

Okay. And now we have our app running on localhost 3000. So we can come over to here, open that up, and we have our application. We can ignore this. So in here, we can begin just asking questions. We can start with a quick question: what is five plus five?

And we see, so we have our streaming happening here. It said the agent wants to use the add tool, and these are the input parameters to that add tool. And then we get the streamed response. So this is the final answer tool where we're outputting that answer key and value.

And then here we're outputting that tools used key and value, which is just an array of the tools being used, which here is just the add function. So we have that. Then let's ask another question. This time we'll trigger SerpAPI with: tell me about the latest news in the world. Okay.

So we can see it's using SerpAPI and the query is latest world news, and then it comes down here and we actually get some citations, which is kind of cool. You can also click through here, and it takes us through to the source. So that's pretty cool.

Unfortunately, I just lost my chat, so fine, let me ask that question again. Okay. We can see the tools used, SerpAPI, there. Now let's continue with the next question from our notebook, which is: how cold is it in Oslo right now? What is five multiplied by five?

And what do you get when multiplying those two numbers together? I'm just going to modify that to say in Celsius so that I can understand it without thinking. Okay. So for this one, what did we get? We see the current temperature in Oslo, and we got multiply five by five, which is our second question.

And then we've also got subtract, which is interesting; at first I didn't know why it did that, it seemed kind of weird. But then, okay, that kind of makes sense, roughly. The conversion from Fahrenheit to Celsius is something like: subtract 32.

Yes, so to go from Fahrenheit to Celsius, you are basically doing Fahrenheit minus 32 and then multiplying by this number here, which I only roughly remembered. So subtracting 32 from 36 would have given us four, and it gave us approximately two.
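As a quick sanity check of that arithmetic (assuming the search result was roughly 36°F, as above):

```python
fahrenheit = 36
celsius = (fahrenheit - 32) * 5 / 9  # the exact factor is 5/9, roughly 0.56
print(round(celsius, 1))             # ~2.2, so "approximately two" checks out
```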

So if you think about it, multiplying by that factor is practically multiplying by 0.5, so roughly halving the value, and that would give us roughly two degrees. That's what this was doing here. Kind of interesting. Okay, cool. So we've gone through and seen how to build a fully fledged chat application using what we've learned throughout the course.

And we've built quite a lot. If you think about this application, you're getting real-time updates on what tools are being used and the parameters being input to those tools, and then that is all being returned in a streamed output, and even in a structured output for your final answer, including the answer and the tools that were used.

So of course, you know, what we built here is fairly limited, but it's super easy to extend this. Like you could, maybe something that you might want to go and do is take what we've built here, like fork this application and just go and add different tools to it and see what happens.

Because this is very extensible. You can do a lot with it, but yeah, that is the end of the course. Of course, this is just the beginning of whatever it is you're wanting to learn or build with AI, treat this as the beginning and just go out and find all the other cool, interesting stuff that you can go and build.

So I hope this course has been useful, informative, and gives you an advantage in whatever it is you're going out to build. So thank you very much for watching and taking the course and sticking through right to the end. I know it's pretty long, so I appreciate it a lot and I hope you get a lot out of it.

Thanks. Bye. Bye.