Okay, so moving on to our next chapter, getting started with LangChain. In this chapter, we're going to introduce LangChain by building a simple LLM-powered assistant that will do various things for us: it will be multimodal, generating some text, generating images, generating some structured outputs; it will do a few things.
Now to get started, we will go over to the course repo. All of the code, all the chapters are in here. There are two ways of running this, either locally or in Google Colab. We would recommend running in Google Colab because it's just a lot simpler with environments, but you can also run it locally.
And actually for the capstone, we will be running it locally; there's no way of us doing that in Colab. So if you would like to run everything locally, I'll quickly show you how now; if you would like to run in Colab, which I would recommend at least for the first notebook chapters, just skip ahead.
There will be chapter points in the timeline of the video. So for running it locally, we just come down to here. This actually tells you everything that you need. You will need to install uv; this is the Python package and environment manager that we recommend.
You don't need to use uv, it's up to you, but uv is very simple and works really well, so I would recommend it. You would install it with this command here. This is for Mac, so it will be different otherwise; if you are on Windows or anything else, you can look at the installation guide there and it will tell you what to do.
And so before we actually do this, what I will do is go ahead and just clone this repo. So we'll come into here. I'm going to create like a temp directory for myself, because I already have the langchain-course in there. And what I'm going to do is just git clone the langchain-course repo.
Okay, so you will also need to install git if you don't have that. So we have that. Then what we'll do is copy this. So this will install Python 3.12.7 for us with this command. Then this will create a new venv within that, using the Python 3.12.7 that we've installed.
And then uv sync will actually look at the pyproject.toml file, which is essentially the package specification for the repo, and use that to install everything that we need. Now, we should actually make sure that we are within the langchain-course directory. And then, yes, we can run those three. And there we go.
So everything should install with that. Now, if you are in Cursor, you can just run cursor ., or code . if you're in VS Code. I'll just be running this, and then I've opened up the course. Now, within that course, you have your notebooks, and you just run through these, making sure you select your kernel, your Python environment, and making sure you're using the correct venv from here.
So that should pop up already as this .venv/bin/python, and you'll click that. And then you can run through. When you are running locally, don't run these install cells; you don't need to, you've already installed everything. These are specifically for Colab. So that is running things locally.
Now let's have a look at running things in Colab. So for running everything in Colab, we have our notebooks in here. We click through and then we have each of the chapters through here. So, starting with the first chapter, the introduction, which is where we are now. To open this in Colab, you can either just click this Colab button here, or, if you really want to, or for example if this is not loading for you,
you can copy the URL at the top here, go over to Colab, go to Open > GitHub, then just paste that in there and press enter. And there we go, we have our notebook. Okay, so we're in now. What we will do first is just install the prerequisites.
So we have LangChain, just a lot of LangChain packages here: langchain-core, langchain-openai (because we're using OpenAI), and langchain-community, which is needed for running what we're running.
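In a Colab cell, that install step looks roughly like this (the exact package pins in the course notebook may differ):

```python
!pip install -qU \
    langchain-core \
    langchain-openai \
    langchain-community
```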
Okay, so that has installed everything for us, and we can move on to our first step, which is initializing our LLM. We're going to be using GPT-4o mini, which is a smaller, faster, and cheaper model from OpenAI that is still very good. So what we need to do here is get an API key. For getting that API key, we're going to go to OpenAI's website, and you can see here that we're opening platform.openai.com.
And then we're going to go to Settings, Organization, API keys. You can copy that; I'll just click it from here. So I'm going to go ahead and create a new secret key. Just in case you're looking for where this is, it's Settings, Organization, API keys again.
Okay, create a new API key. I'm going to call it langchain-course. I'll just put it under semantic-router; that's just my organization, you put it wherever you want it to be. And then you would copy your API key. You can see mine here; I'm obviously going to revoke it before you see this, but you can try and use it if you really like.
So I'm going to copy that and place it into this little box here. You could also just put your full API key directly in here; it's up to you, but this little box just makes things easier. What we've basically done there is just pass in our API key.
We're setting our OpenAI model, gpt-4o-mini, and what we're going to do now is essentially just connect and set up our LLM parameters with LangChain. So we run that; we say, okay, we're using gpt-4o-mini, and we're also setting ourselves up to use two different LLMs here, or rather two of the same LLM with slightly different settings.
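A minimal sketch of that setup, assuming the key is read from the environment or typed in interactively (the names llm and creative_llm are the ones used below):

```python
import os
from getpass import getpass

from langchain_openai import ChatOpenAI

# read the API key from the environment, or prompt for it
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY") or getpass(
    "Enter your OpenAI API key: "
)

# "normal" LLM: temperature 0 always takes the highest-probability token
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.0)

# "creative" LLM: temperature 0.9 lets lower-probability tokens through
creative_llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.9)
```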
So the first of those is an LLM with a temperature setting of zero. The temperature setting basically controls the randomness of the output of your LLM. The way it works is that when an LLM is predicting the next token, the next word in the sequence, it actually provides a probability for every token in its vocabulary, i.e. everything the LLM has been trained on.
So when we set a temperature of zero, we're saying: you are going to give us the token with the highest probability according to you. Whereas when we set a temperature of 0.9, we're saying: there's actually an increased chance of you giving us a token that is not the one with the highest probability according to the LLM.
But what that tends to do is give us more creative outputs. So that's what the temperature does. So we are creating a normal LLM and then a more creative LLM with this. So what are we going to be building? We're going to be taking an article draft.
So, a draft article from the Aurelio learning page, and we're going to be using an LLM chain to generate various things that we might find helpful where, you know, we have this article draft and we're editing it and kind of finalizing it. So what are those going to be?
You can see them here. We have the title for the article, then the description, an SEO-friendly description specifically. For the third one, we're going to get the LLM to provide us advice on an existing paragraph and essentially write a new paragraph for us from that existing paragraph. This is the structured output part: it's going to write a new version of that paragraph for us.
And it's going to give us advice on where we can improve our writing. Then we're going to generate a thumbnail or hero image for our article; so, you know, a nice image that you would put at the top. So here we're just going to input our article; you can put something else in here if you like.
Essentially, this is just a big article that was written a little while back on agents. And now we can go ahead and start preparing our prompts, which are essentially the instructions for our LLM. LangChain comes with a lot of different utilities for prompts, and we're going to dive into them in a lot more detail later, but I do want to give you the essentials now, just so you can understand what we're looking at, at least conceptually.
So prompts for chat agents are, at a minimum, broken up into three components. Those are the system prompt. This provides instructions to our LLM on how it should behave, what its objective is, and how it should go about achieving that objective. Generally, system prompts are going to be a bit longer than what we have here, depending on the use case.
Then we have our user prompts. So these are user-written messages, usually. Sometimes we might want to pre-populate those if we want to encourage a particular type of conversational pattern from our agent. But for the most part, yes, these are going to be user generated. Then we have our AI prompts.
So these are, of course, AI generated. And again, in some cases we might want to generate those ourselves beforehand, or within a conversation, if we have a particular reason for doing so. But for the most part, you can assume the user messages are user generated and the AI messages are AI generated. Now, LangChain provides us with templates for each one of these prompt types.
Let's go ahead and have a look at what these look like within LangChain. To begin, we are looking at this one: we have our system message prompt template and the human message, i.e. the user message that we saw before. So we have these two. The system prompt, keeping it quite simple here:
"You are an AI system that helps generate article titles", right? So the first component we want to generate is the article title, and we're telling the AI that's what we want it to do. And then here we're actually providing kind of a template for the user input.
So yes, as I mentioned, user input can be fully generated by a user. It might also not be generated by a user; it might be setting up a conversation beforehand, which a user would later continue. Or, in this scenario, we're actually creating a template, and what the user provides will just be inserted here, in place of {article}.
And that's why we have these input variables. So what this is going to do is: okay, we have all of these instructions around here, and they're all going to be provided to OpenAI as if it is the user saying this. But it will actually just be this part here that a user will be providing, okay?
And we might want to also format this a little nicer. It kind of depends; this will work as it is. But we can also put something like this to make it a little bit clearer to the LLM: okay, what is the article, and where are the instructions. So we have that.
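Roughly, those two templates look like this (the instruction wording here is paraphrased from the notebook):

```python
from langchain_core.prompts import (
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

# system prompt: how the LLM should behave and what its objective is
system_prompt = SystemMessagePromptTemplate.from_template(
    "You are an AI system that helps generate article titles."
)

# user prompt template: {article} is picked up as an input variable
# and will be replaced with whatever the user provides
user_prompt = HumanMessagePromptTemplate.from_template(
    """You are tasked with creating a title for the article.

The article is here for you to examine:

---

{article}

---

Only output the article title, nothing else."""
)
```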
And you can see in this scenario, there's not that much difference between what the system prompt and the user prompt are doing. This is a particular scenario; it varies when you get into the more conversational stuff, as we will later. You'll see that the user prompt is generally fully user generated, or mostly user generated.
And many of these types of instructions we might actually put into the system prompt instead. It varies, and we'll see many different ways of using these different types of prompts in various places throughout the course. Then, you'll see here, I just want to show you how this is working.
We can use the format method on our user prompt here to actually insert something into the {article} input. So we're going to call user_prompt.format and pass in something for article. We could also format the output a little nicer, but I'll just show you this for now.
So we have our human message, and inside the content, this is the text that we had. All right, you can see that we have all of this, and this is what we wrote before. We wrote all of this, except for this part. We didn't write this; instead of this, we had {article}.
All right, so let's format this a little nicer so that we can see. Okay, so this is exactly what we wrote up here, exactly the same, except that now we have the test string instead of {article}. So later, when we insert our article, it's going to go in there. That's all we're doing.
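That format call is simply:

```python
# fill the {article} placeholder; format returns a HumanMessage
human_msg = user_prompt.format(article="TEST STRING")
print(human_msg.content)
```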
It's like an f-string in Python. And this is, again, one of the things people might complain about with LangChain: this sort of thing can seem excessive, because you could just do it with an f-string. But there are, as we'll see later, particularly when you're streaming, really helpful features that come with using LangChain's built-in prompt templates, or at least the message objects that we'll see.
So, you know, we need to keep that in mind. Again, as soon as things get more complicated, LangChain becomes a bit more useful. So, the chat prompt template. This is basically just going to take what we have here, our system prompt and user prompt; you could also include some AI prompts in there.
And what it's going to do is merge both of those. And then when we do format, what it's going to do is put both of those together into a chat history. Okay. So let's see what that looks like first in a more messy way. Okay. So you can see we have just the content, right?
So it doesn't include the whole HumanMessage object like we had before; we're not seeing anything like that here. Instead, we're just seeing the string. So now let's switch back to print, and we can see that what we have is our system message here, just prefixed with "System:".
And then we have the human message, prefixed by "Human:", and it continues. So that's all it's doing; it's just merging those into a sort of chat history. We could also put in AI messages and they would appear in there as well. Okay, so we have that.
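Assuming the templates defined above, that looks like:

```python
from langchain_core.prompts import ChatPromptTemplate

# merge the system and user prompts into one chat prompt template
first_prompt = ChatPromptTemplate.from_messages([system_prompt, user_prompt])

# format() merges everything into a single chat history string,
# prefixing each message with "System:" / "Human:"
print(first_prompt.format(article="TEST STRING"))
```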
Now that is our prompt template. Let's put that together with an LLM to create what, in older versions of LangChain, would be called an LLM chain. We wouldn't necessarily call it an LLMChain now, because we're not using the LLMChain abstraction; it's not super important if that doesn't make sense.
We will go into it in more detail later, particularly in the LCEL chapter. So what will this chain do? Think of an LLM chain as just a chain where we're chaining together these multiple components. It will perform these steps: prompt formatting, which is what I just showed you;
then LLM generation, so sending our prompt to OpenAI, getting a response, and getting that output. You can also add another step here if you want to format that output in a particular way. We're going to be outputting it in a particular format so that we can feed it into the next step more easily.
But there are also things called output parsers, which parse your output in a more dynamic or complicated way, depending on what you're doing. So this is our first look at LCEL. I don't want us to focus too much on the syntax here, because we will be doing that later, but I do want you to understand what is actually happening here and, logically, what we are writing.
So all we really need to know right now is we define our inputs with the first dictionary segment here. All right. So this is, you know, our inputs, which we have defined already. Okay. So if we come up to our user prompt here, we said the input variable is our article, right?
And we might have also added input variables to the system prompt here as well. In that case, let's say we had "You are an AI system called {name}", right? Then what we would have to do down here is also pass that in. So we would have article, but we would also have name.
So basically we just need to make sure that in here we're including the variables that we have defined as input variables for our first prompt. So we can actually go ahead and add that, so we can see it in action. We'll run this again and just include that, or rather re-initialize our first prompt.
So we see that, and if we just have a look at what that means for the format function here, it means we'll also need to pass in a name. Let's call it Joe. So, Joe the AI; you are an AI system called Joe now. Okay.
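As a sketch, that change looks something like this:

```python
# re-define the system prompt with an extra {name} input variable
system_prompt = SystemMessagePromptTemplate.from_template(
    "You are an AI system called {name} that helps generate article titles."
)
first_prompt = ChatPromptTemplate.from_messages([system_prompt, user_prompt])

# both input variables now need to be passed to format
print(first_prompt.format(article="TEST STRING", name="Joe"))
```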
So we have Joe, our AI, whose name is going to be fed in through these input variables. Then we have this pipe operator. The pipe operator is basically saying: whatever is to the left of the pipe operator, which in this case would be this dictionary, is going to go into whatever is on the right of the pipe operator.
It's that simple. Again, we'll dive into this and break it apart in the LCEL chapter, but for now that's all we need to know. So this is going to go into our first prompt, which is going to do the formatting: it's going to add the name and the article that we've provided into our first prompt.
And then it's going to output that. We have our pipe operator here again, so the output of this is going to go into the input of the next component, the creative LLM, and that is going to generate some tokens; it's going to generate our output.
That output is going to be an AIMessage. And as you saw before, if I take this bit out, within those message objects we have this content field. So we are actually going to extract the content field out from our AI message to just get the content, and that is what we do here.
So we get the AI message out from our LLM, then we're extracting the content from that AIMessage object, and we're going to pass it into a dictionary that just contains article_title, like so. We don't strictly need to do that; we could just get the AI message directly.
I just want to show you how we are using this sort of chain in LCEL. So once we have set up our chain, we call it, or execute it, using the invoke method. Into that, we will need to pass those variables. We have our article already, but we also gave our AI a name now.
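Put together, the chain and the invoke call look roughly like this (the lambda-based input mapping is one common way of writing it; it relies on first_prompt, creative_llm, and article defined earlier):

```python
# first LCEL chain: inputs -> prompt formatting -> LLM -> extract content
chain_one = (
    {
        "article": lambda x: x["article"],
        "name": lambda x: x["name"],
    }
    | first_prompt
    | creative_llm
    | {"article_title": lambda x: x.content}
)

# invoke executes the whole chain with our input variables
article_title = chain_one.invoke({"article": article, "name": "Joe"})["article_title"]
print(article_title)
```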
So let's add that and we'll run this. OK, so Joe has generated us an article title, "Unlocking the Future: The Rise of Neurosymbolic AI Agents". Cool. Much better name than what I gave that article, which was "AI Agents are Neurosymbolic Systems". No, I don't think I did too badly.
OK, so we have that. Now, let's continue; what we're going to be doing is building more of these types of LLM chain pipelines, where we're feeding in some prompts, generating something, getting something back, and doing something with it. So, as mentioned, we have the title. We're now moving on to the description, so we want to generate a description.
So we have our human message prompt template. This is actually going to follow a similar format as before. We probably also want to redefine the system prompt, because I think I'm using the same system message there. So let's go ahead and modify that. Or, what we could also do is just remove the name now, because I've shown you that.
So what we could do is "You are an AI assistant that helps build good articles", right? Build good articles. And we could just use this as our generic system prompt now. Let's say that's our new system prompt. Now we have our user prompt: you are tasked with creating a description for the article.
The article is here for you to examine: {article}. Here is the article title. OK, so we need the article title now as well in our input variables. Then we're going to output an SEO-friendly article description, and we're just saying, to be certain here, do not output anything other than the description.
So, you know, sometimes an LLM might say, hey, look, this is what I've generated for you. The reason I think this is good is because so on and so on and so on. All right. If you're programmatically taking some output from an LLM, you don't want all of that fluff around what the LLM has generated.
You just want exactly what you've asked it for, because otherwise you need to parse it out with code, and that can get messy and also be far less reliable. So we're just saying do not output anything else. Then we're putting all of these together, so the system prompt and the second user prompt.
This one here. Putting those together into a new chat prompt template, and then we're going to feed all that into another LLM chain, as we have here, to generate our description. So let's go ahead and invoke that as before; we'll just make sure we add in the article title that we got from before.
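A sketch of that second chain, following the same pattern (prompt wording paraphrased; I'm using the temperature-0 llm here, though either would work):

```python
# redefined generic system prompt
system_prompt = SystemMessagePromptTemplate.from_template(
    "You are an AI assistant that helps build good articles."
)

second_user_prompt = HumanMessagePromptTemplate.from_template(
    """You are tasked with creating a description for the article.

The article is here for you to examine:

---

{article}

---

Here is the article title '{article_title}'.

Output the SEO friendly article description.
Do not output anything other than the description."""
)
second_prompt = ChatPromptTemplate.from_messages([system_prompt, second_user_prompt])

# second chain: article + title -> description
chain_two = (
    {
        "article": lambda x: x["article"],
        "article_title": lambda x: x["article_title"],
    }
    | second_prompt
    | llm
    | {"summary": lambda x: x.content}
)

article_description = chain_two.invoke(
    {"article": article, "article_title": article_title}
)["summary"]
```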
And let's see what we get. OK, so we have this: "Explore the transformative potential of neuro-symbolic AI agents in..." A little bit long, to be honest. But yeah, you can see what it's doing here. And of course, we could then go in, and we see this is kind of too long.
We're like, oh yeah, SEO-friendly description? Not really. So we can modify this: output the SEO-friendly description, make sure we don't exceed, let me put that on a new line, make sure we don't exceed, say, 200 characters. Or maybe it's even less for SEO; I don't have a clue.
I'll just say 120 characters. "Do not output anything other than the description." All right, so we can just go back, modify our prompting, and see what that generates again. OK, so much shorter; probably too short now, but that's fine. Cool, so we have that. We have a summary; we process that.
And that's now in this dictionary format that we have here. Cool. Now the third step. We want to consume that first article variable with our full article. And we're going to generate a few different output fields. So for this, we're going to be using the structured output feature. So let's scroll down.
And we'll see what that is, or what that looks like. So structured output is essentially us forcing the LLM to output an object with these particular fields. We can modify this quite a bit, but in this scenario, what I want is for there to be an original paragraph.
Right, so I just want it to regenerate the original paragraph, because I'm lazy and I don't want to extract it out. Then I want to get the new edited paragraph; this is the LLM-generated, improved paragraph. And then we want to get some feedback, because we don't want to just automate ourselves away.
We want to augment ourselves and get better with AI, rather than just saying, ah, you do this. So that's what we do here. And you can see that here we're using this Pydantic object. What Pydantic allows us to do is define these particular fields, and it also allows us to assign a description to each field.
And LangChain is actually going to go ahead and read all of this. It even reads the types; so, for example, we could put integer here and we could actually get a numeric score for our paragraph. We can try that. So let's just try that quickly, I'll show you. So, a numeric score.
In fact, let's not even change the description; I'm going to keep "constructive feedback on the original paragraph", but I'll just put int in there as the type. So let's see what happens. Okay, so we have that. And what I'm going to do is take our creative LLM.
I'm going to use this with_structured_output method, and that's actually going to modify that LLM class, creating a new LLM that is forced to use this structure for the output. So, passing Paragraph in here and using this, we're creating this new structured LLM. Let's run that and see what happens.
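A sketch of that Pydantic object and the structured LLM (field names follow what's described above):

```python
from pydantic import BaseModel, Field

class Paragraph(BaseModel):
    original_paragraph: str = Field(description="The original paragraph")
    edited_paragraph: str = Field(description="The improved, edited paragraph")
    feedback: str = Field(description="Constructive feedback on the original paragraph")

# with_structured_output forces the LLM to respond with a Paragraph object
structured_llm = creative_llm.with_structured_output(Paragraph)
```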
Okay, so we're going to modify our chain accordingly. Maybe what I can do is also just remove this bit for now, so we can see what the structured LLM outputs directly. And let's see. Okay, so now you can see that we actually have that Paragraph object, right? The one we defined up here, which is kind of cool.
And then in there, we have the original paragraph, right? So this is where this is coming from. I definitely remember writing something that looks a lot like that. So I think that is correct. We have the edited paragraph. So this is, okay, what it thinks is better. And then interestingly, the feedback is three, which is weird, right?
Because here we said "constructive feedback on the original paragraph". But when we use with_structured_output, what LangChain is doing is essentially performing a tool call to OpenAI, and what a tool call can do is force a particular structure in the output of an LLM.
So when we say feedback has to be an integer, no matter what we put here, it's going to give us an integer. Because how do you provide constructive feedback with an integer? It doesn't really make sense. But because we've set that limitation, that restriction here, that is what it does.
It just gives us the numeric value. So I'm going to switch that back to string, and then let's rerun this and see what we get. Okay, we should now see that we actually do get constructive feedback. All right, so yeah, we can see it's quite long: "The original paragraph effectively communicates the limitations of neural AI systems in performing certain tasks."
"However, it could benefit from slightly improved clarity and conciseness. For example, the phrase 'was becoming clear' can be made more direct by changing it to 'became evident'." Yeah, true, thank you very much. So now we actually get that feedback, which is pretty nice. Now let's add this final step to our chain.
Okay. It's just going to take our Paragraph object here and extract it into a dictionary. We don't necessarily need to do this; honestly, I actually kind of prefer it within the Paragraph object. But it's just so we can see how we would pass things along on the other side of the chain.
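As a sketch, with third_prompt standing in for whichever chat prompt template feeds this paragraph task, the chain with that final extraction step looks like this:

```python
# third chain: article -> structured LLM -> unpack the Paragraph object
chain_three = (
    {"article": lambda x: x["article"]}
    | third_prompt
    | structured_llm
    | {
        "original_paragraph": lambda x: x.original_paragraph,
        "edited_paragraph": lambda x: x.edited_paragraph,
        "feedback": lambda x: x.feedback,
    }
)

out = chain_three.invoke({"article": article})
```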
Okay, so now we can see we've extracted that out. Cool, so we have all of that interesting feedback again. But let's leave it there for the text part of this. Now let's have a look at the sort of multimodal features that we can work with. This is, you know, maybe one of those things that seems a bit over-abstracted, a little complicated, where it maybe could be improved.
But, you know, we're not really going to focus too much on the multimodal stuff; we'll be focusing on language. But I did want to just show you very quickly. So, we want this article to look better. We want to generate a prompt based on the article itself that we can then pass to DALL·E, the image generation model from OpenAI, which will then generate an image, like a thumbnail image, for us.
Okay. So the first step of that is we're actually going to get an LLM to generate that prompt. So we have our prompt that we're going to use for that: "Generate a prompt with less than 500 characters to generate an image based on the following article."
Okay, so that's our prompt, super simple. We're using the generic prompt template here; you could also use the user prompt template, it's up to you. Then, based on what this outputs, we're going to feed that into this generate and display image function via the image prompt parameter.
That is going to use the DALL·E API wrapper from LangChain; it's going to run that image prompt, and we're essentially going to get a URL out from that. Then we're going to read that using scikit-image here: it's going to read that image URL, get the image data, and then we're just going to display it.
Okay, so pretty straightforward. Now, again, this is an LCEL thing here that we're doing, and we have this RunnableLambda: when we're running custom functions within LCEL, we need to wrap them in a RunnableLambda. I don't want to go too much into what this is doing here, because we do cover it in the LCEL chapter, but all you really need to know is: we have a custom function, we wrap it in RunnableLambda.
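Roughly, that function and wrapper look like this; this is a sketch of what I've just described, not a copy of the notebook:

```python
from langchain_core.runnables import RunnableLambda
from langchain_community.utilities.dalle_image_generator import DallEAPIWrapper
from skimage import io
import matplotlib.pyplot as plt

def generate_and_display_image(image_prompt: str) -> None:
    # the DALL·E wrapper returns a URL to the generated image
    image_url = DallEAPIWrapper().run(image_prompt)
    # read the image data from that URL and display it
    image_data = io.imread(image_url)
    plt.imshow(image_data)
    plt.axis("off")
    plt.show()

# custom functions must be wrapped in RunnableLambda to be used in LCEL
image_gen_runnable = RunnableLambda(generate_and_display_image)
```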
And then what we get from that, we can use within the LCEL syntax here. So what are we doing here? Let's figure this out. We are taking that image prompt template that we defined just up here; the input variable to that is article. We have our article data being input here, and we feed that into our prompt.
From there, we get our message, which we then feed into our LLM. The LLM is going to generate an image prompt for us, a prompt for generating our image for this article. We can even print that out, so that we can see what it generates.
Because I'm also kind of curious. So we'll just run that and see. It will feed that content into our runnable, which is basically this function here, and we'll see what it generates. Don't expect anything amazing from DALL·E; it's not the best, to be honest, but at least we see how to use it.
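A rough sketch of that full image pipeline, using the image_gen_runnable from above; show_and_pass is an illustrative helper I'm adding here just to print the generated prompt before passing it on:

```python
from langchain_core.prompts import PromptTemplate

image_prompt = PromptTemplate.from_template(
    "Generate a prompt with less than 500 characters to generate an image "
    "based on the following article: {article}"
)

def show_and_pass(msg):
    # print the generated image prompt, then pass the text along
    print(msg.content)
    return msg.content

# article -> image prompt text -> DALL·E -> display the image
image_chain = (
    {"article": lambda x: x["article"]}
    | image_prompt
    | llm
    | RunnableLambda(show_and_pass)
    | image_gen_runnable
)

image_chain.invoke({"article": article})
```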
Okay. So we can see the prompt that was used here: "Create an image that visually represents the concept of neuro-symbolic agents. Depict a futuristic interface where large language models interact with traditional code, symbolizing the integration of", oh my gosh, something about computation, "include elements like a brain to represent neural networks,
gears or circuits for symbolic logic, and a web of connections illustrating the vast use cases of AI agents." Oh my gosh, look at all that, a big prompt. Then we get this. So, you know, DALL·E is interesting. I would say we could even take this prompt and see what it comes up with in something like Midjourney.
And you can see these way cooler images that we get from just another image generation model; far better, but still pretty cool, honestly. So in terms of generating images, the prompt itself is actually pretty good; the image, you know, could be better, but that's it.
So with all of that, we've seen a little introduction to what we might be building with LangChain. That's it for our introduction chapter. As I mentioned, we don't want to go too much into what each of these things is doing. I just really want to focus on, okay,
this is kind of how we're building something with LangChain, this is the overall flow. But we don't really want to focus too much on what exactly LCEL is doing, or what exactly this prompt object is that we're setting up; we're going to focus on all of those things, and much more, in the upcoming chapters.
So for now, we've just seen a little bit of what we can build before diving in, in more detail.