
LangChain v0.3 — Getting Started


Chapters

0:00 Getting Started with LangChain
0:46 Local Setup
3:32 Colab Setup
4:43 Initializing our OpenAI LLMs
9:06 LLM Prompting
10:31 LangChain Prompt Templates
15:20 Creating an LLM Chain with LCEL
20:31 Another Text Generation Pipeline
23:43 Structured Outputs in LangChain
28:27 Image Generation in LangChain


00:00:00.080 | Okay, so moving on to our next chapter, getting started with LangChain.
00:00:04.080 | In this chapter, we're going to be introducing LangChain by building a simple LLM-powered
00:00:10.080 | assistant that will do various things for us. It will be multimodal, generating
00:00:14.920 | text, generating images, and generating structured outputs. It will do a few things.
00:00:20.160 | Now to get started, we will go over to the course repo.
00:00:24.920 | All of the code, all the chapters are in here.
00:00:28.240 | There are two ways of running this, either locally or in Google Colab.
00:00:32.700 | We would recommend running in Google Colab because it's just a lot simpler with environments,
00:00:37.480 | but you can also run it locally.
00:00:39.820 | And actually for the capstone, we will be running it locally.
00:00:43.740 | There's no way of us doing that in Colab.
00:00:47.020 | So if you would like to run everything locally, I'll show you how quickly now, if you would
00:00:51.860 | like to run in Colab, which I would recommend at least for the first notebook chapters, just
00:00:57.940 | skip ahead.
00:00:58.860 | There will be chapter points in the timeline of the video.
00:01:03.400 | So for running it locally, we've just come down to here.
00:01:07.180 | So this actually tells you everything that you need.
00:01:10.180 | So you will need to install uv, right?
00:01:14.640 | This is the Python package and environment manager that we recommend.
00:01:20.300 | You don't need to use uv.
00:01:22.560 | You know, it's up to you.
00:01:24.020 | uv is very simple.
00:01:25.640 | It works really well.
00:01:26.640 | So I would recommend that.
00:01:29.020 | So you would install it with this command here.
00:01:32.100 | This is on Mac.
00:01:33.100 | So it will be different.
00:01:34.680 | If you are on Windows or another platform, you can look at the installation guide there and
00:01:40.200 | it will tell you what to do.
00:01:41.100 | And so before we actually do this, what I will do is go ahead and just clone this repo.
00:01:49.020 | So we'll come into here.
00:01:50.820 | I'm going to create like a temp directory for me because I already have the langchain-course
00:01:56.820 | in there.
00:01:57.320 | And what I'm going to do is just git clone the langchain-course repo.
00:02:00.680 | Okay.
00:02:01.360 | So you will also need to install git if you don't have that.
00:02:05.320 | Okay.
00:02:05.780 | So we have that.
00:02:08.000 | Then what we'll do is copy this.
00:02:10.120 | Okay.
00:02:10.560 | So this will install Python 3.12.7 for us with this command.
00:02:15.300 | Then this will create a new venv using the Python 3.12.7 that we've installed.
00:02:23.540 | And then uv sync will actually be looking at the pyproject.toml file.
00:02:30.280 | That's like the package specification for the repo, and it uses that to install everything that
00:02:36.780 | we need.
00:02:37.140 | Now, we should actually make sure that we are within the langchain-course directory.
00:02:41.660 | And then yes, we can run those three.
00:02:44.340 | And there we go.
00:02:46.560 | So everything should install with that.
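For reference, the local setup steps described here come out roughly like this. This is a sketch: the repo URL is an assumption, and the exact Python version and commands should be taken from the course README.

```bash
# clone the course repo and move into it (repo URL assumed; see the README)
git clone https://github.com/aurelio-labs/langchain-course.git
cd langchain-course

# install the pinned Python version, create a venv with it, then install
# everything listed in pyproject.toml
uv python install 3.12.7
uv venv --python 3.12.7
uv sync
```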
00:02:49.440 | Now, if you are in Cursor, you can just run cursor .
00:02:55.640 | Or we can run code . if in VS Code.
00:02:59.520 | I'll just be running this.
00:03:01.120 | And then I've opened up the course.
00:03:03.980 | Now, within that course, you have your notebooks.
00:03:07.080 | And then you just run through these, making sure you select your kernel, Python environment,
00:03:11.280 | and making sure you're using the correct venv from here.
00:03:15.180 | So that should pop up already as this .venv/bin/python.
00:03:18.980 | And you'll click that.
00:03:20.360 | And then you can run through.
00:03:21.960 | When you are running locally, don't run these.
00:03:24.800 | You don't need to.
00:03:25.620 | You've already installed everything.
00:03:26.840 | So you don't.
00:03:28.020 | This specifically is for Colab.
00:03:30.000 | So that is running things locally.
00:03:32.320 | Now let's have a look at running things in Colab.
00:03:37.080 | So for running everything in Colab, we have our notebooks in here.
00:03:41.340 | We click through and then we have each of the chapters through here.
00:03:45.520 | So starting with the first chapter, the introduction, which is where we are now.
00:03:51.500 | So what you can do to open this in Colab is either just click this Colab button here,
00:03:56.960 | or if you really want to, for example, maybe this is not loading for you.
00:04:03.380 | What you can do is you can copy the URL at the top here.
00:04:07.360 | You can go over to Colab.
00:04:09.360 | You can go to open GitHub and then just paste that in there and press enter.
00:04:16.660 | And there we go.
00:04:18.640 | We have our notebook.
00:04:21.360 | Okay.
00:04:21.560 | So we're in now.
00:04:22.820 | What we will do first is just install the prerequisites.
00:04:26.540 | So we have LangChain, just a lot of LangChain packages here: langchain-core,
00:04:32.720 | langchain-openai, because we're using OpenAI, and langchain-community, which is needed for running
00:04:38.440 | what we're running.
00:04:39.500 | Okay.
00:04:40.440 | So that has installed everything for us.
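The install cell is along these lines (a sketch; the notebook pins exact versions, which are omitted here):

```python
# Colab-style install of the LangChain packages used in this chapter
!pip install -qU langchain-core langchain-openai langchain-community
```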
00:04:43.300 | So we can move on to our first step, which is initializing our LLM.
00:04:50.220 | So we're going to be using GPT-4o mini, which is a smaller but fast and also cheaper
00:04:57.180 | model from OpenAI
00:04:57.700 | that is still very good.
00:05:01.040 | So what we need to do here is get an API key.
00:05:04.600 | Okay.
00:05:05.220 | So for getting that API key, we're going to go to OpenAI's website and you can see here
00:05:11.140 | that we're opening platform.openai.com.
00:05:13.780 | And then we're going to be going to settings, organization, API keys.
00:05:17.820 | So you can copy that.
00:05:19.500 | I'll just click it from here.
00:05:21.080 | Okay.
00:05:21.940 | So I'm going to go ahead and create a new secret key. Actually, just in case you're
00:05:28.080 | looking for where this is:
00:05:29.300 | it's Settings, Organization, API keys again.
00:05:32.620 | Okay.
00:05:33.080 | Create a new API key.
00:05:34.700 | I'm going to call it langchain-course.
00:05:36.960 | I'll just put it on the semantic router.
00:05:41.620 | That's just my organization.
00:05:43.140 | You put it wherever you want it to be.
00:05:46.320 | And then you would copy your API key.
00:05:49.060 | You can see mine here.
00:05:50.320 | I'm obviously going to revoke it before you see this, but you can try and use it if
00:05:54.240 | you really like.
00:05:54.960 | So I'm going to copy that and I'm going to place it into this little box here.
00:05:58.960 | You could also just put your full API key directly in here.
00:06:04.980 | It's up to you, but this little box just makes things easier.
00:06:08.140 | Now, what we've basically done there is just pass in our API key.
00:06:13.200 | We're setting our OpenAI model, GPT-4o mini, and what we're going to be doing now is essentially
00:06:19.960 | just connecting and setting up our LLM parameters with LangChain.
00:06:23.980 | So we run that, we say, okay, we're using GPT-4o mini, and we're also setting ourselves
00:06:31.700 | up to use two different LLMs here, or two of the same LLM with slightly different settings.
00:06:37.800 | So the first of those is an LLM with a temperature setting of zero.
00:06:42.000 | The temperature setting basically controls the randomness of the output of your LLM.
00:06:49.780 | And the way that it works is when an LLM is predicting the next token or next word
00:06:58.900 | in the sequence, it actually provides a probability for every token in the LLM's vocabulary,
00:07:04.660 | i.e. what the LLM has been trained on.
00:07:06.740 | So what we do when we set a temperature of zero is we say, you are going to give us the
00:07:13.880 | token with highest probability according to you.
00:07:16.860 | Okay.
00:07:17.300 | Whereas when we set a temperature of 0.9, what we're saying is, okay, there's an increased
00:07:24.240 | probability of it giving us a token that is not the token with
00:07:32.440 | highest probability according to the LLM.
00:07:34.780 | But what that tends to do is give us more creative outputs.
00:07:39.220 | So that's what the temperature does.
00:07:40.220 | So we are creating a normal LLM and then a more creative LLM with this.
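Putting that together, the initialization looks roughly like this. This is a minimal sketch; the variable names llm and creative_llm are assumptions based on how they are used later.

```python
import os
from getpass import getpass

from langchain_openai import ChatOpenAI

# pass in our API key (the "little box" in Colab does the same thing)
os.environ["OPENAI_API_KEY"] = os.environ.get("OPENAI_API_KEY") or getpass(
    "Enter your OpenAI API key: "
)

# temperature 0.0: always take the highest-probability token
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.0)

# temperature 0.9: more randomness when sampling, i.e. more creative outputs
creative_llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.9)
```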
00:07:46.660 | So what are we going to be building?
00:07:49.000 | We're going to be taking an article draft,
00:07:53.200 | so a draft article from the Aurelio learning page, and we're going to be using the LLM chain
00:08:00.420 | to generate various things that we might find helpful when we have this article
00:08:06.840 | draft and we're editing it and kind of finalizing it.
00:08:10.100 | So what are those going to be?
00:08:11.580 | You can see them here.
00:08:12.640 | We have the title for the article, the description, an SEO friendly description specifically.
00:08:19.160 | The third one, we're going to be getting the LLM to provide us advice on an existing paragraph
00:08:24.560 | and essentially writing a new paragraph for us from that existing paragraph.
00:08:28.820 | And what it's going to do, this is the structured output part, is it's going to write a new version
00:08:34.360 | of that paragraph for us.
00:08:35.300 | And it's going to give us advice on where we can improve our writing.
00:08:38.180 | Then we're going to generate a thumbnail or hero image for our article.
00:08:43.460 | So, you know, nice image that you would put at the top.
00:08:45.900 | So here, we're just going to input our article.
00:08:50.220 | You can put something else in here if you like.
00:08:52.260 | Essentially, this is just a big article that was written a little while back on agents.
00:09:00.180 | And now we can go ahead and start preparing our prompts, which are essentially the instructions
00:09:05.640 | for our LLM.
00:09:06.480 | So LangChain comes with a lot of different utilities for prompts.
00:09:13.480 | And we're going to dive into them in a lot more detail, but I do want to just give you
00:09:16.960 | the essentials now, just so you can understand what we're looking at, at least conceptually.
00:09:22.240 | So prompts for chat agents are, at a minimum, broken up into three components.
00:09:27.760 | Those are the system prompt.
00:09:29.860 | This provides instructions to our LLM on how it should behave, what its objective is, and
00:09:34.680 | how it should go about achieving that objective.
00:09:37.220 | Generally, system prompts are going to be a bit longer than what we have here, depending on
00:09:42.840 | the use case.
00:09:43.480 | Then we have our user prompts.
00:09:45.140 | So these are user written messages.
00:09:48.380 | Sometimes we might want to pre-populate those if we want to encourage a particular
00:09:52.600 | type of conversational pattern from our agent.
00:09:57.300 | But for the most part, yes, these are going to be user generated.
00:10:01.140 | Then we have our AI prompts.
00:10:03.280 | So these are, of course, AI generated.
00:10:05.820 | And again, in some cases, we might want to generate those ourselves beforehand or within
00:10:12.420 | a conversation if we have a particular reason for doing so.
00:10:15.500 | But for the most part, you can assume that these are actually user and AI generated.
00:10:20.380 | Now, LangChain provides us with templates for each one of these prompt types.
00:10:27.360 | Let's go ahead and have a look at what these look like within LangChain.
00:10:31.740 | So to begin, we are looking at this one.
00:10:35.900 | So we have our system message prompt template and our human message prompt template, the user message that we saw before.
00:10:43.240 | So we have these two. The system prompt, keeping it quite simple here:
00:10:46.500 | you are an AI system that helps generate article titles, right?
00:10:50.000 | So the first component we want to generate is the article title.
00:10:53.680 | So we're telling the AI that's what we want it to do.
00:10:58.020 | And then here, right?
00:10:59.580 | So here we're actually providing kind of like a template for a user input.
00:11:06.680 | So yes, as I mentioned, user input can be fully generated by a user.
00:11:15.380 | It might be kind of not generated by a user.
00:11:19.140 | It might be setting up a conversation beforehand, which a user would later use.
00:11:23.820 | Or in this scenario, we're actually creating a template.
00:11:28.520 | And what the user will provide us will actually just be inserted here inside article.
00:11:34.300 | And that's why we have these input variables.
00:11:37.100 | So what this is going to do is, okay, we have all of these instructions around here.
00:11:43.220 | They're all going to be provided to OpenAI as if it is the user saying this.
00:11:48.300 | But it will actually just be this here that a user will be providing, okay?
00:11:54.040 | And we might want to also format this a little nicer.
00:11:56.440 | It kind of depends.
00:11:57.360 | This will work as it is.
00:11:58.900 | But we can also put, you know, something like this to make it a little bit clearer to the LLM.
00:12:04.980 | Okay, what is the article?
00:12:06.760 | Where are the prompts?
00:12:08.620 | So we have that.
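As a sketch, the two templates look something like this; the prompt wording is paraphrased from the notebook.

```python
from langchain_core.prompts import (
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

# system prompt: how the model should behave and what its objective is
system_prompt = SystemMessagePromptTemplate.from_template(
    "You are an AI assistant that helps generate article titles."
)

# user prompt template: {article} is the input variable that gets filled in
# later; everything around it is fixed instruction text
user_prompt = HumanMessagePromptTemplate.from_template(
    """You are tasked with creating a name for an article.
The article is here for you to examine:

---

{article}

---

Only output the article name, nothing else."""
)
```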
00:12:11.220 | And you can see in this scenario, there's not that much difference between what the system prompt
00:12:16.280 | and the user prompt is doing.
00:12:17.400 | And this is, it's a particular scenario.
00:12:19.200 | It varies when you get into the more conversational stuff, as we will do later.
00:12:23.400 | You'll see that the user prompt is generally fully user generated or mostly user generated.
00:12:31.640 | And many of these types of instructions we might actually be putting into the system prompt.
00:12:37.640 | It varies.
00:12:38.760 | And we'll see throughout the course, many different ways of using these different types of prompts
00:12:43.780 | in various different places.
00:12:45.940 | Then, you'll see here.
00:12:48.180 | So I just want to show you how this is working.
00:12:50.740 | We can use this format method on our user prompt here to actually insert something within the article input here.
00:13:00.180 | So we're going to call user_prompt.format, and then we'll pass in something for article.
00:13:04.740 | Okay.
00:13:05.300 | And we can also maybe format this a little nicer, but I'll just show you this for now.
00:13:10.140 | So we have our human message.
00:13:11.720 | And then inside the content, this is the text that we had.
00:13:14.920 | All right, you can see that we have all of this, right?
00:13:16.980 | And this is what we wrote before.
00:13:18.280 | We wrote all of this, except for this part.
00:13:20.600 | We didn't write this.
00:13:22.280 | Instead of this, we had article.
00:13:24.280 | All right.
00:13:25.260 | So let's format this a little nicer so that we can see.
00:13:29.660 | Okay.
00:13:30.720 | So this is exactly what we wrote up here.
00:13:32.800 | Exactly the same, except for now we have the test string instead of article.
00:13:37.340 | So later, when we insert our article, it's going to go inside there.
00:13:41.940 | It's all we're doing.
00:13:42.740 | It's like it's an F string in Python.
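Concretely, the format call shown here is just this (TEST STRING being an arbitrary placeholder):

```python
# fills the {article} placeholder, like an f-string, returning a HumanMessage
message = user_prompt.format(article="TEST STRING")
print(message.content)
```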
00:13:45.780 | Okay.
00:13:46.120 | And this is, again, one of the things where people might complain about LangChain.
00:13:49.820 | You know, this sort of thing can seem excessive because you could just do
00:13:54.900 | this with an f-string.
00:13:55.740 | But there are, as we'll see later, particularly when you're streaming, really helpful features
00:14:01.380 | that come with using LangChain's built-in prompt templates, or at least the message objects
00:14:09.580 | that we'll see.
00:14:10.380 | So, you know, we need to keep that in mind.
00:14:15.680 | Again, as soon as it's more complicated, Langchain can be a bit more useful.
00:14:19.380 | So chat prompt template.
00:14:21.480 | This is basically just going to take what we have here, our system prompt, user prompt.
00:14:26.800 | You could also include some AI prompts in there.
00:14:29.780 | And what it's going to do is merge both of those.
00:14:33.240 | And then when we do format, what it's going to do is put both of those together into a chat
00:14:40.400 | history.
00:14:40.920 | Okay.
00:14:41.740 | So let's see what that looks like first in a more messy way.
00:14:45.600 | Okay.
00:14:46.740 | So you can see we have just the content, right?
00:14:50.720 | So it doesn't include the whole HumanMessage wrapper that we saw before;
00:14:55.240 | we're not seeing anything like that here.
00:14:56.920 | Instead, we're just seeing the string.
00:14:59.160 | So now let's switch back to print.
00:15:01.780 | And we can see that what we have is our system message here.
00:15:06.900 | It's just prefixed with this system.
00:15:08.680 | And then we have human and it's prefixed by human.
00:15:11.160 | And then it continues.
00:15:12.360 | Right.
00:15:12.880 | So that's all it's doing.
00:15:14.020 | It's just kind of merging those in some sort of chat.
00:15:16.140 | Like we could also put in like AI messages and they would appear in there as well.
00:15:19.900 | Okay.
00:15:20.320 | So we have that.
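In code, that merging step is roughly:

```python
from langchain_core.prompts import ChatPromptTemplate

# merge the system and user templates into one chat prompt template
first_prompt = ChatPromptTemplate.from_messages([system_prompt, user_prompt])

# format() stitches the messages into a single string, prefixed with
# "System:" and "Human:" respectively
print(first_prompt.format(article="TEST STRING"))
```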
00:15:21.740 | Now that is our prompt template.
00:15:24.660 | Let's put that together with an LLM to create what, in past versions of LangChain, would be called
00:15:30.540 | an LLM chain.
00:15:31.620 | Now we wouldn't necessarily call it an LLMChain because we're not using the LLMChain abstraction.
00:15:37.040 | It's not super important if that doesn't make sense.
00:15:39.140 | We will go into it in more detail later, particularly in the LCEL chapter.
00:15:45.980 | So what this chain will do (you know, think of an LLM chain as just a chain where we're chaining
00:15:52.580 | together these multiple components):
00:15:54.020 | it will perform the steps of prompt formatting,
00:15:57.500 | so that's what I just showed you,
00:15:59.100 | then LLM generation,
00:16:01.500 | so sending our prompt to OpenAI, getting a response, and getting that output.
00:16:07.180 | You can also add another step here if you want to format that in a particular way.
00:16:12.080 | We're going to be outputting that in a particular format so that we can feed it into
00:16:15.860 | the next step more easily.
00:16:16.920 | But there are also things called output parsers, which parse your output in a more dynamic or
00:16:24.460 | complicated way, depending on what you're doing.
00:16:26.360 | So this is our first look at LCEL.
00:16:29.240 | I don't want us to focus too much on the syntax here because we will be doing that later, but
00:16:34.880 | I do want you to just understand what is actually happening here and logically, what are we writing?
00:16:43.200 | So all we really need to know right now is we define our inputs with the first dictionary
00:16:49.880 | segment here.
00:16:50.820 | All right.
00:16:51.700 | So this is, you know, our inputs, which we have defined already.
00:16:56.000 | Okay.
00:16:56.840 | So if we come up to our user prompt here, we said the input variable is our article, right?
00:17:04.500 | And we might have also added input variables to the system prompt here as well.
00:17:08.240 | In that case, you know, let's say we had your AI assistant called name, right?
00:17:18.020 | And then what we would have to do down here is we would also have to pass that in.
00:17:33.480 | Right.
00:17:33.980 | So also we would have article, but we would also have name.
00:17:39.080 | So basically we just need to make sure that in here we're including the variables that
00:17:45.760 | have defined as input variables for our, our first prompts.
00:17:49.600 | Okay.
00:17:50.200 | So we can actually go ahead and let's add that.
00:17:52.440 | So we can see it in action.
00:17:54.680 | So we'll run this again and just include that, or re-initialize our first prompt.
00:18:01.640 | So we see that, and if we just have a look at what that means for this format function
00:18:07.460 | here, it means we'll also need to pass in a name.
00:18:09.940 | Okay.
00:18:10.640 | And call it Joe.
00:18:11.940 | Okay.
00:18:12.700 | So Joe, the AI, right?
00:18:14.820 | So you are an AI system called Joe now.
00:18:16.840 | Okay.
00:18:17.480 | So we have Joe, our AI that is going to be fed in through these input variables.
00:18:22.600 | Then we have this pipe operator.
00:18:24.420 | The pipe operator is basically saying whatever is to the left of the pipe operator, which in
00:18:30.220 | this case would be this is going to go into whatever is on the right of the pipe operator.
00:18:35.320 | It's that simple.
00:18:36.200 | Again, we'll dive into this and kind of break it apart in the LCEL chapter, but for
00:18:41.200 | now that's all we need to know.
00:18:42.180 | So this is going to go into our first prompt that is going to perform everything.
00:18:48.100 | It's going to add the name and the article that we've provided into our first prompt.
00:18:51.660 | And then it's going to output that, right?
00:18:53.920 | It's going to output that, and then we have our pipe operator here.
00:18:56.220 | So the output of this is going to go into the input of our next component,
00:19:00.200 | the creative LLM, and that is going to generate some tokens.
00:19:06.400 | It's going to generate our output.
00:19:07.820 | That output is going to be an AI message.
00:19:11.320 | And as you saw before, if I take this bit out within those message objects, we have this
00:19:19.820 | content field.
00:19:20.860 | OK, so we are actually going to extract the content field out from our AI message to just
00:19:28.480 | get the content.
00:19:29.260 | And that is what we do here.
00:19:30.900 | So we get the AI message out from our LLM and then we're extracting the content from that AI
00:19:35.540 | message object.
00:19:36.420 | And we're going to pass it into a dictionary that just contains article title like so.
00:19:41.440 | Okay, we don't need to do that.
00:19:43.060 | We can just get the AI message directly.
00:19:44.920 | I just want to show you how we are using this sort of chain in LCEL.
00:19:50.520 | So once we have set up our chain, we then call it or execute it using the invoke method.
00:19:58.400 | Into that, we will need to pass in those variables.
00:20:01.160 | So we have our article already, but we also gave our AI a name now.
00:20:04.900 | So let's add that and we'll run this.
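Putting all of that together, the first chain comes out roughly like this. This is a sketch: first_prompt is the merged template from before, and it assumes the system prompt was re-created with a {name} input variable as just described.

```python
chain_one = (
    {
        # map the invoke() inputs onto the prompt's input variables
        "article": lambda x: x["article"],
        "name": lambda x: x["name"],
    }
    | first_prompt       # format the chat prompt
    | creative_llm       # generate with the temperature-0.9 LLM
    | {
        # pull the content string out of the returned AIMessage
        "article_title": lambda x: x.content,
    }
)

out = chain_one.invoke({"article": article, "name": "Joe"})
print(out["article_title"])
```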
00:20:07.120 | OK, so Joe has generated us an article title, Unlocking the Future, the Rise of Neurosymbolic
00:20:17.060 | AI Agents.
00:20:18.380 | Cool.
00:20:18.860 | Much better name than what I gave that article, which was AI Agents are Neurosymbolic Systems.
00:20:24.920 | No, I don't think I did too bad.
00:20:27.540 | OK, so we have that.
00:20:30.360 | Now, let's continue, and what we're going to be doing is building more of these types
00:20:35.880 | of LLM chain pipelines where we're feeding in some prompts, we're generating something,
00:20:42.280 | getting something and doing something with it.
00:20:44.760 | So, as mentioned, we have the title.
00:20:48.120 | We're now moving on to the description, so we want to generate description.
00:20:51.160 | So we have our human message prompt template.
00:20:53.400 | So this is actually going to go into a similar format as before.
00:20:58.520 | We probably also want to redefine this because I think I'm using the same system message there.
00:21:04.920 | So let's go ahead and modify that.
00:21:08.120 | Or what we could also do is let's just remove the name now because I've shown you that.
00:21:15.000 | So what we could do is you're an AI assistant that helps build good articles, right?
00:21:23.880 | Build good articles.
00:21:25.320 | And we could just use this as our generic system prompt now.
00:21:30.680 | So let's say that's our new system prompt.
00:21:32.920 | Now we have our user prompt.
00:21:34.280 | You're tasked with creating a description for the article.
00:21:36.200 | The article is here for you to examine article.
00:21:38.360 | Here is the article title.
00:21:40.360 | OK, so we need the article title now as well in our input variables.
00:21:43.960 | And then we're going to output an SEO friendly article description.
00:21:47.560 | And we're just saying, just to be certain here, do not output anything other than the description.
00:21:52.360 | So, you know, sometimes an LLM might say, hey, look, this is what I've generated for you.
00:21:57.160 | The reason I think this is good is because so on and so on and so on.
00:22:00.200 | All right.
00:22:00.600 | If you're programmatically taking some output from an LLM, you don't want all of that fluff
00:22:06.040 | around what the LLM has generated.
00:22:07.720 | You just want exactly what you've asked it for.
00:22:11.080 | OK, because otherwise you need to parse that out with code, and it can get messy
00:22:14.520 | and also just far less reliable.
00:22:16.920 | So we're just saying do not output anything else.
00:22:20.040 | Then we're putting all of these together.
00:22:21.400 | So system prompt and the second user prompt.
00:22:23.960 | This one here.
00:22:24.600 | Putting those together into a new chat prompt template.
00:22:28.280 | And then we're going to feed all of that into another LLM chain, as we have here, to
00:22:33.880 | generate our description.
00:22:37.000 | So let's go ahead.
00:22:38.040 | We invoke that as before.
00:22:39.560 | We'll just make sure we add in the article title that we got from before.
00:22:43.160 | And let's see what we get.
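The second chain follows exactly the same pattern. A sketch: the prompt wording is paraphrased, and it assumes the system prompt was redefined without the name, as described above.

```python
second_user_prompt = HumanMessagePromptTemplate.from_template(
    """You are tasked with creating a description for the article.
The article is here for you to examine:

---

{article}

---

Here is the article title '{article_title}'.

Output the SEO friendly article description.
Do not output anything other than the description."""
)

second_prompt = ChatPromptTemplate.from_messages(
    [system_prompt, second_user_prompt]
)

chain_two = (
    {
        "article": lambda x: x["article"],
        "article_title": lambda x: x["article_title"],
    }
    | second_prompt
    | llm
    | {"summary": lambda x: x.content}
)

article_description = chain_two.invoke(
    {"article": article, "article_title": out["article_title"]}
)["summary"]
```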
00:22:45.640 | OK, so we have this.
00:22:47.400 | Explore the transformative potential of neuro-symbolic AI agents in...
00:22:50.680 | A little bit long, to be honest.
00:22:54.120 | But yeah, you can see what it's doing here.
00:22:56.040 | Right.
00:22:56.280 | And of course, we could then go in and we see this is kind of too long.
00:22:59.240 | We're like, oh, yeah, SEO friendly description.
00:23:02.440 | Not really.
00:23:03.880 | So we can modify this.
00:23:06.280 | Output the SEO friendly description.
00:23:08.200 | Make sure we don't exceed.
00:23:13.000 | Let me put that on a new line.
00:23:15.240 | Make sure we don't exceed say 200 characters.
00:23:18.440 | Or maybe it's even less for SEO.
00:23:20.040 | I don't
00:23:20.440 | have a clue.
00:23:21.640 | I'll just say 120 characters.
00:23:23.720 | Do not output anything other than the description.
00:23:26.120 | All right.
00:23:26.360 | So we could just go back, modify our prompting, see what that generates again.
00:23:31.160 | OK, so much shorter.
00:23:33.000 | Probably too short now.
00:23:34.200 | But that's fine.
00:23:35.400 | Cool.
00:23:35.720 | So we have that.
00:23:36.680 | We have a summary.
00:23:37.720 | Process that.
00:23:38.440 | And that's now in this dictionary format that we have here.
00:23:41.560 | Cool.
00:23:42.920 | Now the third step.
00:23:45.000 | We want to consume that first article variable with our full article.
00:23:50.680 | And we're going to generate a few different output fields.
00:23:54.600 | So for this, we're going to be using the structured output feature.
00:23:59.400 | So let's scroll down.
00:24:01.320 | And we'll see what that is or what that looks like.
00:24:05.160 | So with structured output, essentially we're forcing the LLM:
00:24:09.160 | it has to output a dictionary with these particular fields.
00:24:14.840 | OK, and we can modify this quite a bit.
00:24:17.560 | But in this scenario, what I want to do is I want there to be an original paragraph.
00:24:22.680 | Right.
00:24:22.920 | So I just want it to regenerate the original paragraph because I'm lazy.
00:24:26.040 | And I don't want to extract it out.
00:24:27.640 | Then I want to get the new edited paragraph.
00:24:31.480 | This is the LLM-generated improved paragraph.
00:24:34.520 | And then we want to get some feedback because we don't want to just automate ourselves.
00:24:39.480 | We want to augment ourselves and get better with AI rather than just being like,
00:24:44.760 | ah, you do this.
00:24:46.200 | So that's what we do here.
00:24:48.520 | And you can see that here we're using this Pydantic object.
00:24:52.040 | And what Pydantic allows us to do is define these particular fields.
00:24:55.400 | And it also allows us to assign these descriptions to a field.
00:24:59.160 | And LangChain is actually going to go ahead and read all of this.
00:25:02.520 | Right.
00:25:03.400 | It even reads the types.
00:25:04.200 | So, for example, we could put integer here and we could actually get a numeric score for our paragraph.
00:25:11.400 | Right.
00:25:11.640 | And we can try that.
00:25:12.280 | Right.
00:25:12.520 | So let's just try that quickly.
00:25:14.360 | I'll show you.
00:25:14.920 | So, a numeric score.
00:25:18.200 | In fact, let's not even change anything here.
00:25:22.920 | So I'm going to keep "constructive feedback on the original paragraph", but I'll just put int as the type.
00:25:26.440 | So let's see what happens.
00:25:27.320 | Okay.
00:25:28.280 | So we have that.
00:25:29.480 | And what I'm going to do is I'm going to get our creative LLM.
00:25:33.560 | I'm going to use this with_structured_output method, and that's actually going to modify
00:25:37.320 | that LLM class, creating a new LLM class that forces the LLM to use this structure for the output.
00:25:43.960 | Right.
00:25:44.280 | So passing Paragraph in here, using this, we're creating this new structured LLM.
00:25:50.040 | So let's run that and see what happens.
00:25:53.320 | Okay.
00:25:53.960 | So we're going to modify our chain accordingly.
00:25:57.400 | Maybe what I can do is also just remove this bit for now.
00:26:02.520 | So we can just see what the structured LLM outputs directly.
00:26:06.280 | And let's see.
00:26:06.840 | Okay.
00:26:09.880 | So now you can see that we actually have that paragraph object, right?
00:26:14.440 | The one we defined up here, which is kind of cool.
00:26:16.280 | And then in there, we have the original paragraph, right?
00:26:19.720 | So this is where this is coming from.
00:26:21.800 | I definitely remember writing something that looks a lot like that.
00:26:26.200 | So I think that is correct.
00:26:27.880 | We have the edited paragraph.
00:26:29.400 | So this is, okay, what it thinks is better.
00:26:31.400 | And then interestingly, the feedback is three, which is weird, right?
00:26:37.400 | Because here we said the constructive feedback on the original paragraph.
00:26:41.480 | But when we use this with_structured_output method,
00:26:45.400 | what LangChain is doing is essentially performing a tool call to
00:26:49.400 | OpenAI.
00:26:50.120 | And what a tool call can do is force a particular structure in the output of an LLM.
00:26:55.800 | So when we say feedback has to be an integer, no matter what we put here,
00:27:00.200 | it's going to give us an integer.
00:27:02.200 | Because how do you provide constructive feedback with an integer?
00:27:05.560 | It doesn't really make sense.
00:27:07.240 | But because we've set that limitation, that restriction here, that is what it does.
00:27:13.400 | It just gives us the numeric value.
00:27:16.440 | So I'm going to shift that to string and then let's rerun this, see what we get.
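After that fix, the schema and the structured LLM look roughly like this (a sketch; the field descriptions are paraphrased):

```python
from pydantic import BaseModel, Field

class Paragraph(BaseModel):
    original_paragraph: str = Field(
        description="The original paragraph taken from the article"
    )
    edited_paragraph: str = Field(
        description="The improved, rewritten version of the paragraph"
    )
    # str, not int: the type annotation wins over the description, so an int
    # here would force a (meaningless) numeric value as the feedback
    feedback: str = Field(
        description="Constructive feedback on the original paragraph"
    )

# wraps the LLM so its output is forced (via tool calling) into this schema
structured_llm = creative_llm.with_structured_output(Paragraph)
```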
00:27:20.840 | Okay.
00:27:21.080 | We should now see that we actually do get constructive feedback.
00:27:24.520 | All right, so yeah, we can see it's quite, quite long.
00:27:27.640 | So the original paragraph effectively communicates limitations with neural AI systems in performing
00:27:32.760 | certain tasks.
00:27:33.560 | However, it could benefit from slightly improved clarity and conciseness.
00:27:38.040 | For example, the phrase was becoming clear can be made more direct by changing it to became evident.
00:27:44.120 | Yeah, true.
00:27:46.200 | Thank you very much.
00:27:47.560 | So yeah, now we actually get that feedback, which is pretty nice.
00:27:52.360 | Now let's add in this final step to our chain.
00:27:56.200 | Okay.
00:27:57.880 | And it's just going to pull out our paragraph object here and extract into a dictionary.
00:28:03.000 | We don't necessarily need to do this.
00:28:04.600 | Honestly, I actually kind of prefer it within this paragraph object.
00:28:07.320 | But just so we can see how we would pass things on the other side of the chain.
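That final step looks something like this. A sketch: third_user_prompt is a hypothetical stand-in for the notebook's third prompt, which asks the LLM to pick and improve one paragraph.

```python
third_user_prompt = HumanMessagePromptTemplate.from_template(
    """You are tasked with improving one paragraph of this article:

---

{article}

---

Choose one paragraph, rewrite it, and give feedback on the original."""
)
third_prompt = ChatPromptTemplate.from_messages(
    [system_prompt, third_user_prompt]
)

chain_three = (
    {"article": lambda x: x["article"]}
    | third_prompt
    | structured_llm     # returns a Paragraph object, not an AIMessage
    | {
        # unpack the Pydantic object into a plain dictionary
        "original_paragraph": lambda x: x.original_paragraph,
        "edited_paragraph": lambda x: x.edited_paragraph,
        "feedback": lambda x: x.feedback,
    }
)

paragraph_out = chain_three.invoke({"article": article})
print(paragraph_out["feedback"])
```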
00:28:14.840 | Okay, so now we can see we've extracted that out.
00:28:18.040 | Cool.
00:28:18.600 | So we have all of that interesting feedback again.
00:28:23.480 | But let's leave it there for the text part of this.
00:28:27.960 | Now let's have a look at the sort of multimodal features that we can work with.
00:28:32.920 | So this is, you know, maybe one of those things that seems a bit more abstract,
00:28:37.480 | a little bit complicated, where it maybe could be improved.
00:28:40.920 | But, you know, we're not going to really be focusing too much on the multimodal stuff.
00:28:44.680 | We'll be focusing on language.
00:28:46.280 | But I did want to just show you very quickly.
00:28:48.920 | So we want this article to look better.
00:28:50.680 | Okay.
00:28:51.960 | We want to generate a prompt based on the article itself that we can then pass to DALL-E, the image
00:29:03.080 | generation model from OpenAI, which will then generate an image, like a thumbnail image, for us.
00:29:09.080 | Okay.
00:29:09.320 | So the first step of that is we're actually going to get an LLM to generate that prompt.
00:29:14.760 | All right.
00:29:15.000 | So we have our prompt that we're going to use for that.
00:29:18.040 | So I'm going to say generate a prompt with less than 500 characters to generate an image based on
00:29:24.120 | the following article.
00:29:25.080 | Okay.
00:29:26.040 | So that's our prompt, you know, super simple.
00:29:28.920 | We're using the generic prompt template here.
00:29:31.320 | You can use that.
00:29:32.040 | You can use user prompt template.
00:29:34.120 | It's up to you.
00:29:35.320 | This is just like the generic prompt template.
00:29:38.040 | Then what we're going to be doing is based on what this outputs, we're then going to feed
00:29:44.440 | that in to this generate and display image function via the image prompt parameter.
00:29:50.280 | That is going to use the DallEAPIWrapper from LangChain.
00:29:54.440 | It's going to run that image prompt, and we're going to get a URL out from that essentially.
00:29:59.640 | And then we're going to read that using scikit-image here.
00:30:03.000 | All right.
00:30:03.320 | So it's going to read that image URL, going to get the image data, and then we're just going
00:30:07.400 | to display it.
00:30:08.120 | Okay.
00:30:09.240 | So pretty straightforward.
00:30:11.880 | Now, again, this is an LCEL thing here that we're doing, and we have this RunnableLambda.
00:30:18.440 | When we're running functions within LCEL, we need to wrap them within this
00:30:24.680 | RunnableLambda.
00:30:25.080 | You know, I don't want to go too much into what this is doing here because we do cover it
00:30:31.080 | in the LCEL chapter, but all you really need to know is: we have a custom
00:30:35.960 | function, we wrap it in RunnableLambda,
00:30:38.360 | and then what we get from that, we can use within this here, the LCEL syntax.
00:30:44.600 | So what are we doing here?
00:30:46.600 | Let's figure this out.
00:30:47.480 | We are taking our original, that image prompt that we defined just up here, right?
00:30:53.000 | Input variable to that is article.
00:30:55.560 | Okay.
00:30:56.200 | We have our article data being input here, feeding that into our prompt.
00:31:01.800 | From there, we get our message that we then feed into our LLM.
00:31:06.520 | From the LLM, it's going to generate us an image prompt, like a prompt for generating our
00:31:12.360 | image.
00:31:13.240 | For this article, let's even print that out so that we can see what it generates.
00:31:19.000 | Because I'm also kind of curious.
00:31:20.680 | Okay.
00:31:22.040 | So we'll just run that.
00:31:23.720 | And then let's see.
00:31:25.560 | It will feed in that content into our runnable, which is basically this function here.
00:31:31.960 | And we'll see what it generates.
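The whole image step, as described, comes out roughly like this. A sketch: it assumes langchain-community's DallEAPIWrapper, scikit-image, and matplotlib, and the function names are guesses at the notebook's.

```python
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda
from langchain_community.utilities.dalle_image_generator import DallEAPIWrapper
from skimage import io
import matplotlib.pyplot as plt

# generic prompt template (not a chat template) for the image prompt
image_prompt = PromptTemplate(
    input_variables=["article"],
    template=(
        "Generate a prompt with less than 500 characters to generate an "
        "image based on the following article: {article}"
    ),
)

def generate_and_display_image(image_prompt_text: str) -> None:
    # run the generated prompt through DALL-E, getting back an image URL
    image_url = DallEAPIWrapper().run(image_prompt_text)
    # read the image data from that URL and display it
    image_data = io.imread(image_url)
    plt.imshow(image_data)
    plt.axis("off")
    plt.show()

def extract_and_print(msg) -> str:
    # peek at the LLM-generated image prompt, then pass it along
    print(msg.content)
    return msg.content

chain_four = (
    {"article": lambda x: x["article"]}
    | image_prompt                            # build the instruction
    | llm                                     # generate the DALL-E prompt
    | RunnableLambda(extract_and_print)       # custom functions get wrapped
    | RunnableLambda(generate_and_display_image)
)
chain_four.invoke({"article": article})
```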
00:31:33.320 | Okay.
00:31:34.200 | Don't expect anything amazing from DALL-E.
00:31:37.240 | It's not the best, to be honest, but at least we see how to use it.
00:31:42.200 | Okay.
00:31:42.600 | So we can see the prompt that was used here.
00:31:46.040 | Create an image that visually represents the concept of neuro-symbolic agents.
00:31:49.800 | Depict a futuristic interface where large language models interact with traditional code,
00:31:55.080 | symbolizing integration of, oh my gosh,
00:31:57.720 | symbolic computation. Include elements like a brain to represent neural networks,
00:32:02.760 | gears or circuits for symbolic logic,
00:32:06.360 | and a web of connections illustrating vast use cases of AI agents.
00:32:11.720 | Oh my gosh.
00:32:12.840 | Look at all that.
00:32:13.560 | Big prompt.
00:32:16.040 | Then we get this.
00:32:17.160 | So, you know, DALL-E is interesting.
00:32:19.160 | I would say we could even take this.
00:32:21.160 | Let's just see what that comes up with in something like Midjourney.
00:32:25.720 | And you can see these way cooler images that we get from just another image generation model.
00:32:31.240 | Far better, but pretty cool, honestly.
00:32:33.320 | So in terms of generating images, the phrasing of the prompt itself is actually pretty good.
00:32:39.800 | The image, you know, it could be better, but that's it.
00:32:43.880 | Right.
00:32:44.200 | So with all of that, we've seen a little introduction to what we might build with LangChain.
00:32:50.760 | So that's it for our introduction chapter.
00:32:53.000 | As I mentioned, we don't want to go too much into what each of these things is doing.
00:32:57.960 | I just really want to focus on.
00:33:00.920 | Okay.
00:33:01.240 | This is kind of how we're building something with LangChain.
00:33:05.400 | This is the overall flow, but we don't really want to be focusing too much on.
00:33:10.440 | Okay.
00:33:11.000 | What exactly LCEL is doing or what exactly, you know, this prompt thing is that we're setting up.
00:33:18.120 | We're going to be focusing much more on all of those things and much more in the upcoming chapters.
00:33:24.040 | So for now, we've just seen a little bit of what we can build before diving in, in more detail.
00:33:31.720 | So for now, we're going to be focusing on the next step.