LangChain v0.3 — Getting Started

Chapters
0:00 Getting Started with LangChain
0:46 Local Setup
3:32 Colab Setup
4:43 Initializing our OpenAI LLMs
9:06 LLM Prompting
10:31 LangChain Prompt Templates
15:20 Creating an LLM Chain with LCEL
20:31 Another Text Generation Pipeline
23:43 Structured Outputs in LangChain
28:27 Image Generation in LangChain
00:00:00.080 |
Okay, so moving on to our next chapter, getting started with LangChain. 00:00:04.080 |
In this chapter, we're going to be introducing LangChain by building a simple LLM-powered 00:00:10.080 |
assistant that will do various things for us. It will be multimodal: generating some 00:00:14.920 |
text, generating images, generating some structured outputs; it will do a few things. 00:00:20.160 |
Now to get started, we will go over to the course repo. 00:00:24.920 |
All of the code, all the chapters are in here. 00:00:28.240 |
There are two ways of running this, either locally or in Google Colab. 00:00:32.700 |
We would recommend running in Google Colab because it's just a lot simpler with environments. 00:00:39.820 |
But for the capstone, we will be running it locally. 00:00:47.020 |
So if you would like to run everything locally, I'll show you how quickly now. If you would 00:00:51.860 |
like to run in Colab, which I would recommend at least for the first notebook chapters, just 00:00:58.860 |
skip ahead. There will be chapter points in the timeline of the video. 00:01:03.400 |
So for running it locally, we just come down to here. 00:01:07.180 |
So this actually tells you everything that you need. 00:01:14.640 |
So this is uv, the Python version and package manager that we recommend. 00:01:29.020 |
So you would install it with this command here. 00:01:34.680 |
Otherwise, if you are on Windows or another platform, you can look at the installation guide there. 00:01:41.100 |
And so before we actually do this, what I will do is go ahead and just clone this repo. 00:01:50.820 |
I'm going to create a temp directory for me because I already have the LangChain course 00:01:57.320 |
cloned. And what I'm going to do is just git clone the LangChain course repo. 00:02:01.360 |
So you will also need to install git if you don't have that. 00:02:10.560 |
So this will install Python 3.12.7 for us with this command. 00:02:15.300 |
Then this will create a new venv using the Python 3.12.7 that we've installed. 00:02:23.540 |
And then uv sync will actually be looking at the pyproject.toml file. 00:02:30.280 |
That's like the package definition for the repo, and it uses that to install everything that we need. 00:02:37.140 |
Now, we should actually make sure that we are within the LangChain course directory. 00:02:49.440 |
Now, if you are using Cursor, you can just run cursor . to open the directory. 00:03:03.980 |
Now, within that course, you have your notebooks. 00:03:07.080 |
And then you just run through these, making sure you select your kernel, the Python environment, 00:03:11.280 |
and making sure you're using the correct venv from here. 00:03:15.180 |
So that should pop up already as this .venv/bin/python. 00:03:21.960 |
When you are running locally, don't run these install cells. 00:03:32.320 |
Now let's have a look at running things in Colab. 00:03:37.080 |
So for running everything in Colab, we have our notebooks in here. 00:03:41.340 |
We click through and then we have each of the chapters through here. 00:03:45.520 |
So starting with the first chapter, the introduction, which is where we are now. 00:03:51.500 |
So what you can do to open this in Colab is either just click this Colab button here, 00:03:56.960 |
or if you really want to, for example, maybe this is not loading for you. 00:04:03.380 |
What you can do is you can copy the URL at the top here. 00:04:09.360 |
You can go to Colab, select Open from GitHub, and then just paste that in there and press enter. 00:04:22.820 |
What we will do first is just install the prerequisites. 00:04:26.540 |
So we have LangChain, just a lot of LangChain packages here: langchain-core, langchain-openai 00:04:32.720 |
because we're using OpenAI, and langchain-community, which is needed for some of the integrations we use later. 00:04:43.300 |
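As a rough sketch, the Colab install cell looks something like this (the exact version pins used in the course notebook may differ):

```python
# Colab install cell - a minimal sketch; the course notebook pins exact versions.
!pip install -qU \
    langchain-core \
    langchain-openai \
    langchain-community
```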
So we can move on to our first step, which is initializing our LLM. 00:04:50.220 |
So we're going to be using GPT-4o-mini, which is slightly smaller, but fast, and also cheaper. 00:05:01.040 |
So what we need to do here is get an API key. 00:05:05.220 |
So for getting that API key, we're going to go to OpenAI's website, which you can see here. 00:05:13.780 |
And then we're going to be going to settings, organization, API keys. 00:05:21.940 |
So I'm going to go ahead and create a new secret key. 00:05:50.320 |
I'm obviously going to revoke it before you see this, but you can try and use it if you like. 00:05:54.960 |
So I'm going to copy that and I'm going to place it into this little box here. 00:05:58.960 |
You could also just put your full API key directly in here. 00:06:04.980 |
It's up to you, but this little box just makes things easier. 00:06:08.140 |
Now, what we've basically done there is just pass in our API key. 00:06:13.200 |
We're setting our OpenAI model, GPT-4o-mini, and what we're going to be doing now is essentially 00:06:19.960 |
just connecting and setting up our LLM parameters with LangChain. 00:06:23.980 |
So we run that, we say, okay, we're using GPT-4o-mini, and we're also setting ourselves 00:06:31.700 |
up to use two different LLMs here, or two of the same LLM with slightly different settings. 00:06:37.800 |
So the first of those is an LLM with a temperature setting of zero. 00:06:42.000 |
The temperature setting basically controls the randomness of the output of your LLM. 00:06:49.780 |
And the way that it works is when an LLM is predicting the next token or next word 00:06:58.900 |
in the sequence, it'll actually provide a probability for all of the tokens within the LLM's vocabulary. 00:07:06.740 |
So what we do when we set a temperature of zero is we say, you are going to give us the 00:07:13.880 |
token with highest probability according to you. 00:07:17.300 |
Whereas when we set a temperature of 0.9, what we're saying is, okay, there's actually an increased 00:07:24.240 |
probability of you giving us a token that, according to your generated output, is not the token with the highest probability. 00:07:34.780 |
But what that tends to do is give us more sort of creative outputs. 00:07:40.220 |
So we are creating a normal LLM and then a more creative LLM with this. 00:07:53.200 |
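As a rough sketch, that setup looks something like this (the variable names here are illustrative, not necessarily the exact ones used in the notebook):

```python
from getpass import getpass
from langchain_openai import ChatOpenAI

# Prompt for the OpenAI API key rather than hard-coding it in the notebook.
OPENAI_API_KEY = getpass("OpenAI API key: ")

# "Normal" LLM: temperature 0 always takes the highest-probability token.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.0, api_key=OPENAI_API_KEY)

# "Creative" LLM: temperature 0.9 samples lower-probability tokens more often.
creative_llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.9, api_key=OPENAI_API_KEY)
```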
So we'll take a draft article from the Aurelio learning page, and we're going to be using LLM chains 00:08:00.420 |
to generate various things that we might find helpful where, you know, we have this article 00:08:06.840 |
draft and we're editing it and just kind of finalizing it. 00:08:12.640 |
We have the title for the article, the description, an SEO friendly description specifically. 00:08:19.160 |
For the third one, we're going to be getting the LLM to provide us advice on an existing paragraph 00:08:24.560 |
and essentially write a new paragraph for us from that existing paragraph. 00:08:28.820 |
And what it's going to do, this is the structured output part, is it's going to write a new version of the paragraph. 00:08:35.300 |
And it's going to give us advice on where we can improve our writing. 00:08:38.180 |
Then we're going to generate a thumbnail or hero image for our article. 00:08:43.460 |
So, you know, nice image that you would put at the top. 00:08:45.900 |
So here, we're just going to input our article. 00:08:50.220 |
You can put something else in here if you like. 00:08:52.260 |
Essentially, this is just a big article that was written a little while back on agents. 00:09:00.180 |
And now we can go ahead and start preparing our prompts, which are essentially the instructions for our LLM. 00:09:06.480 |
So LangChain comes with a lot of different utilities for prompts. 00:09:13.480 |
And we're going to dive into them in a lot more detail, but I do want to just give you 00:09:16.960 |
the essentials now, just so you can understand what we're looking at, at least conceptually. 00:09:22.240 |
So prompts for chat agents are, at a minimum, broken up into three components: the system prompt, user messages, and AI messages. 00:09:29.860 |
The system prompt provides instructions to our LLM on how it should behave, what its objective is, and 00:09:34.680 |
how it should go about achieving that objective. 00:09:37.220 |
Generally, system prompts are going to be a bit longer than what we have here, depending on the use case. 00:09:48.380 |
Then we have the user messages. Sometimes we might want to pre-populate those if we want to encourage a particular 00:09:52.600 |
type of conversational pattern from our agent. 00:09:57.300 |
But for the most part, yes, these are going to be user generated. 00:10:05.820 |
Finally, we have the AI messages, the assistant's responses. Again, in some cases, we might want to generate those ourselves beforehand or within 00:10:12.420 |
a conversation if we have a particular reason for doing so. 00:10:15.500 |
But for the most part, you can assume that these are actually user and AI generated. 00:10:20.380 |
Now, LangChain provides us with templates for each one of these prompt types. 00:10:27.360 |
Let's go ahead and have a look at what these look like within LangChain. 00:10:35.900 |
So we have our system message prompt template and the human message prompt template, the user message that we saw before. 00:10:43.240 |
So we have these two. For the system prompt, we're keeping it quite simple here. 00:10:46.500 |
You are an AI system that helps generate article titles, right? 00:10:50.000 |
So the first component we want to generate is the article title. 00:10:53.680 |
So we're telling the AI that's what we want it to do. 00:10:59.580 |
So here we're actually providing kind of like a template for a user input. 00:11:06.680 |
So yes, as I mentioned, user input can be fully generated by a user. 00:11:19.140 |
It might be setting up a conversation beforehand, which a user would later use. 00:11:23.820 |
Or in this scenario, we're actually creating a template. 00:11:28.520 |
And what the user will provide us will actually just be inserted here inside article. 00:11:34.300 |
And that's why we have this input variables field. 00:11:37.100 |
So what this is going to do is, okay, we have all of these instructions around here. 00:11:43.220 |
They're all going to be provided to OpenAI as if it is the user saying this. 00:11:48.300 |
But it will actually just be this here that a user will be providing, okay? 00:11:54.040 |
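A minimal sketch of those two templates; the exact prompt wording here is approximate, not a copy of the notebook:

```python
from langchain_core.prompts import (
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

# System prompt: tells the model how to behave and what its objective is.
system_prompt = SystemMessagePromptTemplate.from_template(
    "You are an AI assistant that helps generate article titles."
)

# User prompt template: whatever the user provides is inserted into {article}.
user_prompt = HumanMessagePromptTemplate.from_template(
    """You are tasked with creating a name for an article.
The article is here for you to examine:

---

{article}

---

The name should be based on the context of the article. Be creative, but make
sure the name is clear, catchy, and relevant to the article."""
)
```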
And we might want to also format this a little nicer. 00:11:58.900 |
But we can also put, you know, something like this to make it a little bit clearer to the LLM. 00:12:11.220 |
And you can see in this scenario, there's not that much difference between what the system prompt and the user prompt are doing. 00:12:19.200 |
It varies when you get into the more conversational stuff, as we will do later. 00:12:23.400 |
You'll see that the user prompt is generally more fully user generated or mostly user generated. 00:12:31.640 |
And many of these types of instructions we might actually be putting into the system prompt. 00:12:38.760 |
And we'll see throughout the course, many different ways of using these different types of prompts 00:12:48.180 |
So I just want to show you how this is working. 00:12:50.740 |
We can use this format method on our user prompt here to actually insert something within the article input here. 00:13:00.180 |
So we're going to go user prompt format, and then we'll pass in something for article. 00:13:05.300 |
And we can also maybe format this a little nicer, but I'll just show you this for now. 00:13:11.720 |
And then inside the content, this is the text that we had. 00:13:14.920 |
All right, you can see that we have all of this, right? 00:13:25.260 |
So let's format this a little nicer so that we can see. 00:13:32.800 |
Exactly the same, except from now we have test string instead of article. 00:13:37.340 |
So later, when we insert our article, it's going to go inside there. 00:13:46.120 |
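For reference, that call looks roughly like this:

```python
# Insert a placeholder string into the template to see the formatted message.
formatted = user_prompt.format(article="TEST STRING")
print(formatted.content)
```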
And this is, again, one of the things where people might complain about LangChain. 00:13:49.820 |
You know, this sort of thing can seem excessive, because you could just do the same with an f-string. 00:13:55.740 |
But there are, as we'll see later, particularly when you're streaming, just really helpful features 00:14:01.380 |
that come with using LangChain's kind of built-in prompt templates, or at least its message objects. 00:14:15.680 |
Again, as soon as things get more complicated, LangChain can be a bit more useful. 00:14:21.480 |
This is basically just going to take what we have here, our system prompt, user prompt. 00:14:26.800 |
You could also include some AI prompts in there. 00:14:29.780 |
And what it's going to do is merge both of those. 00:14:33.240 |
And then when we do format, what it's going to do is put both of those together into a chat prompt. 00:14:41.740 |
So let's see what that looks like first in a more messy way. 00:14:46.740 |
So you can see we have just the content, right? 00:14:50.720 |
So it doesn't include the whole message object that we had before, with the human message wrapper; we just see the content. 00:15:01.780 |
And we can see that what we have is our system message here. 00:15:08.680 |
And then we have human and it's prefixed by human. 00:15:14.020 |
It's just kind of merging those in some sort of chat. 00:15:16.140 |
Like we could also put in like AI messages and they would appear in there as well. 00:15:24.660 |
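Roughly, that merging step looks like this:

```python
from langchain_core.prompts import ChatPromptTemplate

# Merge the system and user templates into one chat prompt.
first_prompt = ChatPromptTemplate.from_messages([system_prompt, user_prompt])

# format() fills the input variables and returns the whole chat as one string,
# with each message prefixed by its role (System / Human / AI).
print(first_prompt.format(article="TEST STRING"))
```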
Let's put that together with an LLM to create what, in the past, LangChain would have called an LLM chain. 00:15:31.620 |
Now we wouldn't necessarily call it an LLM chain because we're not using the LLM chain abstraction. 00:15:37.040 |
It's not super important if that doesn't make sense. 00:15:39.140 |
We will go into it in more detail later, particularly in the LCEL chapter. 00:15:45.980 |
So what this chain will do, you know, think of an LLM chain as just chaining together a prompt and an LLM: 00:16:01.500 |
so sending our prompt to OpenAI, getting a response, and getting that output. 00:16:07.180 |
You can also add another step here if you want to format that output in a particular way. 00:16:12.080 |
We're going to be outputting that in a particular format so that we can feed it into the next step. 00:16:16.920 |
But there are also things called output parsers, which parse your output in a more dynamic or 00:16:24.460 |
complicated way, depending on what you're doing. 00:16:29.240 |
I don't want us to focus too much on the syntax here because we will be doing that later, but 00:16:34.880 |
I do want you to just understand what is actually happening here and logically, what are we writing? 00:16:43.200 |
So all we really need to know right now is we define our inputs with the first dictionary segment of the chain. 00:16:51.700 |
So this is, you know, our inputs, which we have defined already. 00:16:56.840 |
So if we come up to our user prompt here, we said the input variable is our article, right? 00:17:04.500 |
And we might have also added input variables to the system prompt here as well. 00:17:08.240 |
In that case, you know, let's say we had 'You are an AI assistant called {name}', right? 00:17:18.020 |
And then what we would have to do down here is we would also have to pass that in. 00:17:33.980 |
So also we would have article, but we would also have name. 00:17:39.080 |
So basically we just need to make sure that in here we're including the variables that we 00:17:45.760 |
have defined as input variables for our first prompts. 00:17:50.200 |
So we can actually go ahead and let's add that. 00:17:54.680 |
So we'll run this again and just include that, or re-initialize our first prompt. 00:18:01.640 |
So we see that, and if we just have a look at what that means for this format function 00:18:07.460 |
here, it means we'll also need to pass in a name. 00:18:17.480 |
So we have Joe, our AI that is going to be fed in through these input variables. 00:18:24.420 |
The pipe operator is basically saying whatever is to the left of the pipe operator, which in 00:18:30.220 |
this case would be this is going to go into whatever is on the right of the pipe operator. 00:18:36.200 |
Again, we'll dive into this and kind of break it apart in the LCEL chapter, but for now that's the gist. 00:18:42.180 |
So this is going to go into our first prompt that is going to perform everything. 00:18:48.100 |
It's going to add the name and the article that we've provided into our first prompt. 00:18:53.920 |
It's going to output that we have our pipe operator here. 00:18:56.220 |
So the output of this is going to go into the input of our next component, 00:19:00.200 |
the creative LLM, and then that is going to generate some tokens. 00:19:11.320 |
And as you saw before, if I take this bit out, within those message objects we have this content field. 00:19:20.860 |
OK, so we are actually going to extract the content field out from our AI message to just get the generated text. 00:19:30.900 |
So we get the AI message out from our LLM and then we're extracting the content from that AI message. 00:19:36.420 |
And we're going to pass it into a dictionary that just contains article title like so. 00:19:44.920 |
I just want to show you how we are using this sort of chain in LCEL. 00:19:50.520 |
So once we have set up our chain, we then call it or execute it using the invoke method. 00:19:58.400 |
Into that, we will need to pass in those variables. 00:20:01.160 |
So we have our article already, but we also gave our AI a name now. 00:20:07.120 |
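Put together, the chain and its invocation look roughly like this (a sketch building on the earlier snippets; `article` is assumed to hold the article text loaded earlier, and the exact variable names in the notebook may differ):

```python
# Re-create the system prompt with the extra {name} variable discussed above.
system_prompt = SystemMessagePromptTemplate.from_template(
    "You are an AI assistant called {name} that helps generate article titles."
)
first_prompt = ChatPromptTemplate.from_messages([system_prompt, user_prompt])

# LCEL: inputs -> prompt -> creative LLM -> pull the content out into a dict.
chain_one = (
    {
        "article": lambda x: x["article"],
        "name": lambda x: x["name"],
    }
    | first_prompt
    | creative_llm
    | {"article_title": lambda x: x.content}
)

article_title = chain_one.invoke({"article": article, "name": "Joe"})
print(article_title["article_title"])
```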
OK, so Joe has generated us an article title, Unlocking the Future, the Rise of Neurosymbolic 00:20:18.860 |
Much better name than what I gave that article, which was AI Agents are Neurosymbolic Systems. 00:20:30.360 |
Now, let's continue, and what we're going to be doing is building more of these types 00:20:35.880 |
of LLM chain pipelines where we're feeding in some prompts, we're generating something, 00:20:42.280 |
getting something and doing something with it. 00:20:48.120 |
We're now moving on to the description, so we want to generate a description. 00:20:51.160 |
So we have our human message prompt template. 00:20:53.400 |
So this is actually going to go into a similar format as before. 00:20:58.520 |
We probably also want to redefine this because I think I'm using the same system message there. 00:21:08.120 |
Or what we could also do is let's just remove the name now because I've shown you that. 00:21:15.000 |
So what we could do is: 'You are an AI assistant that helps build good articles,' right? 00:21:25.320 |
And we could just use this as our generic system prompt now. 00:21:34.280 |
You're tasked with creating a description for the article. 00:21:36.200 |
The article is here for you to examine: {article}. 00:21:40.360 |
OK, so we need the article title now as well, in our input variables. 00:21:43.960 |
And then we're going to output an SEO friendly article description. 00:21:47.560 |
And we're just saying, just to be certain here, do not output anything other than the description. 00:21:52.360 |
So, you know, sometimes an LLM might say, hey, look, this is what I've generated for you. 00:21:57.160 |
The reason I think this is good is because so on and so on and so on. 00:22:00.600 |
If you're programmatically taking some output from an LLM, you don't want all of that fluff 00:22:07.720 |
You just want exactly what you've asked it for. 00:22:11.080 |
OK, because otherwise you need to pass out with code and it can get messy 00:22:16.920 |
So we're just saying do not output anything else. 00:22:24.600 |
Putting those together into a new chat prompt template. 00:22:28.280 |
And then we're going to feed all that into another LLM chain, as we have here, to generate our description. 00:22:39.560 |
We'll just make sure we add in the article title that we got from before. 00:22:47.400 |
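A sketch of this second pipeline, now taking both the article and the generated title as inputs (again, the prompt wording is approximate):

```python
# Generic system prompt shared by the remaining chains.
system_prompt = SystemMessagePromptTemplate.from_template(
    "You are an AI assistant that helps build good articles."
)

second_user_prompt = HumanMessagePromptTemplate.from_template(
    """You are tasked with creating a description for the article.
The article is here for you to examine:

---

{article}

---

Here is the article title '{article_title}'.

Output the SEO friendly article description.
Do not output anything other than the description."""
)

second_prompt = ChatPromptTemplate.from_messages([system_prompt, second_user_prompt])

chain_two = (
    {
        "article": lambda x: x["article"],
        "article_title": lambda x: x["article_title"],
    }
    | second_prompt
    | llm
    | {"summary": lambda x: x.content}
)

article_description = chain_two.invoke(
    {"article": article, "article_title": article_title["article_title"]}
)
```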
Explore the transformative potential of neuro-symbolic AI agents in. 00:22:56.280 |
And of course, we could then go in and we see this is kind of too long. 00:22:59.240 |
We're like, oh, yeah, SEO friendly description. 00:23:15.240 |
Make sure we don't exceed, say, 200 characters. 00:23:23.720 |
Do not output anything other than the description. 00:23:26.360 |
So we could just go back, modify our prompting, see what that generates again. 00:23:38.440 |
And that's now in this dictionary format that we have here. 00:23:45.000 |
Now, for this next pipeline, we want to consume that first article variable with our full article. 00:23:50.680 |
And we're going to generate a few different output fields. 00:23:54.600 |
So for this, we're going to be using the structured output feature. 00:24:01.320 |
And we'll see what that is or what that looks like. 00:24:05.160 |
So structured output is essentially where we're forcing the LLM to follow a set structure. 00:24:09.160 |
Like it has to output a dictionary with these particular fields. 00:24:17.560 |
But in this scenario, what I want to do is I want there to be an original paragraph. 00:24:22.920 |
So I just want it to regenerate the original paragraph because I'm lazy. 00:24:34.520 |
And then we want to get some feedback because we don't want to just automate ourselves. 00:24:39.480 |
We want to augment ourselves and get better with AI, rather than just letting it do everything for us. 00:24:48.520 |
And you can see that here we're using this Pydantic object. 00:24:52.040 |
And what Pydantic allows us to do is define these particular fields. 00:24:55.400 |
And it also allows us to assign these descriptions to a field. 00:24:59.160 |
And LangChain is actually going to go ahead, read all of this, and pass it through to the LLM. 00:25:04.200 |
So, for example, we could put integer here and we could actually get a numeric score for our paragraph. 00:25:18.200 |
In fact, let's not even put the right type here. 00:25:22.920 |
So I'm going to put 'constructive feedback on the original paragraph' as the description, but I've just put int as the type. 00:25:29.480 |
And what I'm going to do is I'm going to get our creative LLM. 00:25:33.560 |
I'm going to use this with_structured_output method, and that's actually going to modify 00:25:37.320 |
that LLM class, create a new LLM object that forces the LLM to use this structure for the output. 00:25:44.280 |
So passing Paragraph in here, using this, we're creating this new structured LLM. 00:25:53.960 |
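A sketch of the Pydantic model and the structured-output wrapper (field names here are illustrative):

```python
from pydantic import BaseModel, Field

class Paragraph(BaseModel):
    original_paragraph: str = Field(description="The original paragraph")
    edited_paragraph: str = Field(description="The improved edited paragraph")
    # Deliberately (and wrongly) typed as int here; we switch it to str shortly.
    feedback: int = Field(description="Constructive feedback on the original paragraph")

# with_structured_output wraps the LLM so its response is parsed into Paragraph.
structured_llm = creative_llm.with_structured_output(Paragraph)
```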
So we're going to modify our chain accordingly. 00:25:57.400 |
Maybe what I can do is also just remove this bit for now. 00:26:02.520 |
So we can just see what the structured LLM outputs directly. 00:26:09.880 |
So now you can see that we actually have that paragraph object, right? 00:26:14.440 |
The one we defined up here, which is kind of cool. 00:26:16.280 |
And then in there, we have the original paragraph, right? 00:26:21.800 |
I definitely remember writing something that looks a lot like that. 00:26:31.400 |
And then interestingly, the feedback is three, which is weird, right? 00:26:37.400 |
Because here we said the constructive feedback on the original paragraph. 00:26:41.480 |
But when we use this with_structured_output, 00:26:45.400 |
what LangChain is doing is essentially performing a tool call to OpenAI. 00:26:50.120 |
And what a tool call can do is force a particular structure in the output of an LLM. 00:26:55.800 |
So when we say feedback has to be an integer, no matter what we put here, it's going to give us an integer, 00:27:02.200 |
which doesn't really work. Because how do you provide constructive feedback with an integer? 00:27:07.240 |
But because we've set that limitation, that restriction here, that is what it does. 00:27:16.440 |
So I'm going to shift that to string and then let's rerun this, see what we get. 00:27:21.080 |
We should now see that we actually do get constructive feedback. 00:27:24.520 |
All right, so yeah, we can see it's quite, quite long. 00:27:27.640 |
So the original paragraph effectively communicates limitations with neural AI systems in performing 00:27:33.560 |
However, it could benefit from slightly improved clarity and conciseness. 00:27:38.040 |
For example, the phrase was becoming clear can be made more direct by changing it to became evident. 00:27:47.560 |
So yeah, now we actually get that feedback, which is pretty nice. 00:27:52.360 |
Now let's add in this final step to our chain. 00:27:57.880 |
And it's just going to pull out our paragraph object here and extract it into a dictionary. 00:28:04.600 |
Honestly, I actually kind of prefer it within this paragraph object. 00:28:07.320 |
But just so we can see how we would pass things on the other side of the chain. 00:28:14.840 |
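With the feedback field switched to str, the full third pipeline, including that final extraction step, looks roughly like this (`third_prompt` is an assumed name for the chat prompt template built from the paragraph-editing prompt):

```python
chain_three = (
    {"article": lambda x: x["article"]}
    | third_prompt
    | structured_llm
    # Unpack the Paragraph object into a plain dictionary.
    | {
        "original_paragraph": lambda x: x.original_paragraph,
        "edited_paragraph": lambda x: x.edited_paragraph,
        "feedback": lambda x: x.feedback,
    }
)

out = chain_three.invoke({"article": article})
print(out["feedback"])
```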
Okay, so now we can see we've extracted that out. 00:28:18.600 |
So we have all of that interesting feedback again. 00:28:23.480 |
But let's leave it there for the text part of this. 00:28:27.960 |
Now let's have a look at the sort of multimodal features that we can work with. 00:28:32.920 |
So this is, you know, maybe one of those things that seems a bit more abstract, 00:28:37.480 |
a little bit complicated, where it maybe could be improved. 00:28:40.920 |
But, you know, we're not going to really be focusing too much on the multimodal stuff. 00:28:46.280 |
But I did want to just show you very quickly. 00:28:51.960 |
We want to generate a prompt based on the article itself that we can then pass to DALL-E, the image 00:29:03.080 |
generation model from OpenAI, that will then generate an image, like a thumbnail image, for us. 00:29:09.320 |
So the first step of that is we're actually going to get an LLM to generate that image prompt. 00:29:15.000 |
So we have our prompt that we're going to use for that. 00:29:18.040 |
So I'm going to say generate a prompt with less than 500 characters to generate an image based on our article. 00:29:26.040 |
So that's our prompt, you know, super simple. 00:29:28.920 |
We're using the generic prompt template here. 00:29:35.320 |
This is just like the generic prompt template. 00:29:38.040 |
Then what we're going to be doing is based on what this outputs, we're then going to feed 00:29:44.440 |
that into this generate and display image function via the image prompt parameter. 00:29:50.280 |
That is going to use the DALL-E API wrapper from LangChain. 00:29:54.440 |
It's going to run that image prompt, and we're going to get a URL out from that essentially. 00:29:59.640 |
And then we're going to read that using SK image here. 00:30:03.320 |
So it's going to read that image URL, get the image data, and then we're just going to display it. 00:30:11.880 |
Now, again, this is an LCEL thing here that we're doing, and we have this RunnableLambda: 00:30:18.440 |
when we're running functions within LCEL, we need to wrap them within this RunnableLambda. 00:30:25.080 |
I, you know, I don't want to go too much into what this is doing here because we do cover it 00:30:31.080 |
in the LCEL chapter, but all you really need to know is we have a custom function wrapped as a runnable. 00:30:38.360 |
And then what we get from that, we can use within this here, the LCEL syntax. 00:30:47.480 |
We are taking our original, that image prompt that we defined just up here, right? 00:30:56.200 |
We have our article data being input here, feeding that into our prompt. 00:31:01.800 |
From there, we get our message that we then feed into our LLM. 00:31:06.520 |
From the LLM, it's going to generate us, like, an image prompt, a prompt for generating our thumbnail image 00:31:13.240 |
for this article. Let's print that out so that we can see what it generates. 00:31:25.560 |
It will feed in that content into our runnable, which is basically this function here. 00:31:37.240 |
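A sketch of that image pipeline (`image_prompt` is the prompt template defined just above; DallEAPIWrapper is assumed to pick up the OpenAI key from the environment):

```python
from langchain_core.runnables import RunnableLambda
from langchain_community.utilities.dalle_image_generator import DallEAPIWrapper
from skimage import io
import matplotlib.pyplot as plt

def generate_and_display_image(image_prompt_text: str):
    # Generate an image URL with DALL-E, download the image, and display it.
    image_url = DallEAPIWrapper().run(image_prompt_text)
    image_data = io.imread(image_url)
    plt.imshow(image_data)
    plt.axis("off")
    plt.show()

# Custom functions must be wrapped as runnables to be used inside LCEL.
image_gen_runnable = RunnableLambda(generate_and_display_image)

image_chain = (
    {"article": lambda x: x["article"]}
    | image_prompt             # asks the LLM for a <500-character image prompt
    | llm
    | (lambda x: x.content)    # pull the generated prompt text out of the AI message
    | image_gen_runnable
)

image_chain.invoke({"article": article})
```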
It's not the best, to be honest, but at least we see how to use it. 00:31:46.040 |
Create an image that visually represents the concept of neuro-symbolic agents. 00:31:49.800 |
Depict a futuristic interface where large language models interact with traditional code 00:31:57.720 |
and symbolic computation. Include elements like a brain to represent neural networks, 00:32:06.360 |
and a web of connections illustrating the vast use cases of AI agents. 00:32:21.160 |
Let's just see what that comes up with in something like Midjourney. 00:32:25.720 |
And you can see these way cooler images that we get from just another image generation model. 00:32:33.320 |
So in terms of generating images, the phrasing of the prompt itself is actually pretty good. 00:32:39.800 |
The image, you know, it could be better, but that's it. 00:32:44.200 |
So with all of that, we've seen a little introduction to what we might build with LangChain. 00:32:53.000 |
As I mentioned, we don't want to go too much into what each of these things is doing. 00:33:01.240 |
This is kind of how we're building something with LangChain. 00:33:05.400 |
This is the overall flow, but we don't really want to be focusing too much on 00:33:11.000 |
what exactly LCEL is doing, or what exactly, you know, this prompt object is that we're setting up. 00:33:18.120 |
We're going to be focusing much more on all of those things and much more in the upcoming chapters. 00:33:24.040 |
So for now, we've just seen a little bit of what we can build before diving in, in more detail. 00:33:31.720 |
So for now, we're going to be focusing on the next step. 00:33:33.880 |