
NVIDIA NeMo Guardrails: Full Walkthrough for Chatbots / AI


Chapters

0:00 NVIDIA's NeMo Guardrails
2:16 How Typical Chatbots Work
5:54 Dialogue Flows
7:53 Code Intro to NeMo Guardrails
12:06 How Guardrails Works Under the Hood
14:33 NeMo Guardrails Chatbot in Python
18:28 Speaking with Guardrails Chatbot
19:50 Future NeMo Guardrails Content

Whisper Transcript

00:00:00.000 | In the past year we've seen the unparalleled adoption of chatbots across many industries.
00:00:06.720 | There hasn't really been an obvious technology that has been adopted and become so widespread
00:00:14.240 | so quickly as chatbots have. And in fact, according to a couple of reports from Gartner,
00:00:21.360 | they actually expect chatbots to be the primary communication channel for 25% of all organizations
00:00:30.320 | by 2027, which is not really that far away. This adoption is pretty amazing, but it's also
00:00:37.760 | dangerous. Chatbots make things up, and they do so very convincingly, and it's harder to give a chatbot
00:00:45.760 | guidelines like we would to an actual human. So if you have a human behind some chat, they've been
00:00:52.000 | trained on how to talk about your company, on what not to say, what to say, and to be polite and so
00:01:01.200 | on. It's a little more difficult with AI chatbots, particularly if we're just using the default
00:01:07.680 | approach of calling OpenAI. And when we want a chatbot to actually represent an organization,
00:01:15.600 | it's simply not enough. In short, we need something more to actually deploy conversational
00:01:20.480 | AI. To do that, we will be using Guardrails. Now Guardrails is a kind of new library from NVIDIA,
00:01:29.280 | and the main focus of this library is to help us deploy chatbots safely. But there's actually a
00:01:36.720 | lot more that we can do with it. So we can use that for things like safety, for topical guidelines,
00:01:42.960 | but we can also use it for more advanced things. We can use it to build agents. We can use it in
00:01:48.960 | retrieval augmented generation, and naturally also to just define more deterministic dialogue where
00:01:55.760 | relevant. And honestly, if a company is going to production and deploying a chatbot without using
00:02:02.720 | NeMo Guardrails or some sort of alternative guardrails system, I don't know, I'm just
00:02:08.320 | surprised that they are allowing it. Because things can go wrong very easily if you don't
00:02:14.720 | have these sort of things in place. So in most conversational AI systems at the moment,
00:02:20.640 | we kind of have this. We have this direct path between our conversational AI or our agent
00:02:27.360 | and our users. That's fine. But if something goes wrong, if the user begins asking about things
00:02:37.360 | that we don't really want our chatbot to respond to, like, for example, politics,
00:02:43.680 | or if our chatbot simply begins talking about something that we also don't want it to respond
00:02:50.240 | to, or it begins responding in a way that doesn't really represent what we would like the chatbot
00:02:55.120 | to represent, we have an issue. There are no checks here. Nothing is happening. Now, we can improve
00:03:04.400 | this scenario a little bit through prompt engineering, but prompt engineering can only
00:03:08.720 | get us so far. There's always going to be cases where issues come up. So ideally, what we want
00:03:14.800 | is something in the middle here. We want what are called Guardrails, which can check what is
00:03:22.640 | being transferred between the user and the chatbot and react accordingly. So if the user begins
00:03:29.520 | talking about politics, we can create a prebuilt message or we can instruct the bot to generate a
00:03:37.360 | message that says, sorry, I cannot talk about politics. Now, that is the core idea behind
00:03:44.640 | Guardrails. It's very simple. But what you can do with this is far more than just add some safety
00:03:51.040 | to our chatbots. What we are essentially doing here is we're creating rules, deterministic rules
00:03:57.040 | that say, okay, if the user begins talking, let's say about politics, we want to do something. So
00:04:05.680 | we can go over here and we can do some action. That action can be a safety measure or maybe
00:04:14.320 | in the case of our user is asking a question about maybe our product. So we have a product
00:04:22.720 | question here. If they do that, we don't really want to say, oh, sorry, I can't talk about our
00:04:29.200 | product, obviously, but we may want to do something different than just generate an answer.
00:04:35.680 | We may want to, for example, bring in some information from our database so that our
00:04:43.280 | chatbot can answer the question more accurately. So we do retrieval augmented generation in that
00:04:50.320 | case. We can also specify more deterministic dialogues. So maybe what we will see is that
00:05:00.880 | many users are kind of asking the same questions. They're going through the same dialogue paths.
00:05:07.440 | So if we have common dialogues, we could create rails for them, and those rails would allow us
00:05:17.280 | to catch the question. So the question would come over here and actually rather than going to the
00:05:23.760 | bot here, we could specify a particular dialogue flow. So we can say, okay, given the user is
00:05:32.000 | asking about X, we should respond with a particular response. So we have a particular response. We can
00:05:40.720 | set that, we can write it ourselves, or we could actually ask the bot to write a response.
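To give a flavor of what's coming, a rail like that might look something like this in Colang, the modeling language NeMo Guardrails uses. This is a hypothetical shopping-assistant example; we'll look at the syntax properly later on:

```colang
define user ask shipping
    "how long does delivery take?"
    "where is my order?"

define bot answer shipping
    "Standard delivery usually takes 3-5 business days."

define flow shipping
    user ask shipping
    bot answer shipping
```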
00:05:46.320 | And then from there, the dialogue could go, you know, multiple different ways until we reach some
00:05:51.760 | sort of final solution for our user. Now this sort of deterministic dialogue flow is how chatbots
00:05:59.200 | used to work. Before ChatGPT, there would be like a set path. You'd have to select the options
00:06:05.920 | within your dialogue. So, you know, the chatbot would introduce itself and it would say,
00:06:11.040 | what can I help you with? And you'd have to say, I have a problem with, and then it would give you
00:06:16.240 | like three options that you could choose from. Click on those and kind of go through almost
00:06:21.760 | like a path of dialogue. You wouldn't really be able to chat with the chatbot because it couldn't
00:06:30.160 | support that. That deterministic dialogue flow is actually useful, but it's restrictive. So we do
00:06:36.480 | kind of want that in some scenarios, particularly for those common dialogue flows that we can
00:06:43.040 | actually help with. But at the same time, we don't want to restrict our users to just those dialogue
00:06:48.400 | flows. We want the more flexible behavior of conversational AI like ChatGPT. Now, another
00:06:55.920 | thing that we can actually use these guardrails for, which I've kind of hinted on a little bit
00:07:01.040 | with the rag example over here, is we can actually give it access to tools. Okay. So based on a
00:07:08.080 | particular question, so maybe our user says something like, okay, how is the weather today?
00:07:13.920 | An LLM is not going to be able to answer that question because it doesn't know what the weather
00:07:19.440 | is like today, but an LLM agent or conversational agent would be able to. And the reason that they
00:07:26.560 | can is because they have access to tools such as weather APIs. So the agent could identify this
00:07:34.000 | question as needing this weather API tool, and it would go to the weather API tool and it
00:07:40.080 | would say, you know, how is the weather? Give me the weather. And then it would formulate a response
00:07:46.000 | back to the user based on that. So we can also include tool usage in there.
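In Colang terms, that kind of tool-using flow might look roughly like the sketch below. The get_weather action is hypothetical here; it would be a Python function registered with the rails app as a custom action:

```colang
define user ask weather
    "how is the weather today?"
    "what's the forecast for tomorrow?"

define flow weather
    user ask weather
    # get_weather is a hypothetical custom action (a registered Python function)
    $forecast = execute get_weather
    bot answer weather
```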
00:07:54.880 | So let's take a look at a quick example of how all of this works. In here on the left, we have the NeMo Guardrails
00:08:03.280 | folder, and I have this config directory. In here, I have a config.yml and a topics.co file. So .co
00:08:11.680 | is a Colang file, which we'll talk about a little more in a moment. Within the config,
00:08:17.680 | we are essentially specifying the configuration details for our chatbot, for our guardrails.
00:08:27.440 | So here I'm saying I want to use text-davinci-003. We're using this model because it just gets a little
00:08:33.440 | bit easier to set up with Guardrails. But, of course, we can also use GPT-3.5 and also GPT-4,
00:08:42.480 | and actually other models as well from Hugging Face, Llama 2, and so on. So we have this config
00:08:50.720 | YAML file, and we also have this Colang file. Now this Colang file is where we set up the flow
00:09:01.840 | of a dialogue, so a dialogue flow, or the guardrails for particular topics or issues.
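The files aren't shown character by character in the video, so take the following as a rough sketch of what the two files might contain. The config.yml holds the model settings:

```yaml
models:
  - type: main
    engine: openai
    model: text-davinci-003
```

And topics.co holds the Colang definitions, roughly along these lines (the greeting flow and the politics rail described next):

```colang
define user express greeting
    "hello"
    "hi"
    "hey"

define bot express greeting
    "Hey there!"

define bot ask how are you
    "How are you doing?"

define flow greeting
    user express greeting
    bot express greeting
    bot ask how are you

define user ask politics
    "what are your political beliefs?"
    "thoughts on the president?"

define bot answer politics
    "I'm a shopping assistant, I don't like to talk of politics."

define flow politics
    user ask politics
    bot answer politics
```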
00:09:10.320 | So here I'm defining a few things. So we're expressing greetings from the user. We're also
00:09:19.760 | expressing a greeting from a bot. Now, this is actually a hard-coded greeting. So when we use
00:09:28.080 | this, the chatbot will return specifically this text here, but we don't have to do that. Now,
00:09:36.240 | we just have these, which is the greeting, and we'll talk a little bit more about the syntax
00:09:41.200 | soon. And then we also have a guardrail here. So we want to define our limits. If a user begins
00:09:48.800 | asking about politics, we want to say, okay, the bot's going to respond with, I'm a shopping
00:09:54.400 | assistant, I don't like to talk of politics. And, sorry, I can't talk about politics. Actually,
00:10:00.480 | we can remove that. So that will be the response, this here. Now let's take a look at how we would
00:10:08.880 | actually use these files. So over in our terminal, we're going to navigate to this directory.
00:10:15.920 | So I'm going to cd through documents, projects, examples, learn, generation, chatbots,
00:10:26.960 | nemo-guardrails, intro. So we've navigated to the directory. In here, we just have that config
00:10:34.640 | directory that I mentioned before. Okay. So in order to use this, what we're going to do is
00:10:42.000 | first, we actually need to pip install Guardrails. So pip install nemoguardrails, like so.
00:10:47.280 | And then we're going to do nemoguardrails chat, and we set the config. Okay. So this will allow
00:10:54.640 | us to chat within our bash terminal. Okay. So we've now started our chat and we can say something.
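For reference, assuming we're inside the directory that contains the config folder, those two commands are:

```bash
pip install nemoguardrails
nemoguardrails chat --config=config
```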
00:11:04.480 | Okay. So I'm just going to say, Hey there. And you see where we actually get these two messages. We
00:11:10.000 | get, Hey there. And how are you doing? That is because within our Colang file, we specified in
00:11:18.160 | a greeting flow that the bot will produce two responses. It will express a greeting and then
00:11:25.920 | it will ask how are you, which is exactly what it's doing here. Now, if we continue,
00:11:31.920 | and let's ask something political. So can you tell me, tell me your thoughts on the president
00:11:42.720 | of the USA. Right. We should see that this would be blocked. Okay. So we can see it responds with,
00:11:52.800 | I'm a shopping assistant. I don't like to talk politics. How can I help you today? Okay. So
00:11:58.720 | we've successfully blocked that political question using the guardrails that we created in our
00:12:05.680 | Colang file. Now let's talk a little bit about how that Colang file was able to identify that
00:12:14.480 | this message that we created here should be blocked and that it belonged to that user
00:12:22.080 | ask politics rail, despite us not specifying this exact question. So the way that this works
00:12:28.640 | is that we have our canonical forms and utterances. Just know that here, this is the canonical form
00:12:35.920 | and these are the utterances. Okay. And all of these are coming from the user, right? So we say
00:12:46.720 | define user ask politics. We give some examples. What, you know, what would be political?
00:12:51.680 | And then we say, define user ask llm. So it's asking a question about large language models.
00:12:57.920 | What would constitute a question about large language models? All of these sentences get
00:13:02.480 | taken to our embedding model. By default, that's a MiniLM model, and they get encoded into semantic
00:13:09.920 | vector space. All right. So then when the user comes along, they ask that question. Okay. Maybe
00:13:15.440 | they ask what I asked, like, you know, what are your opinions about the president of the US or
00:13:21.440 | whatever I said. Right. So you have that question coming from the user that goes into the embedding
00:13:29.520 | model and then it creates, it would probably be over here, it creates an embedding. Right. And
00:13:36.880 | then we can see, right. Okay. These are most similar to the utterances that belong to the
00:13:43.520 | user ask politics canonical form. We see that here as well. Right. So this is the same visual, right? These
00:13:51.840 | are our, you know, these are our political items. These are our LLM items or utterances. We have our
00:13:58.880 | user query, are there any government-built language models? It sits, you know, almost in between,
00:14:04.320 | but we're definitely asking about language models here. Hopefully the embedding model understands
00:14:08.560 | this. So the embedding model will take that and encode it into the vector space. And it will see
00:14:14.400 | that it has more similarity with the utterances that come from the user ask llm canonical form.
00:14:23.360 | So with that, we know that our query should activate a flow where user ask llm is defined.
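To make the matching idea concrete, here's a minimal Python sketch. This is not Guardrails' actual internal code, and the example utterances are just illustrative; it only shows the embed-and-compare logic described above:

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# the default embedding model used by NeMo Guardrails
model = SentenceTransformer("all-MiniLM-L6-v2")

# example utterances for each canonical form, as they'd appear in a Colang file
forms = {
    "user ask politics": [
        "what are your political beliefs?",
        "thoughts on the president?",
    ],
    "user ask llm": [
        "what is a large language model?",
        "how do LLMs work?",
    ],
}

def closest_form(query: str) -> str:
    q = model.encode(query)
    best_form, best_score = "", -1.0
    for form, utterances in forms.items():
        embs = model.encode(utterances)
        # cosine similarity between the query and each example utterance
        sims = embs @ q / (np.linalg.norm(embs, axis=1) * np.linalg.norm(q))
        if sims.max() > best_score:
            best_form, best_score = form, float(sims.max())
    return best_form

print(closest_form("tell me your thoughts on the president of the USA"))
# -> user ask politics
```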
00:14:33.280 | Now there's a lot to talk about when it comes to guardrails, but I want to give just one example
00:14:38.880 | before we finish this video. In future videos, we will talk more about Colang, which is the
00:14:46.320 | modeling language that Guardrails uses, and about Guardrails itself. So let's go through this
00:14:53.600 | example. This is in Colab, so you can sort of just follow along. We're going to first install
00:14:58.960 | NeMo Guardrails and also OpenAI. Now we will need to set our OpenAI API key. So we'll just
00:15:06.320 | import os. We do os.environ["OPENAI_API_KEY"], and in here, you just pass in your API key. Okay.
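In code, that setup is just:

```python
import os

# placeholder value; paste in your own OpenAI API key
os.environ["OPENAI_API_KEY"] = "sk-..."
```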
00:15:16.720 | And once that is done, the first thing that we want to do is define a Colang file. So it's kind
00:15:24.960 | of what we saw before, it is that .co file. So I am going to define that here. We can either define it
00:15:35.520 | from file, or we can actually define it from a string in our code. So here, I'm going to define
00:15:42.800 | it in a string in our code because we're, well, we're working within Colab. So in here, we have
00:15:50.800 | defined what are the three main types of blocks within Colang. Those are the define user blocks.
00:15:59.600 | So the user message blocks, the define bot. So that is a bot message block. And if we come down
00:16:06.720 | here, we also have a flow block. So this is how we define the dialogue flow, right? So these here,
00:16:15.040 | they're our canonical forms. These are the utterances and it is using those that we create
00:16:21.600 | that sort of vector space or populate that vector space. Then based on that vector space, we can
00:16:26.800 | decide when a user creates a message, which one of these should be activated. So if the user says,
00:16:34.080 | "Hey, how are you?" It will probably activate the user express greeting form. So actually in here,
00:16:40.800 | we can remove those because this is just a response from the bot again here as well. Okay,
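Mirroring the topics.co file from earlier, the colang_content string might look roughly like this (a sketch, with the hard-coded bot greeting messages removed so the LLM generates them itself):

```python
colang_content = """
define user express greeting
    "hello"
    "hi"
    "hey"

define flow greeting
    user express greeting
    bot express greeting
    bot ask how are you

define user ask politics
    "what are your political beliefs?"
    "thoughts on the president?"

define bot answer politics
    "I'm a shopping assistant, I don't like to talk of politics."

define flow politics
    user ask politics
    bot answer politics
"""
```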
00:16:46.000 | Okay, cool. So once we have defined that, we can, through the Python API, initialize our Rails.
00:16:54.960 | So we need to do from nemoguardrails import LLMRails, and also RailsConfig. Okay. So the
00:17:06.000 | RailsConfig is basically our configuration object. It takes our Colang. And if we have a configuration
00:17:12.160 | yaml, it will take that as well and use that to initialize everything. Now, alongside our
00:17:18.960 | Colang content, we also need the config content. So we'll just put yaml_content, I think it's called.
00:17:27.280 | So yaml_content equals, and this is where we just pass in our configuration details,
00:17:35.040 | which is basically just which model we want to use at least for now. There are more things that
00:17:40.160 | we can populate this with, but this is enough for what we're wanting to do here. Okay. So then we
00:17:47.360 | initialize our config with both of those. We want to use from_content, which means we're loading
00:17:54.080 | these from strings within our code rather than from file. And we will have colang_content, which is going to be equal to our Colang
00:18:03.200 | content, and yaml_content. Okay. That initializes our config. And from that, we can initialize our
00:18:17.440 | rails. So rails equals LLMRails, and we just pass in our config. Okay. So we run that.
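Pulling those pieces together, the initialization might look like this, a sketch built on the nemoguardrails Python API, with the yaml_content mirroring our earlier config.yml:

```python
from nemoguardrails import LLMRails, RailsConfig

# the same model configuration we kept in config.yml earlier
yaml_content = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""

# build the config from in-code strings rather than from files on disk
config = RailsConfig.from_content(
    colang_content=colang_content,
    yaml_content=yaml_content,
)

# wrap the config in a rails app we can talk to
rails = LLMRails(config)
```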
00:18:29.520 | Then we can generate. So this is where we're actually talking with our rails. So within a
00:18:36.240 | notebook, we actually need to use async functions. That's just how it works, because Guardrails is built to
00:18:45.840 | enable async. So we have to write this. And we'll just say, like, hi there. We can run that. And
00:18:59.280 | we get this response. We say, hey there, how are you doing? So, again, we can see that the chatbot
00:19:05.200 | is going through bot express greeting and bot ask how are you. We can see, hey there, and how are you
00:19:12.640 | doing? Which is exactly what we see here. Right? So we can try again with something. By the way,
00:19:19.440 | if you want to run this without async in, like, a Python file, you just run this.
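So the generation step, in both flavors, looks roughly like this:

```python
# inside a notebook (Colab already runs an event loop), use the async API
res = await rails.generate_async(prompt="Hey there!")
print(res)

# in a plain Python file, the synchronous wrapper does the same job:
# res = rails.generate(prompt="Hey there!")
```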
00:19:24.320 | Okay. And we can say, I can't remember what the last question was. Yeah. What is your opinion
00:19:35.120 | on the president? Okay. Okay. Cool. Let's run that. And we can see that activates that guardrail,
00:19:47.760 | which says, I'm a shopping assistant, I don't want to talk about politics. And then it says,
00:19:47.760 | how are you? How can I help today? Right? So, that is a very simple example of how we would use
00:19:56.160 | guardrails. This really doesn't even start to scratch the surface of what we can actually do
00:20:03.200 | with guardrails. And there are many other examples that I will be sharing with you,
00:20:10.480 | like, in the coming days and weeks, where we'll dive into a lot more detail. We'll take a look at
00:20:17.520 | the Colang language, things like variables and actions. And on the guardrails side of things,
00:20:24.240 | we'll be diving into more detail on how we can sort of set up agents, essentially.
00:20:30.800 | How we can do retrieval augmentation. And all of these other really cool things that guardrails
00:20:37.600 | allows us to do. For now, that's it for this introduction. So, I hope this has all been
00:20:43.200 | useful and interesting. I know I covered a lot. But there is a lot to cover. So,
00:20:49.360 | thank you very much for watching. And I will see you again in the next one.
00:21:02.660 | (End of Audio)