Cisco - Customer Experience (Sales) Agent Use Case

00:00:01.840 |
So the first thing we're going to show here is just that we can have different setups, since 00:00:06.120 |
each of the agents that Aman was showing you can be given not only different tasks but 00:00:12.560 |
also different LLMs. So the one that we're going to be showing here is using Claude Sonnet 3.7 as the supervisor, 00:00:17.560 |
and the agents below that are all going to be using GPT-4, but we can switch to using others as well. 00:00:25.560 |
For this demonstration, you're looking at Sonnet 3.7 and OpenAI's LLM. 00:00:31.680 |
So this would be a typical question that a renewals manager might ask: show me the top five deals by ATR 00:00:38.820 |
(also, I want to point out that ATR may not be understood by the LLM, and we'll show you 00:00:44.800 |
later on how it understands what ATR is within the context of this question) 00:00:50.500 |
for all the customers, along with their renewals risk severity level and sentiment classification. 00:00:58.380 |
So this question is a good one because it will utilize all of the agents that we were just discussing. 00:01:10.640 |
And so while the agents are working on answering that question, we're just going to switch over 00:01:14.740 |
again to the infrastructure that we have here. 00:01:17.620 |
And if we look at what we have as far as the agents and then look at the studio version 00:01:25.100 |
of that, we can see that the supervisor is going to create an action plan. 00:01:30.100 |
Once it's created, the action plan is then going to call one or several of these agents. 00:01:34.240 |
And then once the agents are complete, it will go back to the supervisor, which will synthesize 00:01:39.100 |
the results as the final answer for the user. 00:01:41.980 |
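
A minimal sketch of that loop, assuming LangGraph (the node names, state fields, and stubbed logic are illustrative, not the demo's exact graph):

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    plan: list[str]      # tasks the supervisor hands to the sub-agents
    results: list[str]   # each sub-agent's answer
    answer: str          # final synthesized answer

def create_plan(state: State) -> dict:
    # Supervisor LLM turns the question into an action plan (stubbed here).
    return {"plan": ["renewals task", "sentiment task", "adoption task"]}

def run_agents(state: State) -> dict:
    # Each task is routed to its specialist agent (stubbed here).
    return {"results": [f"result for: {t}" for t in state["plan"]]}

def synthesize(state: State) -> dict:
    # Supervisor combines the sub-agent outputs into the final answer.
    return {"answer": "\n".join(state["results"])}

graph = StateGraph(State)
graph.add_node("create_plan", create_plan)
graph.add_node("run_agents", run_agents)
graph.add_node("synthesize", synthesize)
graph.add_edge(START, "create_plan")
graph.add_edge("create_plan", "run_agents")
graph.add_edge("run_agents", "synthesize")
graph.add_edge("synthesize", END)
app = graph.compile()
```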
And so if we look at each one of the agents individually, we can see that they're all set up similarly, where 00:01:52.860 |
we have a ReAct style for each one of these. 00:01:55.860 |
And so they're all pretty much set up the same, except they have access to different tables. 00:02:01.980 |
So they're being set up to specialize in the tasks that they're meant to do. 00:02:06.940 |
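
A rough sketch of that pattern, assuming LangGraph's prebuilt ReAct agent; the tool body and the table split are placeholders:

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI(model="gpt-4")

@tool
def query_renewals_tables(sql: str) -> str:
    """Run a read-only SQL query against the renewals tables."""
    return "..."  # execute against the renewals schema only

# ...analogous tools scoped to the sentiment and adoption tables...

# Same ReAct scaffolding for every agent; only the tools (tables) differ.
renewals_agent = create_react_agent(llm, tools=[query_renewals_tables])
```

The scaffolding is identical across agents; the specialization comes entirely from which tables each agent's tools expose.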
And then Aman also mentioned the discovery agent, which I'm not sure we're going to have time to show. 00:02:13.820 |
But the discovery agent is there for those questions that come in that are very vague. 00:02:18.380 |
And the LLM doesn't quite understand what the user wants. 00:02:21.620 |
But it's given access to a semantic model, which essentially is all the metadata for the database 00:02:27.740 |
behind it, so that it can hold a conversation and extract from the user what they're actually after. 00:02:33.760 |
And we know we can answer that question because the conversation is based on the metadata for the database. 00:02:42.340 |
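
A minimal sketch of that idea, assuming the metadata is simply injected into the system prompt (the file name, prompt wording, and sample question are illustrative assumptions):

```python
from langchain_openai import ChatOpenAI

# Metadata for the database (tables, columns, synonyms) read from the
# semantic model; the file name is a placeholder.
semantic_model = open("semantic_model.yaml").read()

discovery_llm = ChatOpenAI(model="gpt-4")
reply = discovery_llm.invoke([
    ("system",
     "Help the user refine their vague question. Only propose questions "
     "that are answerable from this database metadata:\n" + semantic_model),
    ("user", "How are my customers doing?"),  # a deliberately vague question
])
```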
And so we're back now to that question that I asked, and we get a nice rich output for that. 00:02:48.980 |
And so a renewals manager could use this to start looking for things that they should work on. 00:02:55.140 |
But before we get back into the business use case, we're going to take a little behind-the-scenes look. 00:02:59.540 |
And so a lot of this, you saw Lance show earlier. 00:03:03.200 |
And so we're going to just look at the traces for what was going on. 00:03:06.340 |
So here we can see the AI assistant, which essentially is the supervisor, is invoked, 00:03:12.540 |
and then hands out the tasks to the renewals, the sentiment, and the adoption agents. 00:03:18.460 |
And so if we look again at what's going on at the supervisor level, 00:03:24.300 |
first we can see that within LangSmith, all of the different nodes that we're executing within the graph are shown here. 00:03:31.700 |
And as we dive in deeper, the first thing it's going to do is categorize the question: 00:03:39.860 |
does it require agents, or does it need discovery because it's a vague question? 00:03:42.420 |
So in this case, it identified that agents are required. 00:03:48.820 |
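
A hedged sketch of what such a categorize step could look like, using structured output (the schema and prompt wording are assumptions, not the demo's actual code):

```python
from typing import Literal
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class Route(BaseModel):
    route: Literal["agents", "discovery"]

classifier = ChatOpenAI(model="gpt-4").with_structured_output(Route)
decision = classifier.invoke(
    "Classify this question: route to the data agents if it is specific, "
    "or to discovery if it is too vague to answer.\n\n"
    "Question: show me the top five deals by ATR for all customers..."
)
# For the demo question, decision.route would come back as "agents".
```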
Each agent also has access to a vector store. 00:03:53.140 |
So the first thing we're going to see is whether we have any examples of the question that I'm being asked. 00:03:58.180 |
And if we do, we're going to use those examples. 00:04:02.580 |
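
A sketch of that lookup, assuming a FAISS index over curated examples (the store, embeddings, and score threshold are all illustrative choices, not necessarily the demo's stack):

```python
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

# Index of curated question examples built ahead of time.
store = FAISS.load_local("examples_index", OpenAIEmbeddings(),
                         allow_dangerous_deserialization=True)

# For FAISS, a lower score means a closer match; the 0.3 cutoff is illustrative.
hits = store.similarity_search_with_score("top five deals by ATR...", k=2)
examples = [doc.page_content for doc, score in hits if score < 0.3]
```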
So the LLM is going to try to dissect that question and decide which agents are needed. 00:04:10.180 |
And so in this case, it decided that the question requires the renewals, the sentiment, and the adoption agents, 00:04:17.380 |
and it breaks up the question into individual tasks that those agents have to perform. 00:04:21.940 |
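
One plausible shape for that plan, again using structured output (the field names are assumptions, not the demo's actual schema):

```python
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class AgentTask(BaseModel):
    agent: str  # e.g. "renewals", "sentiment", or "adoption"
    task: str   # the sub-question that agent must answer

class ActionPlan(BaseModel):
    tasks: list[AgentTask]

planner = ChatOpenAI(model="gpt-4").with_structured_output(ActionPlan)
plan = planner.invoke(
    "Decide which specialist agents are needed and break the question "
    "into one task per agent:\n\nshow me the top five deals by ATR..."
)
```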
And so if we now go in and we look at what the renewals agent does, 00:04:26.500 |
again, we can see that the nodes within the graph are well represented within LangSmith. 00:04:32.420 |
And so now we're going to go down to the end here. 00:04:34.980 |
We're going to start walking through what the renewals agent would do. 00:04:37.300 |
So the first thing we see are the tools that the renewals agent has access to. 00:04:41.300 |
So it has access to four SQL tools, where it can list the tables, get their schemas, and write and run queries. 00:04:50.020 |
We also have a search-examples tool for the renewals agent, 00:04:53.140 |
which pulls examples from the vector store, 00:04:58.180 |
as well as a web search, because it needs to do some research on the internet for some of the questions 00:05:02.660 |
that users may ask, anything that needs to go out on the web. 00:05:12.100 |
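
A sketch of such a toolbelt; the four SQL tools below come from LangChain's standard SQL toolkit, which may differ from what the demo actually uses, and the other two tools are stubs:

```python
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_community.utilities import SQLDatabase
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("snowflake://<user>@<account>/<db>")  # placeholder URI
# Yields four tools: list tables, get schema, check query, run query.
sql_tools = SQLDatabaseToolkit(db=db, llm=ChatOpenAI(model="gpt-4")).get_tools()

@tool
def search_examples(question: str) -> str:
    """Fetch similar worked examples from the vector store."""
    return "..."  # vector-store lookup, as sketched earlier

@tool
def web_search(query: str) -> str:
    """Search the web for anything not answerable from the database."""
    return "..."  # e.g. a hosted search API call

renewals_tools = sql_tools + [search_examples, web_search]
```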
Then we see the task that it was assigned by the supervisor. 00:05:16.740 |
We then see that it's going to search for examples. 00:05:21.940 |
As part of that example, we give it steps that it could do. 00:05:28.340 |
We just literally give it the English steps that it needs to follow, 00:05:32.020 |
as well as an example output that we would like the output to be. 00:05:36.100 |
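
For illustration, a stored example might look something like this (the field names and content are assumptions, not the demo's actual data):

```python
example = {
    "question": "show me the top five deals by ATR...",
    "steps": [
        "1. Join the renewals table to the customers table on the customer ID.",
        "2. Sort by ATR descending and keep the top five deals.",
        "3. Return one row per deal with risk severity and sentiment columns.",
    ],
    "example_output": "| Customer | Deal | ATR | Risk | Sentiment |",
}
```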
It then goes and lists the tables that it has access to. 00:05:38.740 |
It then goes and gets the schema of those tables. 00:05:42.820 |
Now, as part of the schema, we have also been leveraging something 00:05:45.620 |
Snowflake has, known as a Cortex semantic model. 00:05:49.700 |
So we're not using their LLMs, but we are leveraging their data model for doing semantic 00:05:56.660 |
modeling, which essentially is metadata for the tables. 00:06:02.340 |
So here we can see we have the column names, but we also have sample values that are in that column, 00:06:09.780 |
synonyms that a user may say, you know, client ID, 00:06:14.980 |
which we can then map back to the column name, the CAV ID. 00:06:21.460 |
There's other information here as well; for instance, this total customers one is a simple one. 00:06:29.140 |
But in some cases, things like filters or calculations are in here. 00:06:33.860 |
Here's one, for instance: high-risk deal count. 00:06:38.580 |
The LLM probably wouldn't know offhand how to compute that, but here we're telling it how to calculate it. 00:06:43.780 |
And so this metadata helps a lot to get the accuracy of the LLM up as people are asking questions. 00:06:50.500 |
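
A sketch of the kind of metadata being described, shown as a Python dict for illustration (Cortex semantic models are actually YAML files; the names, sample values, and expression here are assumptions):

```python
semantic_model = {
    "columns": [
        {
            "name": "CAV_ID",
            "synonyms": ["client ID", "customer ID"],  # things a user may say
            "sample_values": ["1001", "1002", "1003"],
        },
    ],
    "measures": [
        {
            "name": "high_risk_deal_count",
            # The LLM wouldn't know this offhand, so the calculation is spelled out:
            "expr": "COUNT_IF(risk_severity = 'HIGH')",
        },
    ],
}
```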
And I'm going to go through this a little bit quicker since I'm running out of time. 00:06:55.460 |
But essentially, we'll create a query based on that, get an answer, and then synthesize the result. 00:07:02.420 |
Here, notice that this output matches the example output that we were showing further up, so that's part of that example. 00:07:10.180 |
And each one of these agents is then going to do that same procedure, so I'm just going to go through this quick. 00:07:16.900 |
And then once all three of those agents have done their job, it goes back to the supervisor, 00:07:22.820 |
who's going to take that output and synthesize a final answer, which is then displayed back to the user. 00:07:29.300 |
And I'll just quickly go through this so the user can then use this information. 00:07:32.980 |
So here, he's looking for high-risk opportunities. 00:07:35.940 |
It also highlights key insights, as well as a recommended action for him to follow for this 00:07:41.460 |
customer, since their renewal is at high risk. 00:07:44.260 |
And unfortunately, I'm not going to be able to make it through the rest of the demo, so I'm going to hand it back to Aman. 00:07:49.220 |
So please come see us at the booth if you'd like to see the rest of the demo.