Evaluating AI Search: A Practical Framework for Augmented AI Systems — Quotient AI + Tavily

00:00:00.040 |
Hi everyone, thank you so much for coming. My name is Julia, I'm CEO and co-founder of Quotient AI. 00:00:21.720 |
I'm Deanna Emery, I am founding AI researcher at Quotient AI. 00:00:26.120 |
My name is Vithara Sher, I'm the head of engineering at Tavily. 00:00:30.760 |
And today we are going to talk to you about evaluating AI search. 00:00:35.920 |
So let me start with a fundamental challenge we're all facing in AI today. 00:00:41.140 |
Traditional monitoring approaches simply aren't keeping up with the complexity of modern AI systems. 00:00:49.300 |
Unlike traditional software, AI agents operate in constantly changing environments. 00:00:53.640 |
They're not just executing predetermined logic, they're making real-time decisions based on 00:00:58.520 |
evolving web content, user interactions, and complex tool chains. 00:01:03.880 |
These systems can also have multiple failure modes that happen at the same time. 00:01:08.440 |
They hallucinate, retrieval fails, they make reasoning errors, and all of these are interconnected. 00:01:14.160 |
So a little bit about what we do at Quotient, we monitor live AI agents. 00:01:19.040 |
We have expert evaluators that can detect objective system failures without waiting on ground truth. 00:01:26.400 |
A year ago we met Rotem, Tavily's founder and CEO, and he posed a problem to us that really 00:01:33.720 |
crystallized the core issues we needed to solve. 00:01:36.580 |
Here's the challenge: how do you build production-ready AI search agents when your system will be dealing 00:01:43.080 |
with two fundamental sources of unpredictability you cannot proactively control? 00:01:48.020 |
Under the hood, Tavily's agents gather their context by searching the web. 00:01:54.460 |
Traditional benchmarks assume stable ground truth, but when you're dealing with real-time information, that assumption breaks down. 00:02:00.820 |
Your users also don't stick to your test cases. 00:02:04.140 |
They can ask odd, malformed questions, and they have implicit context they don't really share. 00:02:13.220 |
Tavily processes hundreds of millions of search requests for its AI agents in production. 00:02:20.700 |
And they need a solution that works at that scale under these real-world conditions. 00:02:31.320 |
So at Tavily, we're building the infrastructure layer for agentic interaction at scale, 00:02:36.900 |
essentially providing language models with real-time data from across the web. 00:02:43.280 |
There are many use cases where real-time AI search delivers value, and these are just a few examples 00:02:48.860 |
of how our clients are using Tavili to empower their applications. 00:02:53.080 |
Starting from a CLM company that built an AI legal assistant to power their legal and business 00:02:57.940 |
teams with instant case insights, to a sports news outlet that created a hybrid RAG chat agent 00:03:05.080 |
that delivers scores, games, and news updates, to a credit card company that uses real-time search 00:03:13.300 |
to fight fraud by pinpointing merchant locations. 00:03:19.100 |
So as you can imagine, evaluating a system in this kind of vast, fast-moving setting is quite challenging. 00:03:27.460 |
We have two principles that guide our evaluation. 00:03:30.160 |
First, the web, which is the foundation of our data, is constantly changing. 00:03:35.680 |
This means that our evaluation methods must keep up with that ongoing change. 00:03:40.540 |
Second, truth is often subjective and contextual. 00:03:46.760 |
Evaluating correctness can be tricky because what's right may depend on the source or the context. 00:03:54.540 |
So we have a responsibility to design our evaluation methods to be as unbiased and fair as possible, 00:04:01.360 |
even when absolute truth is hard to pin down. 00:04:05.400 |
So the first thing to think about in offline evaluation is which data to use to evaluate your system. 00:04:13.400 |
So static datasets are a great start, and there are many widely used open-source datasets available on the web. 00:04:23.060 |
One example is SimpleQA, a benchmark dataset from OpenAI that serves as a standard for evaluating retrieval accuracy. 00:04:31.180 |
Many leading AI search providers use SimpleQA to evaluate their performance. 00:04:38.220 |
SimpleQA is designed to evaluate the system's ability to answer short, fact-seeking questions. 00:04:48.540 |
Another widely adopted dataset is HotpotQA, which evaluates the system's ability to answer 00:04:56.100 |
multi-hop questions where reasoning across multiple documents is required to retrieve the correct answer. 00:05:04.340 |
Datasets like SimpleQA and HotpotQA are a great start for evaluating your system. 00:05:10.520 |
But what happens when you're evaluating real-time systems, especially when measuring whether 00:05:22.020 |
your system keeps up with rapidly evolving information and avoids regressions, like in the setting where we operate? 00:05:30.340 |
Also, those kinds of static data sets don't address the challenge of benchmarking questions 00:05:38.340 |
where there's no single true answer or where subjectivity is involved. 00:05:45.580 |
This is what led us to think beyond static data sets towards dynamic evaluation that reflects 00:05:51.980 |
the changing pace of the web, essentially. 00:05:59.560 |
Dynamic datasets are essential for benchmarking RAG agents in real-world production systems. 00:06:06.200 |
You can't answer today's questions with yesterday's data. 00:06:12.400 |
They have broad coverage, as you can easily create eval sets for any domain or use case that is relevant to your application. 00:06:22.400 |
And they also ensure continuous relevancy because they are regularly refreshed, which means that 00:06:27.460 |
your system is always evaluated against the latest data. 00:06:34.880 |
This led us to build an open-source agent that builds dynamic eval sets for web-based RAG agents. 00:06:44.520 |
It's open source, and we encourage everyone to check it out and contribute. 00:06:50.240 |
And I also want to acknowledge the work of Eyal, our head of data at Tavily, who initiated this project. 00:06:57.960 |
As you can see here, this is an example of a dataset generated by the agent. 00:07:02.880 |
It generates question and answer pairs for targeted domains using information found on the web. 00:07:11.480 |
So the agent leverages the LangGraph framework, and it consists of these key steps. 00:07:17.080 |
First, it generates broad web search queries for targeted domains, which essentially lets you 00:07:23.740 |
create eval sets for any domain of your choice and specific needs of your application. 00:07:30.160 |
The second step is to aggregate grounding documents from multiple real-time AI search providers. 00:07:36.940 |
We understand that we cannot just use Tavily to search the web on specific domains, find grounding 00:07:43.140 |
documents, then generate question and answer pairs from those documents, and then evaluate our own system on them. 00:07:51.120 |
That's why we use multiple real-time AI search providers to both maximize coverage and minimize bias. 00:07:59.500 |
The third step, which is the key step in this process, is to generate the evidence-based question and answer pairs. 00:08:07.700 |
And we ensure that in the generation process, the agent is required to include the supporting answer context, 00:08:13.960 |
which also increases the reliability of our question and answer pairs and reduces hallucinations. 00:08:20.980 |
You can always go back and check which sources were used and which evidence from those sources 00:08:26.700 |
was used to generate each question and answer pair. 00:08:30.300 |
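As a rough illustration of this pipeline, a minimal LangGraph sketch might look like the following. This is not the actual open-source agent: the node names, state fields, and placeholder outputs are assumptions, and in practice each node would call an LLM and multiple real search providers.

```python
from typing import List, TypedDict

from langgraph.graph import END, START, StateGraph


class EvalSetState(TypedDict):
    domain: str
    queries: List[str]
    documents: List[dict]
    qa_pairs: List[dict]


def generate_queries(state: EvalSetState) -> dict:
    # Step 1: expand the target domain into broad web search queries.
    # A real implementation would use an LLM; these are placeholders.
    return {"queries": [f"recent developments in {state['domain']}",
                        f"key facts about {state['domain']}"]}


def gather_documents(state: EvalSetState) -> dict:
    # Step 2: aggregate grounding documents from multiple real-time AI search
    # providers to maximize coverage and minimize bias. Stubbed here.
    docs = [{"query": q, "url": "https://example.com", "content": "..."}
            for q in state["queries"]]
    return {"documents": docs}


def generate_qa_pairs(state: EvalSetState) -> dict:
    # Step 3: generate evidence-based Q&A pairs, keeping the supporting
    # context and source for each pair so they can be audited later.
    pairs = [{"question": "...", "answer": "...",
              "evidence": d["content"], "source": d["url"]}
             for d in state["documents"]]
    return {"qa_pairs": pairs}


builder = StateGraph(EvalSetState)
builder.add_node("generate_queries", generate_queries)
builder.add_node("gather_documents", gather_documents)
builder.add_node("generate_qa_pairs", generate_qa_pairs)
builder.add_edge(START, "generate_queries")
builder.add_edge("generate_queries", "gather_documents")
builder.add_edge("gather_documents", "generate_qa_pairs")
builder.add_edge("generate_qa_pairs", END)
eval_set_builder = builder.compile()

result = eval_set_builder.invoke(
    {"domain": "merchant fraud detection", "queries": [], "documents": [], "qa_pairs": []}
)
```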
And lastly, we use LangSmith to track our experiments, which is a great observability tool to manage 00:08:36.320 |
these offline evaluation runs and see how your system performs at different points in time. 00:08:43.360 |
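Assuming the tracking tool referenced here is LangSmith, a minimal way to trace each offline evaluation run could look like the sketch below; the run name and the stubbed agent call are hypothetical.

```python
# Requires the LANGSMITH_API_KEY environment variable to be set.
from langsmith import traceable


@traceable(name="dynamic-evalset-run")
def answer_with_search(question: str) -> str:
    # Placeholder for the AI search agent under evaluation; each call is
    # recorded as a traced run so results can be compared across time.
    return "stub answer"


if __name__ == "__main__":
    print(answer_with_search("example fact-seeking question"))
```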
The next step that we want to address is to support a range of question types, both simple fact-based 00:08:51.740 |
questions and multi-hop questions similar to HotpotQA. 00:08:57.640 |
We also want to ensure fairness and coverage by proactively addressing bias and covering a wide range of 00:09:05.740 |
perspectives for each subject we generate questions and answers for. 00:09:10.140 |
Additionally, we want to add a supervisor node for coordination, which proves itself to be valuable, 00:09:17.060 |
especially in these multi-agent architectures. 00:09:20.960 |
And this will increase the quality of our question and answer pairs. 00:09:26.460 |
The next step to think about is benchmarking. 00:09:30.860 |
And we argue that it's important to measure accuracy, but you should not stop there. 00:09:36.180 |
You should ensure a holistic evaluation framework, which uses benchmarks that, in our case, measure 00:09:43.560 |
your source diversity, your source relevance, and hallucination rates. 00:09:50.300 |
It's also important to leverage unsupervised evaluation methods that remove the need for labeled data, which 00:09:56.920 |
enables you to scale your evaluations and address the subjectivity issue. 00:10:03.540 |
With that, I'll pass it over to Deanna, who will explain more about these reference-free benchmarks 00:10:09.320 |
and also share results from an experiment we ran using a static and a dynamic dataset that was generated 00:10:21.100 |
So we performed a two-part evaluation of six different AI search providers. 00:10:28.100 |
The first component of this experiment was to compare the accuracy of search providers on 00:10:34.420 |
a static and a dynamic benchmark in order to demonstrate that static benchmarking is not 00:10:40.500 |
a comprehensive method for evaluation of AI search. 00:10:43.980 |
The second component was to evaluate the dynamic dataset responses using reference-free metrics. 00:10:52.220 |
And we compare these results to the reference-based accuracies that we get from the benchmark in 00:10:59.300 |
order to demonstrate that reference-free evaluation can be an effective substitute when ground truths are not available. 00:11:05.600 |
So jumping right in, for our static versus dynamic benchmarking comparison, we use the SimpleQA benchmark as the static dataset, 00:11:16.960 |
and we're using a dynamic benchmark of about a thousand rows created by Tavily. 00:11:21.960 |
And as you can see here, both datasets have roughly similar distributions of topics, and this helps to ensure a fair comparison and diversity of questions. 00:11:31.760 |
So to evaluate the AI search providers' performance on these two benchmarks, we're using the simple QA correctness metric, and this is an LLM judge which is used on the simple QA benchmark. 00:11:46.000 |
It compares the model's response against a ground-truth answer in order to determine if it's correct, incorrect, or not attempted. 00:11:58.240 |
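That grading step can be approximated with a small LLM-judge function like the sketch below; the prompt wording and the use of the OpenAI client here are illustrative assumptions rather than the exact grader shipped with SimpleQA.

```python
# Minimal sketch of a SimpleQA-style correctness judge.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

GRADER_PROMPT = """You are grading an answer against a gold target.
Question: {question}
Gold target: {target}
Predicted answer: {prediction}

Reply with exactly one word:
CORRECT if the predicted answer fully contains the gold target without contradicting it,
INCORRECT if it contradicts or misses the gold target,
NOT_ATTEMPTED if it declines to answer."""


def grade(question: str, target: str, prediction: str) -> str:
    # Ask the judge model to classify the response into one of three labels.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": GRADER_PROMPT.format(
            question=question, target=target, prediction=prediction)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```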
And so here we're showing the correctness scores from that simple QA benchmark compared against the dynamic benchmark. 00:12:04.240 |
And we've anonymized the search providers for this talk, but I do want to call out that the simple QA accuracy scores here are all self-reported, and so they don't all necessarily have clear documentation on how they were calculated. 00:12:18.480 |
But as you can see, the correctness scores for the dynamic benchmark, in blue, are substantially lower. 00:12:26.720 |
And not only that, the relative rankings have also changed pretty considerably. 00:12:31.920 |
For example, provider F all the way on the end of this plot here performs the worst on simple QA, but it performs the best on the dynamic benchmark. 00:12:43.480 |
And looking a little closer in the results, while this simple QA evaluator is useful, it's certainly far from perfect. 00:12:51.720 |
I have a few examples here of model responses that were flagged as incorrect by this LLM judge. 00:12:59.720 |
But if you look at the actual text in the model outputs, they do contain the correct answer from the ground truth. 00:13:05.920 |
On the flip side of things, here is an example where the LLM judge classified it as correct. 00:13:13.440 |
And, yes, you can see that the correct answer is in this response, but while the correct answer might be present, that doesn't necessarily mean that the full answer is right. 00:13:23.680 |
This evaluation is not accounting for any of the additional text in this response, and there might be hallucinations in there, and that would invalidate it. 00:13:32.680 |
So, ultimately, this evaluation falls short of identifying when things go wrong in AI search. 00:13:39.680 |
So, what are some other ways that we can identify when things go wrong? 00:13:45.680 |
Up to this point, we have been talking about a reference-based approach to evaluation. 00:13:51.920 |
But in most online and production settings, ground-truth references typically aren't available, and as we've already discussed, that's especially true in AI search. 00:13:59.920 |
For this talk, we're going to look at three of Quotient's reference-free metrics. 00:14:12.160 |
We'll look at answer completeness, which identifies whether all components of the question were answered, so it classifies model responses as either fully addressed, unaddressed, or unknown, if the model says I don't know. 00:14:25.920 |
Then we'll look at document relevance, and this is the percent of the retrieved documents that are actually relevant to addressing the question. 00:14:37.920 |
And then, finally, we'll look at hallucination detection, which identifies whether there are any facts in the model response that are not present in any of the retrieved documents. 00:14:47.920 |
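To make these three signals concrete, here is a rough sketch of how reference-free checks like these could be implemented with an LLM judge. These are illustrative stand-ins, not Quotient's actual metrics: the prompts, model choice, and output parsing are all assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set


def _ask(prompt: str) -> str:
    # Single-turn judge call with deterministic decoding.
    out = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return out.choices[0].message.content.strip().lower()


def answer_completeness(question: str, answer: str) -> str:
    """Classify the response as 'complete', 'unaddressed', or 'unknown'."""
    return _ask(
        f"Question: {question}\nAnswer: {answer}\n"
        "Does the answer fully address every part of the question? "
        "Reply with one word: complete, unaddressed, or unknown "
        "(use 'unknown' if the answer says it does not know)."
    )


def document_relevance(question: str, documents: list[str]) -> float:
    """Fraction of retrieved documents judged relevant to the question."""
    relevant = sum(
        _ask(f"Question: {question}\nDocument: {doc}\n"
             "Is this document relevant to answering the question? "
             "Reply yes or no.").startswith("yes")
        for doc in documents
    )
    return relevant / len(documents) if documents else 0.0


def has_hallucination(answer: str, documents: list[str]) -> bool:
    """True if the answer states facts not supported by any retrieved document."""
    context = "\n\n".join(documents)
    verdict = _ask(
        f"Documents:\n{context}\n\nAnswer: {answer}\n"
        "Does the answer contain any factual claim not supported by the documents? "
        "Reply yes or no."
    )
    return verdict.startswith("yes")
```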
So, we use these metrics to evaluate the search providers' responses on this dynamic benchmark. 00:14:57.920 |
So, we've got answer completeness plotted here. 00:14:59.920 |
The stacked bar plot shows the number of responses that were either completely answered, unaddressed, or marked as unknown. 00:15:07.920 |
And if we look back at the overall rankings that we saw earlier on the dynamic benchmark, you can see that the rankings from answer completeness pretty closely match. 00:15:19.920 |
The average performance scores for the two get a correlation of 0.94. 00:15:24.920 |
So, this indicates that the reference-free metric can capture relative performance pretty well. 00:15:30.920 |
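As a quick illustration of how that correlation is computed, the per-provider averages from the two evaluations can be compared directly; the numbers below are placeholders, not the actual experiment results.

```python
import numpy as np

# Placeholder per-provider averages: reference-based correctness vs.
# reference-free completeness rate.
benchmark_correctness = np.array([0.61, 0.55, 0.72, 0.48, 0.66, 0.79])
completeness_rate     = np.array([0.64, 0.58, 0.70, 0.50, 0.69, 0.81])

correlation = np.corrcoef(benchmark_correctness, completeness_rate)[0, 1]
print(f"Pearson correlation: {correlation:.2f}")
```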
But completeness is still not the same thing as correctness. 00:15:34.920 |
And when we have no ground truths available, then we have to turn to the next best thing. 00:15:43.920 |
So, this is where document relevance and hallucination detection come in. 00:15:48.920 |
Both of these metrics are going to be looking at those grounding documents in order to measure the quality of the model's response. 00:15:55.920 |
Unfortunately, of all of the search providers we looked at, only three of them actually return the retrieved documents used to generate their answers. 00:16:04.920 |
The majority of search providers typically only provide citations. 00:16:09.920 |
And these are largely unhelpful at scale and also really limit transparency when it comes to debugging. 00:16:17.920 |
So, these are those document relevance scores for the three search providers. 00:16:25.920 |
The plot to the left shows the average document relevance, the percent of retrieved documents that are relevant to the question. 00:16:33.920 |
And the plot to the right shows the number of responses that have no relevant documents. 00:16:39.920 |
And if we consider these results in conjunction with answer completeness, we find that there's a strong inverse correlation between document relevance and the number of unknown answers. 00:16:54.920 |
If you think about it, if you have no relevant documents for the question, the model should say, "I don't know," rather than trying to answer it. 00:17:02.920 |
And so this brings us to hallucination detection. 00:17:07.920 |
And here we were actually surprised to see that there was a direct relationship between the hallucination rate and document relevance. 00:17:15.920 |
Provider X here has the highest hallucination rate, but it also had the highest overall document relevance. 00:17:24.920 |
But if we think about it more, provider X had high answer completeness, the lowest rate of unknown answers, and it also had the highest answer correctness from the benchmarking earlier of these three providers. 00:17:42.920 |
So this probably implies that maybe in provider X's responses, they're more likely to provide new reasoning or interpretations in their response, or maybe even they're more detailed and thorough. 00:17:55.920 |
And this just creates more opportunity for hallucination in their responses. 00:17:59.920 |
But the point I want to make here is that when considering these metrics, depending on your use case, you might index more heavily on one over another. 00:18:10.920 |
They're measuring different dimensions of response quality, and it's often a give and take. 00:18:15.920 |
If you perform really well in one, it might be at the expense of another. 00:18:19.920 |
And as we see here, there is a tradeoff between answer completeness and hallucination. 00:18:28.920 |
But also, if you take these three metrics in conjunction, you can use them to understand why things went wrong and identify potential strategies for addressing those issues. 00:18:40.920 |
So this diagram here shows a few examples of how you can interpret your evaluation results 00:18:48.920 |
to identify what you can do to fix the issues you find. 00:18:52.920 |
So we've got one example here where maybe your response is incomplete, but you have relevant documents, you have no hallucinations. 00:19:01.920 |
So this probably means you don't have all the information you need to answer the question. 00:19:06.920 |
And so just retrieving more documents might solve that. 00:19:10.920 |
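One way to operationalize this kind of diagnosis is a small rule-based helper like the sketch below; the thresholds and suggested fixes are illustrative assumptions, not a prescribed recipe.

```python
def diagnose(complete: bool, doc_relevance: float, hallucinated: bool) -> str:
    """Map the three evaluation signals to a debugging hint."""
    if not complete and doc_relevance > 0.5 and not hallucinated:
        # Grounding looks fine but coverage is thin: fetch more documents.
        return "Likely missing information: retrieve more or broader documents."
    if doc_relevance <= 0.5 and hallucinated:
        # The model invented content because retrieval gave it little to work with.
        return "Retrieval failure: improve query formulation or source selection."
    if complete and hallucinated:
        # The answer goes beyond what the retrieved context supports.
        return "Ungrounded generation: constrain the response to the retrieved context."
    return "Looks healthy on these three signals."
```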
But the big picture idea is that your evaluation should do more than just provide relative rankings. 00:19:16.920 |
It should help you identify the types of issues that are present. 00:19:19.920 |
And it should also help you understand what strategies to implement to solve those issues. 00:19:25.920 |
So in conclusion, let me just quickly paint a picture of where we're heading with all this. 00:19:32.920 |
Because this is not just about building the agents we've been building for the past couple of years and then slapping evaluation on it and then continuing to do the same thing. 00:19:40.920 |
It's not just about building better benchmarks. 00:19:46.920 |
It's about creating AI systems that can continuously improve themselves. 00:19:52.920 |
And imagine for a second that agents don't just retrieve information but learn from the patterns of what information is outdated, what sources are unreliable, and what users need. 00:20:01.920 |
They could also detect hallucinations mid-conversation and correct course, all without human intervention. 00:20:11.920 |
And this framework that we shared today, dynamic datasets, holistic evaluation, and reference-free metrics, provides the building blocks for getting there. 00:20:20.920 |
And this is where we want to get with augmented AI.