MIT AGI: Autonomous Weapons Systems Policy (Richard Moyes)
Chapters
0:00 Introduction
47:28 Q&A
00:00:00.000 |
Welcome back to 6.S099 Artificial General Intelligence 00:00:07.920 |
He's the founder and managing director of Article 36 00:00:14.800 |
Working to prevent the unintended, unnecessary 00:00:18.800 |
and unacceptable harm caused by certain weapons 00:00:22.000 |
including autonomous weapons and nuclear weapons 00:00:26.080 |
He will talk with us today about autonomous weapons systems 00:00:32.640 |
This is an extremely important topic for engineers, humanitarians, legal minds, 00:00:39.840 |
policy makers and everybody involved in paving the path for a safe, positive 00:00:46.400 |
future for AI in our society, which I hope is what this 00:00:51.280 |
course is about. Richard flew all the way from the UK to 00:00:55.600 |
visit us today in snowy Massachusetts, so please give him a warm welcome 00:01:07.680 |
Thanks very much Lex and thank you all for coming out 00:01:11.520 |
As Lex said, I work for a not-for-profit organization based in the UK 00:01:15.280 |
We specialize in thinking about policy and legal frameworks 00:01:19.360 |
around weapon technologies particularly, and generally about how to establish 00:01:23.280 |
more constraining policy and legal frameworks 00:01:26.560 |
around weapons. I guess I'm mainly going to talk today 00:01:31.760 |
about these issues of to what extent we should enable 00:01:35.920 |
machines to kill people, to make decisions to kill people 00:01:41.120 |
It's I think a conceptually very interesting topic, quite challenging in 00:01:44.400 |
lots of ways. There's lots of unstable terminology 00:01:47.760 |
and lots of sort of blurry boundaries. My own background, as I said, we work on 00:01:55.440 |
weapons policy issues and I've worked on the development of two 00:01:59.200 |
international legal treaties prohibiting certain types of weapons. I worked on the 00:02:03.440 |
development of the 2008 Convention on Cluster Munitions, which 00:02:06.960 |
prohibits cluster bombs, and our organization pioneered 00:02:11.920 |
the idea of a treaty prohibition on nuclear weapons which was agreed last 00:02:15.760 |
year in the UN and we're part of the steering group of 00:02:19.360 |
ICAN, the International Campaign to Abolish Nuclear Weapons, which 00:03:22.000 |
won the Nobel Peace Prize last year so that was a good year 00:02:25.440 |
for us. The issue of autonomous weapons, killer robots, which I'm going to talk 00:02:30.480 |
about today. We're also part of an NGO, non-governmental 00:02:33.840 |
organization coalition on this issue called the Campaign to Stop Killer 00:02:37.440 |
Robots. It's a good name, however I think when we get into some 00:02:42.160 |
of the details of the issue we'll find that perhaps the 00:02:46.080 |
snappiness of the name in a way masks some of the 00:02:50.560 |
complexity that lies underneath this. But this is a live issue in 00:02:56.320 |
international policy and legal discussions. At the 00:02:58.880 |
United Nations for the last several years, three or four 00:03:02.720 |
years now, there have been groups of governments 00:03:05.920 |
coming together to discuss autonomous weapons and whether or not there 00:03:09.280 |
should be some new legal instrument that tackles 00:03:13.520 |
this issue. So it's a live political issue that is being debated in 00:03:18.960 |
policy legal circles and really my comments today are going to be speaking 00:03:24.080 |
to that context. I guess I'm going to try and 00:03:27.200 |
give a sort of bit of a briefing about what the issues are in this 00:03:31.520 |
international debate, how different actors are 00:03:33.920 |
orientating to these issues, some of the conceptual models that we 00:03:38.480 |
use in that. So I'm not really going to give you a 00:03:41.280 |
particular sales pitch as to what you should think about this issue, though 00:03:44.960 |
my own biases are probably going to be fairly evident 00:03:48.320 |
during the process. But really to try and lay out a bit of a sense of how these 00:03:52.800 |
questions are debated in the international political 00:03:56.160 |
scene and maybe in a way that's useful for reflecting on 00:04:00.240 |
sort of wider questions of how AI technologies 00:04:04.000 |
might be orientated to and approached by policy makers and the legal 00:04:09.360 |
framework. So in terms of the structure of my 00:04:12.960 |
comments, I'm going to talk a bit about some of the pros and cons that are put 00:04:16.000 |
forward around autonomous weapons or movements 00:04:19.440 |
towards greater autonomy in weapon systems. I'm going to talk a bit more 00:04:23.520 |
about the political legal framework within which these 00:04:26.800 |
discussions are taking place and then I'm going to try to sort of lay 00:04:30.480 |
out some of the models, the conceptual models that we 00:04:32.960 |
as an organization have developed and are sort of using 00:04:36.320 |
in relation to these issues and perhaps to reflect a bit on where I see the 00:04:40.880 |
the political conversation, the legal conversation on this going at an 00:04:43.760 |
international level. And maybe just finally to 00:04:48.720 |
try to reflect on or just draw out some more general 00:04:51.920 |
thoughts that I think occur to me about what some of this says about 00:04:56.880 |
thinking about AI functions in different 00:05:00.640 |
social roles. But before getting into that, I want to 00:05:10.400 |
start by suggesting a bit of a conceptual timeline. 00:05:16.880 |
This could be the present. 00:05:23.040 |
One of the things you find, when you say to somebody well we work on 00:05:28.880 |
this issue of autonomous weapons, is they tend to orientate to it in two 00:05:33.120 |
fairly distinct ways. Some people will say oh you mean 00:05:37.920 |
armed drones and you know we know what armed drones 00:05:40.960 |
are, they're being used in the world today and that's kind of an 00:05:45.600 |
issue here in the present, right? Armed drones. 00:05:51.520 |
But other people, most of the media and certainly 00:05:55.360 |
pretty much every media photo editor thinks you're talking about the Terminator, 00:06:04.960 |
maybe a bit of Skynet thrown in. So this is a sort of advanced futuristic 00:06:12.960 |
sci-fi orientation to the issues. My thinking about this, I come from a 00:06:18.320 |
background of working on the impact of weapons in the 00:06:26.560 |
area. My thinking and my anxieties or my concerns around this issue don't 00:06:31.760 |
come from this area. I do this line a bit wiggly here because I also don't 00:06:35.920 |
want to suggest there's any kind of, you know, teleological certainty going on 00:06:40.960 |
here, this is just an imaginary timeline. But I think it's important just in terms 00:06:46.960 |
of situating where I'm coming from in the debate that 00:06:49.760 |
I'm definitely not starting at that end. And yet in the political discussion 00:06:54.160 |
amongst governments and states, well you have people coming in at all 00:06:57.360 |
sorts of different positions along here, imagining 00:07:01.440 |
that autonomous weapons may exist at, you know, somewhere along this sort of 00:07:05.040 |
spectrum. So I'm going to think more about stuff 00:07:08.640 |
that's going on around here and how some of our 00:07:12.000 |
conceptual models really build around some of this thinking, not so much 00:07:15.760 |
actually armed drones but some other systems. 00:07:18.960 |
But my background before I started working on 00:07:27.520 |
setting up and managing landmine clearance operations overseas. 00:07:31.040 |
And well they've been around for quite a long time, landmines. 00:07:38.720 |
And I think it's interesting just to start with, just to reflect on 00:07:44.400 |
the basic anti-personnel landmine. It's simple 00:07:48.000 |
but it gives us I think some sort of useful entry points into 00:07:51.200 |
thinking about what an autonomous weapon system might be 00:07:54.880 |
in its most simple form. If we think about a landmine, 00:08:05.120 |
and there's an input into the landmine, and there's a function that goes on here. 00:08:11.840 |
Pressure is greater than x. A person treads on the landmine, 00:08:15.680 |
a basic mechanical algorithm runs, and you get an output, 00:08:21.360 |
an explosion that goes back against the person who trod on the 00:08:28.400 |
landmine. So it's a fairly simple system of a 00:08:31.680 |
signal, a sensor, taking a signal from the outside world. The landmine is 00:08:36.000 |
viewing the outside world through its sensor, it's a basic pressure plate, 00:08:40.160 |
and according to a certain calculus here, you get an output 00:08:44.160 |
and it's directed back at this person, and it's a loop. And that's one of the 00:08:47.680 |
things that I think is fundamental essentially to understanding the 00:08:51.040 |
idea of autonomous weapons, and in a way this is where the autonomy comes in. 00:08:54.560 |
That there's no other person intervening in this process at any 00:08:58.160 |
point, there's just a sort of straightforward relationship 00:09:01.600 |
from the person or object that has initiated the system back to the force that is 00:09:10.960 |
applied. So in some ways we'll come back to this later and think about how 00:09:16.240 |
some of the basic building blocks of this may be there in our thinking about 00:09:20.480 |
other weapon systems and weapon technologies as they're 00:09:23.440 |
developing. And maybe thinking about landmines and thinking about these 00:09:28.320 |
the processes of technological change, we see a number of different 00:09:32.880 |
dynamics at play in this sort of imaginary timeline. 00:09:36.800 |
Anti-personnel landmines of course are static, they just sit in the ground where 00:09:40.800 |
you've left them, but we get more and more mobility perhaps 00:09:44.400 |
as we go through this timeline - certainly drones and other systems that I'll talk 00:09:50.720 |
about bring more mobility into the weapon system. Perhaps greater sophistication of 00:09:56.480 |
sensors, I mean a basic pressure plate, just gauging weight, that's a very simple 00:10:04.240 |
sensor structure for interrogating the world. 00:10:07.280 |
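To caricature how meagre that decision logic is, the entire "algorithm" of an anti-personnel landmine can be sketched in a few lines of code. This is purely an illustrative sketch; the threshold value and the names are invented.

```python
# Illustrative sketch only: the "algorithm" of a basic anti-personnel landmine.
# One sensor reading, one fixed threshold, one irreversible output, and no
# human anywhere in the loop. The threshold value is invented.

TRIGGER_THRESHOLD_KG = 9.0  # pretend pressure-plate trigger weight

def landmine_loop(pressure_kg: float) -> str:
    """Sense -> decide -> act, with no further human intervention."""
    if pressure_kg > TRIGGER_THRESHOLD_KG:
        return "detonate"  # force applied back at whatever triggered the sensor
    return "wait"

print(landmine_loop(75.0))  # a person treading on it -> "detonate"
print(landmine_loop(2.0))   # light debris -> "wait"
```

The point of the sketch is simply that weight on a plate is standing in as a proxy for "valid military target", with no human judgment anywhere in the loop.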
We have much more sophisticated sensor systems in weapons 00:10:11.200 |
now, so we have weapon systems now that are looking at radar signatures, they're 00:10:14.720 |
looking at the heat shapes of objects and we'll come 00:10:18.000 |
back and talk about that, but more sophistication of sensors and more 00:10:21.440 |
sophistication of the computer algorithms that process that sensor data. There's also a movement 00:10:34.720 |
in this sort of trajectory from physically very unified 00:10:38.480 |
objects, I always sort of wrestle slightly whether this is the word I want, 00:10:42.160 |
but it's a sort of self-contained entity, the landmine, 00:10:45.680 |
whereas as we move in this direction maybe we see 00:10:48.640 |
more dispersal of functions through different systems and I think 00:10:51.920 |
that's another dynamic that when we think about 00:10:54.400 |
the development of autonomy and weapon systems, it might not 00:10:57.840 |
all live in one place physically moving around in one place, it can be 00:11:02.720 |
an array of different systems functioning in different places. 00:11:07.760 |
And perhaps for people with a sort of AI type of mindset, 00:11:11.360 |
maybe there's some sort of movement from more specific 00:11:14.640 |
types of AI functioning here, use of different specific AI functions here, 00:11:19.280 |
to something more general going in this direction. I'm wary of 00:11:23.440 |
necessarily buying straightforward into that, but maybe you could see some 00:11:26.400 |
movement in that sort of direction. So I just want to sort of put this 00:11:31.040 |
to one side for now, but we'll come back to it and think about some systems that raise 00:11:37.680 |
issues for us and around which we could expand some 00:11:40.240 |
models. But I just want us to have this in mind when thinking about this, 00:11:43.280 |
that we're not necessarily, we're definitely not for me thinking 00:11:46.400 |
about humanoid robots walking around fighting like a soldier, rather we're 00:11:51.840 |
thinking about developments and trajectories we can see 00:11:54.960 |
coming out of established military systems now. 00:12:08.320 |
Now, there's a lot of complexity in the world of politics and legal structures, so I 00:12:11.920 |
don't want to get too bogged down in it, but I think in 00:12:14.640 |
terms of understanding the basics of this debate on the 00:12:18.640 |
international landscape, we have to have a bit of background in that 00:12:21.360 |
area. Essentially there's I think three main types of international 00:12:27.440 |
law that we're concerned with here, and again 00:12:30.320 |
concerned with international law rather than domestic legislation - 00:12:33.840 |
any individual state can put in place whatever domestic legislation they want. 00:12:37.840 |
We're looking at the international legal landscape. 00:12:41.040 |
Basically you have international human rights law which 00:12:44.080 |
applies in pretty much all circumstances and it involves the right to life and 00:12:48.560 |
the right to dignity and various other legal protections for people. 00:12:54.240 |
And then, particularly prominent in this debate, you have what's called 00:12:57.360 |
international humanitarian law, which is the rules that govern behaviour during armed conflict - rules for 00:13:04.960 |
militaries engaged in armed conflict for how they have to conduct themselves. 00:13:09.040 |
This isn't the legal framework that decides whether it's okay to have a war 00:13:15.360 |
or not; once you are in the war, these are the obligations that you've got 00:13:18.880 |
to follow. And it basically includes rules that say 00:13:21.920 |
you're not allowed to directly kill civilians, you've got to 00:13:25.360 |
aim your military efforts at the forces of the enemy, at enemy 00:13:29.200 |
combatants. You're not allowed to kill civilians 00:13:32.480 |
directly or deliberately, but you are allowed to kill some civilians as long 00:13:35.920 |
as you don't kill too many of them for the military 00:13:39.200 |
advantage that you're trying to achieve. So there's a sort of balancing act 00:13:42.960 |
like this, and this is called proportionality. Nobody ever really knows where the 00:13:50.880 |
line is, but it's in the law that whilst you can kill civilians you 00:13:53.840 |
mustn't kill an excessive number of civilians. 00:13:58.160 |
These are general rules, these apply pretty much to all states 00:14:01.280 |
in armed conflict situations. And then you have treaties on specific weapons. 00:14:09.360 |
That's really where you have weapons that are considered to be particularly 00:14:12.400 |
problematic in some way, and a group of states decides 00:14:16.320 |
to develop and agree a treaty 00:14:20.080 |
that applies specifically to those weapons. 00:14:27.680 |
Now, these legal treaties are all developed and agreed by states, 00:14:32.480 |
they're agreed by international governments talking together, 00:14:36.320 |
negotiating what they think the law should say and they generally only bind 00:14:40.720 |
on states if they choose to adopt that legal instrument. So 00:14:46.240 |
I guess what I'm emphasizing there is a sense that 00:14:49.760 |
these are sort of social products in a way, they're political products. 00:14:53.200 |
It isn't a sort of magical law that's come down from on high perfectly written; 00:15:00.720 |
it's a negotiated outcome developed by a complicated set of 00:15:05.040 |
actors who may or may not agree with each other on 00:15:07.360 |
all sorts of things. And what that means is there's quite a 00:15:10.320 |
lot of wiggle room in these legal frameworks and quite a lot of 00:15:12.960 |
uncertainty within them. Lawyers of international humanitarian law will 00:15:17.280 |
tell you that's not true but that's because they're particularly 00:15:21.040 |
keen on that legal framework, but in reality there's 00:15:24.080 |
a lot of fuzziness to what some of the legal 00:15:27.520 |
provisions say. And it also means that the extent to 00:15:31.040 |
which this law binds on people and bears on people 00:15:33.840 |
also requires some social enactment. There's not a sort of world 00:15:41.200 |
police enforcing these legal frameworks. It requires a sort of social function from states and from 00:15:45.440 |
other actors to keep articulating their sense of the 00:15:48.880 |
importance of these legal rules and keep trying to put pressure on other actors. 00:15:55.120 |
The issue of autonomous weapons is in discussion at the United Nations 00:15:59.520 |
under a framework called the UN Convention on Conventional Weapons. 00:16:03.040 |
And this is a body that has the capacity to agree 00:16:06.320 |
new protocols, new treaties essentially on specific weapon 00:16:09.840 |
systems. And that means that diplomats from lots of countries, diplomats from 00:16:14.320 |
the US, from the UK, from Russia and Brazil and 00:16:18.320 |
China and other countries of the world will be 00:16:21.600 |
sitting around in a conference room putting forward their perspectives on 00:16:25.040 |
this issue and trying to find common ground or 00:16:28.480 |
trying not to find common ground just depending on what 00:16:31.120 |
sort of outcome they're working towards. So you know the UN isn't a 00:16:35.120 |
completely separate entity of its own. It's just the 00:16:38.320 |
community of states in the world sitting together 00:16:42.320 |
talking about things. So the main focus of concern in those 00:16:47.760 |
discussions when it comes to autonomy is not some sort of generalized 00:16:53.280 |
autonomy, or autonomy in all of its forms that may 00:16:56.480 |
be pertinent in the military space. It's rather much more how targets 00:17:03.600 |
are selected, identified, decided upon, and how a decision to apply force to 00:17:09.520 |
those targets is made. And it's really these, 00:17:13.280 |
the critical functions of weapon systems where the movement towards greater 00:17:17.200 |
autonomy is considered a source of anxiety essentially that we 00:17:21.360 |
may see machines making decisions on what is a target for 00:17:25.360 |
an attack and choosing when and where force is applied. 00:17:33.440 |
So obviously in this context not everybody's like-minded on this. 00:17:43.760 |
There are potential advantages seen in autonomy in weapon systems, and there's potential disadvantages and 00:17:49.200 |
problems associated with it. And within the you know within this 00:17:52.960 |
international discussion we see different perspectives laid out 00:17:56.400 |
and some states of course will be able to see some advantages and some 00:17:59.680 |
disadvantages. It's not a black and white sort of 00:18:02.720 |
discussion. In terms of the possible advantages for autonomy, 00:18:07.040 |
I mean one of the key ones ultimately is framed in terms of military advantage 00:18:12.160 |
that we want to have more autonomy in weapon systems because it will 00:18:16.320 |
maintain or give us military advantage over possible 00:18:19.840 |
adversaries because in the end military stuff is about winning 00:18:23.120 |
wars right so you want to maintain military advantage. 00:18:26.480 |
And military advantage, number of factors really within that, 00:18:31.520 |
speed is one of them, speed of decision making. 00:18:35.600 |
Can computerized autonomous systems make decisions about where to apply force 00:18:39.920 |
faster than a human would be capable of doing, and therefore give you an advantage? 00:18:48.160 |
Also speed allows for coordination of numbers so 00:18:51.440 |
if you want to have swarms of systems, you know swarms of small drones or 00:19:00.800 |
whatever, you need autonomy in decision making and communication between those systems, 00:19:03.680 |
because again the level of complexity and the 00:19:06.240 |
speed involved is greater than a human would be able to 00:19:09.040 |
sort of manually engineer. So speed both in terms of 00:19:14.000 |
responding to external effects but also perhaps coordinating your own forces. 00:19:18.640 |
Reach, potential for autonomous systems to be able to operate in 00:19:24.400 |
communication denied environments where if you're relying on an electronic 00:19:29.040 |
communications link to say a current armed drone, maybe in a future 00:19:33.120 |
battle space where the enemy is denying communications in some 00:19:39.280 |
way, an autonomous system could still fulfill a mission without needing to rely on that 00:19:42.720 |
communications infrastructure. General force multiplication, 00:19:47.440 |
there's a bit of a sense that there's going to be more and more teaming of 00:19:49.840 |
machines with humans, so machines operating alongside soldiers. 00:19:56.720 |
And then there's importantly as it's presented at least, a sense that these 00:20:00.400 |
are systems which could allow you to reduce the risk to your own 00:20:03.200 |
forces. That maybe if we can put some sort of autonomous robotic system 00:20:07.680 |
at work in a specific environment then we don't need to put one of our own 00:20:11.040 |
soldiers in that position and as a result we're 00:20:13.520 |
less likely to have casualties coming home which of course 00:20:16.000 |
politically is problematic for maintaining any sort of 00:20:19.840 |
conflict posture. Set against all that stuff, there's a 00:20:25.440 |
sense that I think most fundamentally there's 00:20:28.240 |
perhaps a moral hazard that we come across at some point - a sense that the idea that 00:20:40.320 |
machines are deciding who to kill in a certain context 00:20:44.400 |
is just somehow wrong. Well, that's not a very easy argument to 00:20:52.400 |
articulate in a rationalized sense, but there's some sort of moral 00:20:55.840 |
revulsion that perhaps comes about at this sense that machines are now 00:21:00.320 |
deciding who should be killed in a particular environment. 00:21:05.760 |
There's a set of legal concerns. Can these systems be used in accordance with 00:21:10.800 |
the existing legal obligations? I'm going to come on a little bit later 00:21:15.440 |
to our orientation and the legal side which is 00:21:18.800 |
also about how they may stretch the fabric of the law and the structure of 00:21:22.080 |
the law as we see it. There's some concerns in this 00:21:25.680 |
sort of legal arguments for me that we sometimes slip 00:21:30.640 |
into a language of talking about machines making legal 00:21:34.240 |
decisions. Will a machine be able to apply the rule of proportionality 00:21:38.960 |
properly? There's dangers in that. I know what it means 00:21:43.280 |
but at the same time the law is addressed to humans, the law isn't 00:21:47.280 |
addressed to machines so it's humans who have the 00:21:49.920 |
obligation to enact the legal obligation. A machine 00:21:54.320 |
may do a function that is sort of analogous to that legal decision 00:21:58.320 |
but ultimately to my mind it's still a human who has to 00:22:01.680 |
be making the legal determination based on some prediction of what that machine 00:22:07.760 |
will do. But it's an easy slippage, because even senior legal academics can slip into 00:22:15.200 |
handing over the legal framework to machines before you've even 00:22:18.640 |
got on to arguing about what we should or shouldn't have. So 00:22:22.240 |
we need to be careful in that area and it's a little bit to do with 00:22:25.360 |
for me, continuing to treat these technologies as machines rather than slipping 00:22:28.800 |
into treating them as agents in some way of a sort of equal status 00:22:35.920 |
to humans. And then we have a whole set of wider 00:22:39.600 |
concerns that are raised. So we've got moral anxieties, legal concerns 00:22:43.920 |
and then a set of other concerns around risks 00:22:46.960 |
that could be unpredictable risks. There's a sort of normal accidents 00:22:51.360 |
theory, maybe you've come across that stuff, there's a bit of that sort of 00:22:54.080 |
language in the debate about complicated systems and not being able 00:22:58.960 |
to avoid accidents in some respects. 00:23:04.240 |
Some anxieties about maybe this will reduce the 00:23:07.360 |
barriers to engaging in military action, maybe being able to use autonomous 00:23:11.360 |
weapons will make it easier to go to war and some anxieties about sort of 00:23:15.920 |
international security and balance of power and arms races and the like. 00:23:21.280 |
These are all significant concerns. I don't tend to 00:23:25.280 |
think much in this area, partly because they involve quite a lot of speculation 00:23:30.960 |
and they're quite difficult to populate with sort of more grounded 00:23:35.120 |
arguments I find. But that doesn't mean that they 00:23:37.760 |
aren't significant in themselves but I find them 00:23:40.560 |
less straightforward as an entry point. So in all of these different issues 00:23:46.400 |
there's lots of unstable terminology, lots of arguments coming in different 00:23:49.600 |
directions and our job as an NGO in a way is we're 00:23:53.920 |
trying to find ways of building a constructive 00:23:56.400 |
conversation in this environment which can move towards states adopting 00:24:00.320 |
a more constraining orientation to this movement towards autonomy. 00:24:05.760 |
And the main tool we've used to work towards that so far has been 00:24:11.920 |
to perhaps stop focusing on the technology per se and the idea of what 00:24:17.520 |
is autonomy and how much autonomy is a problem and to bring the focus back 00:24:21.840 |
a bit onto what is the human element that we want to preserve in all of this 00:24:25.680 |
because it seems like most of the anxieties that 00:24:29.120 |
come from a sense of a problem with autonomous weapons 00:24:32.880 |
are about some sort of absence of a human element that we want to preserve. 00:24:37.040 |
But unless we can in some way define what this human element is that we want 00:24:40.160 |
to preserve I'm not sure we can expect to define its 00:24:43.600 |
absence very straightforwardly. So I kind of feel like we want to pull the 00:24:46.880 |
discussion onto a focus on the human element. 00:24:51.760 |
And the tool we've used for this so far has been 00:24:55.440 |
basically a terminology about the need for meaningful human control, which we've 00:25:02.400 |
sort of introduced into the debate and we've promoted it 00:25:05.680 |
in discussions with diplomats and with different actors and we've built up the 00:25:08.960 |
idea of this terminology as being a sort of tool, it's a bit 00:25:12.960 |
like a meme right, you create the terms and then you use 00:25:17.360 |
that to sort of structure the discussion in a 00:25:19.680 |
productive way. One of the reasons I like it is it 00:25:24.080 |
works partly because the word meaningful doesn't 00:25:26.880 |
mean anything particular or at least it means 00:25:30.240 |
whatever you might want it to mean and I find 00:25:34.160 |
an enjoyable sort of tension in that but the term meaningful human 00:25:38.000 |
control has been quite well picked up in the literature on this issue and in 00:25:41.920 |
the diplomatic discourse, and it's helping to structure the discussion. 00:25:50.160 |
Basic arguments for the idea of meaningful human control from my 00:25:53.920 |
perspective are quite simple and we tended to use 00:25:57.440 |
basically a sort of absurdist sort of logic if there is such a thing. 00:26:04.640 |
First of all really to recognize that no governments are in favor of an 00:26:09.680 |
autonomous weapon system that has no human control whatsoever 00:26:13.440 |
right, nobody is arguing that it would be a good idea to have a 00:26:19.600 |
weapon that just flies around the world deciding to kill people, we don't know 00:26:23.920 |
who it's going to kill or why, it doesn't have to report back to us but 00:26:27.360 |
you know we're in favor, nobody is in favor of this right, this is obviously 00:26:31.200 |
ridiculous so there needs to be some form of human control 00:26:34.720 |
because we can rule out that sort of you know ridiculous extension of the 00:26:39.680 |
argument. And on the other hand, if you just have a person sitting somewhere and a red light 00:26:47.360 |
comes on every now and again and they don't know anything else about 00:26:49.920 |
what's going on but they're the human who's controlling this autonomous 00:26:53.200 |
weapon and when the red light comes on they push the fire button to launch a 00:26:57.680 |
rocket or something, we know that that isn't sufficient human 00:27:01.600 |
control either right, there's a person doing something, there's a person engaged 00:27:05.200 |
in the process but clearly it's just some sort of 00:27:08.480 |
mechanistic pro forma human engagement so between these two 00:27:13.040 |
kind of ridiculous extremes I think we get the idea that there's 00:27:17.760 |
there's some sort of fuzzy line that must exist in there 00:27:23.360 |
somewhere and that everybody can in some way agree to the idea that such a 00:27:27.440 |
line should exist, so the question then for us is how to 00:27:32.160 |
move the conversation in the international community towards a 00:27:35.680 |
productive sort of discussion of where the parameters of this line might be drawn. 00:27:43.760 |
So that's brought us on to thinking about a more substantive set of 00:27:50.480 |
elements of meaningful human control, and we've laid out some basic elements - 00:27:56.000 |
so I might get rid of that fuzzy line because it's a bit useless anyway, isn't it. 00:28:19.760 |
The first element is predictable, reliable, transparent technology. This is kind of before you get into exactly what the system's going to do - 00:28:26.480 |
we want the technology itself to be sort of well made, 00:28:30.160 |
and it's, you know, going to basically do what it says it's 00:28:33.840 |
going to do, whatever that is, and we want to be able to understand it to some extent. That's a 00:28:39.840 |
challenge in some of the AI type functions where you start to 00:28:43.200 |
have machine learning issues, and these issues of 00:28:46.160 |
transparency perhaps start to come up 00:28:48.720 |
a little bit there, but these are kind of issues in the design and the 00:28:51.600 |
development of systems. Another thing we want to have, 00:28:57.440 |
and I think this is a key one, is accurate information: 00:29:03.520 |
accurate information on the intent of the commander, 00:29:14.480 |
or the outcome - what's the outcome we're trying to achieve - on the technology itself, and on the context of use. 00:29:33.440 |
Third one, there's only four so it won't take long, third one is the ability to intervene. 00:29:53.040 |
It'd be good if we could turn it off at some point if it's going to be a very 00:29:57.120 |
long acting system, it'd be good if we could turn it off 00:30:00.240 |
maybe. And the fourth one is just a sort of framework of accountability. 00:30:13.360 |
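Those four elements can be held in mind as a rough checklist. The sketch below is only one illustrative way of structuring them; it is not any agreed legal formulation, and the field names are invented.

```python
# A sketch only - not an agreed legal formulation - of the four broad elements
# of "meaningful human control" discussed above. Field names are invented.
from dataclasses import dataclass

@dataclass
class MeaningfulHumanControl:
    predictable_reliable_technology: bool  # well made, does what it says, understandable
    accurate_information: bool             # on commander's intent, target profiles, effects, context of use
    timely_human_intervention: bool        # the ability to intervene or switch the system off
    accountability_framework: bool         # humans remain answerable for outcomes

    def plausibly_meaningful(self) -> bool:
        # Where the real line falls is a political and legal question;
        # this simply checks that none of the elements is absent.
        return all([
            self.predictable_reliable_technology,
            self.accurate_information,
            self.timely_human_intervention,
            self.accountability_framework,
        ])
```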
So meaningful human control can be broken down into these areas. 00:30:16.720 |
Some of them are about the technology itself, 00:30:19.840 |
how it's designed and made, how do you verify and validate 00:30:22.960 |
that it's going to do what the manufacturers have said it's going to do, 00:30:26.960 |
can you understand it. This one I think is the key one 00:30:31.280 |
in terms of thinking about the issue and this is what I'm going to talk about a 00:30:34.240 |
bit more now, but accurate information on what's the 00:30:37.680 |
commander's intent, what do you want to achieve 00:30:39.920 |
in the use of this system, what effects is it going to have. 00:30:44.880 |
I mean this makes a big difference how it works, these factors here involve 00:30:48.720 |
what are the target profiles that it's going to use. Whereas with our landmine, 00:30:55.040 |
pressure on the ground is being taken as a proxy 00:30:59.680 |
for a military target for a human who we're going to assume is a military 00:31:03.280 |
target, but in these systems we're going to have 00:31:06.320 |
different target profiles, different heat shapes, different 00:31:09.440 |
patterns of data that the system is going to operate on the basis of. 00:31:14.320 |
What sort of actual weapon is it going to use to apply force, it makes a 00:31:16.960 |
difference if it's going to just fire a bullet from a gun or if it's 00:31:20.000 |
going to drop a 2,000 pound bomb, I mean that has a 00:31:23.840 |
different effect and the way in which you envisage 00:31:27.120 |
and sort of control for those effects is going to be 00:31:29.440 |
different in those different cases. And finally very importantly these 00:31:34.480 |
issues of context, information on the context in which the system will operate. 00:31:40.960 |
Context of course includes are there going to be civilians present in the 00:31:45.920 |
area, can you assess are there going to be other objects in 00:31:48.880 |
the area that may present a similar pattern to the proxy 00:31:52.720 |
data, you know if you're using a heat shape of a vehicle engine, 00:31:56.640 |
it might be aimed at a tank but if there's an ambulance in the same 00:32:00.160 |
location, is an ambulance's vehicle engine heat 00:32:04.080 |
shape sufficiently similar to the tank to cause 00:32:07.600 |
some confusion between the two, so sets of information like that. 00:32:13.600 |
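That proxy problem can be made concrete with a toy sketch: imagine target identification reduced to a similarity score against a stored heat-signature profile. Everything here - the numbers, the profile, the threshold - is invented for illustration, not taken from any real system.

```python
# Toy illustration of proxy-based targeting: a stored "heat shape" profile and
# a similarity threshold stand in for the human judgment "this is a valid
# military target". All numbers are invented.

TANK_ENGINE_PROFILE = [0.9, 0.8, 0.7]  # pretend infrared signature of a tank engine
MATCH_THRESHOLD = 0.90                 # similarity above this counts as a "target"

def similarity(a, b):
    """Crude normalised similarity between two signatures (illustration only)."""
    diffs = [abs(x - y) for x, y in zip(a, b)]
    return 1.0 - sum(diffs) / len(diffs)

def would_engage(observed_signature) -> bool:
    return similarity(observed_signature, TANK_ENGINE_PROFILE) >= MATCH_THRESHOLD

print(would_engage([0.88, 0.82, 0.68]))  # a tank's engine -> True, as intended
print(would_engage([0.85, 0.75, 0.72]))  # an ambulance idling nearby -> also True;
                                         # the proxy cannot tell the difference
```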
And context of course varies in different, you know obviously varies in 00:32:16.720 |
different environments but I think we can see different domains in this area 00:32:19.200 |
as well, which is significant - operating in the water or in the air is a different 00:32:25.440 |
environment, a less complex environment than if you're operating 00:32:28.640 |
in an urban area, so that's another factor that needs to be taken 00:32:32.400 |
into account in this. So I just wanted to talk a little bit about 00:32:38.400 |
some existing systems perhaps and think about them in the context of 00:33:12.560 |
this framework. One example is an anti-missile system - it's on a boat, but there's various anti-missile systems, I 00:33:15.920 |
mean it doesn't, the details don't matter in this context. These are systems 00:33:25.200 |
where a human is choosing when to turn it on and a human turns it off 00:33:28.560 |
again, but when it's operating, the radar is basically scanning 00:33:34.080 |
an area of sky up here and it's looking for fast-moving incoming 00:33:41.120 |
objects because basically it's designed to automatically shoot down 00:33:44.640 |
incoming missiles or rockets. So thinking about these characteristics, 00:33:51.680 |
you know what the outcome you want is, you want your boat not to get blown up 00:33:54.720 |
by an incoming missile and you want to shoot down any 00:33:58.480 |
incoming missiles. You know how the technology works 00:34:01.680 |
because you know that it's basically using radar to 00:34:04.400 |
see incoming fast-moving signatures and you have a pretty good idea of the 00:34:10.720 |
context because the skies are fairly uncluttered comparatively and 00:34:16.800 |
you'd like to think that any fast-moving incoming objects towards you here are hostile. 00:34:24.480 |
Not guaranteed to be the case, one of these systems shot down an Iranian 00:34:28.400 |
passenger airliner by accident, which is obviously a significant accident. 00:34:38.000 |
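To caricature the kind of engagement rule such a system embodies - with the human contribution reduced to switching it on and off - consider the following sketch. The threshold and track fields are invented, not any real system's logic.

```python
# Very rough sketch of the engagement rule of a ship-borne anti-missile system
# of the kind described: a human switches it on; while it is on, anything fast,
# closing and inside the scanned sector is engaged automatically. The threshold
# and track fields are invented.

SPEED_THRESHOLD_MS = 300.0  # illustrative closing-speed threshold

def should_engage(track: dict, enabled_by_human: bool) -> bool:
    if not enabled_by_human:  # the human's contribution: on or off
        return False
    return (track["closing"]
            and track["speed_ms"] > SPEED_THRESHOLD_MS
            and track["in_scanned_sector"])

incoming = {"closing": True, "speed_ms": 680.0, "in_scanned_sector": True}
print(should_engage(incoming, enabled_by_human=True))   # True - engaged automatically
print(should_engage(incoming, enabled_by_human=False))  # False - system is switched off
```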
Broadly, you have the fact that the data that you're using tracks pretty well to the target 00:34:41.120 |
objects, if not absolutely precisely. You've got a relatively controllable 00:34:45.760 |
environment in terms of the sky and you've got a 00:34:49.040 |
human being. This system isn't really mobile, I mean 00:34:51.920 |
it's kind of mobile in so far as the boat can move around but 00:35:01.680 |
relatively static. So I think looking at that you could 00:35:06.640 |
suggest that there's still a reasonable amount of human control over this system 00:35:10.480 |
because when we look at it in terms of a number 00:35:12.800 |
of the functions here, we can understand how that 00:35:17.120 |
system is being managed in a human controlled way and although 00:35:20.400 |
there's still a degree of autonomy or at least it's sort of 00:35:23.440 |
highly automated in the way that it actually identifies the targets and moves to engage them. 00:35:29.760 |
The basic framework is one in which I feel like, and I mean it's not for me to 00:35:33.920 |
say, but I feel like still a reasonable amount of human 00:35:37.360 |
control is being applied. Okay, another sort of system. 00:35:42.480 |
I've got to draw some tanks or something now. 00:35:48.880 |
Okay, well I'm just going to draw them like that because otherwise it'll take too long. 00:35:56.800 |
Okay, these are tanks, armored fighting vehicles. Ignore the 00:36:00.720 |
graphic design skills. There are sensor fused weapon systems where the attackers 00:36:09.600 |
can't necessarily see the location of the tanks, 00:36:13.840 |
but they know that there's some enemy tanks in this area 00:36:16.880 |
over here. And maybe they have some sense of what this 00:36:22.960 |
area is. They're not in the middle of a town, 00:36:25.040 |
they're out in the open. So they have an understanding of the context, but maybe not a precise one. 00:36:30.640 |
So the weapon system is going to fire multiple warheads 00:36:34.480 |
into this target area. The commander has decided upon the 00:36:38.400 |
target of the attack, this group of tanks here. 00:36:43.040 |
But as the warheads approach the target area, the warheads are going to 00:36:45.920 |
communicate amongst themselves and they're going to allocate targets between them. 00:36:54.640 |
And they're going to detect the heat shape of the vehicle's engines. 00:36:58.560 |
They're going to match that with some profile that says this is an 00:37:01.920 |
enemy armored fighting vehicle as far as we're concerned. 00:37:05.520 |
And then they're going to apply force downwards from the air 00:37:09.680 |
using a bit of explosive engineering, a shaped charge, which focuses a 00:37:13.760 |
blast of explosive, basically a jet of explosive, down onto the target. 00:37:23.520 |
Okay, so in this situation, well, has the weapon system chosen 00:37:30.880 |
the target? Well, it's a bit ambiguous because 00:37:34.000 |
as long as we conceptualize the group of tanks as the target, then 00:37:38.240 |
a human has chosen the target and the weapon system has essentially just 00:37:42.720 |
been efficient in its distribution of force to the target objects. 00:37:47.200 |
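As a very rough illustration of that "distribution of force", the sketch below imagines the commander fixing the target area and the warheads simply dividing the detected objects between themselves. It is a toy with invented names, not a description of any actual weapon.

```python
# Toy sketch of a sensor-fused attack: the human commander designates the target
# area; the warheads then divide the detected objects between themselves so each
# detection is struck at most once. Entirely illustrative.

def allocate_warheads(detections, num_warheads):
    """Assign each detected object to at most one warhead, in order."""
    assignments = {}
    for warhead_id in range(num_warheads):
        if warhead_id < len(detections):
            assignments[warhead_id] = detections[warhead_id]  # strike this object
        else:
            assignments[warhead_id] = None                    # no target: do nothing
    return assignments

# Commander's decision: "the group of tanks in this area is the target".
detected = ["vehicle_A", "vehicle_B", "vehicle_C"]
print(allocate_warheads(detected, num_warheads=4))
# {0: 'vehicle_A', 1: 'vehicle_B', 2: 'vehicle_C', 3: None}
```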
But if we see the individual vehicles as individual targets, maybe the machine has chosen the targets. 00:37:54.880 |
Potentially there are some advantages of autonomy in this context - the 00:38:03.040 |
ability to focus a jet of explosive force directly on the object that you're 00:38:07.920 |
looking to strike. So long as you've got the right object, this is much better than a wider 00:38:15.680 |
explosive force effect on the surrounding area, which would probably put a wider 00:38:19.120 |
population at risk. So there's a sort of set of considerations 00:38:24.640 |
here that I think are significant. So we have these systems, 00:38:27.920 |
you know, these systems exist today. You could ask questions about 00:38:32.800 |
whether those heat-shape profiles of those objects are 00:38:35.920 |
sufficiently tightly tied to enemy fighting vehicles or whatever, but 00:38:39.760 |
I think it can be conceptualized reasonably straightforwardly in those terms. The 00:38:47.840 |
problem with this stuff is in the potential for this 00:38:53.120 |
circle or this pattern just to get bigger and 00:38:56.560 |
bigger essentially, because it's all reasonably straightforward when you put 00:39:00.720 |
the tanks reasonably close together and you can 00:39:02.880 |
envisage having one sort of information about this area 00:39:07.680 |
which allows you to make the legal determinations that you need to make. 00:39:11.040 |
But once these tanks get spread out over a much larger area and you have a weapon 00:39:14.560 |
system that using basically the same sorts of 00:39:18.080 |
technological approach is able to cover a substantially wider 00:39:22.560 |
area of enemy terrain over a longer period of time, 00:39:25.680 |
then it suddenly gets much more difficult for the military commander 00:39:29.520 |
to have any really detailed information about the 00:39:32.560 |
context in which force will actually be applied. And for me this is I think the 00:39:36.960 |
main point of anxiety or point of concern that I 00:39:39.600 |
have in the way in which autonomy in weapon systems is developing. 00:39:46.080 |
Because under the legal framework a military commander has an obligation 00:39:50.320 |
to apply certain rules in an attack and an attack 00:39:54.560 |
is not precisely defined but it needs to have some I think some 00:40:00.320 |
spatial and conceptual boundaries to it that allow a sufficient granularity of 00:40:05.280 |
legal application. Because if you treat this as an attack 00:40:09.760 |
I think that's fine. As you expand it out, so you've got vehicles 00:40:13.040 |
across a whole wide area of a country, say across the country as a whole using 00:40:17.440 |
the same sort of extension logic as in some previous 00:40:21.280 |
arguments, once you've got vehicles across the whole 00:40:24.560 |
country and you're saying in this attack I'm going to just target the vehicles of 00:40:27.520 |
the enemy and you send out your warheads across 00:40:30.800 |
the whole location, now I don't think that's going to 00:40:33.200 |
happen in the immediate term but I'm just using that as a sort of conceptual 00:40:38.160 |
challenge. You start to have applications of actual physical force in all sorts of 00:40:42.160 |
locations where a commander really can't assess in any realistic way 00:40:46.480 |
what the actual effects of that are going to be and I 00:40:49.040 |
think at that point you can no longer say that there is sufficient 00:40:53.040 |
human control being applied. So this capacity of AI enabled systems 00:41:00.320 |
or AI driven systems to expand attacks across a much wider 00:41:04.960 |
geographical area and potentially over a longer period of 00:41:08.080 |
time I think is a significant challenge to how the legal framework is 00:41:12.160 |
is understood at present, not one that relies upon 00:41:15.520 |
determinations about whether this weapon system will apply the rules 00:41:19.280 |
properly or not but rather one which involves 00:41:24.240 |
the frequency and the proximity of human decision making to the application of force in space and 00:41:30.720 |
time. So that's a significant area of concern for me. 00:41:35.920 |
Final sort of set of concerns in these areas is around these issues about 00:41:40.240 |
encoding of targets. I think we could say pretty clearly that 00:41:44.480 |
weight is a very meagre basis for evaluating whether something is a 00:41:51.680 |
valid military target or not. There are significant problems with 00:41:56.320 |
suggesting that we could just take the weight of 00:41:58.640 |
something as being sufficient for us to decide 00:42:01.120 |
is this a target or not. In any of these processes 00:42:05.920 |
we have to decide that certain patterns of data represent a valid target. And 00:42:14.640 |
in a way I think what we sort of see in the proponents of greater and 00:42:17.840 |
greater autonomy in weapon systems is a sense that, well, as we expand the 00:42:21.840 |
scope of this attack we just need to have a more sophisticated 00:42:25.040 |
system that's undertaking the attack that can 00:42:27.680 |
take on more of the evaluation and more of this 00:42:32.000 |
process of basically mapping a coding of the world 00:42:35.520 |
into a set of decisions about the application of force. 00:42:39.120 |
But overall yeah I'm skeptical about the way in which 00:42:44.960 |
our social systems are likely to go about mapping 00:42:48.560 |
people's indicators of identities into some sort of fixed sense of military 00:42:53.920 |
objects or military targets. As a society over you know the last 00:42:59.040 |
hundred years there's been plenty of times where we've 00:43:01.760 |
applied certain labels to certain types of people, certain groups, which 00:43:10.880 |
apparently seemed reasonable to some significant section of society at the 00:43:17.760 |
time, but which we subsequently thought were highly problematic. And so I think we need to be 00:43:21.680 |
very wary of any sort of ideas of thinking that we can 00:43:25.520 |
encode in terms of humans particularly, very concrete indicators that certain 00:43:30.720 |
groups of people should be considered valid targets or not. I'm just going to say a 00:43:41.040 |
few words about future discussions in the CCW. The chair of the group of governmental experts - 00:43:45.840 |
that's the body that's going to discuss autonomous weapons 00:43:49.040 |
has asked states for the next meeting which will take place in April to come 00:43:52.160 |
prepared with ideas about the touch points of human 00:43:56.240 |
machine interaction. This is a sort of code for 00:44:01.280 |
what are the ways in which we can control technology. 00:44:04.400 |
So I suppose from our context as an organization 00:44:08.400 |
we'll be looking to get states to start to try and lay out this kind of 00:44:12.000 |
framework as being the basis for their perception of the ways in which the 00:44:15.600 |
entry points to control of technology could be thought about. 00:44:19.520 |
Again it's really a question of structuring the debate. We won't get into 00:44:23.440 |
detail across all of this but I think it's plausible that 00:44:26.800 |
this year and next we'll start to see the debate falling into some 00:44:31.040 |
adoption of this kind of framework which I think will give us some tools to work 00:44:34.960 |
with. I think at least if we start to get some 00:44:37.520 |
agreement from a significant body of states that 00:44:40.560 |
these are the sort of entry points we should be thinking about in terms of 00:44:43.360 |
control of technology. It will give us a bit of 00:44:49.680 |
a structure for an overarching obligation that there should be some sort of 00:44:52.000 |
meaningful or sufficient human control but also in a way of thinking about that 00:44:56.320 |
and interrogating that as new technologies develop in the future 00:44:59.680 |
that we can leverage in some ways. I feel reasonably confident about 00:45:05.040 |
that but it's a difficult political environment 00:45:07.680 |
and you know it's quite possible that nothing happens quickly - I don't see any rush amongst states to 00:45:11.920 |
move towards any legal controls in this area. Just as a 00:45:19.200 |
final thought, something more abstract in my thinking on this, and this is sort of 00:45:24.080 |
reflecting on maybe some dynamics of AI functioning. My anxiety 00:45:28.560 |
here is about the expansion of the concept of attacks and, 00:45:32.880 |
in conjunction with that, a sort of breaking down of the granularity of the 00:45:36.960 |
legal framework. I think this is another sort of 00:45:40.880 |
generalizing function again and it's a movement away 00:45:44.160 |
from more specific legal application by humans 00:45:46.880 |
to perhaps pushing people towards a more general 00:45:53.680 |
legal orientation. I feel like in the context of 00:45:57.040 |
conflict we should be pushing for a more specific and more 00:46:00.880 |
focused and more regular application of human judgments. That's not to say 00:46:09.520 |
humans are perfect in any way. There's lots of problems with humans 00:46:13.680 |
but at the same time I think that we should be very wary of thinking that 00:46:17.200 |
violence is something that can be somehow perfected and that we can encode 00:46:21.920 |
how to conduct violence in some machinery that will then 00:46:26.640 |
provide an adequate social product for society as a whole. 00:46:31.520 |
I guess there's a very final thought a bit linked to that is 00:46:34.960 |
there's some questions in my mind about how this all relates to bureaucracy in a 00:46:38.000 |
way and a sense that some of the functions that we're seeing 00:46:40.640 |
here and some of the AI functions that we see here are 00:46:43.600 |
in many ways related I think to bureaucracy to the 00:46:47.760 |
encoding and categorization of data in certain ways 00:46:50.960 |
and just a very fast management of that bureaucracy 00:46:55.600 |
which is really an extension of the bureaucracies that we already have 00:46:59.360 |
and I think extending that too far into the world of 00:47:03.280 |
violence and the application of force to people will 00:47:06.320 |
precipitate painful effects for us as a society, as it binds us to the 00:47:13.200 |
sort of rationales of that bureaucratic framework. 00:47:16.880 |
So there we go it's a bit of a broad brush sketch. 00:47:23.040 |
So this question is kind of a little bit multifaceted but 00:47:35.680 |
as humans evolve and adapt to increasingly autonomous weapons 00:47:39.360 |
the complexity and sophistication could increase with expansion of targets and 00:47:43.520 |
types and target area. Do you think there's like an 00:47:50.000 |
evolution and do you think that bureaucracy can keep up with 00:47:54.880 |
how fast the autonomy of these weapons could develop over time? 00:48:00.560 |
Yeah I'm not sure I caught all the first bits of the question but there's 00:48:03.600 |
definitely a challenge that the 00:48:06.720 |
types of legal discussions at the UN Convention on Conventional Weapons don't move quickly. 00:48:13.840 |
In fact they're incredibly slow and in that framework every state essentially 00:48:19.040 |
has a veto over everything so even over the agenda of the 00:48:23.040 |
next meeting if you know if the US wants to block the 00:48:26.400 |
agenda they can block the agenda let alone block 00:48:29.040 |
the outcome that might come if you could agree an agenda. 00:48:32.800 |
So every state has an ability to keep things moving very slowly there 00:48:36.800 |
and that's definitely a challenge in a context where 00:48:40.080 |
pace of technological development moves pretty quickly. 00:48:43.440 |
The only thing I would say which I forgot to mention before in terms of 00:48:46.720 |
thinking about the dynamics in this debate is that it's not straightforwardly 00:48:50.640 |
a situation where militaries really want loads more autonomy. 00:48:57.600 |
I mean military commanders also like control, and 00:49:00.640 |
troops on the ground like control, and they like 00:49:03.600 |
trust and confidence in the systems that they're operating around. 00:49:06.720 |
They don't want to get blown up by their own equipment and 00:49:10.240 |
military commanders like control and like to know what's happening so 00:49:13.920 |
there are some constraints within the military structures as well 00:49:17.760 |
to the overall sort of development here. I guess from our side in terms of this 00:49:22.080 |
sort of how to constrain against this expansion of attacks and the expansion 00:49:30.480 |
of the use of autonomous systems - in a way that's where I feel like 00:49:32.640 |
developing the idea that there's a principle of human control that needs to 00:49:39.040 |
be applied, and by sketching out its boundaries, we can use that and interrogate it as a social process 00:49:43.040 |
to try and keep constraint going back towards the specific because 00:49:48.160 |
in the end like I said earlier these legal structures are sort of social 00:49:52.000 |
processes as well and it's not very easy it's not something where you can just 00:49:55.440 |
straightforwardly draw a line and then no new technologies will come along that 00:49:59.200 |
challenge your expectations right. Rather we need to find the sort of camp 00:50:03.760 |
on the international legal political landscape. We need to 00:50:07.520 |
sketch out the parameters of that camp in legal terms 00:50:11.120 |
and then we need people to turn up at those meetings and continuously 00:50:15.120 |
complain about things and put pressure on things because that's the only way 00:50:18.400 |
over time where you maintain that sort of interrogation of 00:50:21.200 |
future technologies as they come out of the pipeline or 00:50:25.040 |
or whatever. So it's a sort of social function I think. Yeah 00:50:28.240 |
that answered my question - it's like the balance between how fast the science moves versus 00:50:34.640 |
how fast the bureaucracy can move to keep up. Yeah, I don't think it can just 00:50:37.600 |
be resolved I think it's an ongoing it's got to be an ongoing 00:50:40.080 |
social political process in a way. Thank you. 00:50:45.280 |
So given that this course is on AGI and we'll likely see a wide variety of 00:50:51.280 |
different kinds of autonomous systems in the future 00:50:54.240 |
can you give us perhaps some sort of extrapolation from this domain 00:50:58.160 |
to a broader set of potentially risky behaviors that 00:51:02.480 |
more autonomous and more intelligent systems would do and ways that 00:51:06.480 |
you know the creators of such systems such as potentially the folks sitting in 00:51:10.080 |
this room can change what they're doing to make things better? 00:51:17.600 |
Yeah, I think it is useful to think about in some ways these ideas 00:51:21.280 |
of from the present from where we are now how can 00:51:24.880 |
people involved in developing different technologies new technological 00:51:28.400 |
capacities just be thinking of the potential outcomes in this sort of 00:51:33.760 |
weaponization area and building in some orientation to their 00:51:36.880 |
work that thinks about that and thinks about what 00:51:40.000 |
the potential consequences of work can be. I mean I think in some ways 00:51:44.560 |
the risky outcomes type thinking I mean again it gets you 00:51:49.120 |
into hypothetical arguments, but the idea of two sides 00:51:53.840 |
both with substantial autonomous weapon system capabilities is probably 00:51:58.640 |
the sort of area where these ideas of accidental escalations 00:52:03.840 |
come to the fore that if you've got two adversarially orientated 00:52:09.840 |
states with substantial autonomous systems then 00:52:14.160 |
there's a potential for interactions to occur between those systems that 00:52:18.320 |
rapidly escalate a violent situation in a way that 00:52:25.120 |
doesn't allow you to curtail it and to stall it. And I think, I mean, I know in 00:52:30.960 |
other areas of you know of algorithm functioning in 00:52:35.040 |
society we've seen aspects of that right and it's sort of 00:52:37.680 |
probably in the financial sector and other such locations so 00:52:42.880 |
so I think yeah those areas those ideas of sort of rapidly escalating 00:52:46.800 |
cascading risks is a concern in that area but 00:52:52.080 |
again based on hypothetical thinking about you know stuff. 00:52:56.560 |
Last question. All right what do you think of this 00:53:00.480 |
criteria? So we have this tank example on the right. 00:53:04.720 |
Our simulations, our ability to simulate things is getting better and better. 00:53:09.840 |
What if we showed a simulation of what would happen to a person that has the 00:53:15.760 |
ability to hit the go button on it and if the simulation does not have 00:53:20.080 |
enough fidelity we consider that a no-go. We cannot do that or if the 00:53:26.960 |
simulation shows it does have enough fidelity and it shows 00:53:30.960 |
a bad outcome then maybe that would be a criteria in which 00:53:37.440 |
to judge this circumstance on the right and that could also let us 00:53:43.680 |
as that circle gets bigger and bigger it can let us kind of 00:53:48.560 |
put a cap on that by saying, hey, if we do not have 00:53:55.360 |
enough information to make this simulation even show the outcome, then it's a no-go. 00:54:05.360 |
In a way this is an issue of modeling, right, based on 00:54:08.480 |
contextual information that you have, so 00:54:13.120 |
maybe with technological developments you have a better capacity for modeling 00:54:16.960 |
specific situations. I suppose the challenge is how do you do that 00:54:22.480 |
in a sort of timely manner, especially in a conflict environment where 00:54:26.240 |
tempo is significant - can you put the data that you have into a 00:54:36.480 |
model quickly enough. I don't see any problem with the idea of using modeling 00:54:43.120 |
to, you know, give you readouts on what the likely effects are going to be. 00:54:47.920 |
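The questioner's go/no-go idea could be caricatured like this - a sketch only, with invented fidelity and harm measures; where those thresholds should sit is exactly the open political and legal question.

```python
# Sketch of the questioner's go/no-go criterion: refuse to authorise unless a
# simulation of the attack is both high-fidelity enough to be trusted and
# predicts an acceptable outcome. All measures and thresholds are invented.

def go_no_go(model_fidelity: float, predicted_civilian_harm: float,
             fidelity_required: float = 0.95, harm_limit: float = 0.0) -> str:
    if model_fidelity < fidelity_required:
        return "NO-GO: not enough information to model the effects"
    if predicted_civilian_harm > harm_limit:
        # Where this limit sits is the proportionality question itself.
        return "NO-GO: modelled outcome is unacceptable"
    return "GO: a human may still decide whether to authorise"

print(go_no_go(model_fidelity=0.70, predicted_civilian_harm=0.0))
print(go_no_go(model_fidelity=0.98, predicted_civilian_harm=3.0))
print(go_no_go(model_fidelity=0.98, predicted_civilian_harm=0.0))
```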
I guess the challenge is that what counts as an adequate effect 00:54:51.360 |
and where the boundary lines of sufficient information and insufficient 00:54:54.800 |
information fall they're kind of open questions as well 00:54:58.480 |
right and you know militaries tend to like to 00:55:01.440 |
leave some openness on those points as well, but 00:55:05.840 |
I think there can definitely be a role for modeling in 00:55:09.120 |
better understanding what effects are going to be. 00:55:12.640 |
Great let's give Richard a big hand. Thank you very much.