
MIT AGI: Autonomous Weapons Systems Policy (Richard Moyes)


Chapters

0:00 Introduction
47:28 Q&A

Transcript

Welcome back to 6.S099 Artificial General Intelligence. Today we have Richard Moyes. He's the founder and managing director of Article 36, a UK-based not-for-profit organization working to prevent the unintended, unnecessary and unacceptable harm caused by certain weapons, including autonomous weapons and nuclear weapons. He will talk with us today about autonomous weapons systems in the context of AI safety. This is an extremely important topic for engineers, humanitarians, legal minds, policy makers and everybody involved in paving the path for a safe, positive future for AI in our society, which I hope is what this course is about.

Richard flew all the way from the UK to visit us today in snowy Massachusetts, so please give him a warm welcome. Thanks very much, Lex, and thank you all for coming out. As Lex said, I work for a not-for-profit organization based in the UK. We specialize in thinking about policy and legal frameworks around weapon technologies particularly, and generally about how to establish more constraining policy and legal frameworks around weapons.

I guess I'm mainly going to talk today about these issues of to what extent we should enable machines to kill people, to make decisions to kill people. It's, I think, a conceptually very interesting topic, quite challenging in lots of ways. There's lots of unstable terminology and lots of sort of blurry boundaries.

My own background, as I said: we work on weapons policy issues, and I've worked on the development of two international legal treaties prohibiting certain types of weapons. I worked on the development of the 2008 Convention on Cluster Munitions, which prohibits cluster bombs, and our organization pioneered the idea of a treaty prohibition on nuclear weapons, which was agreed last year at the UN. We're part of the steering group of ICAN, the International Campaign to Abolish Nuclear Weapons, which won the Nobel Peace Prize last year, so that was a good year for us.

The issue of autonomous weapons, killer robots, is what I'm going to talk about today. We're also part of an NGO, a non-governmental organization coalition on this issue, called the Campaign to Stop Killer Robots. It's a good name; however, I think when we get into some of the details of the issue we'll find that the snappiness of the name in a way masks some of the complexity that lies underneath it.

But this is a live issue in international policy and legal discussions. At the United Nations for the last several years, three or four years now, there have been groups of governments coming together to discuss autonomous weapons and whether or not there should be some new legal instrument that tackles this issue.

So it's a live political issue that is being debated in policy and legal circles, and really my comments today are going to be speaking to that context. I guess I'm going to try and give a bit of a briefing about what the issues are in this international debate, how different actors are orientating to these issues, and some of the conceptual models that we use in that.

So I'm not really going to give you a particular sales pitch as to what you should think about this issue, though my own biases are probably going to be fairly evident during the process. Really I want to lay out a bit of a sense of how these questions are debated in the international political scene, in a way that's maybe useful for reflecting on wider questions of how AI technologies might be orientated to and approached by policy makers and the legal framework.

So in terms of the structure of my comments, I'm going to talk a bit about some of the pros and cons that are put forward around autonomous weapons, or movements towards greater autonomy in weapon systems. I'm going to talk a bit more about the political and legal framework within which these discussions are taking place, and then I'm going to try to lay out some of the conceptual models that we as an organization have developed and are using in relation to these issues, and perhaps to reflect a bit on where I see the political and legal conversation on this going at an international level.

And maybe just finally to try to draw out some more general thoughts that occur to me about what some of this says about thinking about AI functions in different social roles. But before getting into that sort of pros and cons type stuff, I just wanted to start by suggesting a bit of a conceptual timeline (this could be the present), because one of the things you find when you say to somebody, well, we work on this issue of autonomous weapons, is that they tend to orientate to it in two fairly distinct ways.

Some people will say, oh, you mean armed drones, and you know, we know what armed drones are, they're being used in the world today and that's kind of an issue here in the present, right? Armed drones. But other people, most of the media and certainly pretty much every media photo editor, think you're talking about the Terminator over here, yeah, maybe a bit of Skynet thrown in.

So this is a sort of advanced futuristic sci-fi orientation to the issues. My thinking about this, I come from a background of working on the impact of weapons in the present. I'm less concerned about this area. My thinking and my anxieties or my concerns around this issue don't come from this area.

I draw this line a bit wiggly here because I also don't want to suggest there's any kind of, you know, teleological certainty going on here; this is just an imaginary timeline. But I think it's important, just in terms of situating where I'm coming from in the debate, that I'm definitely not starting at that end.

And yet in the political discussion amongst governments and states, well you have people coming in at all sorts of different positions along here, imagining that autonomous weapons may exist at, you know, somewhere along this sort of spectrum. So I'm going to think more about stuff that's going on around here and how some of our conceptual models really build around some of this thinking, not so much actually armed drones but some other systems.

But my background before I started working on policy and law around weapons was setting up and managing landmine clearance operations overseas. And well they've been around for quite a long time, landmines. And I think it's interesting just to start with, just to reflect on the basic anti-personnel landmine. It's simple but it gives us I think some sort of useful entry points into thinking about what an autonomous weapon system might be in its most simple form.

If we think about a landmine, well, essentially we have a person, and there's an input into the landmine, and there's a function that goes on here: pressure is greater than x. A person treads on the landmine, a basic mechanical algorithm goes on, and you get an output, an explosion, that goes back against the person who trod on the landmine.

So it's a fairly simple system of a signal, a sensor, taking a signal from the outside world. The landmine is viewing the outside world through its sensor, it's a basic pressure plate, and according to a certain calculus here, you get an output and it's directed back at this person, and it's a loop.
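
To make that loop concrete, here is a minimal, purely illustrative sketch of the landmine reduced to its sensor-algorithm-output structure. The threshold value and the function names are invented for the example, not taken from any real device.

```python
# Illustrative only: the anti-personnel landmine reduced to its decision loop.
# The threshold and names are invented for this sketch, not taken from any real device.

PRESSURE_THRESHOLD_KG = 9.0  # hypothetical trigger weight

def landmine_step(pressure_kg: float) -> bool:
    """The entire 'algorithm': one sensor reading, one fixed rule, one output."""
    # Sensor input: pressure on the plate, used as a proxy for 'military target'.
    # Mechanical algorithm: is pressure greater than x?
    # Output: detonate (True) or do nothing (False). No human intervenes anywhere in the loop.
    return pressure_kg > PRESSURE_THRESHOLD_KG

if __name__ == "__main__":
    for reading in [0.4, 6.0, 75.0]:  # e.g. rain, an animal, a person treading on the plate
        print(f"pressure={reading:5.1f} kg -> detonate={landmine_step(reading)}")
```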

And that's one of the things that I think is fundamental essentially to understanding the idea of autonomous weapons, and in a way this is where the autonomy comes in. That there's no other person intervening in this process at any point, there's just a sort of straightforward relationship from the person or object that has initiated the system back into the effects that are being applied.

So in some ways we'll come back to this later and think about how some of the basic building blocks of this may be there in our thinking about other weapon systems and weapon technologies as they're developing. And maybe, thinking about landmines and thinking about these processes of technological change, we see a number of different dynamics at play in this sort of imaginary timeline.

Anti-personnel landmines of course are static, they just sit in the ground where you've left them, but we get more and more mobility perhaps as we go along this timeline. Certainly with drones and other systems that I'll talk about, you start to see more mobility in the weapon system.

Perhaps greater sophistication of sensors, I mean a basic pressure plate, just gauging weight, that's a very simple sensor structure for interrogating the world. We have much more sophisticated sensor systems in weapons now, so we have weapon systems now that are looking at radar signatures, they're looking at the heat shapes of objects and we'll come back and talk about that, but more sophistication of sensors and more sophistication of the computer algorithms that are basically interrogating those sensor inputs.

Perhaps a little bit as well of a movement in this sort of trajectory from physically very unified objects, I always sort of wrestle slightly whether this is the word I want, but it's a sort of self-contained entity, the landmine, whereas as we move in this direction maybe we see more dispersal of functions through different systems and I think that's another dynamic that when we think about the development of autonomy and weapon systems, it might not all live in one place physically moving around in one place, it can be an array of different systems functioning in different places.

And perhaps for people with a sort of AI type of mindset, maybe there's some sort of movement from more specific types of AI functioning here, use of different specific AI functions here, to something more general going in this direction. I'm wary of necessarily buying straightforward into that, but maybe you could see some movement in that sort of direction.

So I just want to put this to one side for now, but we'll come back to it and think about some systems that exist here that I think raise issues for us, and around which we could expand some models. But I just want us to have this in mind: we're definitely not, for me, thinking about humanoid robots walking around fighting like a soldier; rather we're thinking about developments and trajectories we can see coming out of established military systems now.

So I was going to talk now a bit about the political and the legal context. Obviously there's a lot of complexity in the world of politics and legal structures, so I don't want to get too bogged down in it, but I think in terms of understanding the basics of this debate on the international landscape, we have to have a bit of background in that area.

Essentially there are, I think, three main types of international law that we're concerned with here, and again we're concerned with international law rather than domestic legislation; any individual state can put in place whatever domestic legislation it wants. We're looking at the international legal landscape. Basically you have international human rights law, which applies in pretty much all circumstances and involves the right to life and the right to dignity and various other legal protections for people.

And then, particularly prominent in this debate, you have what's called international humanitarian law, which is the set of rules that govern behaviour during armed conflict and provide obligations on militaries engaged in armed conflict for how they have to conduct themselves. This isn't the legal framework that decides whether it's okay to have a war or not; this is the legal framework that, once you are in a war, sets out the obligations that you've got to follow.

And it basically includes rules that say you're not allowed to directly kill civilians, you've got to aim your military efforts at the forces of the enemy, at enemy combatants. You're not allowed to kill civilians directly or deliberately, but you are allowed to kill some civilians as long as you don't kill too many of them for the military advantage that you're trying to achieve.

So there's a sort of balancing act like this; this is called proportionality. Nobody ever really knows where the balance lies, but it's a principle of the law that whilst you can kill civilians you mustn't kill an excessive number of civilians. These are general rules; they apply pretty much to all states in armed conflict situations.

And then you have treaties on specific weapon types, and this is really where you have weapons that are considered to be particularly problematic in some way, and a group of states decides to develop and agree a treaty that applies specifically to those weapons.

I think it's important to recognize that these legal treaties are all developed and agreed by states, they're agreed by international governments talking together, negotiating what they think the law should say and they generally only bind on states if they choose to adopt that legal instrument. So I guess what I'm emphasizing there is a sense that these are sort of social products in a way, they're political products.

It isn't a sort of magical law that's come down from on high perfectly written to match the needs of humanity. It's a negotiated outcome developed by a complicated set of actors who may or may not agree with each other on all sorts of things. And what that means is there's quite a lot of wiggle room in these legal frameworks and quite a lot of uncertainty within them.

Lawyers of international humanitarian law will tell you that's not true, but that's because they're particularly keen on that legal framework; in reality there's a lot of fuzziness to what some of the legal provisions say. And it also means that the extent to which this law binds on people and bears on people also requires some social enactment.

There's not a sort of world police who can follow up on all of these legal frameworks. It requires a sort of social function from states and from other actors to keep articulating their sense of the importance of these legal rules and keep trying to put pressure on other actors to accord with them.

So the issue of autonomous weapons is in discussion at the United Nations under a framework called the UN Convention on Conventional Weapons. And this is a body that has the capacity to agree new protocols, new treaties essentially on specific weapon systems. And that means that diplomats from lots of countries, diplomats from the US, from the UK, from Russia and Brazil and China and other countries of the world will be sitting around in a conference room putting forward their perspectives on this issue and trying to find common ground or trying not to find common ground just depending on what sort of outcome they're working towards.

So, you know, the UN isn't a completely separate entity of its own. It's just the community of states in the world sitting together talking about things. The main focus of concern in those discussions, when it comes to autonomy, is not some sort of generalized autonomy, or autonomy in all of its forms that may be pertinent in the military space.

It's rather much more these questions of how the targets of an attack are selected, identified and decided upon, and how a decision to apply force to those targets is made. It's really these critical functions of weapon systems where the movement towards greater autonomy is considered a source of anxiety: essentially that we may see machines making decisions on what is a target for an attack and choosing when and where force is applied to that specific target.

So obviously in this context not everybody's like-minded. There are potential advantages to increasing autonomy in weapon systems and there are potential disadvantages and problems associated with it. And within this international discussion we see different perspectives laid out, and some states of course will be able to see both some advantages and some disadvantages.

It's not a black and white sort of discussion. In terms of the possible advantages of autonomy, one of the key ones ultimately is framed in terms of military advantage: we want to have more autonomy in weapon systems because it will maintain or give us military advantage over possible adversaries, because in the end military stuff is about winning wars, right, so you want to maintain military advantage.

And within military advantage there are a number of factors. Speed is one of them, speed of decision making: can computerized autonomous systems make decisions about where to apply force faster than a human would be capable of doing, and is that therefore advantageous for us? Speed also allows for coordination of numbers, so if you want to have swarms of systems, you know, swarms of small drones or some such, you probably need quite a lot of autonomy in decision making and communication between those systems, because again the level of complexity and the speed involved is greater than a human would be able to manually engineer.

So speed, both in terms of responding to external effects but also perhaps coordinating your own forces. Reach: the potential for autonomous systems to be able to operate in communication-denied environments. Where you're relying on an electronic communications link to, say, a current armed drone, in a future battle space where the enemy is denying communications in some way, you could use an autonomous system to still fulfil a mission without needing to rely on that communications infrastructure.

General force multiplication, there's a bit of a sense that there's going to be more and more teaming of machines with humans, so machines operating alongside humans in the battle space. And then there's importantly as it's presented at least, a sense that these are systems which could allow you to reduce the risk to your own forces.

That maybe if we can put some sort of autonomous robotic system at work in a specific environment then we don't need to put one of our own soldiers in that position and as a result we're less likely to have casualties coming home which of course politically is problematic for maintaining any sort of conflict posture.

Set against all that, there's a sense that, I think most fundamentally, there's perhaps a moral hazard that we come across at some point: some sort of boundary where seeing or conceptualizing a situation where machines are deciding who to kill in a certain context is just somehow wrong. That's not a very easy argument to articulate in a rationalized sense, but there's some sort of moral revulsion that perhaps comes about at this sense that machines are now deciding who should be killed in a particular environment.

There's a set of legal concerns. Can these systems be used in accordance with the existing legal obligations? I'm going to come on a little bit later to our orientation and the legal side which is also about how they may stretch the fabric of the law and the structure of the law as we see it.

There are some concerns for me in these sorts of legal arguments, in that we sometimes slip into a language of talking about machines making legal decisions. Will a machine be able to apply the rule of proportionality properly? There are dangers in that. I know what it means, but at the same time the law is addressed to humans, the law isn't addressed to machines, so it's humans who have the obligation to enact the legal obligation.

A machine may do a function that is sort of analogous to that legal decision but ultimately to my mind it's still a human who has to be making the legal determination based on some prediction of what that machine will do and I think this is a very dangerous slippage because even senior legal academics can slip into this mindset which is a little bit like handing over the legal framework to machines before you've even got on to arguing about what we should or shouldn't have.

So we need to be careful in that area, and it's a little bit to do, for me, with continuing to treat these technologies as machines rather than treating them as agents of some sort of equal or equivalent or similar moral standing to humans. And then we have a whole set of wider concerns that are raised.

So we've got moral anxieties, legal concerns and then a set of other concerns around risks that could be unpredictable. There's a sort of normal accidents theory, maybe you've come across that stuff; there's a bit of that language in the debate about complicated systems and not being able to avoid accidents in some respects.

Some anxieties about maybe this will reduce the barriers to engaging in military action, maybe being able to use autonomous weapons will make it easier to go to war and some anxieties about sort of international security and balance of power and arms races and the like. These are all significant concerns.

I don't tend to think much in this area, partly because they involve quite a lot of speculation about what may or may not be in the future and they're quite difficult to populate with sort of more grounded arguments I find. But that doesn't mean that they aren't significant in themselves but I find them less straightforward as an entry point.

So in all of these different issues there's lots of unstable terminology, lots of arguments coming in different directions and our job as an NGO in a way is we're trying to find ways of building a constructive conversation in this environment which can move towards states adopting a more constraining orientation to this movement towards autonomy.

And the main tool we've used to work towards that so far has been to perhaps stop focusing on the technology per se, and the idea of what is autonomy and how much autonomy is a problem, and to bring the focus back a bit onto what is the human element that we want to preserve in all of this. Because it seems like most of the anxieties that come from a sense of a problem with autonomous weapons are about some sort of absence of a human element that we want to preserve.

But unless we can in some way define what this human element is that we want to preserve, I'm not sure we can expect to define its absence very straightforwardly. So I kind of feel like we want to pull the discussion onto a focus on the human element. And the tool we've used for this so far has been basically a terminology about the need for meaningful human control. This is just a form of words that we've introduced into the debate and promoted in discussions with diplomats and with different actors, and we've built up the idea of this terminology as being a sort of tool. It's a bit like a meme, right: you create the terms and then you use that to structure the discussion in a productive way.

One of the reasons I like it is that it works partly because the word meaningful doesn't mean anything in particular, or at least it means whatever you might want it to mean, and I find an enjoyable sort of tension in that. But the term meaningful human control has been quite well picked up in the literature on this issue and in the diplomatic discourse, and it's helping to structure us towards what we think are the key questions.

The basic arguments for the idea of meaningful human control, from my perspective, are quite simple, and we've tended to use basically a sort of absurdist logic, if there is such a thing. First of all, recognize that no governments are in favor of an autonomous weapon system that has no human control whatsoever. Nobody is arguing that it would be a good idea for us to have some sort of autonomous weapon that just flies around the world deciding to kill people, where we don't know who it's going to kill or why and it doesn't have to report back to us. Nobody is in favor of this, right? This is obviously ridiculous, so there needs to be some form of human control, because we can rule out that sort of ridiculous extension of the argument. On the other hand, if you just have a person in a dark room with a red light that comes on every now and again, and they don't know anything else about what's going on, but they're the human who's controlling this autonomous weapon, and when the red light comes on they push the fire button to launch a rocket or something, we know that that isn't sufficient human control either. There's a person doing something, there's a person engaged in the process, but clearly it's just some sort of mechanistic pro forma human engagement. So between these two kind of ridiculous extremes, I think we get the idea that there's some sort of fuzzy line that must exist in there somewhere, and that everybody can in some way agree to the idea that such a line should exist. The question then for us is how to move the conversation in the international community towards a productive discussion of where the parameters of this line might be conceptualized.

So that's brought us on to thinking about a more substantive set of questions about what are the key elements of meaningful human control, and we've laid out some basic elements. So I might get rid of that fuzzy line, because it's a bit useless anyway, isn't it, and then I can put my key elements on. Well, one of them is predictable, reliable, transparent technology.

This is kind of before you get into exactly what the system's going to do, we want the technology itself to be sort of well made and it's you know it's going to basically do what it says it's going to do whatever that is and we want to be able to understand it to some extent.

Obviously this becomes a bit of a challenge in some of the AI-type functions, where you start to have machine learning issues, and these issues of transparency perhaps start to come up a little bit there. But these are kind of issues in the design and the development of systems.

Another thing we want to have, and I think this is a key one, is accurate information: accurate information on the intent of the commander or the outcome, what's the outcome we're trying to achieve, how does the technology work, and what's the context. The third one, and there are only four so it won't take long, is timely intervention, timely human intervention I should say.

It'd be good if we could turn it off at some point if it's going to be a very long acting system, it'd be good if we could turn it off maybe. And the fourth one is just a sort of framework of accountability. So we're thinking that basic elements of human control can be broken down into these areas.

Some of them are about the technology itself, how it's designed and made, how do you verify and validate that it's going to do what the manufacturers have said it's going to do, can you understand it. This one I think is the key one in terms of thinking about the issue and this is what I'm going to talk about a bit more now, but accurate information on what's the commander's intent, what do you want to achieve in the use of this system, what effects is it going to have.

I mean, this makes a big difference to how it works. These factors here involve what are the target profiles that it's going to use. Where is our landmine? For the landmine of course it was just pressure; pressure on the ground is being taken as a proxy for a military target, for a human who we're going to assume is a military target. But in these systems we're going to have different target profiles, different heat shapes, different patterns of data that the system is going to operate on the basis of.

What sort of actual weapon is it going to use to apply force, it makes a difference if it's going to just fire a bullet from a gun or if it's going to drop a 2,000 pound bomb, I mean that has a different effect and the way in which you envisage and sort of control for those effects is going to be different in those different cases.

And finally, very importantly, these issues of context: information on the context in which the system will operate. Context of course includes, are there going to be civilians present in the area? Can you assess whether there are going to be other objects in the area that may present a similar pattern to the proxy data? You know, if you're using the heat shape of a vehicle engine, it might be aimed at a tank, but if there's an ambulance in the same location, is the ambulance's vehicle engine heat shape sufficiently similar to the tank's to cause some confusion between the two? So, sets of information like that.
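
As a rough illustration of that proxy-data problem, here is a small sketch of target-profile matching. Everything in it is invented for the example (the feature vectors, the similarity measure, the threshold); it is not the logic of any real system, only a way of showing how an ambulance-like signature can cross the same threshold as a tank-like one.

```python
# Illustrative sketch of target-profile matching on proxy data (invented numbers,
# not any real system): a stored 'heat shape' template is compared against what the
# sensor actually sees, and anything similar enough is treated as the target.

import math

# Hypothetical engine-heat signatures as simple feature vectors
# (e.g. peak temperature, heat-patch length, heat-patch width).
TANK_TEMPLATE = (95.0, 7.0, 3.5)

def similarity(observed, template=TANK_TEMPLATE) -> float:
    """Crude inverse-distance similarity between an observed signature and the template."""
    return 1.0 / (1.0 + math.dist(observed, template))

def matches_target_profile(observed, threshold=0.2) -> bool:
    # The system never 'sees a tank'; it only sees whether proxy data
    # crosses a similarity threshold chosen at design time.
    return similarity(observed) >= threshold

if __name__ == "__main__":
    tank      = (93.0, 6.8, 3.4)   # enemy armoured vehicle
    ambulance = (92.0, 6.5, 3.2)   # civilian vehicle with a broadly similar engine
    for name, signature in [("tank", tank), ("ambulance", ambulance)]:
        # With these invented values, both cross the threshold: the confusion described above.
        print(name, matches_target_profile(signature), round(similarity(signature), 3))
```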

And context of course varies in different environments, but I think we can see different domains in this area as well, which is significant. Operating in the water or in the ocean, you've probably got a less cluttered environment, a less complex environment, than if you're operating in an urban area, so that's another factor that needs to be taken into account in this.

So I just wanted to talk a little bit about some existing systems and think about them in the context of this set of issues. One system that you may be aware of is on a boat. Okay, so something like the Phalanx anti-missile system; it's on a boat, but there are various anti-missile systems, and the details don't matter in this context.

These are systems that a human turns on, so a human is choosing when to turn it on, and a human turns it off again. But when it's operating, the radar is basically scanning an area of sky up here and it's looking for fast-moving incoming objects, because basically it's designed to automatically shoot down incoming missiles or rockets.

So thinking about these characteristics, you know what the outcome you want is, you want your boat not to get blown up by an incoming missile and you want to shoot down any incoming missiles. You know how the technology works because you know that it's basically using radar to see incoming fast-moving signatures and you have a pretty good idea of the context because the skies are fairly uncluttered comparatively and you'd like to think that any fast-moving incoming objects towards you here are probably going to be incoming missiles.

That's not guaranteed to be the case; one of these systems shot down an Iranian passenger airliner by accident, which is obviously a significant accident. But basically you have a sense that the data that you're using tracks pretty well to the target objects, if not absolutely precisely.
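
As a toy sketch of the kind of rule being described here (not the actual logic of Phalanx or any real close-in weapon system; the speeds, ranges and names are invented), the engagement decision might look something like this, with the human control living largely in the decision to switch the system on or off.

```python
# Toy sketch only: invented thresholds and field names, not any real system's logic.

from dataclasses import dataclass

@dataclass
class RadarTrack:
    speed_mps: float   # how fast the object is moving
    closing: bool      # is it heading towards the ship?
    range_m: float

FAST_THRESHOLD_MPS = 300.0  # hypothetical 'fast-moving' cut-off
MAX_ENGAGE_RANGE_M = 5000.0

def should_engage(track: RadarTrack, system_enabled: bool) -> bool:
    """Once the operator has enabled the system, engagement is automatic
    for anything matching the signature; the rule itself cannot tell a
    missile from any other object that happens to match."""
    if not system_enabled:
        return False
    return track.closing and track.speed_mps > FAST_THRESHOLD_MPS and track.range_m < MAX_ENGAGE_RANGE_M

if __name__ == "__main__":
    missile = RadarTrack(speed_mps=680.0, closing=True, range_m=3000.0)
    seabird = RadarTrack(speed_mps=15.0, closing=True, range_m=800.0)
    print(should_engage(missile, system_enabled=True))   # True
    print(should_engage(seabird, system_enabled=True))   # False
    print(should_engage(missile, system_enabled=False))  # False: the operator has switched it off
```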

You've got a relatively controllable environment in terms of the sky, and you've got a human being present. This system isn't really mobile; I mean, it's kind of mobile insofar as the boat can move around, but the person who's operating it is mobile along with it, so it's relatively static.

So I think looking at that you could suggest that there's still a reasonable amount of human control over this system, because when we look at it in terms of a number of the functions here, we can understand how that system is being managed in a human-controlled way, although there's still a degree of autonomy, or at least it's highly automated, in the way that it actually identifies the targets and moves the gun and shoots down the incoming object.

The basic framework is one in which I feel like, and I mean it's not for me to say, but I feel like still a reasonable amount of human control is being applied. Okay, another sort of system. I've got to draw some tanks or something now. Let's see what... Okay, well I'm just going to draw them like that because otherwise it'll take too long.

Okay, these are tanks, armored fighting vehicles. Ignore the graphic design skills. There are sensor-fused weapon systems where a commander at a significant distance can't necessarily see the location of the tanks, but they know that there are some enemy tanks in this area over here. And maybe they have some sense of what this area is.

They're not in the middle of a town, they're out in the open. So they have an understanding of the context but maybe not a detailed understanding of the context. So the weapon system is going to fire multiple warheads into this target area. The commander has decided upon the target of the attack, this group of tanks here.

But as the warheads approach the target area, the warheads are going to communicate amongst themselves and allocate themselves to the specific objects. And they're going to detect the heat shapes of the vehicles' engines. They're going to match that with some profile that says this is an enemy armored fighting vehicle as far as we're concerned.

And then they're going to apply force downwards from the air using a shaped charge, a bit of explosive engineering which focuses a blast, basically a jet of explosive, downwards onto the specific targets. Okay, so in this situation, has the weapon system chosen the target? Well, it's a bit ambiguous, because as long as we conceptualize the group of tanks as the target, then a human has chosen the target and the weapon system has essentially just been efficient in its distribution of force to the target objects.
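
To give a feel for the self-allocation step described a moment ago, here is a purely hypothetical sketch: a greedy one-warhead-per-signature assignment over invented 2D positions. It is not how any real sensor-fused weapon works, just an illustration of warheads dividing detected objects among themselves without a human choosing the individual objects.

```python
# Illustrative sketch of 'warheads allocate themselves to detected objects'.
# Purely hypothetical positions and logic, not any real sensor-fused weapon.

import math

def allocate_warheads(warhead_positions, detected_signatures):
    """Greedily assign each warhead to the nearest not-yet-claimed heat signature.

    Returns a list of (warhead_index, signature_index) pairs; warheads left over
    once every signature is claimed simply get no assignment.
    """
    unclaimed = set(range(len(detected_signatures)))
    assignments = []
    for w_idx, w_pos in enumerate(warhead_positions):
        if not unclaimed:
            break
        # Pick the closest remaining signature for this warhead.
        s_idx = min(unclaimed, key=lambda i: math.dist(w_pos, detected_signatures[i]))
        unclaimed.remove(s_idx)
        assignments.append((w_idx, s_idx))
    return assignments

if __name__ == "__main__":
    warheads = [(0, 100), (50, 100), (100, 100), (150, 100)]  # positions above the area
    tanks    = [(10, 0), (60, 5), (140, -3)]                  # detected heat signatures
    print(allocate_warheads(warheads, tanks))
    # [(0, 0), (1, 1), (2, 2)] -- the fourth warhead finds nothing left to claim
```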

But if we see the individual vehicles as individual targets, maybe the weapon system has chosen the targets. Potentially some advantages of autonomy in this situation from my perspective. This kind of ability to focus a jet of explosive force directly on the object that you're looking to strike, so long as you've got the right object, this is much better than setting off lots of artillery shells in this area which would have a much greater explosive force effect on the surrounding area, probably put a wider population at risk.

So there's a set of considerations here that I think is significant. We have these systems; these systems exist today. You could ask questions about whether those heat-shape profiles of those objects are sufficiently tightly tied to enemy fighting vehicles or whatever, but I think it can be conceptualized reasonably straightforwardly in those terms.

But the area where I start to have a problem with this stuff is in the potential for this circle or this pattern just to get bigger and bigger essentially, because it's all reasonably straightforward when you put the tanks reasonably close together and you can envisage having one sort of information about this area which allows you to make the legal determinations that you need to make.

But once these tanks get spread out over a much larger area and you have a weapon system that using basically the same sorts of technological approach is able to cover a substantially wider area of enemy terrain over a longer period of time, then it suddenly gets much more difficult for the military commander to have any really detailed information about the context in which force will actually be applied.

And for me this is, I think, the main point of anxiety or concern that I have in the way in which autonomy in weapon systems is likely to develop over the immediate future. Because under the legal framework a military commander has an obligation to apply certain rules in an attack, and an attack is not precisely defined, but it needs to have, I think, some spatial and conceptual boundaries to it that allow a sufficient granularity of legal application.

Because if you treat this as an attack, I think that's fine. But as you expand it out, so you've got vehicles across a whole wide area of a country, say across the country as a whole, using the same sort of extension logic as in some previous arguments: once you've got vehicles across the whole country and you're saying, in this attack I'm going to just target the vehicles of the enemy, and you send out your warheads across the whole location. Now I don't think that's going to happen in the immediate term, but I'm just using that as a sort of conceptual challenge.

You start to have applications of actual physical force in all sorts of locations where a commander really can't assess in any realistic way what the actual effects of that are going to be and I think at that point you can no longer say that there is sufficient human control being applied.

So this capacity of AI-enabled systems or AI-driven systems to expand attacks across a much wider geographical area, and potentially over a longer period of time, is I think a significant challenge to how the legal framework is understood at present. It's not a challenge that relies upon determinations about whether this weapon system will apply the rules properly or not, but rather one in which the frequency and the proximity of human decision making are progressively diluted over time.

So that's a significant area of concern for me. Final sort of set of concerns in these areas is around these issues about encoding of targets. I think we could say pretty clearly that weight is a very meagre basis for evaluating whether something is a valid military target or not.

There are significant problems with suggesting that we could just take the weight of something as being sufficient for us to decide whether this is a target or not. In any of these processes we have to decide that certain patterns of data represent military objects of some type. And in a way, I think what we see in the proponents of greater and greater autonomy in weapon systems is a sense that, well, as we expand the scope of this attack, we just need to have a more sophisticated system undertaking the attack, one that can take on more of the evaluation and more of this process of basically mapping a coding of the world into a set of decisions about the application of force.

But overall yeah I'm skeptical about the way in which our social systems are likely to go about mapping people's indicators of identities into some sort of fixed sense of military objects or military targets. As a society over you know the last hundred years there's been plenty of times where we've applied certain labels to certain types of people, certain groups of people, based on various indicators which apparently seemed reasonable to some significant section of society at the time but that ultimately I think we've subsequently thought were highly problematic.

And so I think we need to be very wary of any sort of ideas of thinking that we can encode in terms of humans particularly, very concrete indicators that certain groups of people should be considered valid targets or not. Just going to say a couple of final things about future discussions in the CCW.

The chair of the group of governmental experts that's the body that's going to discuss autonomous weapons has asked states for the next meeting which will take place in April to come prepared with ideas about the touch points of human machine interaction. This is a sort of code for what are the ways in which we can control technology.

So I suppose from our context as an organization we'll be looking to get states to start to try and lay out this kind of framework as being the basis for their perception of the ways in which the entry points to control of technology could be thought about. Again it's really a question of structuring the debate.

We won't get into detail across all of this but I think it's plausible that this year and next we'll start to see the debate falling into some adoption of this kind of framework which I think will give us some tools to work with. I think at least if we start to get some agreement from a significant body of states that these are the sort of entry points we should be thinking about in terms of control of technology.

It will give us a bit of leverage for a start towards suggesting an overarching obligation that there should be some sort of meaningful or sufficient human control but also in a way of thinking about that and interrogating that as new technologies develop in the future that we can leverage in some ways.

I feel reasonably confident about that, but it's a difficult political environment, and you know, I don't see any rush amongst states to move towards any legal controls in this area. Just a few very final thoughts, which may be a bit more abstract in my thinking on this.

I feel like, and this is reflecting on maybe some dynamics of AI functioning: my anxiety here is about the expansion of the concept of attacks and, in conjunction with that, a sort of breaking down of the granularity of the legal framework. I think this is another sort of generalizing function, and it's a movement away from more specific legal application by humans towards pushing people to a more general legal orientation.

I feel like in the context of conflict we should be pushing for a more specific and more focused and more regular application of human judgments and moral agency. That isn't to say that I think humans are perfect in any way. There's lots of problems with humans but at the same time I think that we should be very wary of thinking that violence is something that can be somehow perfected and that we can encode how to conduct violence in some machinery that will then provide an adequate social product for society as a whole.

I guess there's a very final thought, a bit linked to that. There are some questions in my mind about how this all relates to bureaucracy, and a sense that some of the functions that we're seeing here, and some of the AI functions that we see here, are in many ways related to bureaucracy, to the encoding and categorization of data in certain ways, and just a very fast management of that bureaucracy, which is really an extension of the bureaucracies that we already have. And I think extending that too far into the world of violence and the application of force to people will precipitate painful effects for us as a society, as it brings to the fore some of the underpinning rationales of that bureaucratic framework.

So there we go, it's a bit of a broad brush sketch. So this question is a little bit multifaceted, but as humans evolve and adapt to increasingly autonomous weapons, the complexity and sophistication could increase, with an expansion of targets and types and target area. Do you think there's a limit to which we can prepare against such an evolution, and do you think that bureaucracy can keep up with how fast the autonomy of these weapons could develop over time?

Yeah, I'm not sure I caught all the first bits of the question, but it's definitely a challenge that these types of legal discussions at the UN Convention on Conventional Weapons are not famous for going too quickly. In fact they're incredibly slow, and in that framework every state essentially has a veto over everything, even over the agenda of the next meeting: if the US wants to block the agenda, they can block the agenda, let alone block the outcome that might come if you could agree an agenda.

So every state has an ability to keep things moving very slowly there and that's definitely a challenge in a context where pace of technological development moves pretty quickly. The only thing I would say which I forgot to mention before in terms of thinking about the dynamics in this debate is that it's not straightforwardly a situation where militaries really want loads more autonomous weapons and other people don't.

I mean, military commanders also like control, and troops on the ground like control, and they like trust and confidence in the systems that they're operating around. They don't want to get blown up by their own equipment, and military commanders like to know what's happening, so there are some constraints within the military structures as well on the overall sort of development here.

I guess from our side, in terms of how to constrain against this expansion of attacks and the expansion of the sorts of objects that may be attacked by autonomous systems: in a way that's where I feel like developing the idea that there's a principle of human control that needs to be applied, even if it's a bit fuzzy in its boundaries, means we can use that and interrogate it as a social process to try and keep constraint going back towards the specific. Because in the end, like I said earlier, these legal structures are social processes as well, and it's not something where you can just straightforwardly draw a line and then no new technologies will come along that challenge your expectations, right?

Rather we need to find a sort of camp on the international legal and political landscape. We need to sketch out the parameters of that camp in legal terms, and then we need people to turn up at those meetings and continuously complain about things and put pressure on things, because that's the only way over time that you maintain that sort of interrogation of future technologies as they come out of the pipeline, or whatever.

So it's a sort of social function, I think. Yeah, that answered my question; it's like the balance between how fast the science is advancing in this field versus how fast the bureaucracy can move to keep up. Yeah, I don't think it can just be resolved; I think it's got to be an ongoing social and political process in a way.

Thank you. So given that this course is on AGI, and we'll likely see a wide variety of different kinds of autonomous systems in the future, can you give us perhaps some sort of extrapolation from this domain to a broader set of potentially risky behaviors that more autonomous and more intelligent systems would do, and ways that, you know, the creators of such systems, such as potentially the folks sitting in this room, can change what they're doing to make those safer?

Yeah, I mean I think it's useful to think in some ways about these ideas of, from the present, from where we are now, how can people involved in developing different technologies, new technological capacities, just be thinking of the potential outcomes in this sort of weaponization area, and building into their work some orientation that thinks about that and thinks about what the potential consequences of the work can be.

I mean, I think in some ways with the risky-outcomes type of thinking, and again it gets you into hypothetical arguments, the idea of two sides both with substantial autonomous weapon system capabilities is probably the sort of area where these ideas of accidental escalation come to the fore. If you've got two adversarially orientated states with substantial autonomous systems, then there's a potential for interactions to occur between those systems that rapidly escalate a violent situation in a way that greater capacity for human engagement would allow you to curtail and to stall. And I think, I mean, I know in other areas of algorithm functioning in society we've seen aspects of that, right, probably in the financial sector and other such locations. So I think those ideas of rapidly escalating, cascading risks are a concern in that area, but again that's based on hypothetical thinking.

Last question. All right, what do you think of this criterion? So we have this tank example on the right. Our simulations, our ability to simulate things, is getting better and better. What if we showed a simulation of what would happen to a person that has the ability to hit the go button on it, and if the simulation does not have enough fidelity, we consider that a no-go.

We cannot do that. Or if the simulation does have enough fidelity and it shows a bad outcome, then maybe that would be a criterion by which to judge this circumstance on the right. And as that circle gets bigger and bigger, it could also let us cap that by saying, hey, if we do not have enough information to make this simulation to even show the person, then it's a no-go.

Yeah, I think in a way this is an issue of modeling, right, based on the contextual information that you have, so maybe with technological developments you have a better capacity for modeling specific situations. I suppose the challenge is, in a timely manner, especially in a conflict environment where tempo is significant, can you put the data that you have into some sort of modeling system adequately. But I don't see any problem with the idea of using AI to model the outcomes of specific attacks and to, you know, give you readouts on what the likely effects are going to be.

I guess the challenge is what counts as an acceptable effect, and where the boundary lines of sufficient and insufficient information fall; those are kind of open questions as well, right, and you know militaries tend to like to leave some openness on those points. But I think there can definitely be a role for modeling in better understanding what the effects are going to be.
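
As a hedged sketch of the go/no-go criterion the questioner proposed, the gate might be framed something like the following. Every name, field and threshold here is invented for illustration; what counts as sufficient fidelity or an acceptable modelled outcome is exactly the open question just discussed.

```python
# Hedged sketch of the proposed go/no-go criterion: invented thresholds and field
# names, for illustration only. Deciding what 'enough fidelity' and 'acceptable
# outcome' mean is the open question, not something this code resolves.

from dataclasses import dataclass

@dataclass
class SimulationResult:
    fidelity: float               # 0..1 confidence that the model reflects the real context
    predicted_civilian_harm: int  # modelled civilian casualties
    military_objective_met: bool

FIDELITY_FLOOR = 0.8          # hypothetical: below this, "not enough information"
MAX_ACCEPTABLE_HARM = 0       # hypothetical: any predicted civilian harm is a no-go

def go_no_go(sim: SimulationResult) -> str:
    if sim.fidelity < FIDELITY_FLOOR:
        return "NO-GO: insufficient information to model this attack"
    if sim.predicted_civilian_harm > MAX_ACCEPTABLE_HARM or not sim.military_objective_met:
        return "NO-GO: modelled outcome unacceptable"
    return "GO: show the modelled outcome to the human commander for decision"

if __name__ == "__main__":
    print(go_no_go(SimulationResult(0.55, 0, True)))   # too little fidelity -> no-go
    print(go_no_go(SimulationResult(0.92, 3, True)))   # bad modelled outcome -> no-go
    print(go_no_go(SimulationResult(0.92, 0, True)))   # go, and a human still decides
```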

Great, let's give Richard a big hand. Thank you very much.