Killer robots could become the autonomous weapon systems of the future unless an international ban takes effect. Photo: Campaign to Stop Killer Robots

Report

Killer robots - the autonomous weapons of the future

Killer robots act without human control and violate international law. They increase the likelihood of war and the risk of discrimination against vulnerable groups. An international ban could counteract their development, but the process is complicated by the current tense geopolitical situation and by Sweden's passive attitude, according to several experts in the field.

Algorithms that determine who will live and who will die. Robots programmed to independently identify certain types of people and then attack at lightning speed, without anyone being able to stop them. Agnes Hellström, chair of Svenska Freds, waves her arms in the air against a backdrop of colorful birds as she explains that more people must push for an international ban on fully autonomous weapon systems, also called killer robots.

I ask Hellström and Gabriella Irsten, policy officer at IKFF, what the biggest risk with killer robots is. Both begin by asking whether they really have to pick just one, "because there are so many". Hellström explains that fully autonomous weapon systems would, for the first time, remove the human factor entirely, a factor that on several historical occasions has saved us from nuclear war. This safeguard disappears with autonomous weapons, since they are programmed to make their own decisions. Irsten raises two other issues. She believes we will see a new global arms race, similar to the periods after gunpowder and nuclear weapons were invented, and that, just as then, killer robots risk creating further uncertainty and exacerbating today's already tense geopolitical situation.

International law problems with fully autonomous weapon systems

Who is really responsible for a robot's actions? The programmer, the state that presses the start button, or someone else entirely? This is just one of the legal problems that Pål Wrange, professor of international law, discusses over the phone during his morning walk in gray April weather. He believes that all states must agree on the rules that apply to fully autonomous weapon systems, including that a person must be criminally liable if a war crime is committed. Wrange pauses, pondering, and wonders whether it can even be called a war crime when the act is carried out by a machine, since international law requires that someone had intent in order to be convicted.

Wrange also addresses two of the basic principles of international law: the principle of distinction and the principle of proportionality, both of which raise problems for the use of fully autonomous weapon systems. The former means that warring parties must be able to distinguish military targets from civilian ones and that attacks may only be directed at military targets. One argument for fully autonomous weapon systems is that robots in many cases make fewer mistakes than humans. However, there is a risk that the software is programmed incorrectly and cannot distinguish who is a civilian, which would undermine the principle of distinction. The principle of proportionality means that if civilian harm is unavoidable, it must be proportionate to the military advantage gained. When the military side is represented by machines, the question becomes how losses of technology can be weighed against human lives. In addition, it becomes "cheap" to use force when one's own soldiers do not lose their lives, which means that states may be tempted to use military means in situations where they otherwise would not have. The threshold for using force is thereby lowered, Hellström and Irsten say.

A further problem concerns which types of people the robot targets, a choice that lies in the hands of the programmer. Irsten believes that human prejudices will play a significant role, as they can be programmed in either consciously or unconsciously.

- How the programmer views people of different ethnicities, for example, plays a role. With deliberately added prejudices, killer robots could become an "easy" tool for conducting ethnic cleansing, Irsten explains with concern in her eyes.

In addition, the robot is self-learning, so we humans cannot predict what it will learn or how it will behave. What happens if these weapons end up in the "wrong hands"? What would be the consequences, for example, if terrorists or authoritarian leaders were given the opportunity to develop these weapons?

An international ban on killer robots

Hellström notes that 30 states and the UN Secretary-General have expressed support for an international ban on killer robots. The Campaign to Stop Killer Robots, a coalition of NGOs, is also working toward this goal. In Sweden, Svenska Freds and IKFF are collaborating with several other organizations to build a campaign, influence the government and public opinion, and get more people involved. So far, Sweden has not supported a ban, and Hellström finds Sweden's attitude rather passive.

- At present, fully autonomous weapons are not used in war, as far as we know, but it is a race against the clock. Technological development is extremely fast, while diplomatic processes and international ban conventions are very slow. Right now there is a trend of investing large sums in military rearmament, and it is becoming a kind of vicious spiral that increases uncertainty.

Hellström believes that Sweden, as a humanitarian superpower, could help create a stigma and an international norm against killer robots by supporting a ban. In the long run, such a signal also affects countries that want to develop killer robots. For example, neither the United States nor Russia has signed the ban on cluster munitions, yet development of and investment in these extremely destructive weapons have decreased significantly because of the international ban and the stigma surrounding them.

- You do not want to put your money into developing weapons that blow children to pieces. If we create strong enough resistance to a type of weapon, banks, companies and other actors will stop investing, and the value of developing it further decreases. If Sweden does not act, it will be much harder for others to follow. Instead, it becomes a game of chicken in which Sweden refuses to take a clear position, says Hellström.

There is no clear explanation of why Sweden has not supported a ban, but according to Irsten, the situation resembles how Sweden initially acted on the bans on nuclear weapons and cluster munitions. Back then, Sweden listened closely to its allies, such as the United States. Even if Sweden itself would not want to develop fully autonomous weapons, it has an arms industry. Great powers that import Swedish weapons, and on which the Swedish government considers itself dependent, can use the arms trade as leverage to keep Sweden from signing a ban. Such pressure has occurred in other matters and may be one factor standing in the way of Swedish support for a ban.

- A ban on or regulation of certain weapon systems means removing a strategic advantage from states with a technological lead. These countries, such as the United States, will therefore oppose such regulations, Wrange explains.

A professor of international law, the chair of the world's oldest peace organization, and a policy officer: all three agree that an international ban must come into force to prevent a world where human lives are placed in the hands of robots.

Is there something in the text that is incorrect? Contact us at opinion@fuf.se
