Killer robots are not inevitable
Global catastrophic risk mitigated: Weapons of Mass Destruction; The threat from new and emerging technology; Unknown risks
Risk multiplier managed: Conflict or political violence
Implementation timeframe: Short term
Killer robots would violate international humanitarian law and bring about untold suffering. Governments must urgently launch negotiations on a new international treaty to pre-emptively prohibit fully autonomous weapons and retain meaningful human control over the use of force.
Implementation strategy

Fully autonomous weapons, also known as lethal autonomous weapons systems or ‘killer robots’, are not a science fiction monster far off in our future, but a real and present threat to humanity. Fully autonomous weapons are systems that by their nature or use would lack meaningful human control over the critical functions of selecting and engaging targets. These weapons are not yet in use, but are actively being developed by a small number of military powers, including the United States, Russia, China, Israel, and the United Kingdom. Despite these investments, there is growing recognition by the international community that there must always be human control over the use of force.

Killer robots are not inevitable, but artificial intelligence (AI) and emerging technologies are developing faster than nations are moving to regulate them. These systems raise numerous ethical, moral, legal, technical, and security concerns, and pose a grave threat to our shared humanity and global security. The only way to adequately address the threats these weapons raise is to pre-emptively ban fully autonomous weapons through a new international treaty and national laws. A requirement to maintain meaningful human control over the use of force would protect humanity, preserve the dignity of human life, and facilitate compliance with international humanitarian and human rights law.

The Campaign to Stop Killer Robots urges all states to: 1) Immediately call for negotiations on a legally binding instrument to prohibit fully autonomous weapons and retain meaningful human control over the use of force; 2) Recommend the Convention on Conventional Weapons (CCW) begin negotiations on a new killer robots protocol and seek to adopt one by the end of 2020; 3) Commit not to develop or acquire fully autonomous weapons and establish national policy and laws towards this objective, in consultation with civil society and other national stakeholders.

As the moral, ethical, legal, operational, technical, proliferation, international stability and other concerns raised by killer robots multiply, the Campaign remains concerned at the lack of ambition and urgency shown by states in ongoing formal discussions on fully autonomous weapons. Meetings at the CCW to address issues related to lethal autonomous weapons systems have been ongoing since 2014, but have yet to produce any tangible steps to address the risks of killer robots. If the CCW cannot produce a credible outcome to address killer robots with the urgency they demand, the international community must pursue alternative pathways to negotiate a ban on killer robots and prevent future humanitarian harm.

Political will exists to realise this proposal

Killer robots are now regarded as one of four “looming threats that endanger 21st-century progress,” with widespread recognition that weapons systems lacking meaningful human control cross a critical threshold and must be prohibited. United Nations Secretary-General António Guterres views killer robots as “politically unacceptable and morally repugnant,” recently stating, “I have a simple and direct plea to all Member States: Ban lethal autonomous weapons now.”

States are working multilaterally to address killer robots. Foreign Ministers of France, Germany, and at least 18 other countries endorsed a political declaration on killer robots at an Alliance for Multilateralism event at the 74th UN General Assembly. Thirty states have explicitly called for a new treaty banning killer robots, and there is general agreement among more than 80 countries on the need to retain some form of human control over the use of force. In 2020 and 2021, states at the CCW will focus on “development of aspects of the normative and operational framework” on lethal autonomous weapons systems.

Outside the UN, the Organization for Security and Co-operation in Europe parliamentary assembly adopted a declaration urging members “to support international negotiations to ban lethal autonomous weapons,” and the European Parliament adopted a resolution calling for urgent negotiation of “an international ban on weapon systems that lack human control over the use of force.”

Action is also building in parliaments around the world. In Canada, the Foreign Minister has a mandate to advance international efforts to ban fully autonomous weapons. In Finland and Germany, government coalition agreements commit to support regulation of killer robots. In the Netherlands and Belgium, adopted parliamentary resolutions call for a legally binding instrument on autonomous weapons.

Political will is growing, but the pace of action must quicken if we hope to prevent the consequences of fully autonomous weapons.

What if political will does not exist yet

While the threats to humanity posed by fully autonomous weapons are now widely acknowledged, diplomacy to address the issue has moved slowly, and progress to tangible outcomes has been blocked by a small minority of states.

However, sometimes things must look worse before they look better. The almost universally adopted Mine Ban Treaty and the Convention on Cluster Munitions were both born of external diplomatic processes following the failure of CCW efforts to respond to the human suffering caused by antipersonnel landmines and cluster munitions. The 2017 Treaty on the Prohibition of Nuclear Weapons was negotiated in the UN General Assembly following decades of disagreement among nuclear states on how to disarm. Those ground-breaking and life-saving treaties were negotiated through partnerships between like-minded and concerned states, UN agencies, the International Committee of the Red Cross (ICRC), and dedicated coalitions of non-governmental organizations.

To continue to advance political will and commitment to launch negotiations on a new, pre-emptive treaty banning killer robots, the Campaign to Stop Killer Robots will focus on building and strengthening the positions of our supporters and allies.

We will continue to engage with concerned states to advocate for a ban in international fora.

We will support and engage with efforts to convene states and stakeholders to advance discussions on how to address fully autonomous weapons, such as those planned and proposed for 2020 by Brazil, Japan, Austria, and Germany.

And we will prioritize political outreach and national campaigning in an effort to bring new countries, civil society groups, endorsers, and allies on board the call to ban killer robots.

Mitigating weapons of mass destruction

When people hear ‘killer robots’, images of the Terminator or SkyNet might come to mind. But the real concerns over fully autonomous weapons are not about walking, talking, gun-slinging robot warriors, or super-intelligent robots that will take over the world. The real concern is that AI, algorithms, machine learning, and emerging technologies will replace meaningful human control in the process of selecting targets and ‘deciding’ to attack them.

Supporters of fully autonomous weapons argue that they would bring increased speed and efficiency to the battlefield, operate in environments where communications are degraded or insecure, and save lives by decreasing the need for human soldiers, increasing accuracy in targeting, and acting as a deterrent. They say killer robots wouldn’t get hungry or tired, wouldn’t feel pain, fear, or anger, and wouldn’t act in self-defence or make rash decisions in the heat of the moment. But similar arguments were made for other indiscriminate weapons in the past, like landmines, cluster munitions, and nuclear weapons. Those weapons claimed hundreds of thousands of innocent civilian victims before being banned by humanitarian disarmament treaties.

AI expert Stuart Russell has warned that based on current development fully autonomous weapons would become “scalable weapons of mass destruction” where thousands of small weapons could be deployed at once in widespread attacks. Such weapons could be programmed to attack specific people or groups of people that meet certain visual criteria, such as age, gender, race, ethnicity, religion, or other data markers. Other experts, like Noel Sharkey, have suggested that these systems would be easy and cheap to produce, and would likely proliferate quickly, including to non-state actors.

A treaty banning fully autonomous weapons would prevent the humanitarian disaster that would result from the use of killer robots at scale as a new weapon of mass destruction.

Mitigating the threat from new and emerging technology

Emerging technologies like artificial intelligence, image recognition, computer vision, and machine learning are developing rapidly, and there is urgent need for diplomacy to keep pace through the swift negotiation of a new international treaty preventing automation of the use of force.

Thousands of tech experts and hundreds of companies working in the tech sector have recognized the risk posed by the militarisation and weaponisation of emerging technologies. Worried that their work will be used in weapons that lack meaningful human control, they have raised concerns that fully autonomous weapons would be unpredictable and unreliable, vulnerable to hacking or spoofing, and unable to make the complex decisions required to adhere to the international humanitarian law (IHL) norms of distinction, proportionality, and necessity. In response, a pledge not to work on or help develop fully autonomous weapons has been signed by more than 4,500 tech workers, roboticists, and scientists, and a similar pledge has been made by over 200 companies in the tech sector.

A ban on fully autonomous weapons would not mean a ban on all military applications of emerging technologies. But it would address the application of emerging technologies in weapons systems to the extent that there is no longer meaningful human control over the selection and engagement of targets and use of force. A treaty of this kind would ensure that advancements in technology are used for the betterment of society, rather than to its detriment.

The negotiation of a new international treaty banning fully autonomous weapons is also crucial because these weapons fundamentally differ from other conventional weapons, and raise unique challenges for existing law. A specific treaty can address the distinctive concerns related to dual-use technology, clarify and strengthen existing IHL, and unambiguously address how existing law applies to these new weapons.

Mitigating unknown risks

Replacing troops with machines could make the decision to go to war easier and would shift the burden of conflict even further onto civilians. Fully autonomous weapons would lack the human judgment necessary to evaluate the proportionality of an attack, distinguish civilian from combatant, and abide by other core principles of the laws of war. It is very likely that they would make tragic mistakes with unanticipated consequences that could inflame tensions, contribute to regional fragilities, and exacerbate existing conflict.

Negotiating new international law to pre-emptively prohibit fully autonomous weapons and retain meaningful human control over the use of force would reinforce and strengthen existing international humanitarian law and international human rights law. It would prevent a destabilizing new AI arms race, and would deter states from rushing to use technology that is unreliable and unpredictable in response to the emergence of new or unknown risks.

Political declarations, principles, codes of conduct and other measures that fall short of new international law are completely insufficient to address the specific risks posed by killer robots. The only way global governance institutions can adequately address this threat is by supporting calls to launch negotiations on a legally binding instrument to prevent the development, production and use of fully autonomous weapons and retain meaningful human control over the use of force.

Reducing inclusivity and accountability in national and global governance

A treaty banning fully autonomous weapons would not reduce inclusivity and accountability in national and global governance. In fact, such a treaty would directly strengthen accountability by averting the legal accountability gap posed by autonomous weapons.

It’s unclear who, if anyone, could be held accountable for unlawful acts caused by a fully autonomous weapon operating without meaningful human control. Would responsibility be placed on the programmer, manufacturer, commander, or machine itself? This accountability gap would make it difficult to ensure justice for victims; condemn or verify criminal acts, war crimes, or crimes against humanity; or deter future illegal attacks.

The negotiation of a pre-emptive treaty banning fully autonomous weapons would negate this accountability gap, clarify and support existing law, and contribute to accountability in national and global governance by strengthening a rules-based international order.

Increasing poverty and inequality

A treaty banning fully autonomous weapons would not increase poverty and inequality.

Conflict is often a contributing factor to global poverty. Use of weapons can contaminate and reduce agricultural land, destroy cities and economies, increase health care costs, and have long-term effects on the physical, mental, and emotional health of victims and survivors. Killer robots, like other conventional weapons, would have these same destabilizing impacts.

Banning killer robots would support peace and security, in turn contributing to sustainable development. Forgoing killer robots would also free valuable expertise and resources within the tech sector to be dedicated to education, healthcare, and poverty reduction – tech for good.

A treaty would also directly prevent certain types of inequality that would likely increase through the use of autonomous weapons; namely, the higher likelihood that certain groups would be victims of killer robots either as a result of algorithmic bias or target profiling.

Reducing conflict and political violence

The development of fully autonomous weapons has been referred to as the third revolution in warfare, following gunpowder and nuclear weapons. If these systems are developed and used, they would drastically change the way wars are fought and viewed. And the use of fully autonomous weapons would not make anyone safer.

Allowing lines of code to make life-and-death decisions crosses a moral and ethical red line. It would lower the threshold for conflict, could spark a robotic arms race, would challenge international law, and could lead to the use of autonomous weapons in circumstances outside of armed conflict, such as policing or border control.

By agreeing that we must retain meaningful human control over the use of force and pre-emptively banning fully autonomous weapons, the international community would avert a new humanitarian disaster and prevent the devastating consequences that have followed the use of other indiscriminate and inhumane weapons.

A new treaty prohibiting killer robots would prevent a global arms race and slow the proliferation of weapons with increasing autonomy. A treaty would reinforce existing international humanitarian law and international human rights law. And a ban on killer robots would prevent a slide towards an ever-lower threshold for conflict and the conduct of ‘forever wars’.

Disarmament is a critical pillar in building peace and security. The negotiation of a new international treaty to retain meaningful human control over the use of force and prohibit fully autonomous weapons will reinforce the humanitarian disarmament movement and, in doing so, protect civilians and prevent further dehumanisation of conflict and political violence.

Additional information

The Campaign to Stop Killer Robots is an international coalition of more than 140 non-governmental organizations and academic partners in over 60 countries with the specific goal of pre-emptively banning the development, production, and use of fully autonomous weapons, also known as lethal autonomous weapons systems, or killer robots.

The Campaign to Stop Killer Robots calls on all policymakers to create a new international treaty to pre-emptively ban fully autonomous weapons and retain meaningful human control over the use of force; and promote development of national policies and laws to implement and enforce a ban. The Campaign views such a treaty as a humanitarian imperative, legal necessity, and moral obligation.

Since the launch of the Campaign to Stop Killer Robots, a broad range and growing number of countries, regional bodies, private companies, organizations, and individuals have endorsed the call to pre-emptively ban fully autonomous weapons, including:

- Thirty countries: Algeria, Argentina, Austria, Bolivia, Brazil, Chile, China (on use only), Colombia, Costa Rica, Cuba, Djibouti, Ecuador, Egypt, El Salvador, Ghana, Guatemala, Holy See, Iraq, Jordan, Mexico, Morocco, Namibia, Nicaragua, Pakistan, Panama, Peru, State of Palestine, Uganda, Venezuela, and Zimbabwe.

- United Nations Secretary-General António Guterres, who has urged states to prohibit weapons systems that could, by themselves, target and attack human beings, calling them “morally repugnant and politically unacceptable.”

- More than 4,500 artificial intelligence (AI) and robotics experts, who signed an open letter in 2015 affirming that they have “no interest in building AI weapons and do not want others to tarnish their field by doing so.” Since then, another 30,000 individuals have signed various open letters supporting a ban on lethal autonomous weapons, including more than 14 current and past presidents of AI and robotics organizations and professional associations such as the American Association for Artificial Intelligence (AAAI), the IEEE Robotics and Automation Society (IEEE-RAS), the International Joint Conferences on Artificial Intelligence (IJCAI), and the European Association for Artificial Intelligence (EurAI). Individual signatories include Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, the late Professor Stephen Hawking, and Google DeepMind chief executive Demis Hassabis, along with 21 of his lab’s engineers, developers, and research scientists. Notable female signatories include Professors Barbara Grosz of Harvard University, Martha E. Pollack of the University of Michigan, Carme Torras of the Robotics Institute at CSIC-UPC in Barcelona, and Francesca Rossi of Padova University, as well as IBM Watson design leader Kathryn McElroy.

- More than 270 scientists in 37 countries, who have warned that interactions by devices controlled by complex algorithms “could create unstable and unpredictable behavior … that could initiate or escalate conflicts, or cause unjustifiable harm to civilian populations.” Signatories include Professors Geoffrey Hinton of the University of Toronto, Alan Bundy of the University of Edinburgh, Bruno Siciliano of the University of Naples, and James Hendler of Rensselaer Polytechnic Institute, a former Chief Scientist of the Information Systems Office at the US Defense Advanced Research Projects Agency (DARPA).

- More than 26 Nobel Peace Laureates, who are concerned that “leaving the killing to machines might make going to war easier.” This includes: Jody Williams, Juan Manuel Santos, Leymah Gbowee, Tawakkol Karman, Shirin Ebadi, José Ramos-Horta, F.W. de Klerk, Rigoberta Menchú Tum, His Holiness the Dalai Lama, Oscar Arias Sánchez, Archbishop Desmond Tutu, Lech Walesa, Mairead Maguire and Betty Williams.

- More than 160 religious leaders and organizations of various denominations, who called killer robots “an affront to human dignity and to the sacredness of life.” The signatories include South Africa’s Archbishop Desmond Tutu, the Latin Patriarch of Jerusalem Fouad Twal, the Archbishop of Liverpool Rev. Malcolm McMahon, the Archbishop of Juba in South Sudan Rev. Daniel Deng Bul Yak, Religions for Peace Secretary General Dr. William Vendley, Maryknoll Office for Global Concerns Executive Director Gerry Lee, and the Bishop of the Evangelical Lutheran Church in Jordan and the Holy Land Reverend Dr. Munib Younan.

- The UN Special Rapporteur on extrajudicial, summary or arbitrary executions and the UN Special Rapporteur on the rights to freedom of peaceful assembly and of association in their joint report that drew attention to potential law enforcement use of weapons systems that would lack meaningful human control.

- Google, which issued a set of ethical principles in June 2018 committing not to “design or deploy” AI for use in weapons systems. Technology company Vision Labs, which in 2019 said it does “not develop or sell lethal autonomous weapons systems” and “explicitly prohibit[s] the use of Vision Labs technology for military applications” as “part of our contracts.” Boston Dynamics owner SoftBank said it will not develop killer robots as it does “not have a weapons business and have no intention to develop technologies that could be used for military purposes.” The CEO of Animal Dynamics, Alex Caccia, said that “under our company charter, and our relationship with Oxford University, we will not weaponize or provide ‘kinetic’ functionality to the products we make.”

Although formal UN talks on fully autonomous weapons launched in 2014 and killer robots have been raised continuously by states at both the Convention on Conventional Weapons and the UN General Assembly, little progress has been made towards a tangible outcome that can adequately address the threats posed by these weapons.

While states have agreed on a mandate to discuss development of a “normative and operational framework” on killer robots within the CCW in 2020 and 2021, there is no clarity on what that actually means. While some states push for a new international treaty, others insist that existing IHL is sufficient. Bound by consensus decision-making, the diplomatic process appears to be at a standstill.

Meanwhile, technology development continues to race forward, and a small number of states are investing heavily in the development of increasingly autonomous weapons. It is urgent that states pick up the pace and immediately launch negotiations on a treaty to ensure meaningful human control over the use of force, before it’s too late.

For more information, visit www.stopkillerrobots.org.
