Drones are reshaping warfare. As artificial intelligence advances, pressure is mounting to control "killer robots"

New York: Europe and the Arabs
A world in which algorithms determine the fate of soldiers and civilians alike is no longer hypothetical. Drones and artificial intelligence are reshaping warfare today, raising profound ethical questions about automation in conflict. Against this backdrop, international policymakers are racing to establish ground rules before the technology outpaces them. As a UN news bulletin put it: "Every day, we voluntarily hand over information about ourselves to machines. This happens when we accept an online cookie or use a search engine. We may not think about how our data will be sold and used before we click the 'agree' button to reach the page we want, or we may not realize that it will be used to target us as consumers and persuade us to buy something we didn't know we needed. But what if these machines were using the data to determine who should be targeted for killing as enemies?"
The United Nations and a group of non-governmental organizations are deeply concerned that this scenario is about to become a reality. They are therefore calling for international regulation of lethal autonomous weapons to avoid a near future in which machines dictate life-and-death choices.
A massive drone war unfolds in Ukraine
For several months, the Kherson region of Ukraine has been under sustained attack by Russian drones that primarily target non-combatants. More than 150 civilians have been killed and hundreds injured, according to official sources. An independent UN human rights investigation concluded that these attacks constitute crimes against humanity.
The Ukrainian military also relies heavily on drones, reportedly developing a "wall" of armed drones to protect vulnerable areas along the country's borders.
Drone warfare was once the preserve of the wealthiest countries, which alone could afford the most sophisticated and expensive systems, but Ukraine has demonstrated that, with a little creativity, even low-cost drones can be modified to deadly effect. As this shift spreads to conflicts around the world, it is reshaping the nature of modern warfare.
The Rise of "Digital Dehumanization"
Yet beyond the devastation this new form of warfare is already causing, the prospect of drones and other autonomous weapons deciding for themselves whom to attack heightens long-standing fears of "killer robots" raining death from the sky.
Izumi Nakamitsu, the UN High Representative for Disarmament Affairs, said the organization's position on these weapons is clear: "The Secretary-General has consistently said that machines with fully delegated power to make decisions about killing human beings are morally repugnant. They should not be allowed; rather, they should be prohibited under international law."
Human Rights Watch, an international human rights nongovernmental organization, said that the use of autonomous weapons would be the latest and most dangerous example of “digital dehumanization,” where artificial intelligence makes a range of decisions that affect human lives, such as policing, law enforcement, and border control.
Mary Wareham, arms director at Human Rights Watch, warned: "Many countries with vast resources are investing heavily in artificial intelligence and related technologies to develop land- and sea-based autonomous weapons systems. This is a reality." The United States is leading the charge, she said, but other major powers, including Russia, China, Israel, and South Korea, are also investing heavily in autonomous weapons systems.
Proponents of AI in warfare often cite the limits of human capability to justify its expansion: soldiers can miscalculate, act on impulse, need rest, and, of course, demand wages. They also claim that machines are getting better at identifying threats from behavioral and movement patterns, and some suggest that the next step is to allow autonomous systems to decide when to fire.
There are two main objections to handing battlefield decisions to machines: first, the technology is far from foolproof; second, the United Nations and many other organizations consider the use of lethal autonomous weapons unethical.
“It’s very easy for machines to misidentify human targets,” said Wareham of Human Rights Watch. “People with disabilities are particularly vulnerable because of the way they move. Their wheelchairs can be mistaken for weapons.” There are also concerns that facial recognition and other biometric technologies fail to correctly identify people with darker skin tones. Artificial intelligence remains flawed, and it carries the biases of the people who programmed it.
Turning to the ethical and moral objections, Nicole van Rooijen, executive director of Stop Killer Robots, a coalition campaigning for a new international law on autonomy in weapons systems, said these weapons would make it extremely difficult to assign responsibility for war crimes and other atrocities.
"Who is responsible? Is it the manufacturer? Or the person who programmed the algorithm?" she asked, adding that such unresolved questions raise a wide range of concerns and that the widespread use of these weapons would be a moral failure.
A ban by 2026?
The rapid advancement of the technology, along with evidence that AI-powered targeting systems are already in use on the battlefield, has added urgency to calls for international regulation.
In May, informal discussions were held at UN Headquarters, where Secretary-General António Guterres called on Member States to agree, by 2026, on a legally binding treaty to regulate and prohibit the use of these weapons.

Attempts to regulate and ban lethal autonomous weapons systems are not new. In fact, the UN held its first meeting of diplomats in 2014, at the Palais des Nations in Geneva, where French Ambassador Jean-Hugues Simon-Michel chaired four days of expert talks. At the time, he described lethal autonomous weapons systems as a “difficult emerging issue on the disarmament agenda,” even though none had been used in conflict. The prevailing view at the time was that preemptive rule-making was necessary should technology make lethal autonomous weapons systems a reality.
Eleven years later, the talks are ongoing, but there is still no consensus on a definition of autonomous weapons, let alone agreed controls on their use. Nevertheless, NGOs and the UN are optimistic that the international community is slowly moving toward a common understanding of the key issues.
“We are nowhere near negotiating a text,” said Ms. van Rooijen of Stop Killer Robots. However, she pointed to a "very promising and innovative text that could form the basis for negotiations if there is political will and courage" put forward by the current chair of the Convention on Certain Conventional Weapons (CCW), a UN humanitarian-law instrument aimed at prohibiting or restricting the use of certain types of weapons "that are deemed to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately."
Ms. Wareham of Human Rights Watch also viewed the May talks at the UN as an important step forward, saying: "At least 120 countries fully support the call to negotiate a new international law on autonomous weapons systems. We see significant interest and support, including from Nobel Peace Prize laureates, artificial intelligence experts, technologists, and religious leaders."
For her part, Ms. Nakamitsu said: "There is emerging agreement that autonomous weapons systems should be banned entirely. When it comes to war, someone has to be held accountable."
