Red Cross: deciding targets in conflict is difficult for AI – 7/5/2023 – Tech

Autonomous weapons, decision-making algorithms and mass generation of fake news are possible uses of artificial intelligence in armed conflicts. Some of these tactics are already used in warfare, but data on this is scarce, according to the International Committee of the Red Cross (ICRC).

“With the information we have, it is impossible to know whether casualties decrease with the adoption of artificial intelligence or if and by how much they increase,” says ICRC public policy adviser Neil Davison.

The think tank NSCAI (National Security Commission on Artificial Intelligence), linked to the US government, argues in its reports that more accurate autonomous weapons can reduce the number of unintended casualties.

These weapons decide on targets and the timing of firing based on sensors and techniques such as computer vision, an artificial intelligence method for identifying people and objects.
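To make the idea concrete, the sketch below shows the kind of classification step involved, using a publicly available pretrained detector from torchvision. The model choice, the confidence threshold and the file name frame.jpg are illustrative assumptions, not details of any system mentioned in this article.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

# COCO class indices for the categories the article mentions.
COCO_PERSON, COCO_CAR, COCO_TRUCK = 1, 3, 8

# Off-the-shelf detector pretrained on the COCO dataset.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# read_image returns a CHW uint8 tensor; the detector expects floats in [0, 1].
image = convert_image_dtype(read_image("frame.jpg"), torch.float)

with torch.no_grad():
    detections = model([image])[0]

for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.8 and label.item() in (COCO_PERSON, COCO_CAR, COCO_TRUCK):
        print(f"class {label.item()} detected with confidence {score:.2f}")
```

Real military systems are classified and far more complex, but target identification ultimately rests on a statistical classifier of this kind, with the error modes that implies.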

Davison argues that, in deciding who is protected, category and context matter more than identity. "A soldier is a target, but if he surrenders or is wounded, he ceases to be one. A civilian who poses a threat can come under attack."

Today, the prohibitions and limitations on autonomous weapons defined by a convention of government experts convened in 2013 are not binding on the participating countries. The ICRC is pressing for these rules to become mandatory.

Members of NATO (North Atlantic Treaty Organization) have defined ethical principles for the use of artificial intelligence, such as legality, accountability, prior testing and reduction of bias.

For Davison, a specific legal regime is still lacking, since the applications of artificial intelligence are so diverse. "It's very different to use facial recognition to locate missing or fallen soldiers than to determine who is in the line of fire. These models have known failure rates with darker-skinned people."
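Such disparities are typically quantified by disaggregating a model's error rate by demographic group. The sketch below shows that arithmetic on invented evaluation records; the groups, labels and numbers are made up for illustration, not drawn from any real audit.

```python
from collections import defaultdict

# Invented evaluation records: (skin-tone group, model was correct?)
records = [("lighter", True), ("lighter", True), ("lighter", False),
           ("darker", True), ("darker", False), ("darker", False)]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in records:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# Report the error rate per group rather than a single aggregate number.
for group in totals:
    print(f"{group}: error rate {errors[group] / totals[group]:.0%}")
```

Large-scale audits, such as NIST's face recognition vendor tests, apply the same disaggregation logic at far greater scale.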

Currently, around 90 countries, including Brazil, favor new rules for the use of artificial intelligence in warfare, according to the ICRC adviser. The military powers competing for technological dominance, however, are pushing for less regulation.

The risks of using these weapons without any constraints are serious, as ICRC president Mirjana Spoljaric said earlier this year: "Are we to tolerate a world in which life and death are reduced to mechanical calculations?"

SEE BELOW AI APPLICATIONS IN CONFLICTS AND THEIR RISKS

Autonomous weapons

Autonomous weapon systems select targets and apply force without human intervention, relying on sensors and software. Based on information from the environment, these systems recognize people and vehicles.

One example of an autonomous weapon already in use is the anti-missile system, which fires when it detects an incoming threat. Another, more rudimentary form is the land mine, which explodes on human contact.
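In software terms, such a system is a closed loop from sensing to action. The toy sketch below models only that structure; every name, reading and threshold is invented, and the human-confirmation step at the end is exactly what a fully autonomous weapon removes.

```python
def classify_track(radar_return: dict) -> float:
    """Return a made-up threat score in [0, 1] for a radar track."""
    speed_factor = min(radar_return["speed_m_s"] / 1000.0, 1.0)
    approaching = 1.0 if radar_return["closing"] else 0.0
    return 0.5 * speed_factor + 0.5 * approaching

THREAT_THRESHOLD = 0.8  # illustrative cutoff, not a real parameter

def decide(radar_return: dict, human_confirms) -> str:
    score = classify_track(radar_return)
    if score < THREAT_THRESHOLD:
        return "ignore"
    # A fully autonomous system would act here unconditionally; the push
    # for binding rules is largely about keeping this check in place.
    return "intercept" if human_confirms(radar_return, score) else "hold"

print(decide({"speed_m_s": 900, "closing": True},
             human_confirms=lambda track, score: True))
```

Keeping that final check in human hands is the core of the "meaningful human control" debate.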

The use of autonomous weapons cannot disregard the general rules of international humanitarian law, which allow the use of force only in self-defense against armed attack or under the authorization of the United Nations Security Council.

Fake news

Large language models like ChatGPT and similar generative tools can be used to create fake news in text, video and audio on an unprecedented scale. As the cliché goes, in war, truth is the first casualty.

The ICRC is concerned that, because of disinformation, civilians may be unjustly detained, subjected to ill-treatment or discrimination, or denied access to essential services.

Decision-making

Artificial intelligence systems can analyze large amounts of data in far less time than humans and produce summaries and indicators that speed up decisions. An example is Donovan, a chatbot from Scale AI sold as an assistant to the US military.
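As a toy illustration of condensing a large volume of text into something decision-ready, the sketch below implements a naive frequency-based extractive summarizer. It is purely illustrative and bears no relation to how Donovan actually works; the sample report is invented.

```python
import re
from collections import Counter

def summarize(text: str, max_sentences: int = 2) -> str:
    """Keep the sentences whose words occur most often in the whole text."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    # Score each sentence by the corpus frequency of the words it contains.
    ranked = sorted(
        sentences,
        key=lambda s: -sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
    )
    top = set(ranked[:max_sentences])
    return " ".join(s for s in sentences if s in top)

report = ("Convoy movement was observed near the river at dawn. "
          "Weather grounded most reconnaissance flights. "
          "The convoy near the river matches yesterday's river crossing report.")
print(summarize(report))
```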

The risk in this case, according to Davison, is that the actors involved in the conflict are relieved of their responsibility. "Does the order to fire come from the machine or from the officer?" he asks. He notes, however, that states remain responsible for the autonomous systems they adopt.

Espionage

Nations use drones and satellites equipped with computer vision capabilities to gather intelligence on other countries and armed groups. Such equipment may pose privacy risks and is subject to errors due to biases in AI programs.

Document translation

AIs can also translate documents between several languages, which can provide a competitive advantage in the contest for information.
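A minimal sketch of what that looks like with open-source tooling, assuming the Hugging Face transformers library and the Helsinki-NLP OPUS-MT English-to-French model, an illustrative choice that downloads on first use:

```python
from transformers import pipeline

# Load an open-source English-to-French translation model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

result = translator("Intercepted documents must be translated quickly.")
print(result[0]["translation_text"])
```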
