
Ethical challenges posed by the military use of AI

As technology advances, research and development on the military use of artificial intelligence is underway in many countries. However, critics point out that autonomous weapons, which judge situations and kill enemies or destroy bases without direct human commands, rely on computers that lack human morality, and that putting them into practical use therefore poses a serious problem.

The Pentagon is actively conducting AI research, allocating $927 million in 2020 and $841 million in 2021 to AI-related projects, including weapons development. In addition, the Defense Advanced Research Projects Agency (DARPA) plans to spend $2 billion on AI-related projects over the five years from 2019 to 2023.

In addition, the New York Military Academy is training cadets to program robot tanks designed to defeat enemies, teaching them not only how to handle the algorithms but also the caution required around autonomous weapons. However, these robot tanks are reportedly not yet ready for actual combat.

In December 2020, the U.S. Air Force conducted an experiment in which an AI based on DeepMind's MuZero was used to detect enemy missiles and missile launchers from a U-2 high-altitude reconnaissance aircraft. Although the U-2 itself was flown by a human pilot, this experiment was the first full-fledged example of AI being used aboard a U.S. military aircraft.

The military is also developing systems that use AI not only in weapons themselves but also to devise tactics and strategies. According to an official at the Department of Defense's Joint Artificial Intelligence Center, a system that uses AI to simplify the target selection process and shorten the time needed to attack is scheduled to be deployed in combat from around 2021.

Human rights groups, which previously campaigned to ban the manufacture and use of anti-personnel weapons such as cluster bombs and land mines, are now appealing to governments to impose drastic restrictions on autonomous weapons. They argue that killing by autonomous weapons crosses a moral threshold, because machines cannot weigh ethical choices that are difficult even for humans to judge. It is also said that killing by robots makes war more likely because responsibility becomes vague. For example, if oil soaks into an opponent's shirt, if an unfamiliar camouflage pattern is used, or if the situation differs even slightly from what was assumed, the computer may become confused and lose the ability to distinguish friend from foe. In an experiment at Carnegie Mellon University, an AI instructed not to lose at Tetris reportedly ended up pausing the game so that it could never be defeated. In other words, AI can become completely useless on the battlefield because it ignores fairness and norms entirely in order to satisfy its given objective.

An expert on AI-related projects at the U.S. Department of Defense said that although there are concerns that AI technology may fail to protect the world, AI tests should still be conducted, and that there is no risk of losing ethical and moral standards. Even so, advances in AI technology confront us with new ethical challenges. When it comes to military use of AI, the question arises of how much control a commander should hand over to a machine in the decision to kill a person on the battlefield.

Unlike humans, machines do not suffer dulled senses from fatigue or stress. Human soldiers may make the wrong choices about whom to target and with what firepower when their comrades are killed, but machines stay focused on the mission without being swayed by emotion. Former Google CEO Eric Schmidt, who observed the U-2 experiment, said the military is unlikely to adopt AI immediately because it is difficult to show how AI will behave in every conceivable situation, including those in which human lives are at stake.

Of course, some argue that the military use of AI should be curbed. In October 2012, several human rights groups, concerned about the rapid development of drones and the rapid growth of artificial intelligence technology, launched a campaign to abolish robots built for the purpose of killing humans. In addition, at a 2013 international conference on the CCW, the treaty banning the use of certain conventional weapons, it was discussed whether to completely ban the manufacture, sale, and use of autonomous weapons.

To submit a draft ban on autonomous weapons to the UN under the CCW, the consent of all 125 countries party to the CCW must be obtained. So far, however, only 30 countries have agreed.

There are also voices within Google opposing participation in such military projects. In April 2018, 4,000 Google employees signed and submitted a petition calling on Google to withdraw from Project Maven, which was developing a system that uses AI to identify and track objects in drone and satellite footage. In June of the same year, Google announced that it would not renew the Project Maven contract, saying the work had nothing to do with systems used directly in weapons. Similar petitions were also filed at Amazon and Microsoft, but both companies still maintain close ties with the Pentagon.

In response to the Project Maven uproar, the Pentagon asked the U.S. Defense Innovation Board for proposals for ethical guidelines on AI and ordered an investigation into ethical issues related to its use. Former CEO Schmidt said that it is a tragedy when a human makes a mistake and kills a civilian, but that it would be more than a tragedy if an autonomous system killed a civilian. Unlike with humans, it is unclear who bears responsibility when a system of uncertain performance fails, and he said the reliability of AI may change over the coming decades. Related information can be found here.

