Drones have been used in warfare for years, but they have typically been remotely operated by humans. Now, by combining image-recognition and autonomous-navigation software, autonomous drones can be mass-produced inexpensively. What kind of crisis will weaponized autonomous drones and AI-equipped weapons bring to the battlefield?
In March 2020, Libyan government forces, backed by Turkish reinforcements, used autonomous drones to attack opposing armed forces. In May 2021, Israel conducted the world's first swarm flight of fully autonomous drones and used it for strategic intelligence gathering. In addition to attack-capable drones, Turkey and Israel are deploying self-destructing drones that specialize in collecting information.
Human rights activists have long demanded a complete ban on lethal autonomous weapons, known as killer robots, and as of 2021 such a ban is supported by 30 countries. However, the world's leading military powers argue that a full ban is unnecessary. The United States says the concerns are exaggerated and that humans can effectively control autonomous weapons, while the Russian government says that true AI weapons cannot be banned because they do not yet exist.
The reality, however, is that weapons that make their own decisions are already killing people, as the civil war in Libya shows. Because these drones have both remote-controlled and autonomous modes, it is impossible for outside observers to know whether a human made the final decision to strike each target.
Autonomous drones are becoming a force on the battlefield, and governments now run dozens of projects to develop them. Countries such as the United States, China, and Russia take part in discussions of treaties limiting autonomous drones, yet continue to develop them.
Over the past decade, the availability of computing power that can rapidly process large data sets has enabled researchers to make great strides in designing AI systems. These advances now allow AI to write poems, translate languages accurately, and help develop new drugs.
However, the debate about the dangers of relying on computers for decision-making is heating up. Companies such as Google, Amazon, Apple, and Tesla are spending huge sums on AI development, while efforts are under way to ensure that AI systems are unbiased and can explain their decisions.
In some countries, AI technologies such as image recognition are already being used in autonomous weapons. In 2010, Samsung's weapons division developed a sentry gun that uses image recognition to detect and fire on humans, and a similar system has been deployed along the Israel-Gaza border. The governments of both South Korea and Israel maintain that although the system can operate automatically, it is humans who control it.
One expert says that the technology not only makes weapons smarter but also makes it easier for humans to control them remotely: a missile can be fired and then aborted if the system detects that it is likely to hit a civilian.
However, an official from an international peace organization counters that the speed required on the battlefield will inevitably leave more decisions to machines, and that war is approaching a pace that humans can no longer control.