Autonomous weapons are more dangerous in war than artificial intelligence itself
There is no doubt that artificial intelligence will change the nature of war, just as it is changing the nature of almost everything else. But will that change be sudden and destructive, or a gradual evolution toward better forms? For humanity's sake, let us hope for gradual evolution.

Technological progress has always changed the nature of war. So it was with the rise of horse-drawn chariots, the stirrup, gunpowder, nuclear bombs and, in our own time, drones, as the Ukrainians and Russians demonstrate to us daily. The example I prefer (because it is so simple) is the nineteenth-century Battle of Königgrätz, in which the Prussians defeated the Austrians, ensuring that Germany would be unified around Berlin rather than Vienna. The Prussian victory was owed in large part to their breech-loading rifles, which soldiers could reload quickly while lying on the ground, whereas the Austrians used muzzle-loading guns, which they reloaded far more slowly.

If artificial intelligence were that kind of technology, the United States or China might hope that winning the race for a lead in the field would bring a period of military supremacy, however brief. But as a military technology, artificial intelligence looks less like the breech-loading rifle and more like the telegraph, the internet or even electricity: not a weapon so much as an underlying infrastructure that will gradually change everything, including battle itself.

This is already happening with satellites and reconnaissance drones. American surveillance and spy satellites now capture more information than human analysts can process quickly enough to give the Ukrainians useful advice about Russian troop movements in time for military action. So artificial intelligence takes over the task.
In this, soldiers resemble doctors who use artificial intelligence to guide them through vast amounts of X-ray data. The next step is to add artificial intelligence to robots of various kinds, which could, for example, serve as automated wingmen. A human would still fly the plane, but would be surrounded by a squadron of drones that use sensors and artificial intelligence to spot and, on the pilot's orders, destroy enemy air defenses or ground forces. The robots will not mind if they are destroyed in the process, if that is their fate. Used this way, artificial intelligence can save lives as well as costs, freeing people to concentrate on the larger context of the mission.

The crucial detail is that these robots must require a human's consent before they kill. We should never trust any algorithm to have adequate contextual awareness to judge, for example, whether a group of people in civilian clothes are probably civilians or probably combatants; even humans are notoriously bad at telling the difference. Nor should we allow artificial intelligence to decide whether the human losses required for a tactical success are proportionate to the strategic goal.

The existential question, therefore, does not concern artificial intelligence as such. Paul Scharre of the Center for a New American Security, who has written books on the subject, believes it concerns the degree of autonomy we grant our machines. In other words: will the algorithms assist soldiers, officers and commanders, or replace them?

Nor is this problem entirely new. Long before artificial intelligence appeared, during the Cold War, Moscow built "dead hand" systems, including one called "Perimeter."
Perimeter is a fully automated system for launching nuclear strikes after the Kremlin's human leadership has been killed in an attack. Its purpose, obviously, is to persuade the enemy that a first strike, even a successful one, would still bring certain mutual destruction. But one wonders what would happen if the Perimeter system, which the Russians are reportedly updating, malfunctioned and launched its missiles by accident.

The problem, then, is how much independence from humans we give machines in making decisions. With nuclear weapons, the risks are existential. But they remain frighteningly high with all the other lethal autonomous weapons systems, or LAWS, as they are officially called, better known as killer robots. An algorithm might well make good decisions and minimize death; some air defense systems already run on artificial intelligence because it is faster and better than people.

The United States, as the most technologically advanced country, has led by example in some respects but not in others; an outright ban, in any case, is not its aim. Yet here, as so often in international law, the United States could play a constructive role. The UN Convention on Certain Conventional Weapons, which seeks to restrict insidious killing technologies such as land mines, is trying to impose a blanket ban on autonomous killer robots. But the United States is among the countries opposing such a ban. It should instead support the ban and encourage China, and then other countries, to follow its example.

Even if the world did ban lethal autonomous weapons systems, artificial intelligence would of course keep producing new risks. It will accelerate military decision-making to the point where people no longer have enough time to assess the situation and, under severe pressure, will either make deadly mistakes or surrender their judgment to the algorithm.
This is called "automation bias": the psychological phenomenon, already common, in which people surrender judgment to their machines, for example letting a car's navigation system steer them until they drive into a lake or off a cliff.

The risks have grown with every military innovation since humans first tied stones to shafts to make spears. So far we have, for the most part, learned to manage the new dangers. Provided that we humans, and not the robots, retain the final and the existential decisions, there is still hope that we will evolve alongside artificial intelligence rather than perish by it.