Google retreats from its pledge against military use of artificial intelligence

Google’s era under the slogan “Don’t Be Evil” ended in 2018, when the motto gave way to Alphabet’s “Do the right thing.” Now the leadership of its parent company, Alphabet, has walked back one of the company’s most important ethical stances: the one governing military uses of its artificial intelligence.

This month the company deleted its pledge not to use AI for weapons or surveillance, a promise it made in 2018. That commitment no longer appears among its “responsible AI” principles, and the head of its AI division, Demis Hassabis, published a blog post presenting the change as inevitable. Hassabis wrote that AI is becoming “as pervasive as mobile phones” and has “evolved rapidly.” Yet the notion that ethical principles must “evolve” along with the market is mistaken. Yes, we live in an increasingly complex geopolitical landscape, as Hassabis describes it, but abandoning ethical commitments on warfare could have consequences that spin out of control.

Bring AI to the battlefield and you can end up with automated systems responding to one another at machine speed, leaving no time for diplomacy. War could become more lethal, with conflicts escalating before humans have time to intervene. The notion of “clean” automated combat could tempt more military leaders into action, even though AI systems make plenty of mistakes and could cause civilian casualties as well. Automated decision-making is the real problem here. Unlike earlier technologies that made armies more efficient or more powerful, AI systems can fundamentally change who (or what) makes the decision to end a human life. It is all the more troubling that Hassabis, of all people, put his name to Google’s justification.
Hassabis struck a very different tone in 2018, when the company established its AI principles and he joined more than 2,400 people in the AI field in putting their names to a pledge not to work on autonomous weapons. Less than a decade later, that promise has counted for little. William Fitzgerald, a former member of Google’s policy team and co-founder of the Worker Agency, a political and communications firm, says Google had been under pressure for years to take on military contracts. He recalled the 2017 visit of former US Deputy Defense Secretary Patrick Shanahan to the Sunnyvale, California, headquarters of Google’s cloud business, at a time when staff in that unit were building infrastructure for work on classified military projects. Hopes of winning contracts ran high.

Fitzgerald helped stop that, co-organizing employee protests over Project Maven, a deal Google had struck with the Department of Defense to develop AI for analyzing drone footage, which employees feared could lead to automated targeting. In 2018 some 4,000 employees signed a petition stating that “Google should not be in the business of war,” and about a dozen resigned. Google eventually relented and did not renew the contract.

Looking back, Fitzgerald sees it as little more than a blip. “It was an anomaly in Silicon Valley’s trajectory,” he said. Since then, for example, OpenAI has entered a partnership with the defense contractor Anduril Industries and pitched its products to the US military, and Anthropic has teamed up with Palantir to offer its Claude service to defense customers. Google itself dissolved its controversial AI ethics council in 2019 and, a year later, ousted two of its most prominent AI ethics leaders. The company has drifted so far from its original goals that it can no longer see them.
Like its Silicon Valley peers, Google should not be left to set its own regulatory rules. With any luck, though, Google’s reversal will put more pressure on the government leaders meeting this week to craft legally binding regulations for the development of military AI, before race dynamics and political pressure make them harder to enact.

The rules can be simple: a mandatory requirement of human oversight for all military AI systems; a ban on fully autonomous weapons that can select targets without human approval; and assurance that such AI systems can be audited.

One reasonable policy proposal comes from the Future of Life Institute, a think tank once funded by Elon Musk and currently steered by MIT physicist Max Tegmark. It calls for a tiered system in which national authorities treat military AI systems the way they treat nuclear facilities, demanding unambiguous evidence of their safety margins. The governments convening in Paris should also consider establishing an international body to enforce these safety standards, along the lines of the International Atomic Energy Agency’s oversight of nuclear technology, with the power to sanction companies (and countries) that violate them.

Google’s about-face is a warning that even the strongest corporate values can crumble under the pressure of an overheated market and a political administration to which one simply does not say no.
The era of self-regulation under “Don’t Be Evil” is over, but there is still a chance to set binding rules that head off AI’s darkest risks. Automated warfare is surely one of them.