Google's 'Gemini' stumble served the interests of the world

Google's investors have every right to be angry about the extraordinary failure of the company's AI system "Gemini". But for the rest of us (and I say this as a Google user who is generally optimistic about technology) it was a blessing. The comically flawed images produced by the "Gemini" chatbot, such as pictures of ethnically diverse Nazi soldiers, offered a useful glimpse of a dark world much like the one George Orwell depicted in one of his most famous novels. These images also shed light on important questions about bias, trust, scope of application and truth, and those questions deserve more attention as we think about where artificial intelligence is taking us.

Artificial intelligence is a revolutionary innovation capable of driving radical transformations. Like other disruptive innovations, it can deliver enormous advances in human well-being. What the world needs is a decade or two of AI-powered economic growth. Celebrating today's artificial intelligence, however, is premature. The idea of AI is hugely exciting, and as an intellectual achievement it is so astonishing that one can easily get carried away. But innovators, current and future users, and regulators should look more closely at what is happening, especially at the purposes AI can usefully serve.

Who is speaking?

One aspect of the problem in dealing with everything AI will bring is the great effort to design AI models that express themselves like people, probably for marketing reasons. For example, an AI says, "Yes, I can help you with that." Thank you, but what is meant by "I"? The suggestion is that the AI understands and can cope with things much as a person would, except that the AI is infinitely smarter and better informed. On this view, when it comes to making decisions, the AI stands as an authority over its supposedly feebler-minded users. There is a big difference between AI as a tool that people use to improve their decisions, decisions for which they remain responsible, and AI as a decision-making machine.

In time, AI is likely to make decisions on a large scale, not only about the information (in the form of text and video) it passes to its human users but also about actions. Eventually, the "Tesla" described as "self-driving" will actually be a fully self-driving car. At that point, responsibility for bad decisions will fall away from whoever is operating the "Tesla". Between advisory AI and autonomous AI, it is hard to determine who should bear responsibility when systems make errors that have consequences. No doubt the courts will be busy with this question.

Evaluation

Setting responsibility aside, we will want to evaluate how effective AI is at making decisions as these systems advance. But this too is a problem. For reasons I do not understand, AI models are not said to be wrong; they are said to be "hallucinating". But how do we know they are hallucinating?
We can tell with certainty when these models produce results so absurd that even poorly informed people laugh at them. But when AI systems make things up, it will not always be this obvious. Even the designers of these systems cannot explain all of these errors, and detecting them may exceed human abilities. We could put the question to an AI system, but it hallucinates too.

Even if detecting and tallying errors were reliable, the criteria for evaluating the performance of AI models are unclear. People make mistakes all the time. If AI's errors are fewer than human errors, will that be good enough? I would be inclined to answer yes, for various reasons including fully autonomous driving, but the domain of questions put to the AI has to be appropriate. Here is one question I would not want AI to answer: "If AI makes fewer mistakes than people do, will that be good enough?"

Facts and values

The idea is that judgments of this kind do not follow directly from facts, and that difference lies at the heart of the matter. Evaluation, whether of an opinion or of an action as justified, always rests on values. Those values may attach to the behavior itself (for example, does it violate a person's rights?) or to its consequences (is the outcome better for society than the alternative?). AI deals with these complications by attaching values to actions and/or outcomes implicitly, but it has to derive those values from something, either from a consensus embedded in the data it was trained on or from the instructions it receives from its users and/or designers. The problem is that neither the consensus nor the instructions carry any moral authority. If an AI produces an opinion, it is still just an opinion.

The distinction between facts and values, once clear, has been attacked from all sides. Journalists themselves say they never really understood what "objectivity" means. The "critical theorists" who dominate many social-studies programs in colleges deal in "false consciousness", "social construction" and truth as "lived experience", all of which question the existence of facts and treat values as instruments of oppression. Proponents of effective altruism treat values in an entirely different way: they claim that consequences can be judged along a single dimension, which eliminates every value except "utility". So they would be happy to have morality handled by algorithms that agree with them.

Value statements

Given that these ideas permeate what AI claims to know, it would be surprising if designers sympathetic to cultural re-education on race, gender and justice did not expect AI systems to assert values as facts (just like people) and to suppress information that might lead to moral error (just like people). As Andrew Sullivan has noted, Google at first emphasized that its search results were "unbiased and objective", but now its main stated purpose is to be "socially beneficial".
AI systems can be trained or instructed so that, if they must choose between what is true and what is socially beneficial, they choose what is socially beneficial, and then lie to users about having done so. After all, AI is very smart, so whatever it calls the "truth" must really be true. "Gemini" has memorably demonstrated that the truth it offers is not truthful. Thank you, "Google", for this magnificent failure.