Stop comparing artificial intelligence with humans
When you hear the term "artificial general intelligence" (AGI), what may come to mind is a human-like intelligence, such as the seductive AI voice assistant from the film "Her", or a superintelligence such as Skynet from "The Terminator". Both are science fiction, and far off in any case.

Yet predictions of the imminent rise of AGI, or "human-level" AI, keep multiplying, both inside the tech industry and outside it. Those making them may believe what they say, but at least in part the talk is hype designed to attract the investors pumping billions of dollars into AI companies. We will certainly see major changes, and we must prepare for them. But the label "artificial general intelligence" is at best a distraction and at worst deliberate misinformation. Business leaders and policymakers need a better way to think about what is coming. Fortunately, there is one.

How long until we get there?

Sam Altman of OpenAI, Dario Amodei of Anthropic and Elon Musk of xAI (the least of his claims to fame) have all said that AGI, or something like it, will arrive within about two years. Others, such as Demis Hassabis of Google DeepMind and Yann LeCun of Meta, believe it is five to ten years away. The term has also spread beyond the industry: journalists, including Ezra Klein and Kevin Roose of the New York Times, have argued that society should prepare for something like AGI, even if they prefer expressions such as "powerful AI" for a stage of development we have not yet reached. The prediction is worth taking seriously. But instead of talking about AGI or human-level AI, let's talk about the different kinds of AI and what they can and cannot do.

What the best language models cannot do

Reaching a general form of intelligence has been the goal of the AI field since its birth some 70 years ago. The best that could be achieved was "narrow AI", such as IBM's Deep Blue, which won at chess, or Google's AlphaFold, which predicted protein structures and whose creators (including Hassabis) won the Nobel Prize in chemistry last year. Both performed far above human level, but only at one specific task.

If AGI now seems closer, it is because the large language models (LLMs) behind ChatGPT and its kind feel so much more human-like and general-purpose. LLMs interact with us in plain language. They can produce a passable answer to most questions. They write decent fiction, at least when it is very short (in longer stories they lose track of characters and details). They score ever higher on benchmark tests of skills such as programming, medical exams, the bar exam and math problems. And they are getting better at reasoning step by step through complicated tasks. When AI enthusiasts speak carefully about the rise of AGI, what they are really talking about is a more advanced form of these models.

That does not mean language models will not have significant consequences: some software businesses already intend to employ fewer engineers.
Most jobs built around tasks that follow a similar process each time, such as making medical diagnoses, preparing legal documents, writing research summaries or creating marketing campaigns, will be jobs in which workers can use AI for at least part of the work. Some are already starting to. That will raise their productivity, and it could eliminate some jobs. But it does not necessarily follow. Geoffrey Hinton, the Nobel Prize-winning computer scientist known as a godfather of AI, predicted that the technology would wipe out the work of radiologists; today there is a shortage of them in the United States.

Language models are still a form of "narrow AI": they can excel at one task while performing badly at a closely related one, a phenomenon known as the "jagged frontier". AI can ace the bar exam, for example, but cannot turn a conversation with a client into a legal brief. It may answer some questions perfectly while "hallucinating" (that is, fabricating facts) on others. LLMs have become adept at problems that can be solved with clear rules, but on some of the newest tests, where the rules are murkier, models that score 80% or more on other benchmarks have struggled to reach success rates of even 10%.

Even if language models start to do well on those tests, they will remain limited. There is a big difference between tackling a specific, bounded problem, however hard, and doing everything people do in a normal working day. Even a mathematician does not spend the whole day just solving math problems. People do countless things that cannot be measured because they are not bounded problems with right or wrong answers. We weigh conflicting priorities, abandon failing plans, allow for our cognitive blind spots, improvise alternative solutions, act on intuition, read the room, and constantly deal with the unexpected and with other irrational human intelligences. Indeed, one argument against language models ever doing Nobel-worthy work is that the most brilliant scientists are not the most knowledgeable ones but those who challenge conventional wisdom, pursue unlikely hypotheses and ask questions nobody thought to ask. That is the opposite of language models, which are designed to find the most consensual answers based on all the available information.

Still, one day we may be able to build language models that can perform almost any individual cognitive task with human-level competence. Perhaps they will be able to chain whole series of tasks together to solve bigger problems. By some definitions, that would be human-level AI. But it would still be pretty dumb if you put it in an office.

Human intelligence is not "general"

The biggest problem with the idea of AGI is that it rests on a deeply human-centric notion of what intelligence is. Most AI research treats intelligence as if it were a linear scale. It assumes that machines will at some point reach human-level or "general" intelligence, then perhaps "superintelligence", and then become something like Skynet, either destroying us or turning into benevolent powers that cater to our every need.
But there is a strong argument that human intelligence is not in fact "general". Our minds evolved to meet one very specific challenge: being what we are, given the size and shape of our bodies, the foods we can digest, the predators we faced, the size of our family groups, the way we communicate, even the strength of gravity and the wavelengths of light we perceive. Other animals have many forms of intelligence we lack: a spider can distinguish predator from prey by the vibrations of its web, an elephant can remember migration routes stretching thousands of miles, and each of an octopus's arms effectively has a mind of its own.

In a 2017 article in Wired magazine, Kevin Kelly argued that we should not regard human intelligence as the apex of the evolutionary tree but as one point within a cluster of earthly intelligences, itself a small cluster in a universe of all possible alien and machine intelligences. That, he wrote, dispels the "myth of a superhuman AI" that can do everything far better than we can. Instead, we should "expect hundreds of new species of thinking, many of them unlike human thought, none of them general-purpose, and none of them an instant superpower that solves major problems in a flash."

That is a feature, not a bug. For most needs, specialized intelligences will be cheaper and more reliable than polymath intelligences that closely resemble us. Not to mention less likely to rise up and demand rights.

Agent swarms

None of this means ignoring the enormous leaps we can expect from AI in the next few years. One that has already begun is "agentic" AI. Agents are still based on large language models, but instead of just processing information they can take actions, such as making a purchase or filling in a web form. Zoom, for example, plans to launch agents soon. So far, AI agents' performance is patchy, but as with LLMs, expect them to improve to the point where they can automate many complex activities.

Some people may call that AGI. But again, it is muddier than that. Agents will not be "employees" so much as assistants with one-track minds. You may end up with dozens of them. Even if they raise your productivity, managing them will feel more like running dozens of different software applications, much as you do today. You might appoint one agent to manage all your agents, but it too will be bound by the goals you set for it.

What happens when millions or billions of agents interact online is something nobody knows. Just as trading algorithms have caused inexplicable "flash crashes" in markets, agents could push one another into chain reactions that paralyze half the internet. More worrying still, malicious actors could mobilize swarms of agents to sow chaos.

In any case, large language models and agents are only one kind of AI. Within a few years we could have radically different kinds. A lab at Meta, for example, is one of several trying to develop "embodied" AI.
The theory is that by placing an AI in a robotic body, in the physical world or in a simulation, it can learn about objects, place and motion, the basic building blocks of human understanding from which higher concepts arise. Language models, by contrast, are trained only on vast quantities of text; they can mimic the products of human thought, but they show no evidence of actually possessing that understanding, or even of grasping meaning.

Will embodied AI lead to the rise of genuinely thinking machines, or just to cleverer robots? Right now it is impossible to say. Even if the answer is the former, calling it human-level AI would still be misleading. To return to the evolutionary point: just as it would be absurd to expect a person to think like a spider or an elephant, it would be absurd to expect a rectangular robot with six wheels and four arms, which never sleeps, eats or reproduces (let alone forms friendships), to think like a human being. It might well be able to carry grandma from the living room to the bedroom, but it will conceive of the task, and carry it out, completely differently from the way people would.

We cannot even imagine many of the things AI will be able to do. The best way to track and understand its progress is to stop comparing it with people, or with anything in the movies, and to keep asking: what can it actually do?