When will AI be smarter than people? Don't ask

(Bloomberg Opinion) - If you've heard the term artificial general intelligence, or AGI, it probably makes you think of a humanlike intelligence, such as a movie love interest, or a superhuman one, such as Skynet from The Terminator. Something science-fictional and distant, anyway. But a growing number of people in the technology industry are now prophesying that AGI, or "human-level" AI, will arrive in the near future. These people may believe what they say, but it is at least partly hype designed to get investors to throw billions of dollars at AI businesses. Yes, big changes are almost certainly coming, and we should prepare for them. But for most of us, the AGI talk is a distraction, and at worst deliberate misdirection. Business leaders and policymakers need a better way to think about what's coming. Fortunately, there is one.

How many years away?

Sam Altman of OpenAI, Dario Amodei of Anthropic and Elon Musk of xAI (the venture for which he is least famous) have all recently said that AGI, or something like it, will arrive within a few years. More measured voices, such as Google DeepMind's Demis Hassabis and Meta's Yann LeCun, put it at least five to ten years away. More recently the idea has gone mainstream, with journalists including the New York Times's Ezra Klein and Kevin Roose arguing that society should be ready for something like AGI in the near future.

I say "something like" because these people at times flirt with the term AGI and then pull back to a vaguer phrase such as "powerful AI." And what they mean by it varies widely: from AI that can do almost any individual cognitive task as well as a human yet still be quite specialized (Klein, Roose), to AI that can do Nobel Prize-level science (Amodei, Altman), to AI that thinks like a real person in every way (Hassabis), to AI that can act in the physical world (LeCun).

Is any one of these "really" AGI? The truth is, it doesn't matter. Even if there is such a thing as AGI (and I'll argue there isn't), it won't be a sharp threshold that we cross. For the people who invoke it, AGI is now simply shorthand for the idea that something hugely disruptive is coming: software that can't just code an app, draft a school assignment, write bedtime stories for your kids or plan a vacation, but can put many people out of work, make major scientific breakthroughs, and hand frightening power to hackers and terrorists. That prediction is worth taking seriously, and calling it AGI does have a way of making people sit up and listen. But instead of talking about AGI or human-level AI, let's talk about different types of AI and what they will and won't be able to do.

What LLMs can't do

A human-level form of intelligence has been the goal ever since the AI race kicked off some 70 years ago. For decades, the best that could be done was "narrow AI" such as IBM's chess-winning Deep Blue, or Google's AlphaFold, which predicts protein structures and won its creators (including Hassabis) a share of last year's chemistry Nobel. Both were far beyond human level, but only at one highly specific task.

If AGI suddenly looks closer, it's because the large language models underlying ChatGPT and its ilk are both more humanlike and more general-purpose. LLMs interact in ordinary language. They can provide at least plausible answers to most questions. They write pretty good fiction, at least if it's very short. (In longer stories, they lose track of characters and plot details.) And they score ever higher on benchmarks of skills such as coding, medical and bar exams, and math problems.
They are getting better at step-by-step reasoning and at more complex tasks. When the most gung-ho people say AGI is just around the corner, what they're talking about is essentially a more advanced form of these models.

It's not that LLMs won't have major consequences. Some software businesses are already planning to hire fewer engineers. Most tasks that follow a similar process each time, such as making medical diagnoses, drafting legal documents, writing research reports or creating marketing campaigns, will be things a human worker can at least partially outsource to AI. Some already are. That will make those workers more productive, which could lead to some jobs being eliminated. Though not necessarily: Geoffrey Hinton, the Nobel-winning computer scientist known as the godfather of AI, famously predicted that AI would soon make radiologists redundant. There is a shortage of them in the US today.

But in an important sense, LLMs are still "narrow AI." They can excel at one task while being bad at a seemingly adjacent one, a phenomenon known as the jagged frontier. An AI may pass the bar exam with flying colors, for example, but botch the job of turning a conversation with a client into a legal brief. It may answer some questions perfectly yet regularly "hallucinate" (i.e., make up facts) on others. LLMs do well on problems that can be solved by applying clear rules, but in some newer tests where the rules are more ambiguous, models that score 80% or more on other benchmarks have struggled to reach even single digits.

And even if LLMs start acing those tests too, they will still be narrow. It is one thing to tackle a defined, bounded problem, however difficult. It is quite another to do what people do in a typical workday. Even a mathematician doesn't spend all day just solving math problems. People do countless things that can't be benchmarked because they aren't problems with right or wrong answers. We weigh conflicting priorities, revise plans, make allowances for incomplete knowledge, improvise solutions, act on a whim, read the room and, above all, interact with the highly unpredictable and irrational intelligences that are other people. Indeed, one argument against LLMs ever doing Nobel Prize-level science is that the most brilliant scientists aren't the ones who know the most, but the ones who challenge conventional wisdom, propose unlikely hypotheses and ask questions nobody else has thought of. That is almost the opposite of an LLM, which is designed to find the most likely, consensus answer based on all the available information.

So one day we may build an LLM that can do almost any individual cognitive task as well as a human. It might even be able to string together a whole series of tasks to solve a bigger problem. By some definitions, that would be human-level AI. But it would still be dumb as a brick if you put it to work in an office.

Human intelligence isn't "general"

A core problem with the idea of AGI is that it rests on a deeply anthropocentric notion of what intelligence is. Most AI research treats intelligence as a more or less linear measure. It assumes that machines will at some point reach human-level or "general" intelligence, and then perhaps "superintelligence," at which point they either go rogue and destroy us or become benevolent gods that cater to our every need.

But there is a strong argument that human intelligence isn't "general" at all. Our minds evolved for the very specific challenge of being us.
Our body size and shape, the kinds of food we can digest, the predators we once faced, the size of our family groups, the way we communicate, even the strength of gravity and the wavelengths of light we see: all of it shaped what our minds are good at. Other animals have plenty of forms of intelligence that we lack: a spider can distinguish predators from prey by the vibrations in its web, an elephant can remember migration routes thousands of miles long, and in an octopus, each tentacle literally has a mind of its own.

In a 2017 essay for Wired, Kevin Kelly argued that we should think of human intelligence not as the pinnacle of an evolutionary tree but as just one point within a cluster of Earth-based intelligences, itself a small speck in a universe of all possible alien and machine minds. This, he wrote, explodes the "myth of a superhuman AI" that can do everything better than we can. Instead, we should expect "many hundreds of extra-human new species of thinking, most different from humans, none that will be general purpose, and none that will be an instant god solving major problems in a flash."

That is a feature, not a bug. For most needs, specialized intelligences will, I suspect, be cheaper and more reliable than ones designed to resemble us as closely as possible. Not to mention less likely to rise up and demand their rights.

Swarms of agents

None of this is to dismiss the big leaps we can expect from AI in the next few years. One leap already underway is "agentic" AI. Agents are still based on LLMs, but instead of merely analyzing information, they can carry out actions such as making a purchase or filling in a web form. Zoom, for example, plans to launch agents that can comb a meeting transcript to create action items, draft follow-up emails and schedule the next meeting. So far the performance of AI agents has been mixed, but as with LLMs, expect it to improve dramatically, to the point where fairly sophisticated processes can be automated.

Some people may call that AGI. But once again, the label is more confusing than enlightening. Agents won't be "general" so much as personal assistants with extraordinarily one-track minds. You might have dozens of them. Even if they send your productivity soaring, managing them will be like juggling dozens of different software apps, just as you do now. You might get an agent to manage all your other agents, but it too will be limited to whatever goals you set it.

And what will happen when millions or billions of agents interact online is anyone's guess. Perhaps, much as trading algorithms have set off inexplicable market "flash crashes," they will trigger one another in unstoppable chain reactions that paralyze half the internet. More worrying, malicious actors could mobilize swarms of agents to wreak havoc.

Still, LLMs and the agents built on them are only one kind of AI. Within a few years we may have fundamentally different kinds. LeCun's lab at Meta, for example, is one of several trying to build so-called embodied AI. The theory is that by placing an AI in a robotic body in the physical world, or in a simulation of it, it can learn about objects, space and movement: the building blocks of human understanding from which higher concepts can flow. LLMs, by contrast, trained only on huge amounts of text, merely mimic human thought processes on the surface and show no evidence of actually having them, or even of thinking in any meaningful sense.
Will embodied AI lead to genuinely thinking machines, or just to very capable robots? Right now it's impossible to say. Even if it's the former, though, it would still be misleading to call it AGI. To go back to the point about evolution: just as it would be absurd to expect a human to think like a spider or an elephant, it would be absurd to expect an elongated robot with six wheels and four arms, one that doesn't sleep, eat or have sex, let alone form friendships, wrestle with its conscience or contemplate its own mortality, to think like a person. It may be able to carry grandma from the living room to the bedroom, but it will understand and perform that task completely differently than we would.

AI will be capable of many things we can't yet even imagine. The best way to track that progress and make sense of it is to stop comparing it with humans, or with anything from the movies, and to keep asking instead: What does it actually do?

This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners. Gideon Lichfield is the former editor-in-chief of Wired magazine and MIT Technology Review. He writes FuturePolis, a newsletter about the future of democracy. More stories like this are available on Bloomberg.com/opinion. © 2025 Bloomberg LP