With the increasing use of artificial intelligence around the world, concerns have been raised about its accuracy and accountability. Amid this shift in global workflows and culture, and the doubts that come with it, an important question arises: can scientists and doctors begin to trust AI with their work?

Responding to that question, Pushmeet Kohli, vice president of science and strategic initiatives at Google DeepMind, said that artificial intelligence is a new form of intelligence that demonstrates great power but also makes mistakes, and that understanding when it produces accurate results is crucial.

Speaking at the Hindustan Times Leadership Summit (HTLS) 2025, Kohli said: “This is a new kind of intelligence. We are still learning and trying to understand the behavior of this technology. And yes, it is very powerful, but it makes mistakes, and the important element is to find out when it basically gives us the right results and when it fails.”

Citing the example of Google DeepMind’s AlphaFold, an AI system that uses machine learning to predict a protein’s 3D structure from its primary amino acid sequence, Kohli noted, “It can figure out the structure of any protein, and yet, if you’re a biologist, you’ll think: Well, this model, although it’s extremely accurate, it still makes mistakes sometimes, and I’d like to spend the next 10 years of my life being right and finding out that it’s wrong.”

He added, “AlphaFold was not only very accurate, but it was also very good at showing its uncertainty about the problem. So where it made a mistake, it also kind of held its hand up and said, well, maybe I was unsure about this particular solution, so don’t trust it so much.”

Kohli acknowledged that the same challenge persists with the current generation of large language models (LLMs).
“Sometimes they hallucinate, and we build technology to make sure that when they hallucinate, we can catch it,” he said.

Need for responsible AI

Amid this growing use of AI, Kohli stressed the need to deploy the technology responsibly. Describing DeepMind’s approach, he said: “We’re not approaching this amazing era with a move fast and let’s break things mentality. We’re going with it with an element of, let’s be brave, but let’s also be responsible. So in that context, we recently announced and shared this major breakthrough from SynthID.”

SynthID is a Google tool that detects AI-generated content, including images, text, audio and video, by embedding invisible digital signals directly into the media. It aims to help users distinguish human-generated from AI-generated material and combat misinformation.

“And I think we need breakthroughs like this to make sure that AI not only comes up with these amazing breakthroughs, but is deployed responsibly in the world,” Kohli said.

Role of AI in healthcare

Meanwhile, with the growing burden of chronic disease, AI is becoming increasingly important in science and healthcare. Experts at the 22nd CII Annual Health Summit last month suggested that AI could become a key tool in reshaping how healthcare is delivered, ANI reported.

Naresh Trehan, Chairman of the CII Steering Group on Health & Healthcare Council and CMD of Medanta – The Medicity, noted that while India has progressed through government efforts and public-private collaboration, the next phase involves using AI to improve access and efficiency. “AI holds enormous potential to transform healthcare delivery by expanding access, improving expertise, reducing costs and improving overall efficiency,” the news portal quoted Trehan as saying.