Attention Gmail users! Google warns about hidden AI scam stealing passwords, 1.8 billion accounts vulnerable
Google warns its 1.8 billion Gmail users of a new threat to cyber security with indirect rapid injections utilizing AI advance. The company improves protection in its AI -Chatbot, twin, to prevent unauthorized access to sensitive information. Google has warned its 1.8 billion Gmail users worldwide against a new threat to cyber security that utilizes the progress in artificial intelligence, reports Men’s Journal. Google has warned its 1.8 billion Gmail users worldwide against a new threat to cyber security that utilizes the progress in artificial intelligence, reports Men’s Journal. What are indirect quick injections? The company reportedly raised the alarm too many injections, a form of attack that he believes can target individuals, businesses and even governments. In a recent blog post, Google explained that unlike direct rapid injections, where hackers enter malicious assignments into an AI instrument, indirect attacks hide the hiding of harmful instructions within external sources such as email, documents or calendar invitations. Once processed, these instructions can mislead the system to expose sensitive information or perform unauthorized actions. “With the rapid acceptance of generative AI, a new wave of threats is coming across the industry,” the company wrote and warned that the risk became more important as AI is widely used for professional and personal tasks. Experts explain risk technology expert Scott Polarman told the Daily Record that attackers Gemini, Google’s own AI -ChatBot, exploits to carry out such scams. He explained that malicious code can be hidden inside ‘Ne post and, when read by twins, is used to extract sign -up details without the user realizing it. “The danger is that people don’t have to click on something,” Pollerman said. “Hidden instructions can result in the AI passwords and other data revealing and effectively turning the system against itself.” Google reportedly said it was already rolling out new protection. This includes strengthening its Gemini 2.5 model, launching machine learning systems to detect suspicious directions and add larger system level safety measures. According to the company, these layers are designed to increase the problems and costs of such attacks, forcing cyber criminals to use less subtle and more detectable methods. The warning comes amid increasing concerns about how artificial intelligence can be manipulated for malicious purposes, and highlights the potential risks of introducing AI instruments into everyday services on which billions of users rely worldwide.