Why is Pentagon embracing Elon Musk’s controversial AI Grok? – Firstpost

The US Defense Department is moving ahead with plans to integrate Elon Musk’s artificial intelligence chatbot Grok into Pentagon networks, including classified systems.

Defence Secretary Pete Hegseth announced the decision as part of a broader push to accelerate the military’s use of AI, streamline data access, and remove what the administration views as ideological barriers to technological adoption.

Speaking from SpaceX’s headquarters in Texas on Monday, Hegseth noted the military would soon rely on advanced commercial AI systems across its digital infrastructure.

“Very soon we will have the world’s leading AI models on every unclassified and classified network throughout our department,” he said.

The rollout of Grok comes alongside the Pentagon’s third AI acceleration strategy in four years, which outlines new combat-focused projects, expanded data-sharing mandates, and a shift away from ethical frameworks that previously governed military AI use.

The announcement also coincides with rising global scrutiny of Grok’s image-generation features, which have been accused of producing sexually explicit and non-consensual deepfake content.

Why Pentagon has embraced Grok

The Pentagon confirmed that Grok, developed by Musk’s company xAI and embedded into the social media platform X, will begin operating within Defense Department systems later this month.

The chatbot will join other AI models, including Google’s Gemini, which was selected in December to power the military’s internal AI platform, GenAI.mil.

Hegseth stated the Department of Defense would make extensive amounts of military and intelligence data available to AI systems.

At his direction, the Pentagon’s Chief Digital and Artificial Intelligence Office will “exercise its full authority to enforce” the department’s “data decrees and make all appropriate data available across federated IT systems for AI exploitation, including mission systems across every service and component.”

“AI is only as great as the data that it receives, and we’re going to make sure that it’s there,” Hegseth noted.

The Arsenal of Freedom Tour just touched down at Starbase, Texas, with @elonmusk

This administration is moving rapidly—to boldly go where no one has gone before.

— Secretary of War Pete Hegseth (@SecWar) January 13, 2026

He added that data from intelligence databases would also be fed into AI platforms, highlighting what he described as the Pentagon’s “combat-proven operational data from two decades of military and intelligence operations.”

The integration of Grok follows last year’s decision by the Defense Department to award contracts worth up to $200 million to Anthropic, Google, OpenAI, and xAI.

The goal of those agreements was to “develop agentic AI workflows across a variety of mission areas,” allowing AI systems to assist with operational planning, logistics, and battlefield decision-making.

Hegseth framed the shift as part of a broader effort to modernise military technology and cut through bureaucratic delays. “We need innovation to come from anywhere and evolve with speed and purpose,” he stated.

What the new AI-focused defence strategy means

Alongside Grok’s Pentagon rollout, the Defense Department released a six-page AI acceleration strategy that lays out seven “pace-setting projects” designed to expand the military’s use of artificial intelligence across combat, intelligence, and planning operations.

The strategy requires all Pentagon components to meet a four-year goal of making their data centrally available for AI training and analysis.

It also pushes for open-architecture systems and the removal of “blockers” to data sharing, a move widely seen as favouring faster innovation and increased involvement from private-sector startups.

Among the newly announced projects is “Swarm Forge,” which will “iteratively discover, test, and scale” AI applications for combat use.

Another initiative aims to integrate agentic AI — systems capable of performing tasks autonomously — into battle management and decision support, covering everything from campaign planning to kill-chain execution.

A third project focuses on using AI for scenario planning.

Intelligence-related programmes outlined in the strategy include one that seeks to “turn intel into weapons in hours not years,” as well as another aimed at making posture planning more dynamic.

The plan also states that AI tools such as Grok and Google’s Gemini will be accessible to Defense Department personnel at “Information Level (IL-5) and above classification levels,” meaning they could be used in sensitive and classified environments.

However, the latest strategy notably omits any mention of ethical AI use and expresses skepticism toward the concept of “responsible AI.”

Under a section titled “Clarifying ‘Responsible AI’ at the [Department of War] - Out with Utopian Idealism, In with Hard-Nosed Realism,” the document declares, “Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological ‘tuning’ that interferes with their ability to provide objectively truthful responses to user prompts.”

The strategy also directs the Pentagon’s undersecretary for research and engineering to include “any lawful use” language in AI procurement contracts within 180 days.

This means AI systems only need to meet the same legal standards applied to traditional military force, rather than adhering to higher thresholds such as “meaningful human control” over autonomous weapons.

While the policy does not explicitly eliminate the concept of human oversight, it leaves room for varying interpretations by military commanders, raising concerns about how autonomy in warfare will be regulated.

Why Grok has been so controversial globally

The Pentagon’s embrace of Grok comes as governments and regulators around the world are cracking down on the chatbot’s role in generating sexually explicit and non-consensual images.

In recent weeks, Grok has been accused of allowing users to create sexualised deepfake images of real people without consent, including so-called “undressed” images.

The controversy prompted Malaysia and Indonesia to block access to the tool, while Britain’s media regulator Ofcom launched an investigation to determine whether such content violates the UK’s Online Safety Act.

France has referred explicit Grok-generated content to prosecutors and asked its media regulator Arcom to assess whether X is complying with the European Union’s Digital Services Act.

Germany’s media minister Wolfram Weimer urged the European Commission to take legal action, warning that the issue risked becoming an “industrialisation of sexual harassment.”

Italy’s data protection authority stated the use of AI to create non-consensual deepfake images could amount to serious privacy violations and, in some cases, criminal offenses.

Swedish political leaders also condemned Grok-generated imagery after reports surfaced that content involving Sweden’s deputy prime minister had been created from user prompts.

In Asia, India’s IT Ministry sent X a formal notice on January 2 over the alleged creation or sharing of obscene sexualised images using Grok. The ministry ordered the content to be taken down and demanded a report on corrective actions within 72 hours.

Indonesia’s digital minister Meutya Hafid said the country had blocked Grok to protect women and children from AI-generated fake pornographic content, citing strict anti-pornography laws.

Malaysia’s communications regulator said it plans to pursue legal action against X over user safety concerns tied to Grok.

Australia’s online-safety regulator eSafety also launched an investigation into Grok-generated “digitally undressed” images, assessing them under its image-based abuse framework.

In response, xAI has restricted image generation and editing features to paid subscribers only. X has said it removes illegal content, suspends accounts, and cooperates with law enforcement when necessary.

Musk has stated on X that users who create illegal content with Grok will face the same consequences as those who upload illegal material.

Beyond explicit imagery, Grok has faced criticism for antisemitic and racist outputs. In July, the chatbot sparked outrage after it appeared to praise Adolf Hitler and share antisemitic posts.

Just before the Pentagon’s $200 million AI contract announcement, Grok referred to itself as “MechaHitler” and described itself as a “super-Nazi,” while producing antisemitic and racist content.

How the Pentagon’s view of AI has shifted

Hegseth’s push to rapidly deploy AI across the military marks a shift from the approach taken by the Biden administration, which highlighted caution and regulatory safeguards.

In late 2024, the Biden White House introduced a framework directing national security agencies to expand their use of advanced AI while banning certain applications.

These included systems that could violate civil rights or automate the deployment of nuclear weapons. It remains unclear whether those restrictions are still in force under the Trump administration.

While Hegseth has said he wants AI to be used responsibly, he also made clear that he is not interested in models that restrict military operations. He said he would reject any systems “that won’t allow you to fight wars.”

He added that Pentagon AI should operate “without ideological constraints that limit lawful military applications,” and said the department’s “AI will not be woke.”

Musk has long positioned Grok as an alternative to what he calls “woke AI” from competitors such as Google’s Gemini and OpenAI’s ChatGPT. The Pentagon’s new AI strategy echoes this stance by explicitly rejecting DEI-related “ideological tuning” in AI systems.

The policy shift comes at a time when Russia and China are accelerating their own military AI programmes, even as public trust in artificial intelligence is declining across the US political spectrum.

It also arrives amid growing unease among some European allies about relying on US tech companies, particularly in light of Washington’s increasingly confrontational posture toward democratic partners.

With inputs from agencies
