Pentagon to Integrate Grok Chatbot into Its Network

Grok chatbot to join Pentagon's network with Google's AI engine, says Defense Secretary.

Grok, the AI chatbot, is set to be integrated into the Pentagon’s network alongside Google’s generative AI engine, U.S. Secretary of Defense Pete Hegseth announced, according to the New York Post.

“Soon we will have the world’s leading AI models in every unclassified and classified network of our department,” the official stated.

Hegseth said the chatbot will go into operation in the Department of Defense by the end of January. It will “provide all necessary data” from military IT systems, including information from the intelligence department.

In his speech, the politician emphasized the need to optimize and accelerate technological innovations in the armed forces. According to him, the Pentagon possesses “battle-tested operational data obtained over two decades of military and intelligence operations.”

“Artificial intelligence is only as good as the data it receives. We will ensure its availability,” Hegseth added.

The Secretary of Defense stated that he wants to see “responsible AI systems” in the Pentagon. He promised to “cut through the bureaucratic jungle and remove the clutter—preferably with a chainsaw.”

“We must ensure the dominance of American military AI so that no adversary can use the same technology to threaten our national security or citizens,” the Pentagon chief declared.

Despite the Controversy

The announcement comes just days after Grok became embroiled in another controversy, this time over the use of the tool for generating sexual content.

Malaysia and Indonesia have blocked access to the chatbot. Regulators in the EU, UK, Brazil, and India are demanding an investigation into Grok’s role in the spread of deepfakes.

The British organization Internet Watch Foundation noted that its analysts discovered “criminal images” of children aged 11 to 13, allegedly created using the chatbot.

Grok has previously drawn repeated criticism for spreading false and dubious information.

In December, the chatbot provided inaccurate information about a mass shooting at Bondi Beach in Australia. Asked about a video showing a passerby, Ahmed al-Ahmed, wrestling with the shooter, the AI replied:

“It appears to be an old viral clip showing a man climbing a palm tree in a parking lot, possibly to trim it. As a result, a branch fell on a damaged car. Searches across various sources did not yield confirmed information about the location, date, and injuries. It may be staged, authenticity not confirmed.”

In July, users noticed that the model relies on Elon Musk’s opinions when forming responses on topics such as the Israel-Palestine conflict, abortion, and immigration legislation.

Observations suggest that the chatbot was specifically configured to consider Musk’s political views when responding to controversial questions.

Previously, the billionaire stated that his startup would rewrite “all human knowledge” to train the new version of Grok, as there is “too much garbage in any base model tuned on uncorrected data” today.

Subsequently, Grokipedia was launched—an AI-based online encyclopedia “focused on truth.”

In November, users pointed to bias in Grok 4.1: the new model significantly overrated Musk’s abilities.
