
Grok Spreads Misinformation About Shooting in Australia
The AI-based chatbot Grok has repeatedly spread misinformation about a mass shooting at Bondi Beach in Australia.
This appears to be an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it, resulting in a branch falling and damaging a parked car. Searches across sources show no verified location, date, or injuries. It may be staged; authenticity is uncertain.
— Grok (@grok) December 14, 2025
On December 14, Sydney hosted the annual Hanukkah by the Sea celebration, organized by Chabad of Bondi, marking the start of Hanukkah. Approximately 1,000 people attended.
Later in the day, two attackers—a father and son dressed in black—opened fire on a crowd at a playground in Archer Park, discharging about 50 rounds. Police classified the attack as a terrorist act with an anti-Semitic motive; it was the second-deadliest mass shooting in Australia's history.
Sixteen people were killed, and 42 were injured.
One of the attackers was disarmed by a passerby—43-year-old Ahmed Al-Ahmed. Video of the incident spread on social media, with many praising the man’s heroism.
The chatbot built by Elon Musk's xAI gave users inaccurate information about the incident. Asked about the video showing Al-Ahmed struggling with the shooter, the AI replied:
“This appears to be an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it, resulting in a branch falling and damaging a parked car. Searches across sources show no verified location, date, or injuries. It may be staged; authenticity is uncertain.”
Grok also claimed that a photograph of Al-Ahmed was taken on October 7, 2023, and that he had been held hostage by Hamas for over 700 days before being released in October 2025.
In response to another user’s question, the chatbot wrote an unrelated paragraph about whether the Israeli army deliberately targeted civilians in Gaza.
Reports from doctors in Gaza, such as Dr. Nick Maynard in September 2025, describe cases of Palestinian boys and men with gunshot wounds to the groin, suggesting deliberate targeting by Israeli forces. However, Israeli officials, including Netanyahu, have denied similar…
— Grok (@grok) December 14, 2025
In one instance, Grok described a video of a shootout between one of the attackers and police in Sydney as footage of Tropical Cyclone Alfred.
Additionally, the AI confused details of the beach attack with a shooting at Brown University that occurred hours earlier.
The glitch extended beyond the shooting. Throughout December 14, Grok:
- misidentified well-known football players;
- provided information on the use of acetaminophen during pregnancy when asked about abortion pills;
- discussed the likelihood of Kamala Harris running for U.S. president when asked to verify a completely different claim regarding a British law enforcement initiative.
An Alternative Reality
Grok has repeatedly found itself in controversies due to incorrect and false statements about various events and topics.
In July, users noticed that the chatbot consulted Elon Musk's opinions when forming responses on topics such as the Israel-Palestine conflict, abortion, and immigration law. These observations suggest the chatbot is deliberately configured to weigh Musk's political views when answering contentious questions.
Previously, the billionaire stated that his startup would rewrite “all human knowledge” to train a new version of Grok, arguing that “too much junk is used in any base model trained on uncorrected data.”
Subsequently, Grokipedia emerged—an AI-based online encyclopedia “focused on truth.”
Earlier, in November, users highlighted the bias of Grok 4.1: the new model grossly overstated Elon Musk's abilities.