
A fake OpenAI repository spread an infostealer, Mistral AI faced extortion, and other cybersecurity news


We compiled the week’s most important cybersecurity news.

  • ZachXBT unmasked the organiser of $19 million in phishing thefts.
  • Three suspects were charged over a series of “wrench attacks” in California.
  • A fake OpenAI repository spread an infostealer.
  • “AI slop” is flooding platforms used by hackers and cyberfraudsters.

ZachXBT unmasks organiser of $19 million phishing heists

On-chain sleuth ZachXBT detailed an investigation into phishing-enabled cryptocurrency thefts exceeding $19 million.

The prime suspect is US hacker Dritan Kapllani Jr. The trail began with his own carelessness.

On 23 April 2026, during a Discord video call, Kapllani argued with a user over whose bankroll was bigger (a "band for band" comparison). To prove his point, he showed the screen of his Exodus wallet with a $3.68 million balance.

ZachXBT analysed the transaction history of the associated Ethereum address. The funds were tied to the theft of 185 BTC on 14 March 2026. The research showed that on 15 March, Kapllani's wallet received his cut of $5.3 million. By the April call he had spent or laundered roughly $1.6 million.

The investigation also linked Kapllani to earlier incidents, aided by John Dagita, a cybercriminal previously arrested for stealing more than $40 million from the US government. Seeking revenge over prior disputes, Dagita posted one of Kapllani's old addresses on Telegram.

ZachXBT confirmed ownership: the cash-out pattern matched that used in the 185 BTC theft. He also found that in autumn 2025 the wallet processed over $5.85 million stolen in five phishing attacks.

The investigator assisted one victim but deliberately withheld his findings until authorities acted.

On 11 May 2026, court filings in the 185 BTC case were unsealed.

Charges have already been filed against several individuals in the case.

Kapllani flaunts a lavish lifestyle on social media, showcasing private jets and luxury cars. He had long avoided arrest, which the sleuth attributes to the routine deferral of prosecutions involving minors. Now that Kapllani has turned 18, ZachXBT expects charges soon.

Three suspects charged over a series of “wrench attacks” in California

US prosecutors charged Elija Armstrong, Nino Chindavong and Jayden Rucker with robbery, kidnapping and conspiracy in connection with a string of cryptocurrency thefts.

According to the case file, the trio moved from Tennessee to California. To enter victims’ homes, they posed as couriers.

In November 2025 in San Francisco, a "courier" carrying a box attacked a recipient at an apartment entrance. The victim was bound with tape, pistol-whipped and forced under threats to transfer $10 million in bitcoin and $3 million in ether.

In another “wrench attack”, the victim lost $6.5 million in crypto.

Armstrong and Rucker were arrested in Los Angeles on 31 December 2025; Chindavong was detained in Sunnyvale on 22 December 2025.

According to CertiK, there were 72 “wrench attacks” worldwide in 2025, up 75% on the previous year. Losses reached a record $41 million.

A fake OpenAI repository spread an infostealer

A malicious Hugging Face repository impersonated OpenAI’s Privacy Filter project to deliver an infostealer, researchers at HiddenLayer reported.

Hugging Face is a platform where developers and researchers share AI models, datasets and machine-learning tools.

According to the team, scammers used a lookalike name in the Open-OSS/privacy-filter repository, which contained a loader.py file that launched Windows-targeting data-theft malware.

Source: HiddenLayer.

The Python script included bogus AI-related code to appear benign. In the background it disabled SSL certificate verification, decoded a URL pointing to an external resource, then fetched and executed a PowerShell command.

Running in a hidden window, the code downloaded a batch file, start.bat, which escalated privileges and fetched the final payload while adding it to Microsoft Defender exclusions. The payload was a Rust-based infostealer that could take screenshots and steal a range of sensitive data.
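The behaviours described above, such as disabled SSL verification, a base64-decoded URL, hidden PowerShell execution and Defender exclusions, are common static indicators. As a rough illustration (not HiddenLayer's actual tooling), a minimal scanner could flag a loader script by pattern-matching its source:

```python
import re

# Heuristic indicators drawn from the behaviours described in the article.
# Illustrative only; real detection engines use far richer rule sets.
INDICATORS = {
    "disables TLS verification": re.compile(
        r"verify\s*=\s*False|CERT_NONE|check_hostname\s*=\s*False"),
    "invokes PowerShell": re.compile(r"powershell(\.exe)?", re.IGNORECASE),
    "hidden window execution": re.compile(r"-WindowStyle\s+Hidden", re.IGNORECASE),
    "Defender exclusion": re.compile(r"Add-MpPreference\s+-ExclusionPath", re.IGNORECASE),
    "base64-decoded payload": re.compile(r"b64decode|FromBase64String"),
}

def scan_source(text: str) -> list[str]:
    """Return the names of suspicious behaviours found in a script's source."""
    return [name for name, pattern in INDICATORS.items() if pattern.search(text)]

# A synthetic sample mimicking the loader behaviour described above.
sample = """
import base64, subprocess, requests
url = base64.b64decode("aHR0cDovL2V4YW1wbGUuY29t").decode()
requests.get(url, verify=False)
subprocess.run("powershell -WindowStyle Hidden -Command ...", shell=True)
"""
print(scan_source(sample))
```

Any single match is weak evidence on its own; it is the combination of several of these behaviours in one small "AI" script that made the repository stand out.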

The researchers noted that the vast majority of the 667 accounts that “liked” the malicious repository appeared automatically generated. The reported 244,000 downloads could also have been artificially inflated.

“AI slop” is flooding platforms used by hackers and cyberfraudsters 

Complaints about "AI slop" are increasingly common on the dark web as it seeps into discussions, guides and technical posts, Wired reports, citing research by the University of Cambridge and the University of Strathclyde.

Analysts reviewed about 98,000 AI-related threads on hacker forums from the launch of ChatGPT in 2022 through the end of 2025. Over that period, attitudes toward generative models shifted markedly in criminal circles.

Where hackers once discussed using neural networks to write malware or find vulnerabilities, they now increasingly gripe about a flood of “AI slop”: useless posts and simplistic guides on basic topics.

Some forum participants also complain that LLM answers in Google search results depress traffic to the forums themselves, hurting their marketing.

The researchers saw little meaningful impact on inexperienced fraudsters: AI has not lowered the barrier to entry for novices, nor has it produced sweeping changes in cybersecurity.

Belarus-linked hacking group targeted Ukrainian state bodies

In March 2026, a new campaign by the Ghostwriter group (also known as UNC1151 and FrostyNeighbor) targeted Ukrainian government and defence organisations, according to ESET researchers.

Ghostwriter, which specialises in cyber-espionage in Eastern Europe, has been linked to Belarus.

Attackers sent phishing PDFs masquerading as documents from Ukrtelecom. Malicious links in the files delivered the PicassoLoader malware, which then deployed Cobalt Strike, a popular post-exploitation framework.

The hackers used IP-based checks: the infected archive was served only if the victim's IP address was in Ukraine.

Researchers noted the group’s operational maturity. PicassoLoader can send a system “fingerprint” to the attackers’ servers every ten minutes. Based on this, Ghostwriter operators decide whether to continue the intrusion.

Unlike its campaigns in Poland or Lithuania, where it hits a broad spectrum of targets from logistics to healthcare, in Ukraine the group focuses exclusively on military and government sectors.

TeamPCP hackers put Mistral AI repositories up for sale

The TeamPCP hacking group threatened to leak Mistral AI’s project source code unless a buyer emerges for the stolen data, BleepingComputer reports.

Mistral AI is a French artificial-intelligence firm founded by former researchers from Google DeepMind and Meta. It develops open-weight LLMs and proprietary software.

In a post on a hacker forum, the attackers asked $25,000 for a bundle including nearly 450 repositories.

In an official statement to BleepingComputer, Mistral AI confirmed a compromise of its code-management system. The breach followed a large-scale software supply-chain attack dubbed Mini Shai-Hulud.

Mistral AI says the affected data are not part of its core source code.

According to published information, the attack unfolded in stages. First, the perpetrators accessed official TanStack and Mistral AI packages using stolen CI/CD credentials. The campaign then spread to hundreds of projects in the npm and PyPI registries, including work by UiPath, Guardrails AI and OpenSearch.

Mistral AI acknowledged that attackers briefly inserted malicious code into some of the company’s SDK packages.

Source: BleepingComputer.

TeamPCP claims to have downloaded nearly 5GB of internal data that Mistral uses for training, fine-tuning, benchmarking and experiments.

The hackers said they would dump the information publicly if no buyer was found within a week.


What to read this weekend?

In a new piece, ForkLog examines how Palantir Technologies, the principal software contractor to the US Department of Defense and intelligence agencies, “ensures the West’s obvious superiority”.
