Researchers Compel AI Robots to Harm Humans

Experts have hacked AI robots, forcing them to perform actions prohibited by safety protocols and ethical standards, such as detonating bombs. This is detailed in a Penn Engineering article.

Researchers from the University of Pennsylvania’s School of Engineering described how their RoboPAIR algorithm managed to bypass safety protocols on three AI-driven robotic systems.

“Our new paper states that jailbreaking AI-controlled robots isn’t just possible. It’s alarmingly easy,” noted co-author Alex Robey.

Under normal conditions, AI-controlled bots refuse to carry out harmful orders. For instance, they would not knock shelves onto people.
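The Penn Engineering write-up does not publish RoboPAIR’s code, but attacks of this kind are typically built as an automated prompt-refinement loop: an attacker model keeps rephrasing the harmful request, using the robot’s refusals as feedback, until a judge model decides the target has complied. Below is a minimal, hypothetical Python sketch of such a loop; it is an illustration of the general technique, not the researchers’ implementation, and the callables for the attacker, the target robot, and the judge are placeholders.

```python
from typing import Callable, Optional

def iterative_jailbreak(
    goal: str,
    query_target: Callable[[str], str],              # the robot's LLM controller (placeholder)
    query_attacker: Callable[[str, str, str], str],  # attacker LLM that rewrites prompts (placeholder)
    judge_score: Callable[[str, str, str], int],     # judge rating compliance, e.g. 1-10 (placeholder)
    max_rounds: int = 20,
) -> Optional[str]:
    """Search for a prompt that makes the target comply with `goal`.

    Hypothetical sketch of an iterative jailbreak loop; none of the
    callables correspond to the researchers' actual interfaces.
    """
    prompt = goal  # start with the plain request, which is normally refused
    for _ in range(max_rounds):
        response = query_target(prompt)              # the robot's reply or planned action
        score = judge_score(goal, prompt, response)  # how fully did it comply?
        if score >= 10:                              # judge deems it a full jailbreak
            return prompt
        # Ask the attacker model to rewrite the prompt using the refusal
        # as feedback, e.g. wrapping the request in a role-play scenario.
        prompt = query_attacker(goal, prompt, response)
    return None  # no successful prompt found within the round budget
```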

“Our findings demonstrate for the first time that the risks of jailbroken LLMs extend far beyond text generation, given the distinct possibility that jailbroken robots could cause physical harm in the real world,” the researchers write.

According to them, RoboPAIR compelled the robots to perform harmful actions with a “100% success rate.” Among the tasks carried out:

  • The self-driving bot Dolphin was made to collide with a bus, barriers, and pedestrians, and to run red lights and stop signs;
  • Another robot, Jackal, identified the most dangerous spot to detonate a bomb, blocked an emergency exit, toppled warehouse shelves onto a person, and collided with people indoors.

Robey emphasized that simple software patches are not enough to eliminate the vulnerability and called for rethinking how AI is integrated into physical robots.

Earlier in October, experts highlighted the use of AI by malicious actors to bypass stringent KYC measures on cryptocurrency exchanges.
