Researchers from the University of Michigan have developed a bidirectional model for forecasting trust between humans and robots in collaborative scenarios, Tech Xplore reports.
According to project participant Herbert Azevedo-Sa, trust between humans and agents in collaborative work should be mutual. Based on this principle, the researchers set out to create robots that, much like human colleagues, can build and maintain trust with people or with other agents.
“There have been many studies aimed at understanding why people should or should not trust robots. However, we know far less about why robots should or should not trust people,” said Azevedo-Sa.
In the project, the researchers attempted to reproduce the process by which people learn which tasks can and cannot be entrusted to computational algorithms. The model forms an overall estimate of an agent’s capabilities, integrity, and other factors, and then compares that estimate with the requirements of the task intended for the agent. If the estimate shows the agent is more than capable of performing the task, the model deems the agent trustworthy for it; if the task’s demands exceed the agent’s estimated abilities, trust in the algorithm decreases.
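To make that comparison concrete, here is a minimal sketch of the capability-versus-requirement logic described above. It is an illustrative assumption, not the authors’ implementation: the names (AgentBelief, Task, trust_in_agent), the per-dimension capability scales, and the way margins map to a trust score are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AgentBelief:
    capabilities: dict  # hypothetical per-dimension estimates in [0, 1]

@dataclass
class Task:
    requirements: dict  # hypothetical per-dimension demands on the same scale

def trust_in_agent(belief: AgentBelief, task: Task) -> float:
    """Trust is high when estimated capability comfortably exceeds every
    task requirement, and drops as any demand approaches or exceeds ability."""
    margins = [
        belief.capabilities.get(dim, 0.0) - req
        for dim, req in task.requirements.items()
    ]
    worst = min(margins)  # trust is limited by the weakest dimension
    return max(0.0, min(1.0, 0.5 + worst))  # map the margin to a bounded score

# Example: an agent strong at lifting but weak at precision work
belief = AgentBelief(capabilities={"lifting": 0.9, "precision": 0.3})
easy = Task(requirements={"lifting": 0.4})
hard = Task(requirements={"lifting": 0.4, "precision": 0.7})
print(trust_in_agent(belief, easy))  # 1.0 -- well within ability, high trust
print(trust_in_agent(belief, hard))  # 0.1 -- precision demand exceeds ability
```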
Unlike other methods of predicting trust in agents, the proposed approach applies to both humans and robots. In testing, the researchers also found its predictions to be far more reliable than those of existing approaches.
“Previous methods attempted to predict the transfer of trust by evaluating task similarity based on the tasks’ verbal descriptions. They represented a big step forward for trust models, but they had some problems. For example, the tasks ‘lift a pencil’ and ‘lift a whale’ have very similar descriptions, yet they are actually very different,” the researchers said.
They added that their model can avoid this error by distinguishing the objects that the robot must pick up.
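The contrast can be sketched as follows. The representation here is assumed for illustration and is not the paper’s code: tasks are reduced to a verb plus an object mass, and the threshold in same_by_requirements is an arbitrary placeholder.

```python
# Verbal similarity sees "lift a pencil" and "lift a whale" as near-identical;
# a requirements-based view separates them by the object's properties.
pencil_task = {"verb": "lift", "object_mass_kg": 0.005}
whale_task = {"verb": "lift", "object_mass_kg": 50_000.0}

def same_by_description(a: dict, b: dict) -> bool:
    # Verbal-similarity view: identical verbs make the tasks look the same
    return a["verb"] == b["verb"]

def same_by_requirements(a: dict, b: dict, ratio: float = 10.0) -> bool:
    # Requirements view: compare what each task actually demands of the agent
    lo, hi = sorted([a["object_mass_kg"], b["object_mass_kg"]])
    return hi / lo <= ratio

print(same_by_description(pencil_task, whale_task))   # True  (misleading)
print(same_by_requirements(pencil_task, whale_task))  # False (masses differ by 10^7)
```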
The scientists hope that in the future the bidirectional model will prove useful for improving human–robot interaction in a range of settings. For example, it could help allocate tasks more efficiently among teams comprising humans and robot agents.
As reported in May 2021, researchers found that patients do not trust AI doctors that know their names.
In April, researchers from the University of Georgia found that, for a difficult task, people are more likely to trust an algorithm than another person or themselves.
