ForkLog

Study finds patients distrust AI doctors who know their names


People are less likely to follow the advice of an AI ‘doctor’ that knows their name and medical history, according to researchers at the University of Pennsylvania and the University of California, Santa Barbara.

The experiment involved 295 participants, whom the researchers randomly assigned to chatbots presented as either a human doctor, an artificial intelligence, or a human doctor working with an AI assistant.

Each bot was programmed to ask eight questions about COVID-19 symptoms and, at the end of the conversation, to offer coronavirus self-diagnosis guidance developed by the US Centers for Disease Control and Prevention.

Ten days later, participants were invited to a second session with the same ‘doctor’ they had spoken to in the first part of the experiment. In some cases the bot actively referenced the patient’s personal information and medical history; in others it did not.

After the chat, participants were given a questionnaire to evaluate the doctor and their interaction. Only then were they told that all the ‘doctors’ were bots, regardless of their persona.

The study found that patients were less likely to heed the advice of AI doctors who used personal information, and were more likely to find the chatbot intrusive.

“Personalisation by AI is likely to be seen as an excuse, i.e., not a sincere attempt at care and closeness,” says the study.

However, according to the researchers, the opposite pattern held for chatbots presented as humans: when the supposedly human doctor repeatedly asked for the patient’s name and medical history instead of remembering them, patients perceived it as intrusive.

The team hopes the study will lead to better-designed medical chatbots and could also improve the quality of online interactions between doctors and patients.

In April, researchers from the University of Georgia found that people tend to trust an algorithm to solve a complex task more than another person or themselves.

In March, British scientists developed an AI-based COVID-19 screening tool that can diagnose the virus with 98% accuracy by analysing the sound of a cough.

Subscribe to ForkLog’s news on Telegram: ForkLog AI — all the news from the world of AI!
