If an AI doctor knows your name and medical history, you are less likely to follow its advice.

Engineers strive to make our interactions with AI feel more human, but a new study suggests that a personal touch isn’t always welcome.

Researchers from Penn State and the University of California, Santa Barbara found that people are less likely to follow the advice of a medical AI that knows their name and medical history.

Their two-phase study randomly assigned participants to chat with bots that identified themselves as an AI, as a human, or as an AI-assisted human. The first part of the study was framed as a visit to a new doctor on an electronic health platform.

The 295 participants were first asked to complete a health form. They then read the following description of the doctor they were about to meet:

  • Human Doctor: Dr. Alex received a medical degree from the University of Pittsburgh School of Medicine in 2005. His areas of care include coughs, obstructive pulmonary disease, and respiratory problems. Dr. Alex says, ‘I strive to provide accurate diagnosis and treatment to patients.’
  • AI Doctor: Dr. Alex is a deep-learning-based artificial intelligence algorithm for detecting flu, lung diseases, and respiratory problems. The algorithm was developed by several research groups at the University of Pittsburgh School of Medicine using a large set of real-world data. In practice, Dr. Alex has achieved high accuracy in diagnosis and treatment.
  • AI-Assisted Human Doctor: Dr. Alex is a board-certified pulmonary specialist who received a medical degree from the University of Pittsburgh School of Medicine in 2005. The artificial intelligence medical system that assists Dr. Alex is based on deep learning algorithms for detecting flu, lung diseases, and respiratory problems.

Each chatbot was programmed to ask eight questions about COVID-19 symptoms and then offer a diagnosis and recommendations. About ten days later, participants were invited back for a second session and assigned a chatbot with the same identity as in the first session. This time, however, some received a bot that referred back to details of their previous interaction, while others received a bot that made no reference to their personal information.

After the conversation, participants completed a questionnaire evaluating the doctor and the interaction. They were then told that all of the doctors were bots, regardless of their stated identity.

The study found that participants were less likely to follow the advice of AI doctors that referred to their personal information, and were more likely to view those chatbots as intrusive. The reverse pattern was observed for chatbots presented as human.