Source: www.patientengagementhit.com
A survey by Propeller Insights for Carta Healthcare reveals that American patients are divided over the use of artificial intelligence (AI) in healthcare. Of 1,027 surveyed, 49% are comfortable with AI use by healthcare providers, but 51% are not. However, when AI’s potential to improve diagnostic accuracy is considered, 51% become comfortable. Despite AI’s growing presence in healthcare, 75% of patients lack trust in it, citing concerns about data privacy and impacts on patient-provider relationships. Many patients admit to limited understanding of AI in healthcare, with 80% emphasizing the importance of AI education. AI chatbots show promise in patient interactions, yet trust in AI remains an issue.
American patients are split over whether they are comfortable with the use of artificial intelligence (AI) in their healthcare experience, according to survey data from Propeller Insights on behalf of Carta Healthcare, although many patients admit there’s more they need to learn about the technology. Among the 1,027 adult patients surveyed, 49 percent said they were comfortable with their healthcare provider using AI, while 51 percent were not. When asked to consider how AI could improve diagnostic accuracy, the numbers shifted: 51 percent said they were comfortable and 42 percent uncomfortable with their providers using AI.
These numbers come as the potential for artificial intelligence in healthcare builds. Data has shown that AI can improve care for patients by boosting diagnostic accuracy, while generative AI and tools similar to ChatGPT have opened doors both for providers searching for patient information and for patients seeking medical advice that already time-strapped doctors may not have time to give.
AI is starting to proliferate across the healthcare market, the Propeller Insights and Carta survey showed, with the researchers contending that 100 percent of provider offices are using the technology. But patients are wary of healthcare AI, both in terms of trusting it and even knowing much about it.
Overall, 75 percent of patients do not trust AI in a healthcare setting, with three in five saying they aren’t sure their provider would be able to use the technology properly. Two in five respondents indicated that their provider could offer better information than AI could, while a third said the two would be on par, and a quarter said AI would outperform their provider. Regardless of the accuracy of AI-communicated information, patients are worried about their data privacy, with 63 percent saying they are concerned that the use of AI in healthcare would put their health information at risk.
Others are worried about how AI would affect their patient-provider relationships, which most respondents reported are currently good. Three-quarters of respondents said their providers already share empirical scientific data about their conditions, 62 percent reported a good experience with the healthcare system, and 61 percent said they have good access to their health data.
Just under half (46 percent) said their healthcare visits have gotten longer in the past two years, a good sign for a meaningful patient-provider relationship. But 63 percent fear that the use of AI in healthcare would change that by decreasing the amount of face time they get with their clinicians.
Still, patients know there are gaps in their understanding of AI in healthcare (43 percent said their knowledge is limited), and that could change how they feel about the technology. In fact, 80 percent of respondents said their knowledge of AI use in healthcare is important for improving their comfort levels. Around two-thirds said healthcare providers should explain how they use AI to make patients feel more comfortable with the technology.
Preliminary data has shown that AI has benefits for the patient experience. Notably, AI-powered chatbots can supplement how patients communicate with their healthcare providers, many of whom are time-pressured and unable to quickly respond to every patient portal query.
Using generative AI and large language models, chatbots have proven effective for patient education, as some studies have shown. In a separate report, researchers found that AI chatbots were able to communicate empathy in patient portal messages when providers could not. This isn’t because providers lack empathy, the team cautioned; rather, clinicians are often pressed for time in a way chatbots clearly are not.
Still, patient trust in healthcare artificial intelligence is lacking, as data separate from the Propeller Insights and Carta Healthcare report has shown. A report from the University of Arizona Health Sciences showed that around half of patients don’t fully trust AI-powered medical advice, such as the information generated by chatbots like ChatGPT. Instead, patients still trust and prefer their traditional healthcare providers.