Ethical application of AI in healthcare in the spotlight
Photo: Laurence Dutton/Getty Images
Artificial intelligence in healthcare has the potential to improve patient outcomes, but with that comes an imperative to deliver AI-enabled products responsibly.
That means grappling with the ethical and regulatory considerations around data privacy. Enter Jody Long, director of clinical solutions at PointClickCare, who will raise the topic at the HIMSS24 global conference in a discussion around the principles of responsible AI.
“Ultimately what brought me to this topic was a passion to get clinicians to have a better understanding of AI and not be afraid of it and to build confidence in the use of the tools,” said Long. “AI is not replacing bedside practitioners, it is here to help them make better, more informed decisions – empowering their human connection by improving their clinical decision making with putting data in their hands within their workflows.”
The discussion will come at a crucial point for AI. It has shown promise in crunching large amounts of data and in informing clinical decisions and insights, but because those applications are so disparate and wide-ranging, it can be a complex landscape for healthcare leaders, who want to remain on the forefront of technological progress while applying the tech in an ethical manner.
Many healthcare leaders have been keen to adopt AI because of the perceived benefits – according to Medical Economics, one of the most significant of those benefits is improved diagnostic speed and accuracy, which can make it easier for providers to diagnose and treat diseases. Using AI to analyze X-rays, MRI scans and other medical images, for example, can identify patterns and anomalies that a human might miss.
AI algorithms can also provide real-time data and recommendations, helping providers respond quickly to potential emergencies, and they can assist in managing chronic conditions.
The technology also has a potential role to play in increasing access to care, such as in the case of telehealth, which – with an AI boost – can provide remote consultations and diagnoses, eliminating the need for patients to travel.
However, Medical Economics pointed out that there can be potential risks, particularly when it comes to security and privacy. One of the biggest risks is the potential for data breaches, since large quantities of patient data are often targets for cybercriminals. Other AI-specific attacks include data input poisoning – in which a bad actor inserts corrupted data into a training set, skewing the model’s output – and model extraction – in which an adversary gathers enough information about the algorithm to create a substitute model.
According to Statista, the healthcare AI market, valued at $11 billion in 2021, is projected to be worth $187 billion in 2030. Better machine learning algorithms, more access to data, cheaper hardware and 5G connection speeds are all contributing to the increased application of AI within healthcare.
“As a clinician and end-user of many technology platforms, it is important for healthcare providers to be educated on the principles of responsible AI and how technology partners are investing in the production of these new innovative tools,” said Long. “Hearing how we have thoughtfully and purposefully approached AI is a great start for those wanting to understand the foundations and the impacts of our learnings in this space.”
Her session, “Responsible AI to Improve Patient Outcomes,” is scheduled for March 12 from 10:30-11:30 a.m. in Room W208C at HIMSS24 in Orlando.
Jeff Lagasse is editor of Healthcare Finance News.
Email: jlagasse@himss.org
Healthcare Finance News is a HIMSS Media publication.