Friday, Oct 22, 2021

AI in Healthcare: Barriers



Artificial Intelligence: Once science fiction, now transforming healthcare

Healthcare has improved rapidly over the last 20 years, raising life expectancy around the world. However, aging populations have consequently placed increasing strain on healthcare services. Managing these patients is expensive and requires healthcare systems to focus on long-term care management – versus episodic care management.

As many healthcare market research agencies will testify, artificial intelligence has the potential to revolutionise healthcare and help address this challenge. It is already successfully being used in areas such as disease detection and diagnosis, although there are still barriers preventing the expansion of AI in healthcare.

The three main barriers to AI in healthcare:

1. Regulations

Firstly, there is the challenge of regulation. Different markets have their own governing bodies, so for the purposes of this blog, let’s narrow the focus to one: the US.

In April 2019, the FDA published a discussion paper which sparked debate around what regulatory frameworks should be in place for the modification and use of AI in the medical environment.

At the start of this year, the FDA issued a new action plan which built on that debate, laying out its planned approach to regulating software as a medical device that utilises AI or ML (machine learning).

According to FDA guidelines in the US, AI software programmes and devices are most likely to fall under Class III.

Class III is defined as high risk. It represents roughly 10% of medical devices on the market and is the primary category artificial intelligence systems fall into, as they can pose serious threats to patients if they malfunction.

Whilst most AI software programmes and devices serve to assist medical professionals, it is difficult to say whether these devices will end up overriding the judgement of health professionals.

This leads us onto the next hurdle: Patient and provider trust. Even if the FDA does approve these medical devices, will they be trusted?

2. Patient and provider trust

AI innovation is everywhere in our lives, and sometimes we don’t even notice it. Whilst it is relatively harmless in most cases, trusting AI to provide accurate health recommendations is far more complicated.

There have been numerous examples across industries where AI has struggled. Specific to healthcare, IBM’s Watson for Oncology (an AI-powered supercomputer) promised to revolutionise the treatment of cancer.

However, according to a STAT investigation into the technology, it is not living up to its promises and is still struggling to differentiate between different forms of cancer. Moreover, hospitals outside of the US complain that the machine’s advice is biased towards American patients and methods of care.

Whilst the technology is still in its infancy, IBM has not published any scientific papers demonstrating how the technology affects patients and providers, making it harder for providers to trust it.

Both providers and patients want to understand why certain treatments have been recommended, and because machine learning algorithms are far too complicated for the average user to understand, the ‘why’ is missing. It is no surprise that patients trust the opinion of a human doctor over a machine.

It is vital that manufacturers of AI and ML are transparent about how the technology works, its data sources, the benefits, and its limitations.

Understanding the ‘why’ behind AI and machine learning is complex, so it is important both to help patients understand how AI can support their care and to convince providers that these machines can be trusted.

3. Privacy concerns

Related to this issue of trust are concerns around privacy and cybersecurity. First, consider patient data: there are already tight regulations governing how it can be shared and used.

In some use cases, it might be possible to anonymise the data enough to let the AI do its work. However, other areas may be more problematic, such as image-dependent diagnoses like ultrasounds.
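To illustrate why anonymising data is only sometimes ‘enough’, here is a minimal pseudonymisation sketch in Python. The field names, record shape, and salted-hash scheme are all hypothetical, not from the article, and real de-identification regimes (such as HIPAA’s Safe Harbor rules) involve far more than this:

```python
import hashlib

# Hypothetical direct identifiers to strip; real rules list many more fields.
DIRECT_IDENTIFIERS = {"name", "address", "phone"}

def pseudonymise(record, salt):
    """Drop direct identifiers and replace the patient ID with a salted hash.

    Pseudonymisation is weaker than true anonymisation: re-identification
    may still be possible by combining the fields that remain.
    """
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    token = hashlib.sha256((salt + str(record["patient_id"])).encode()).hexdigest()
    out["patient_id"] = token
    return out

record = {"patient_id": "12345", "name": "Jane Doe",
          "phone": "555-0100", "diagnosis": "hypertension"}
clean = pseudonymise(record, salt="s3cret")
```

The salted hash keeps records linkable across datasets without exposing the raw identifier, which is exactly why image-heavy data is harder: an ultrasound has no single identifier field to strip.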

Secondly, as AI grows in capability, so will cyberattacks. Techniques like advanced machine learning, deep learning, and neural networks enable computers not only to look for patterns in data but also to find and exploit vulnerabilities.

AI can also be part of the solution. Already, advanced machine learning techniques combined with cloud technology analyse huge amounts of data and identify threats in real time. AI can identify hotspots where cyberattacks have originated and generate cybersecurity intelligence reports.
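The pattern-spotting idea behind such threat detection can be sketched with a deliberately simple statistical filter. The metric, counts, and threshold below are all hypothetical; real intrusion-detection systems use far richer models than this:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Return indices of points more than `threshold` standard deviations
    from the mean — a toy stand-in for the ML-based detectors the article
    alludes to."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) > threshold * sigma]

# e.g. hourly login-failure counts on a hypothetical hospital network
counts = [4, 5, 3, 6, 4, 5, 4, 250, 5, 4]
suspicious = flag_anomalies(counts)  # flags the 250-failure spike
```

The same principle — model what ‘normal’ looks like, then flag deviations — underpins both defensive monitoring and, as the article warns, automated probing for vulnerabilities.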

In conclusion

AI is still in its infancy in the healthcare industry, and we are constantly learning more about what AI can offer. We’re also learning about its limitations. AI cannot replace human doctors, but it has a range of capabilities to assist in clinical decision making. It is capable of picking up on complex patterns that can only become apparent when patient data is viewed in aggregate, something that would be unreasonable to expect a doctor to recognise.

While there are several other barriers to AI and ML that have not been discussed in this article, patient and provider trust is one of the biggest. As long as trust issues hold patients and providers back, the widespread adoption of AI in healthcare remains just out of reach.


Header Image: metamorworks, Shutterstock

The post Barriers to AI in Healthcare first appeared on GreenBook.


By: Martha Wyatt
Title: Barriers to AI in Healthcare
Published Date: Wed, 13 Oct 2021 11:00:49 +0000
