January 15, 2019 · Sepehr Sadighpour

In healthcare, machine learning has a trust problem. How can we solve it?

Few innovations have shaken up the economic landscape like machine learning (ML). By automating a wide range of tasks and by bringing new precision and efficiency to information processing, machine learning offers an unprecedented opportunity to create value across industries.

At its most basic level, machine learning is the science of getting computers to process data with limited human input, learn from it, and progressively improve at making determinations or predictions.

As consumers, we are surrounded by and benefit from the results of machine learning on a daily basis. For years, leaders in the financial, consumer, and logistics industries have sought to marshal computing power to unlock insights from large, unstructured data sets. This is what allows Amazon to predict that an individual might want to order a particular item and JP Morgan to automate mortgage-approval processing.

But one industry remains a relative straggler in ML adoption — healthcare. Why?

Part of the hesitation arises from the industry’s perceived complexity. As the rhetoric goes, healthcare organizations face thornier problems and more complicated workflows than almost any other sector. There are literally lives at stake.

Two sources of mistrust
Decision-makers in healthcare aren’t anti-technology. Integrating cutting-edge technology into their work is part and parcel of their jobs. When it comes to ML, though, two specific concerns give them pause:

1. Data Quality

A truism from manufacturing also applies to machine learning: the quality of the outputs depends on the quality of the inputs. In manufacturing, those inputs are raw materials. For ML, the inputs are data.

Data quality is the single most important determinant of the usefulness of an ML product. And in healthcare, securing data and maintaining its integrity is particularly challenging. Issues include:

  • Multiple points of subjective human labeling (e.g., coding errors, upcoding)
  • Dueling systems of data classification (e.g., ICD-9 vs. ICD-10)
  • Lack of interoperability (e.g., multiple EHR vendors; segregated data sets from labs, diagnostic imaging, and doctors’ notes)
  • Tight constraints on data security (HIPAA requirements, proprietary data silos)
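To make the “dueling classification” problem concrete, here is a minimal sketch of the kind of sanity check a data team might run before training. The patterns and sample codes are illustrative assumptions, not a complete validator (real ICD-9 also includes V- and E-codes, for instance):

```python
import re

# Rough shape of the two coding systems (illustrative, not exhaustive):
# ICD-9 diagnosis codes are mostly numeric, e.g. 250.00;
# ICD-10 codes are letter-first, e.g. E11.9.
ICD9 = re.compile(r"^\d{3}(\.\d{1,2})?$")
ICD10 = re.compile(r"^[A-TV-Z]\d{2}(\.\w{1,4})?$")

def classify_code(code):
    """Guess which coding system a diagnosis code belongs to."""
    if ICD9.match(code):
        return "ICD-9"
    if ICD10.match(code):
        return "ICD-10"
    return "unknown"

# Hypothetical records mixing both systems -- exactly the inconsistency
# that silently degrades a training set if it goes undetected.
records = ["250.00", "E11.9", "401.9", "I10"]
systems = {code: classify_code(code) for code in records}
```

A check like this flags when a supposedly uniform data set actually straddles two classification systems, so codes can be mapped to one standard before any model sees them.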

2. “The Black Box”

In some consumer industries, a “black box” methodology works well. People don’t necessarily need to know how, for example, Netflix selects what’s in their recommendations queue. The teams reviewing the effectiveness of these models see that as users interact with their recommendations, data is fed back to the engine to improve the recommendations next time. While there is some amount of human oversight, as the algorithm processes more information it makes its own assumptions about which data is most important for making those recommendations.

In healthcare, however, we must be able to interrogate the why and how behind the results. Cancer patients will want to know why “a machine” decided they were not good candidates for a clinical trial; researchers are unlikely to accept conclusions that they’re unable to review; no doctor will react well to having their clinical judgment challenged by an algorithm that can’t explain why it arrived at a particular answer.

Solving the trust problem
For ML to be successful in healthcare, it’s not enough to be accurate or efficient. It must also inspire trust in its end users. ML processes in healthcare must therefore be both transparent and explainable.

The promise of ML in healthcare is to augment and improve upon human decision-making, but that can’t happen if the insights arrive with no explanation. On the other hand, when a clinician is able to drill down and interrogate the why behind insights at a granular level, they recognize their own patients and practice patterns in the data and are thus able to make meaningful changes to their behavior.
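One way to make that drill-down concrete is an additive model, where each factor’s contribution to a prediction can be itemized. The sketch below uses entirely hypothetical features and weights for a toy readmission-risk score; it only illustrates the shape of an explainable output, not any real clinical model:

```python
import math

# Hypothetical weights for a toy readmission-risk model (illustrative only).
# In an additive model, each feature's contribution to the score is
# inspectable, so a clinician sees *why* the risk is high, not just that it is.
WEIGHTS = {"age_over_65": 0.8, "prior_admissions": 0.6, "diabetes": 0.4}
BIAS = -2.0

def explain_prediction(patient):
    """Return the risk probability plus each feature's ranked contribution."""
    contributions = {f: WEIGHTS[f] * patient.get(f, 0) for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))  # logistic link
    drivers = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return probability, drivers

prob, drivers = explain_prediction(
    {"age_over_65": 1, "prior_admissions": 2, "diabetes": 1})
```

Here the output is not just a probability but a ranked list of drivers (two prior admissions contribute most in this example), which is the level of granularity that lets a clinician recognize their own patients in the result.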

Machine learning is complicated, but it doesn’t have to be confusing. We believe that to be effective, machine learning must work to solve the trust issue first. To see how Clarify deploys ML to optimize care and improve outcomes, read here.
