Machine Learning Demystified

An interview with a clinical informatics expert.

Big data. Artificial intelligence. Cognitive computing. Natural language processing. Machine learning. These terms have become the latest buzzwords in healthcare. But what do they mean?

In our March blog post, we focused on natural language processing (NLP), which is a technology that can be programmed to comprehend free text contained in, for example, medical records. In this post, we follow up to discuss machine learning and its applications. Once again, I spoke with Dr. Rob Kalfus, who is board certified in internal medicine and Health Fidelity’s Lead Clinical Informaticist, to learn more.


What is machine learning?

Machine learning is a type of artificial intelligence that, as its name suggests, provides computers with the ability to learn new information without being explicitly programmed. After the computer is exposed to a set of data, it “learns” and can then identify patterns, create associations, make predictions, etc.

How is Health Fidelity using machine learning?

The type of machine learning we use is association rule learning. Essentially, it is a method for discovering relationships between variables in very large datasets. This technique has been used in the retail sector for a long time. A simple example of this is when Amazon suggests other products that are commonly purchased with the one that you are currently shopping for.
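As a rough illustration of association rule learning (not Health Fidelity's actual implementation), the two standard quantities behind a rule like "laptop → mouse" are support (how often the items appear together) and confidence (how often the rule holds when its left-hand side is present). The item names and purchase data below are made up for the example:

```python
# Hypothetical "transactions": each is the set of items in one purchase.
transactions = [
    {"laptop", "mouse", "usb_hub"},
    {"laptop", "mouse"},
    {"laptop", "keyboard"},
    {"mouse", "usb_hub"},
    {"laptop", "mouse", "keyboard"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent): how often the rule fires correctly
    among the transactions where it applies at all."""
    return support(antecedent | consequent) / support(antecedent)

# Rule: customers who buy a laptop also buy a mouse.
# Laptops appear in 4 of 5 purchases; laptop+mouse in 3 of those 4.
print(round(confidence({"laptop"}, {"mouse"}), 2))  # 0.75
```

In healthcare analytics, the "transactions" would instead be patient records and the "items" would be medications, lab values, procedures, and diagnoses, but the arithmetic is the same.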

When applied to healthcare analytics, machine learning can determine which medications, lab values, procedures, or comorbidities tend to pair with certain conditions; those associations then become rules that flag potential gaps in chart documentation.

What makes Health Fidelity’s approach to machine learning unique?

Good question. A few things come to mind.

    1. Confidence scoring
Each of our clinical predictions is accompanied by a confidence score – i.e., the probability that a suspected condition is associated with other pieces of information that are known about the patient. These confidence scores help us determine which suspects are most likely to be correct. This is useful because suspects with the highest confidence scores and sufficient clinical indicators could be shown directly to the clinician, while suspects in the next highest tier of confidence scores could first be reviewed by the care team.
    2. Complex rules
Suspect algorithms can be based not only on one-to-one associations, such as insulin suspecting diabetes, but also on complex machine-learned rules – i.e., rules where multiple factors lead to a clinical prediction. Consider an example: the presence of three specific pieces of evidence in combination leads to a more confident prediction of heart failure than just one piece of information (the medication) alone.
The suspected diagnosis – I50.9 (Heart failure, unspecified) in this case – can also be grouped with the other heart failure codes to more accurately capture the association between these three pieces of evidence and the patient's likelihood of having heart failure, thereby increasing the confidence of the suspect. The machine learning algorithm could also take exclusionary criteria into account, such as conditions a patient does not have, to further increase the confidence score and, ultimately, the validity and confirmation rate of a suspect.
    3. Clinical data
      The example above uses only administrative data, and the confidence of that machine-learned suspect rule is not high enough to use on its own. Clinical data needs to be incorporated to improve the accuracy and validity of suspects. For example, the accuracy of the suspect rule above would improve if the NLP found evidence of “Diastolic Dysfunction” on an echo report (for a refined suspect target of diastolic heart failure), a finding of “edema” or “rales” in the physical exam of a physician’s progress note, and findings consistent with volume overload on a chest x-ray report – information that likely would not appear in administrative data. This unstructured data accounts for roughly 80 percent of patient information, and our NLP engine can accurately capture the clinical insights contained within the documentation. The machine learning algorithm can then determine confidence scores for rules that include this added clinical support, allowing better validation and refinement of the suspects and ultimately increasing the likelihood that a suspect will be confirmed.
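To make the three ideas above concrete, here is a minimal sketch of how confidence-scored, multi-factor suspect rules might be evaluated and tiered for review. The evidence names, diagnosis code usage, tier cutoffs, and confidence values are all hypothetical; a real system would learn the rules and scores from data rather than hard-code them:

```python
# Hypothetical machine-learned suspect rules: each lists the evidence that
# must all be present, the suspected ICD-10 code, and a learned confidence.
# A single weak indicator scores low; a multi-factor combination scores high.
RULES = [
    {"evidence": {"loop_diuretic"},
     "suspect": "I50.9", "confidence": 0.35},
    {"evidence": {"loop_diuretic", "elevated_bnp", "echo_reduced_ef"},
     "suspect": "I50.9", "confidence": 0.88},
]

# Illustrative review tiers, mirroring the workflow described above.
REVIEW_TIERS = [(0.8, "show to clinician"), (0.5, "route to care team")]

def suspects(patient_evidence):
    """Return (code, confidence, tier) for every rule whose evidence
    is fully present in the patient's record."""
    fired = []
    for rule in RULES:
        if rule["evidence"] <= patient_evidence:  # all factors present?
            tier = next((label for cutoff, label in REVIEW_TIERS
                         if rule["confidence"] >= cutoff), "suppress")
            fired.append((rule["suspect"], rule["confidence"], tier))
    return fired

print(suspects({"loop_diuretic", "elevated_bnp", "echo_reduced_ef"}))
```

With all three pieces of evidence present, both rules fire, but only the high-confidence combination clears the clinician-facing threshold; the single-factor rule stays below both cutoffs and is suppressed.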

What are common misconceptions about machine learning?

I think that some people might assume complete automation when they hear “machine learning.” In reality, the association rules generated by machine learning still need to be reviewed by a clinical expert and further refined with clinical support.

Final question – what are the limitations of machine learning?

While machine learning is great at discovering clinical associations within large datasets – that is, at a population level – it doesn’t tell you what’s happening for a given individual. We must keep in mind that each patient’s clinical situation is unique, and that a suspect rule should always include sufficient clinical indicators to support suspecting the condition.

Another key limitation of machine learning is that its performance is only as good as the data the system is given. If the dataset supplied for machine learning is small or not representative of the patient population of interest, the resulting rules won't be as accurate. As the volume of historical data grows, however, suspecting algorithms will keep improving.


Rob, thank you for helping me understand more about machine learning!