AI in Healthcare: Finding the Right Answers Faster

Summary

Health systems rely on data to make informed decisions—but only if that data leads to the right conclusion. Too often, common analytic methods lead health systems to the wrong conclusions, resulting in wasted resources and worse outcomes for patients. It is crucial for data leaders to lay the right data foundation before applying AI, select the best data visualization tool, and prepare to overcome five common roadblocks with AI in healthcare:

1. Predictive Analysis Before Diagnostic Analysis Leads to Correlation but Not Causation.
2. Change Management Isn’t Considered Part of the Process.
3. The Wrong Terms to Describe the Work.
4. Trying to Compensate for Low Data Literacy Resulting in Unclear Conclusions.
5. Lack of Agreement on Definitions Causes Confusion.

As AI brings more efficiency and power to healthcare, organizations still need a collaborative approach, a deep understanding of data processes, and strong leadership to effect real change.

This report is based on a 2019 Healthcare Analytics Summit presentation given by Jason Jones, Chief Data Scientist Officer, Health Catalyst, entitled, “Getting to the Wrong Answer Faster: Shifting to a Better Use of AI in Healthcare.”

Data and analytics can be the driving force behind the successes or failures of a health system. To transform healthcare delivery, data is critical—but only if the data leads you to the right conclusion. Wrong conclusions within your analytics can cause suboptimal outcomes for patients and wasted attempts to utilize artificial intelligence (AI) in healthcare.

Commonly used analytic methods—particularly with AI in healthcare—can often lead analysts and leaders to unknowingly draw the wrong conclusions. Therefore, it is imperative that data leaders understand and leverage important strategies and tools to derive the right conclusions and recognize the wrong answers.

Data Stewardship—Key to Data Leadership

Leaders in healthcare have a significant amount of power with the use of AI/ML; yet, with the same data sets, different leaders can draw completely different conclusions. Therefore, data leaders and analysts have a responsibility to act as stewards of data and help colleagues and team members use data correctly so they can arrive at the right answers faster. It is not uncommon for data users to arrive at the wrong answers without realizing it because the answers still look aesthetically pleasing. That’s why data stewards play a key role in overseeing the appropriate use and display of data.

Laying the Right Foundation for AI in Healthcare

Before using any form of AI in healthcare, the key is to first identify the problem that needs to be solved. It is recommended that AI/ML projects have at least two years’ worth of historical data, a statistical process control chart that clearly identifies the population, and a clearly defined outcome.
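
As a rough sketch of that readiness check (not part of the original presentation), the Python below assumes a pandas DataFrame with hypothetical event_date and outcome columns: it verifies at least two years of history and computes simple three-sigma control limits for a monthly rate. A production SPC chart would use the chart type appropriate to the measure (for example, a p-chart for proportions).

```python
# Minimal data-readiness sketch: confirm two years of history and build
# basic control limits for a monthly outcome rate. Column names
# ("event_date", "outcome") are illustrative placeholders.
import pandas as pd

def readiness_check(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["event_date"] = pd.to_datetime(df["event_date"])

    # At least two years' worth of historical data
    span_days = (df["event_date"].max() - df["event_date"].min()).days
    if span_days < 2 * 365:
        raise ValueError("Need at least two years of historical data.")

    # Monthly rate of the clearly defined outcome for the defined population
    monthly = (
        df.set_index("event_date")["outcome"]
        .resample("MS")
        .mean()
        .to_frame("rate")
    )

    # Simple three-sigma control limits (an SPC-chart starting point)
    center = monthly["rate"].mean()
    sigma = monthly["rate"].std()
    monthly["ucl"] = center + 3 * sigma
    monthly["lcl"] = center - 3 * sigma
    monthly["out_of_control"] = ~monthly["rate"].between(monthly["lcl"], monthly["ucl"])
    return monthly
```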

One of the first mistakes organizations make with data and AI/ML is leveraging AI/ML without the proper foundational data, which inevitably leads to wrong conclusions based on insufficient data and to wasted resources that can take years to recover.

When organizations have laid a strong data foundation, AI/ML can take raw, tabular data (Figure 1) and turn it into something people can use to make decisions (Figure 3).

The Right Data Visualization Tools Drastically Change Data Interpretation

Another step in the journey to find the right answer is to understand which data visualization tool is right for your data sets. The right data visualization tool will radically change the way people consume, see, and then interpret data.

For example, if someone were interested in buying a house but heard there were higher rates of cancer in certain areas of the region, that person might want to view the cancer rates against location data to identify which areas have higher rates of cancer (and where to avoid buying a house).

The data could be displayed in myriad ways. In the example below (Figure 1), the cancer-by-location data is displayed in a tabular list that shows each location where cancer has been reported, grouped into geographic regions. For example, there are four reported cancers in region A01, two cancers in region A02, etc.

Figure 1. A tabular list of cancer diagnosis rates by region.
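
To make the tabular view concrete, here is a minimal pandas sketch (mine, not the presentation’s) that groups individual reports into per-region counts; beyond the four A01 and two A02 reports mentioned above, the region codes and locations are made up for illustration.

```python
# Group individual cancer reports into per-region counts, mirroring the
# tabular view in Figure 1. Values beyond A01/A02 are illustrative.
import pandas as pd

reports = pd.DataFrame(
    {
        "region": ["A01", "A01", "A01", "A01", "A02", "A02", "B01", "B01", "B01"],
        "location": ["loc1", "loc2", "loc3", "loc4", "loc5", "loc6", "loc7", "loc8", "loc9"],
    }
)

counts = reports.groupby("region").size().rename("reported_cancers")
print(counts)  # A01: 4, A02: 2, B01: 3
```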

Another way to display the same data is to use a scatterplot (Figure 2), a powerful data visualization tool that allows users to more easily consume and interpret data.

Figure 2. A scatterplot showing cancer diagnosis rates by region.
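
A minimal matplotlib sketch of the same idea appears below; the per-region counts are illustrative stand-ins rather than the data behind Figure 2.

```python
# Plot illustrative per-region cancer counts as a scatterplot, similar in
# spirit to Figure 2. The counts are made up for demonstration.
import matplotlib.pyplot as plt

regions = ["A01", "A02", "A03", "B01", "B02", "B03", "C01", "C02"]
counts = [4, 2, 0, 3, 1, 0, 5, 2]

plt.scatter(range(len(regions)), counts)
plt.xticks(range(len(regions)), regions)
plt.xlabel("Region")
plt.ylabel("Reported cancer diagnoses")
plt.title("Cancer diagnoses by region (illustrative data)")
plt.show()
```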

A person can clearly identify cancer-free regions, empowering data-driven decisions. The right visualization tool also allows users to take the data one step further (Figure 3) and identify trends, patterns, and clusters in the data to target opportunities for improvement.

Figure 3. A scatterplot of cancer diagnosis rates by region, highlighting cancer-free areas.

However, were the person to use AI/ML in addition to the visualizations above, she might find that there is an even better way to display the exact same data. In the bar chart below (Figure 4), overlaid with an algorithm that predicts cancer rates, the blue line represents the bell curve based on the actual data (cancer rates by geography), and the purple line (AI/ML) represents what would be expected if there were no relationship between cancer diagnoses and geography. Because the bell curve of the actual data aligns so closely with the AI/ML bell curve, there is strong evidence of no relationship between cancer diagnoses and region. Therefore, someone looking to buy a house in this region should not base the decision on cancer rates by region.

Figure 4. A bar chart showing cancer diagnosis rates by region.
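
The reasoning behind Figure 4 can be approximated with a simple simulation: scatter the same total number of diagnoses across regions at random many times and compare how spread out the simulated counts are to the observed counts. The sketch below is my illustration of that idea with made-up counts; it is not the algorithm used in the presentation.

```python
# Compare observed per-region counts with a "no relationship" model in
# which each diagnosis lands in any region with equal probability.
# Counts are illustrative, not the data behind Figure 4.
import numpy as np

observed = np.array([4, 2, 0, 3, 1, 0, 5, 2])
n_regions = len(observed)
total = observed.sum()

rng = np.random.default_rng(0)
n_sims = 5000

# Simulate region counts under the no-relationship model
sim_counts = rng.multinomial(total, [1 / n_regions] * n_regions, size=n_sims)

# Use the variance across regions as a measure of geographic clustering
sim_spread = sim_counts.var(axis=1)
obs_spread = observed.var()

share_as_spread_out = (sim_spread >= obs_spread).mean()
print(f"Observed spread: {obs_spread:.2f}")
print(f"Share of random simulations at least as spread out: {share_as_spread_out:.3f}")
# A large share means the observed pattern is consistent with chance, i.e.,
# little evidence that cancer diagnoses depend on region.
```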

Same Data, Different Conclusions—Which Conclusion Is Correct?

The cancer rates by region example illustrates that the data never changed, only the way the data was displayed. It is crucial for data leaders to understand how data visualization tools can drive people (or health organizations) to the right answer or the wrong answer.

For example, the East Africa Institute of Certified Studies (ICS) attempted to improve the academic performance of Kenyan kids in grade school. At first, ICS provided more books (rather than the one book for the entire classroom), flip charts, and teachers. The changes made no difference. In fact, ICS saw an increase in inequity. The initial hypothesis was that the kids who benefitted from the books were already high performers, and with their own books and more individual attention from teachers, the high performers would keep outperforming the low performers, increasing the disparity.

A leader at ICS mentioned the findings to a colleague at the World Health Organization (WHO), who suggested school absences due to worm-based illnesses might be the problem. It turned out that many of the students were missing a significant amount of classroom time due to worm infections. ICS decided to implement deworming days at school, an opportunity for students to safely seek treatment for their illnesses at school. Over time, ICS saw school absenteeism decrease by 25 percent and income levels increase by 20 percent over 10 years.

Although ICS’s first attempts at academic improvement led it to the wrong answers, its leadership collaborated with leaders at WHO and was willing to try something new in an attempt to get to the right answer. Because of its collaborative efforts, humility, and commitment to improving academic performance, ICS identified the right answer—deworming programs at schools—that caused massive change for Kenyan kids, both now and in the future.

Look Ahead: Five Common Roadblocks for AI in Healthcare

Although AI in healthcare seems ubiquitous, and even straightforward, there are common challenges. Five in particular arise as data analysts and leaders try to leverage AI/ML to get to the right answers:

  1. Predictive Analysis Before Diagnostic Analysis Leads to Correlation but Not Causation. In the Gartner Analytic Ascendancy Model (Figure 5), diagnostic analysis does not always have to precede predictive analysis. Predictive analysis is an exercise in correlation that does not reveal the why behind the correlation, and it is sometimes easier to tackle before diagnostic analysis, which tries to establish causation and understand why something happens. When trying to understand data, analysts sometimes become too focused on following the Ascendancy Model with exactness, insisting on working out the ‘why’ (the Diagnostic Analytics step) before the ‘what’ (the Predictive Analytics step). There are instances in which identifying the ‘what’ first will help someone identify the ‘why’, but it requires a flexible mindset.
Figure 5. The Gartner Analytic Ascendancy Model
  2. Change Management Isn’t Considered Part of the Process. In the Gartner Model above, the “prescriptive analytics” benchmark seems like a technical challenge; however, it is really a leadership challenge. At this point in an organization’s analytics journey, the changes leadership needs to make are obvious, but it takes unwavering leaders to implement new processes that effect real change. Without change management, the insights gained from analytics reach a dead end, and the work to identify opportunities for improvement is in vain.
  3. The Wrong Terms to Describe the Work. Rather than saying “We will evaluate your program” to measure a program’s success, data architects, analysts, and leaders should say “Let’s work together to optimize your program” because it engages team members and emphasizes collaboration, resulting in more effective, data-informed programs. Words like “work together” and “optimize” (instead of “We will evaluate…”) are more welcoming to team members and send a message that the analytics team is there to work with them, rather than judge or evaluate their work.
  4. Trying to Compensate for Low Data Literacy Resulting in Unclear Conclusions. To compensate for decision makers’ lower levels of data literacy, data architects often oversimplify information. Instead, they should leverage the features of AI/ML in “standard” reporting, such as adding confidence limits and computed forecasts, to facilitate interpretation and make conclusions clear and easier for leaders to understand (see the sketch after this list). Including more information, rather than less, provides more context, decreases guessing, and empowers leaders to make decisions based on a complete picture.
  5. Lack of Agreement on Definitions Causes Confusion. The idea of a single version of truth is illusory and does not exist in healthcare. Leaders need to pursue convergence of evidence, discuss all possible options, and then make an informed decision centered on that evidence-based discussion. If teams focus on agreeing on one single version of the truth, they will never progress past that point because it does not exist. That is why it is imperative to leverage the diversity of thinking that comes from multidisciplinary teams: brainstorm ideas, define the problem based on everyone’s input, and then implement changes to address that problem.
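
As a small illustration of roadblock 4, the sketch below reports a rate with a 95 percent Wilson confidence interval instead of a bare number; the readmission figures and the choice of interval are illustrative assumptions, not taken from the presentation.

```python
# Report a proportion with a 95% Wilson confidence interval so leaders see
# the uncertainty instead of an oversimplified point estimate.
# The readmission numbers are made up for illustration.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple:
    """Return the (low, high) 95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half_width = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half_width, center + half_width

low, high = wilson_interval(42, 480)  # e.g., 42 readmissions in 480 discharges
print(f"Readmission rate: {42 / 480:.1%} (95% CI {low:.1%} to {high:.1%})")
```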

AI in Healthcare Isn’t Enough Without Humans

AI/ML bring power, utility, and efficiency to the healthcare world, but they do not replace the invaluable role that humans play. Analytic processes require guidance from data leaders and stewards in order to draw the right conclusions.

Specific AI/ML tools and techniques are both useful and attainable, but they are not enough for healthcare organizations to arrive at the right answers. In order to eliminate the wrong answers faster and ultimately find the right answer, health systems need a collaborative approach, an understanding of data and analytic processes, and leaders who remove common barriers and stay focused on moving forward.

Additional Reading

Would you like to learn more about this topic? Here are some articles we suggest:

  1. Meaningful Machine Learning Visualizations for Clinical Users: A Framework
  2. Artificial Intelligence in Healthcare: A Change Management Problem
  3. Machine Learning Tools Unlock the Most Critical Insights from Unstructured Health Data
  4. How Artificial Intelligence Can Overcome Healthcare Data Security Challenges and Improve Patient Trust
  5. Healthcare Data Management: Three Principles of Using Data to Its Full Potential

PowerPoint Slides

Would you like to use or share these concepts? Download the presentation highlighting the key points.

Click Here to Download the Slides
