As artificial intelligence (AI) permeates the healthcare industry, analytics leaders must ensure that AI remains ethical and beneficial to all patient populations. In the absence of a formal regulatory or governing body to enforce AI standards, it’s up to healthcare professionals to safeguard ethics in healthcare AI.
AI’s use in support of the pandemic response promises enormous payoffs. However, ensuring its ethical implementation may prove challenging if healthcare professionals are not familiar with the accuracy and limitations of AI-generated recommendations. Understanding how data scientists build algorithms, what data those algorithms rely on, and how to interpret their output is critical to using AI in a meaningful and ethical manner to improve care delivery. By adhering to best practices for healthcare AI, health systems can guard against bias, ensure patient privacy, and maximize efficiencies while assisting humanity.
This article is based on a Healthcare Analytics Summit (HAS 20 Virtual) presentation by Tom Lawry, National Director for AI, Health & Life Sciences, Microsoft, titled, “The Ethics in AI in Health: Creating Value that Benefits Everyone.”
Artificial intelligence has already permeated many industries, and healthcare isn’t far behind. However, while AI has great potential for good in healthcare, analytics leaders must take deliberate steps to ensure AI remains ethical and benefits all patient populations.
AI excels at tasks that involve pattern recognition and correlation. However, no analytics application can replicate the human element: people skills such as reasoning, imagination, and empathy. While AI can improve care delivery, healthcare still needs human oversight to ensure it does the greatest good for the greatest number of people. Analytics leaders can best leverage AI by first understanding the principles of ethics in AI and adopting best practices to support those principles.
Responsible analytics stewardship will determine how AI impacts the healthcare ecosystem. Analytics leaders can ensure their AI is fair and ethical by implementing best practices that support the following principles: fairness, accountability, transparency, and ethics.
Three best practices help healthcare AI users meet the above principles: apply ethical standards while regulation catches up, understand each algorithm’s accuracy and limitations, and account for bias through transparency and human oversight.
As in the early days of the internet, healthcare is just beginning its AI journey. The internet initially had no privacy laws, lawyers, or governing bodies to regulate and enforce its usage; specialists and policies emerged only after it went mainstream.
Likewise, AI’s impact has outpaced legislators and regulators, making the application of ethical principles even more critical while the industry catches up. Practicing medicine without AI guidelines and regulatory oversight can result in unintended harm to patients or the health system. The same federal laws that restrict the release of medical information, such as HIPAA, need to apply to AI and the data fueling it. Further, healthcare AI users should consider data types not yet covered by privacy laws but frequently used by AI, such as genetic testing results.
While AI aims to lighten the provider workload and improve patient care, it can cause unfairness or harm if staff don’t fully understand its limitations or accuracy. In particular, healthcare AI users need to know how to separate correlation from causation and recognize when disparity may skew an algorithm’s output.
For example, when a health system used a common AI algorithm to predict which patients might need follow-up care, it found that 82 percent of the patients flagged were white, while only 18 percent were Black. The algorithm assumed that higher healthcare spending correlated with worse health. And since white Americans tend to spend more on healthcare than Black Americans, even in equivalent circumstances, the algorithm classified white patients as “more ill.” After developers adjusted the algorithm to remove spending as a predictive input, the rates shifted to 53 percent white and 47 percent Black.
The algorithm’s developers have since fixed it, and organizations have applied its predictive output across millions of patients. Its initial design, however, shows how disparity can unintentionally enter AI. As such, ethical AI requires a level of human reasoning and understanding to ensure fair results that benefit all humanity.
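To make the mechanics of this failure concrete, the following minimal Python sketch simulates the proxy-label problem described above: two groups with identical health needs, one of which spends less on care for the same level of need. Ranking patients by spending under-selects the lower-spending group, while ranking by true need does not. The group labels, numbers, and access factor are illustrative assumptions, not figures from the study.

```python
# Hypothetical simulation of the proxy-label problem: two groups have
# identical health needs, but group B spends less on care for the same
# level of need. Ranking by the spending proxy under-selects group B.
import random

random.seed(42)

def make_patient(group: str) -> dict:
    need = random.gauss(50, 15)              # true health need
    access = 1.0 if group == "A" else 0.6    # assumed: B spends less per unit of need
    spending = max(need, 0) * access
    return {"group": group, "need": need, "spending": spending}

patients = [make_patient("A") for _ in range(5_000)] + \
           [make_patient("B") for _ in range(5_000)]

def share_of_group_b(ranked: list, top_n: int = 1_000) -> float:
    flagged = ranked[:top_n]                 # patients flagged for follow-up care
    return sum(p["group"] == "B" for p in flagged) / top_n

by_spending = sorted(patients, key=lambda p: p["spending"], reverse=True)
by_need = sorted(patients, key=lambda p: p["need"], reverse=True)

print(f"Group B share, ranked by spending proxy: {share_of_group_b(by_spending):.0%}")
print(f"Group B share, ranked by true need:      {share_of_group_b(by_need):.0%}")
```

Although both groups are equally sick by construction, the spending-ranked list flags almost no group B patients, mirroring how a reasonable-sounding proxy can quietly encode disparity.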
If healthcare professionals can’t examine what drives an algorithm’s predictions, overriding flawed AI-generated recommendations becomes ever more challenging. Transparency into how a system calculates its recommendations allows teams to adjust the algorithms for the good of all.
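One practical route to that transparency is to favor models whose drivers can be read directly. The sketch below, a hypothetical example rather than any specific vendor’s method, fits a logistic regression on standardized synthetic data so reviewers can see how strongly each input pushes a prediction; the feature names and ground truth are assumptions for illustration.

```python
# A minimal transparency sketch: standardized logistic-regression
# coefficients show how much each (hypothetical) input drives the score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["prior_spending", "num_chronic_conditions", "age"]
X = rng.normal(size=(2_000, 3))
# Assumed ground truth: follow-up need is driven by chronic conditions
# and age, not by prior spending.
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=2_000)) > 0

model = LogisticRegression().fit(StandardScaler().fit_transform(X), y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>24}: {coef:+.2f}")  # reviewers can see what drives the score
```

A near-zero coefficient on prior_spending would reassure reviewers that spending is not silently steering the recommendation, which is exactly the kind of check the flawed follow-up-care algorithm needed.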
Additionally, healthcare AI users should always take sources of bias into account. For example, the data feeding AI often reflects only the patients a health system already serves, so the absence of data from underserved populations is likely to skew results. And continuous-learning AI systems (those that grow smarter and more refined as new data arrives) may require regular human evaluation to ensure their output still provides the intended clinical results.
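That regular evaluation can be partly systematized. The sketch below shows one hypothetical form such a check might take for a continuously learning system: after each retraining cycle, compare the share of patients flagged across demographic groups and route the batch to a human reviewer when the gap exceeds a tolerance. The field names and threshold are assumptions for illustration.

```python
# A minimal subgroup-audit sketch for a continuously learning system:
# compare flag rates across groups and alert a human reviewer when the
# gap widens past an assumed tolerance.
from collections import defaultdict

MAX_GAP = 0.10  # assumed tolerance for the between-group flag-rate gap

def flag_rates_by_group(predictions: list[dict]) -> dict[str, float]:
    totals, flagged = defaultdict(int), defaultdict(int)
    for p in predictions:
        totals[p["group"]] += 1
        flagged[p["group"]] += p["flagged"]
    return {g: flagged[g] / totals[g] for g in totals}

def needs_human_review(predictions: list[dict]) -> bool:
    rates = flag_rates_by_group(predictions)
    return max(rates.values()) - min(rates.values()) > MAX_GAP

batch = [
    {"group": "A", "flagged": True}, {"group": "A", "flagged": False},
    {"group": "B", "flagged": False}, {"group": "B", "flagged": False},
]
print(flag_rates_by_group(batch))  # {'A': 0.5, 'B': 0.0}
print(needs_human_review(batch))   # True: the 0.5 gap exceeds tolerance
```

An automated gate like this doesn’t replace the human judgment the article calls for; it simply decides when that judgment must be applied before the model’s output reaches clinicians.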
As the use of AI grows, it will touch all healthcare workers and patients at some level. By understanding how algorithms derive their predictions, what data fuels them, and what their results mean for patient care, healthcare professionals can work with AI responsibly. Until regulations and safeguards are in place, it is up to healthcare professionals to ensure that AI delivers fair, accountable, transparent, and ethical results to all. Following these principle-driven best practices to keep AI-driven systems understandable and transparent will help improve quality and enhance care delivery.