Machine Learning Has Enormous Potential For Patient-Centric Groups — But It’s Not Without Risks

July 19, 2023

By Kanchana Padmanabhan

Kanchana Padmanabhan is Vice President of Engineering & Product at Array Insights. Dr. Padmanabhan, who holds a Ph.D. in Computer Science from North Carolina State University, brings deep experience in productizing machine learning solutions that serve the data needs of clients. During her Ph.D. work she developed biomarkers for Alzheimer’s disease. Prior to Array Insights, Dr. Padmanabhan was director of machine learning at Kinaxis (after the company acquired Rubikloud), where she delivered solutions for large CPG and retail clients. 

Artificial intelligence is making an impact on every industry — and especially in the world of healthcare. 

For clinical research, the most prominent application of AI might be machine learning (ML), which Google Cloud aptly defines as “a subset of artificial intelligence that automatically enables a machine or system to learn and improve from experience.” In most cases, machine learning involves the use of algorithms to analyze large swaths of data and pull actionable insights to help human operators make informed decisions.

Before joining Array Insights, I spent a decade implementing machine learning models for large organizations in the social media and supply chain management industries. Developing clinical data solutions with Array Insights is a different animal altogether: machine learning in the world of healthcare comes with its own set of considerations.

Data is essential for medical breakthroughs. When researchers can access insights from rich volumes of patient data, they have a greater opportunity to discover cures and treatments. ML’s ability to distill large datasets fits naturally into this equation.

Machine learning stands to help many patient-centric organizations accelerate their research goals while keeping patient privacy at the forefront. However, the use of ML models also brings about various questions regarding bias, data ownership and other thorny topics. The medical community is still navigating the answers, but it’s essential to understand the intricacies of ML before implementing it at scale.

Potential uses of ML for patient-centric groups

It can be hard to grasp just how useful ML is, and could be, in the context of healthcare. Here are four potential applications that apply specifically to patient advocacy and non-profit health organizations:

Analyzing large volumes of patient data

This is an opportunity for humans and machines to complement each other. As long as patients give their permission, ML models can analyze millions of patient EHRs to help accelerate research. ML can work with multi-modal datasets (such as clinical data and imaging) and automatically learn nuances for different segments based on patient demographics, age, disease subtype and more. It can take on the manual data analysis, leaving human researchers more time to pursue the insights that ML has uncovered.
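
To make that concrete, here is a minimal Python sketch of the idea: one model trained on two modalities at once, tabular clinical measurements plus image-derived features. Everything in it, from the feature names to the numbers, is synthetic and purely illustrative.

    # Minimal sketch: learning from multiple data modalities at once.
    # All data is synthetic; feature names are hypothetical placeholders.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000

    # Modality 1: tabular clinical data (age, a lab value, disease subtype).
    clinical = np.column_stack([
        rng.normal(65, 10, n),       # age
        rng.normal(1.0, 0.3, n),     # hypothetical lab measurement
        rng.integers(0, 3, n),       # disease subtype, encoded 0-2
    ])

    # Modality 2: features extracted from imaging by an upstream step.
    imaging = rng.normal(0, 1, (n, 5))

    # Synthetic outcome loosely tied to both modalities.
    signal = 0.03 * clinical[:, 0] + 1.5 * clinical[:, 1] + imaging[:, 0]
    y = (signal + rng.normal(0, 1, n) > np.median(signal)).astype(int)

    # Concatenate the modalities and train a single model across both.
    X = np.hstack([clinical, imaging])
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))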

Early diagnoses

Researchers rely heavily on biomarkers, i.e., measurable substances that help predict how a condition might progress. Machine learning could help researchers by uncovering “computational biomarkers,” or trends within data that serve as similar predictors of future health.

It’s possible that we could figure out the patterns of a disease at an earlier stage (when treatment is easier and more effective) because ML models can crunch a lot of data and variables together.

This relates closely to my Ph.D. research on computational biomarker discovery for Alzheimer’s disease. Using clinical, cognitive and genetic data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), we studied how the presence of mild cognitive impairment (MCI) related to future Alzheimer’s diagnosis.
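
As a purely illustrative sketch of that kind of analysis (this is not the ADNI work itself; the data is synthetic and the feature names are invented placeholders), one could train a simple conversion classifier and inspect its coefficients as candidate computational biomarkers:

    # Illustrative only: predicting conversion from MCI to Alzheimer's from
    # clinical, cognitive and genetic features. Synthetic data, placeholder names.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n = 1000
    features = ["memory_score", "hippocampal_volume", "apoe4_carrier", "age"]

    X = np.column_stack([
        rng.normal(25, 5, n),      # hypothetical cognitive test score
        rng.normal(3.0, 0.5, n),   # hypothetical imaging-derived volume
        rng.integers(0, 2, n),     # hypothetical genetic risk indicator
        rng.normal(72, 8, n),      # age
    ])
    # Synthetic label: converted to Alzheimer's within the follow-up window.
    signal = -0.2 * X[:, 0] - 1.0 * X[:, 1] + 1.2 * X[:, 2] + 0.05 * X[:, 3]
    y = (signal + rng.normal(0, 1, n) > np.median(signal)).astype(int)

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    print("Cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

    # Coefficients hint at which features act as "computational biomarkers."
    clf.fit(X, y)
    for name, coef in zip(features, clf.named_steps["logisticregression"].coef_[0]):
        print(f"{name}: {coef:+.2f}")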

Risk stratification tools

On a similar note, machine learning tools can help patients assess their own risk for certain conditions. ML can analyze large volumes of data and give a patient a risk assessment based on that patient’s own health data.

Array Insights is already partnering with a leading patient advocacy organization to make this a reality. Earlier this year we announced our partnership with the Fatty Liver Foundation on the development of its AI Fatty Liver Risk Stratification Tool. The tool utilizes a machine learning model that our team trained using data from a major FLF study. Early testing showed that the tool registered a 77% detection rate for NAFLD, an early-stage liver condition. Best of all, the tool will be available for patient use, increasing its accessibility.
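
To give a sense of what sits under the hood of a tool like this, here is a generic risk stratification sketch on synthetic data. It is not the FLF tool or its model; the inputs, threshold and numbers are hypothetical.

    # Generic risk-stratification sketch on synthetic data. Not the FLF tool;
    # the inputs and the 0.5 threshold are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import recall_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    n = 3000
    # Hypothetical inputs a patient might supply or pull from their records.
    X = np.column_stack([
        rng.normal(30, 6, n),      # BMI
        rng.normal(100, 15, n),    # fasting glucose
        rng.normal(40, 25, n),     # ALT (a liver enzyme)
        rng.integers(0, 2, n),     # type 2 diabetes diagnosis
    ])
    signal = 0.1 * X[:, 0] + 0.02 * X[:, 1] + 0.03 * X[:, 2] + X[:, 3]
    y = (signal + rng.normal(0, 1, n) > np.quantile(signal, 0.7)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # A patient-facing tool would surface a risk score; the detection rate
    # (sensitivity) is measured on held-out data.
    risk_scores = model.predict_proba(X_te)[:, 1]
    flagged = (risk_scores >= 0.5).astype(int)
    print(f"Detection rate (sensitivity): {recall_score(y_te, flagged):.2f}")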

Patient story management

A core part of every patient-centric organization’s role is to serve as a voice for their patient community. However, it can often be difficult to stay engaged with patients; many organizations receive thousands of emails every year, each of which contains potentially vital information.

ML and generative AI have the potential to help streamline the communication process. Organizations could use these tools to help tag, respond to and process messages from their patients. This could help add qualitative data to existing clinical data sets, and also assist patient advocates in identifying stories and bios for grant writing and sponsorship opportunities.
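
As a rough sketch of how such tagging could work, the example below trains a small text classifier on a handful of invented messages and uses it to route a new one. The messages, tags and model choice are all illustrative assumptions; a real deployment would involve far more labeled data and human review.

    # Sketch: tagging incoming patient messages by topic so advocates can
    # triage them. The messages and tags below are invented examples.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # A handful of hand-labeled messages used to train the tagger.
    messages = [
        "My mother was just diagnosed and we don't know where to start",
        "I'd be happy to share my story for your newsletter",
        "Are there any clinical trials recruiting near Chicago?",
        "Thank you for the support group, it changed my life",
        "What side effects should I expect from this treatment?",
        "I want to volunteer and tell my story at the gala",
    ]
    tags = ["support", "patient_story", "trial_inquiry",
            "patient_story", "support", "patient_story"]

    tagger = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    tagger.fit(messages, tags)

    # New messages get routed based on the predicted tag.
    incoming = ["Is there a study I could join in my area?"]
    print(tagger.predict(incoming))   # predicted tag, e.g. ['trial_inquiry']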

Challenges to implementing ML in healthcare

For all of ML’s potential, it also raises questions:

Bias

ML learns from data, and data carries human bias. It’s possible for that bias to make its way into these models, as we saw with the recent legal action taken against Workday for alleged bias in AI models that analyzed job applicants. Given healthcare’s long history of discrimination and mistrust, what happens when bias becomes encoded in models that process information 100x faster than humans?

Explainability

Researchers will need to work on answering the question: “Why did we get the result we got?” Some large language models have on the order of billions of parameters; understanding how all of those parameters work together is not simple.
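
One common starting point, though certainly not a complete answer, is to measure how much each input feature contributes to a model’s predictions. The sketch below uses permutation importance on a synthetic dataset as an example of that kind of probing.

    # One way to start answering "why did the model say that?": permutation
    # importance on a trained model (synthetic data below).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(3)
    X = rng.normal(0, 1, (500, 4))
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 500) > 0).astype(int)

    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    for i, importance in enumerate(result.importances_mean):
        print(f"feature_{i}: {importance:.3f}")   # higher = model relies on it more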

Model training

Models are dependent on the data they’re given. These models learn from the context and patterns in that data alone. ML systems won’t learn from other models unless they’re explicitly set up to do so. Thus, two models for the same use case could behave differently and provide different outcomes. Two human researchers could come to different conclusions, but they can more easily work together to achieve better outcomes. ML models aren’t quite there yet.
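
A tiny demonstration of the point: the sketch below trains two copies of the same model architecture on different halves of a synthetic dataset, then measures how often they disagree about the same new cases.

    # Two models with the same architecture, trained on different samples of
    # data, can disagree about the same patients. Synthetic data throughout.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(4)
    X = rng.normal(0, 1, (1000, 5))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 1, 1000) > 0).astype(int)

    # Each "research group" sees a different half of the data.
    model_a = DecisionTreeClassifier(random_state=0).fit(X[:500], y[:500])
    model_b = DecisionTreeClassifier(random_state=0).fit(X[500:], y[500:])

    # Compare predictions on the same new cases.
    new_cases = rng.normal(0, 1, (200, 5))
    disagreement = (model_a.predict(new_cases) != model_b.predict(new_cases)).mean()
    print(f"Models disagree on {disagreement:.0%} of new cases")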

Self-diagnoses

When patients search for their symptoms or condition on a search engine, they encounter links. It’s up to the patient to read this information and come to their own conclusion. ML and generative AI models such as ChatGPT often produce direct answers, which aren’t always 100% accurate. Patient-centric organizations might want to focus first on understanding how these models respond to questions about their disease area, and then on educating patients about how to validate the answers they receive.

Patient-centric data practices

Perhaps the largest issue to tackle relates to data ownership and privacy-preserving practices. We’ll have to answer questions such as:

  • Who is responsible for owning and monitoring the data model?
  • What can patients do to remove their data from the model if they no longer wish to participate?
  • How can we ensure models don’t accidentally share confidential patient information?

The entire AI community is grappling with data privacy concerns, and the healthcare world will find that debate to be exceptionally prominent.

Responsible ML implementation for patient-centric organizations

Patient-centric organizations have an opportunity to be stewards and shepherds of ML implementation. They can serve as a bridge between ML tools and their patients, helping them reap the extraordinary benefits of ML while keeping the needs of the patient as the primary driver. We must remember that although these models seem “human,” they are simply algorithms and math tools designed to help humans process and understand incredible amounts of data.

Array Insights is excited to continue implementing and creating ML solutions to serve patient-centric groups and their patients. As an agnostic software and service provider, Array Insights uses a pioneering form of patient-centric AI technology — machine learning and analytics on federated data — to ensure that patient data stays contained, and all data uses are logged and auditable by the patient advocacy organization.
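
For readers curious about the general shape of federated analysis, here is a heavily simplified sketch (not Array Insights’ implementation): raw data never leaves each site, only aggregates travel, and every computation is recorded in an audit log.

    # Heavily simplified illustration of federated analysis: sites return only
    # aggregates, and each computation is written to an audit log.
    import numpy as np

    rng = np.random.default_rng(5)
    sites = {f"site_{i}": rng.normal(100, 15, rng.integers(50, 200)) for i in range(3)}
    audit_log = []

    def local_summary(site_name, values):
        """Runs inside the site's environment; raw values never leave it."""
        audit_log.append(f"count/sum computed at {site_name} ({len(values)} records)")
        return len(values), float(values.sum())

    # The coordinator only ever sees the per-site aggregates.
    summaries = [local_summary(name, vals) for name, vals in sites.items()]
    total_n = sum(count for count, _ in summaries)
    federated_mean = sum(total for _, total in summaries) / total_n

    print(f"Federated mean across {total_n} records: {federated_mean:.1f}")
    print("Audit log:", *audit_log, sep="\n  ")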

Check out more information about our risk stratification tool with FLF or reach out to us directly to hear how we can help your organization deploy ML to accelerate your research goals.