The Ethical Use of Patient Data

April 13, 2023

Contributors: Anne Kim, Miguel Morachimo and Christopher Scalchunes, MPA

Medicine should be developed and designed for everyone. Unfortunately, many of our medical treatments don’t account for the diversity of conditions in our communities. We’re missing out on positive health outcomes because our care options aren’t as representative as they need to be.

Our country has a long history of medical and clinical abuse and negligence when it comes to underrepresented populations. From using prisoners of war as test subjects to taking advantage of marginalized communities as guinea pigs for treatments, many questionable practices have been enacted in the name of medical research.

Look up terms such as “smallpox blankets,” “birth control testing in Puerto Rico,” “Henrietta Lacks cells” and “the Tuskegee syphilis study.” While such overt abuses have declined in today’s world, medical ethics is still incredibly relevant, and more important than ever.

The ethical question has shifted from the physical world of experimentation into the realm of patient data and decision-making around its usage. With this history of either malicious or negligent abuse, it’s no wonder that there’s so much hesitancy from underrepresented communities to participate in trials and contribute data.

We can’t make clinical data more representative without considering this history – and using it to inform how we adapt to new ethical questions around data implementation and usage. It’s an ongoing conversation that needs collaboration and iteration. Patient advocacy organizations and other non-profit health organizations will play a vital role in this conversation: as patient watchdogs, ethics educators and collaborators with the research community.

No solution is going to be perfect, so how do we future-proof solutions and avoid permanently damaging the data ecosystem? We’ll never be “done” with this conversation because new technology will continuously prompt new questions about our ethical standards. That’s why an active conversation is vital to staying ahead of the curve.

A Real-World Example: Clinical Trials and Juvenile Asthma

Asthma is estimated to be the most common chronic disease in the world, and Puerto Rican and African-American people are disproportionately affected by it. Unfortunately, albuterol, the active ingredient in the most commonly prescribed inhaler, isn’t as effective in these populations. Nevertheless, albuterol is often the only form of treatment available to minority children in lower-income environments.

So why is the most common treatment the least effective for the most affected populations? The simple, short answer is that we’re conducting studies and trials on the wrong subject groups. Most studies of lung disease and juvenile asthma have been performed on people of European American descent – revealing an inherent bias in the way we gather clinical data.

How do we balance data usage, medical ethics, and privacy?

Sure, the most privacy-preserving approach would be to store the data, lock it up and never use it after the initial study is done.

But as any patient advocacy organization understands, this wouldn’t be ethical either. Without examining the insights within the data, we can’t use it to advance medical research, break ground in efforts to cure rare diseases or change outcomes for minority and underrepresented demographics. It would be unethical not to use the data, or to ignore its idiosyncrasies. As clinical researchers, we have a responsibility to patients and communities to use their data for the greater good.

But what happens if the patient willingly provides their data but doesn’t understand the full breadth of how their information could be used?

Another Real-World Example: DNA Kits

Take genetic ancestry kits, for example. Consumers willingly purchase the product and submit their DNA for analysis. While they may be curious about their lineage and interested or concerned about potential health risks associated with their genetic makeup, they probably aren’t as focused on other risks, such as privacy. Once their DNA is sent off and received by the testing company, what happens to that incredibly sensitive information beyond just giving the customer their genetic and health assessment?

A few years ago, the FTC began investigating some of the major players in this space over concerns about how genetic data was being handled. Spurred by the long-awaited arrest of a notorious serial killer traced through genealogy data, the question of how ancestral data was being used was brought to the forefront. While solving a cold case was a remarkable outcome, it also prompted questions. Those who had submitted their DNA realized they didn’t understand the full scope of how their data could be used, or that third parties had access to it.

What happens if there is a hacking attempt or data gets unlawfully acquired? How do we ensure this data is truly secure for the patients? Should these companies be allowed to profit from the data of patients?

Practices to improve medical ethics with patient data

Patient privacy must be at the forefront of every decision we make. Patients have to be sure that their data is secure, private and only used for the right reasons. Our standards for these three considerations are constantly changing.

A patient’s data contains some of the most sensitive information imaginable. From medical history like prescriptions, sexual history and familial data to personal details like demographics and history of drug use, this data is now entering the digital space. While this presents an incredible opportunity for medical advancement and collaboration, it also underscores the immense focus that must be placed on the privacy, security and ethical usage and distribution of this information.

Given the sensitive nature of this information, it is no surprise that various laws and safeguards exist to protect patient privacy across the globe. Since these vary from country to country, it is important to understand the legal obligations and requirements associated with the data being used.

We also should consider the intricacies of these laws. In the US, the Health Insurance Portability and Accountability Act (HIPAA) is technically a disclosure regulation law, not a patient privacy law. HIPAA regulates how your medical information may be disclosed, with or without patient consent; consent is not needed when the data sharing falls under treatment, payment or healthcare operations.

That’s to say: a doctor can share your information and consult with a fellow practitioner regarding a recent injury or condition without needing to ask you, as this falls under the umbrella of treatment and care. But should they?
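
As a toy illustration of that carve-out, here is a minimal sketch in Python. The purpose labels and the disclosure_requires_consent helper are hypothetical simplifications made up for this post, not real compliance logic.

```python
# A toy sketch of HIPAA's "TPO" carve-out, as described above.
# The purpose labels and this helper are illustrative simplifications,
# not real compliance logic.

TPO_PURPOSES = {"treatment", "payment", "healthcare_operations"}

def disclosure_requires_consent(purpose: str) -> bool:
    """Disclosures for treatment, payment or healthcare operations (TPO)
    do not require separate patient authorization under HIPAA."""
    return purpose not in TPO_PURPOSES

print(disclosure_requires_consent("treatment"))   # False: no consent needed
print(disclosure_requires_consent("marketing"))   # True: authorization required
```

Real disclosure rules carry many more conditions and exceptions, but the shape of the logic is the point: under HIPAA, consent is the exception to check for, not the default requirement.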

Part of the challenge with data is knowing what actually needs to be collected, and what should be. Adopting a lean data practice can help medical organizations build trust with their patients and reduce the overall risk associated with their data.

The foundations of these practices include deciding whether the data being collected ultimately delivers value for its intended use, ensuring proper security measures for data use and access, and keeping patients informed on how and when their data might be used.
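
To make that concrete, here is a minimal sketch of a lean data intake filter in Python. The field names, study purpose and lean_intake helper are all hypothetical, chosen only to illustrate purpose-limited collection.

```python
# A minimal sketch of a "lean data" intake filter. The field names and
# study purpose are hypothetical -- not a reference implementation.

# Each collected field must be justified by the purpose it serves.
APPROVED_FIELDS = {
    "asthma_outcomes_study": {"age_range", "zip3", "diagnosis_code", "medication"},
}

def lean_intake(record: dict, purpose: str) -> dict:
    """Keep only the fields approved for the stated study purpose."""
    approved = APPROVED_FIELDS.get(purpose)
    if approved is None:
        raise ValueError(f"No approved field list for purpose: {purpose!r}")
    dropped = set(record) - approved
    if dropped:
        print(f"Dropping unneeded fields: {sorted(dropped)}")
    return {k: v for k, v in record.items() if k in approved}

patient = {
    "name": "Jane Doe",          # never needed for this study -> dropped
    "age_range": "10-14",
    "zip3": "006",               # coarse geography instead of a full address
    "diagnosis_code": "J45.40",
    "medication": "albuterol",
}
print(lean_intake(patient, "asthma_outcomes_study"))
```

The design choice worth noticing is that collection defaults to exclusion: anything not explicitly justified by the study's purpose never enters the dataset in the first place.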

Additionally, the medical community must find ways to ensure that patients can easily understand what’s happening with their data. We need straightforward communication, not 15 pages of legalese in a data agreement. That’s something many patient advocacy groups are taking on themselves, including the Immune Deficiency Foundation. IDF has built its own patient Bill of Rights, which distills complicated legal language into a single document that patients can wrap their heads around.

Understanding the risks of data usage

Moreover, just as we consider collective interests when carrying out clinical studies, we should think about the potential harm to communities as a whole when using their data.

For instance, data about the recent monkeypox outbreak revealed a higher number of cases among men who have sex with men. But we of course can’t assume that all members of that group are infected, or that they’re the only group exposed to monkeypox. Doing so would lead to unnecessary discrimination — not to mention a lack of focus on other potential at-risk groups.

If we go back to the asthma example from earlier, a collective data practice would have involved the most at-risk populations (in this case, Puerto Ricans and African Americans) in the design of the studies.

The exploitative use of collective health data, even when it’s not personally identifiable, can lead to discrimination, stigma, harassment and bias. Researchers should understand the impact certain data usage could have on a community. Stakeholders should have a clear say in how and why their data is used.

The Array Insights Solution

Ethics is hard. Even when we act with the best intentions, outcomes can still cross into gray areas at times. The individual(s) impacted may not be fully informed or have enough time or resources to take action and respond accordingly.

But this still is not an excuse not to try. Medical and patient data ethics will always be an ongoing conversation that needs collaboration and iteration. As long as we continue to prioritize patient privacy, we will continue to grow and get better.

Array Insights offers software for creating a Clinical Data Federation that connects patient advocacy groups, hospitals and clinical researchers. Array Insights’ turnkey clinical data research platform unlocks more expansive and representative patient data while using advanced security tools. Its secure data analysis solution connects the right sources and parties, allowing hospitals to share their patient data safely.

Through Array Insights’ clinical data federations, traditionally siloed data can be accessed and shared securely, helping to include underrepresented populations in clinical datasets and reducing the innate bias that comes from using the same datasets over and over. Array Insights harnesses analytics and machine learning on federated data – a methodology that helps preserve privacy because raw patient records stay where they were collected. It’s one of the many privacy components Array Insights uses to make it easier for hospitals to launch and participate in our unified patient registry.
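
To make the federated approach concrete, here is a minimal sketch of federated analytics in Python. This illustrates the general technique, not Array Insights’ actual implementation; the hospital datasets and helper functions are hypothetical. The point is that each site shares only an aggregate, never patient-level rows.

```python
# A minimal sketch of federated analytics with hypothetical site data.
# Each hospital computes a local aggregate; only the aggregate leaves the site.

hospital_a = [0.62, 0.55, 0.71]        # e.g., local treatment-response scores
hospital_b = [0.48, 0.52]
hospital_c = [0.66, 0.58, 0.49, 0.60]

def local_summary(records):
    """Runs inside each hospital; raw records never leave the site."""
    return {"sum": sum(records), "count": len(records)}

def federated_mean(summaries):
    """Runs at the coordinator; sees only aggregates, never patient rows."""
    total = sum(s["sum"] for s in summaries)
    count = sum(s["count"] for s in summaries)
    return total / count

summaries = [local_summary(site) for site in (hospital_a, hospital_b, hospital_c)]
print(f"Pooled mean response: {federated_mean(summaries):.3f}")
```

Production federated systems layer on much more (secure enclaves, differential privacy, audit logs), but the core privacy property is visible even in this sketch: the pooled statistic is computed without any site ever exporting an individual patient record.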

Schedule a demo to see how Array Insights is bringing PAOs and researchers together to enable disease-ending insights.

About the Contributors

Anne Kim is co-founder and CEO of Array Insights, which is based on her graduate work at MIT. She worked with Professor Pentland on federated learning and blockchain solutions for clinical trial optimization using Open Algorithms (OPAL). Outside of her research, Anne has led a number of projects in computer science and molecular biology, and has done cyberbiosecurity work with the EFF, ACLU, and DEFCON.

Miguel Morachimo is a program officer at the Mozilla Foundation, the not-for-profit behind Firefox that stands for a better web, working at the Data Futures Lab.

Christopher Scalchunes, MPA, is vice president of research at the Immune Deficiency Foundation, a non-profit that improves the diagnosis, treatment, and quality of life of people affected by primary immunodeficiency through fostering a community empowered by advocacy, education, and research.