
Removing Data Bias from AI and Machine Learning Tools in Healthcare: A White Paper

Removing data bias in healthcare

Healthcare is rapidly evolving into a data-driven science, using data to make decisions and guide clinical care at every opportunity. Two developments are driving that transformation: 1) new ways of processing data, especially AI and machine learning, and 2) the incorporation of new types of data such as patient payment claims, social determinants of health, device data and genomics. Through data we have the potential to fundamentally improve the healthcare system. Yet we also know that racial minorities and people living in poverty tend to receive lower-quality healthcare than non-Hispanic Whites and people with higher levels of disposable income and accumulated wealth.

In the world of data science there have already been concerning episodes where bias has crept into healthcare algorithms, even when the creators tried hard to ensure data integrity. In one case, reported by Obermeyer and colleagues, data bias occurred because the algorithm used health costs as a proxy for health needs. Because less money is typically spent on Black patients with the same level of need, the algorithm concluded that Black patients were healthier than they in fact were.
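To make the mechanism concrete, the minimal sketch below simulates the proxy problem with synthetic data and scikit-learn. The cohort, groups and model are illustrative assumptions, not a reconstruction of the actual algorithm Obermeyer studied: training a "risk" model to predict cost rather than need lets a historical spending disparity flow straight into the scores.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic cohort: 'need' is the true (unobserved) health need;
# group B historically receives less spending at the same level of need.
need = rng.gamma(shape=2.0, scale=1.0, size=n)
group_b = rng.integers(0, 2, size=n)             # 0 = group A, 1 = group B
spend_factor = np.where(group_b == 1, 0.6, 1.0)  # historical under-spending on group B
cost = need * spend_factor + rng.normal(0, 0.1, size=n)

# Observable inputs: a noisy clinical signal of need, plus prior
# utilization, which encodes the historical spending disparity.
clinical_signal = need + rng.normal(0, 0.3, size=n)
prior_visits = need * spend_factor + rng.normal(0, 0.2, size=n)
X = np.column_stack([clinical_signal, prior_visits])

# The proxy mistake: train the "risk" model to predict cost, not need.
risk_model = LinearRegression().fit(X, cost)
risk_score = risk_model.predict(X)

# Among patients with the same (high) true need, group B receives
# systematically lower scores, so fewer clear a care-management threshold.
high_need = need > np.quantile(need, 0.75)
for g, label in ((0, "group A"), (1, "group B")):
    scores = risk_score[high_need & (group_b == g)]
    print(f"{label}: mean risk score among high-need patients = {scores.mean():.2f}")
```

Run as written, the equally sick patients in group B score lower simply because less was historically spent on them, which is exactly the failure mode described above.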

The key challenge then is how we introduce and unlock the benefits from data science and machine learning in healthcare in a way that combats data bias, rather than exacerbating already present and widespread inequity.

Use of AI and Machine Learning in Clinical Care

Algorithms are used throughout medicine and have been for decades. For instance, when a primary care practitioner decides what dose of medicine to prescribe for a 4-year-old child, they use one or more standard algorithms based on the child's age, and possibly on other factors such as weight or renal function, to decide on a safe, effective dose for the child to take.
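As a simple illustration of what such a rule-based dosing algorithm looks like in code, the sketch below computes a weight-based dose with an upper cap. The mg-per-kg rate and maximum here are hypothetical placeholders for an unnamed drug, not clinical guidance.

```python
def pediatric_dose_mg(weight_kg: float,
                      mg_per_kg: float = 15.0,      # hypothetical rate, not clinical guidance
                      max_dose_mg: float = 500.0) -> float:
    """Weight-based dose for an illustrative drug, capped at a maximum."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return min(weight_kg * mg_per_kg, max_dose_mg)

# e.g., a 16 kg four-year-old -> 240 mg under these illustrative parameters
print(pediatric_dose_mg(16.0))
```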

More recently, thanks to improved digitization of medical records, the evolution of health information exchanges (HIEs) and significant advances in computing power, artificial intelligence techniques have advanced significantly. We have reached a point where AI and machine learning techniques can assess very large volumes of data almost in real time and provide the outputs to clinicians while they perform their traditional role of providing patient care.

The development and implementation of data science and machine learning in population health management, as well as in the care of individual patients, places a powerful new technology in the hands of clinicians and managers. As with any powerful technology, it is essential that end users understand how the technology can help and are aware of any potential issues that may arise from its use. As clinical tools, AI and machine learning can provide real-time guidance about the care of patients, with the expectation that clinicians will take account of that guidance in the management of their patients as part of their workflow.

In the clinical care world, clinicians are well educated with respect to the rights of their patients and ethical approaches to care. They understand the need to handle all patient interactions with respect, based on the ethical frameworks and unique requirements of their local jurisdiction. An important and well-recognized challenge arises when a "black box" makes detailed recommendations to clinicians, and the individual clinician must decide whether or not to accept the recommendation. Clearly, we need new approaches and frameworks to ensure outcomes that are consistent with clinicians' ethical mandates and to ensure we don't perpetuate existing biases or inequities.

There is a growing discussion around how we manage and process data in clinical decision-making. The need for legislative improvements to address the safe handling of personal health information, including data governance, has been recognized in initiatives such as the recent updates to HIPAA in the U.S. and the General Data Protection Regulation (GDPR) in Europe. While AI and machine learning tools offer significant opportunities to advance healthcare and the practice of medicine, they do come with risks, and it's important that those risks are recognized and addressed as much as possible.

"To err is human, but to really foul things up you need a computer."

- Paul R. Ehrlich

Ensuring Efficient Tools Result in Equitable Decisions

AI and machine learning are powerful tools for the improvement of healthcare, yet at the same time they have the potential to perpetuate already built-in, often system-wide, inequalities and to add new and hidden biases. Automating the logic could mean making large numbers of biased decisions more efficiently. The most egregious international examples of unrecognized data bias in AI and machine learning include decisions on child welfare and criminal sentencing implicitly based on race. In the U.S., Black Americans and other minorities frequently experience disproportionately negative outcomes, an equity gap that could be exacerbated by biased algorithms.

One of the major sources of data bias in AI and machine learning is the use of limited original data sets. HIEs offer a chance to reduce that bias because they typically contain large amounts of data, potentially available for AI or machine learning purposes.

AI and machine learning tools developed against large data sets, combined with high-quality governance and oversight processes, can be deployed and used safely, keeping the risk of data bias within acceptable limits. Furthermore, once rigorously developed and tested, unbiased AI and machine learning tools can themselves serve the cause of reducing systemic injustice, precisely because the recommendations they produce are unbiased.

A related challenge comes from the use of convolutional neural networks and similar approaches, where the reasoning is effectively done in a "black box" and the relationship between inputs and outputs cannot be easily explained. The importance of preventing, detecting and removing data bias in the AI/machine learning space has led to the emerging fields of explainable AI and interpretable machine learning.
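One widely used, model-agnostic interpretability technique is permutation importance, illustrated in the sketch below with scikit-learn on a synthetic cohort. The features, data and model are illustrative assumptions. Shuffling one input at a time and measuring the drop in accuracy reveals which inputs an otherwise opaque model actually relies on, for instance a proxy feature such as prior cost.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5_000

# Synthetic cohort: the outcome is driven by age and blood pressure,
# with prior healthcare cost included as a tempting proxy feature.
age = rng.normal(50, 15, n)
systolic_bp = rng.normal(130, 20, n)
prior_cost = 0.5 * systolic_bp + rng.normal(0, 10, n)   # correlated proxy
y = ((0.03 * age + 0.05 * systolic_bp + rng.normal(0, 2, n)) > 9).astype(int)

X = np.column_stack([age, systolic_bp, prior_cost])
names = ["age", "systolic_bp", "prior_cost"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does held-out accuracy drop when each
# feature is shuffled? Large drops expose what the black box leans on.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:12s} {imp:.3f}")
```

An audit like this does not fully explain the model, but it gives reviewers a first, quantitative answer to the question "what is this recommendation actually based on?"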

Building a Framework for Humane Technology

The problem of developing ethical, unbiased algorithms is well recognized internationally. One promising model for reducing unexpected data bias comes from the AI Forum of New Zealand, which developed a framework for the governance of AI and machine learning. The framework was developed in part from the realization that AI introduces the potential for algorithms to be designed by other algorithms, making it ever more difficult to validate recommendations and challenging the existing (though initially very limited) approaches to oversight.

Part of the framework, the Algorithm Charter for Aotearoa New Zealand, outlines a commitment to transparency, partnership, data, people, privacy, ethics, human rights, and clinical oversight. It is designed to ensure that the deployment of AI and machine learning in New Zealand does not introduce new bias, and it represents a significant attempt to ensure the software is truly humane technology.

Applying the Charter to New Zealand Healthcare’s Use of Algorithms

An example of the Algorithm Charter in practice is its application to a project in New Zealand: the New Zealand Algorithm Hub.

Orion Health launched a platform for health-related algorithms developed in the health sector to be shared and made available across the entire New Zealand health system, with the potential for rapid deployment internationally where appropriate. The concept started as a COVID-19 challenge. The initial idea was to give providers access to best-practice, standardized guidelines for all aspects of COVID-19 treatment and management, including the care of individual patients, resource allocation and epidemiological modelling of disease spread. In addition to supporting the care of individual patients, the hub is used by national and local pandemic response teams for scenario planning to fine-tune New Zealand's successful containment of COVID-19.

There are many questions that the predictive modelling capabilities now available could answer, or at least help to answer. For example, if someone contracts COVID-19, how likely are they to need hospitalization? Or intensive care? Or to die? How likely are they to develop the long-term complications that we increasingly recognize? This information can be invaluable for individuals, families, clinicians and coordinators as they seek the best possible path for treatment and management.

AI and machine learning tools can make a difference, and this technology enables widespread access. The key question faced by the hub team was which algorithms to deploy.

Informed by the Algorithm Charter and AI principles, the team initiated a governance process around the selection and deployment of algorithms recommended for use by clinicians and healthcare managers. A standardized set of questions is asked before any algorithm is recommended for use, covering the way the algorithm was developed, how it is intended to be used, and how it could possibly be misused. The process draws upon consumers, Māori (members of New Zealand's indigenous population), ethicists, lawyers, clinical experts, data scientists and policy professionals to contribute to the decision-making. Supporting better decisions means asking hard questions before proceeding, to avoid any potential for patient harm.
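One lightweight way to operationalize such a question set is to encode it as a structured review record that tracks what the multidisciplinary panel has and has not yet answered. The sketch below is hypothetical: the questions paraphrase the themes named above and are not the hub's actual questionnaire.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmReview:
    """Hypothetical record of a pre-deployment governance review.

    The question set paraphrases the themes described in the text
    (development, intended use, potential misuse, local validation);
    it is not the New Zealand Algorithm Hub's actual questionnaire.
    """
    name: str
    answers: dict = field(default_factory=dict)

    QUESTIONS = (
        "How was the algorithm developed, and on what data?",
        "How is it intended to be used, and by whom?",
        "How could it plausibly be misused or cause harm?",
        "Has it been validated for the local population?",
    )

    def outstanding(self):
        """Questions the review panel has not yet answered."""
        return [q for q in self.QUESTIONS if q not in self.answers]

review = AlgorithmReview(name="covid19-hospitalisation-risk")  # hypothetical algorithm
review.answers[review.QUESTIONS[0]] = "Trained on national HIE data."
print(review.outstanding())  # deployment waits until this list is empty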

One of the keys to good governance is to make it part of the validation and release process, rather than simply a gatekeeper role. When balancing potential benefits against potential harms, it can be tempting to avoid all risk. However, a collaborative governance process improves the likelihood of positive outcomes by sharpening the guidance given to users of the content.

Another key question to ask is whether an algorithm has been validated for the New Zealand population. Given New Zealand's unique mix of ethnicities and underlying conditions, the appropriateness and accuracy of models developed offshore cannot be guaranteed. Especially in the case of COVID-19, where New Zealand has experienced very few cases, it was not always possible to test for accuracy on the local population. This led to strong guidelines being issued, allowing the country to be prepared if an outbreak were to occur.

We recently took a surgical risk calculator developed in the U.K. and recognized that it was underestimating risk for some within the New Zealand population. We supported the development of an alternative surgical risk calculator, nzRISK, whose model of surgical mortality was specifically tuned to the local New Zealand population, reversing the previous bias by tuning for the most vulnerable groups, including Māori.
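A check like the following can surface that kind of underestimation before deployment. This is a minimal, synthetic sketch of a subgroup calibration audit; the group labels, rates and data are illustrative assumptions, not the nzRISK methodology.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Synthetic cohort: an offshore model's predicted mortality risk,
# alongside observed outcomes that run higher for one subgroup.
group = rng.choice(["group_x", "group_y"], size=n, p=[0.7, 0.3])
predicted = rng.uniform(0.01, 0.20, size=n)           # model's predicted risk
true_rate = np.where(group == "group_y", predicted * 1.5, predicted)
observed = rng.random(n) < true_rate                  # simulated observed outcome

# Compare mean predicted vs. observed event rate within each subgroup.
# A ratio well above 1 means the model underestimates risk for that group.
for g in ("group_x", "group_y"):
    mask = group == g
    pred_mean = predicted[mask].mean()
    obs_rate = observed[mask].mean()
    print(f"{g}: predicted {pred_mean:.3f}, observed {obs_rate:.3f}, "
          f"observed/predicted = {obs_rate / pred_mean:.2f}")
```

When a subgroup's observed rate runs well above its predicted rate, as group_y's does here by construction, the model is a candidate for local retuning of the kind nzRISK represents.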

Newly developed clinical tools such as AI and machine learning are best used for clinical and population decision-making when placed in context as one of many tools available to help payers and providers make better decisions. Their proper place is alongside validated clinical evidence, and their effectiveness should be tested in similar ways to any new technology or treatment. Combining scientific evidence with data science is a best practice for applying AI and machine learning to healthcare.

The New Zealand experience of combining technology and governance to ensure that algorithms benefit their intended subjects in a local setting can be applied in any region. COVID-19 provided a catalyst for nationwide adoption, but all areas of health can benefit from similar processes for assessing and deploying some of the most significant advances that AI is bringing.

Conclusion

When applied to healthcare, AI and machine learning tools bring with them a risk of perpetuating old biases, and even introducing new ones, into clinical and population-level decision-making. Nevertheless, by taking due care, it is possible to reduce and eliminate data bias, with the result being a solution that not only avoids bias but can also actively help to reduce the impact of systemic bias from other causes, including the inequitable treatment of population subgroups. Done properly, data science approaches have the potential to address these challenges at scale.

Adhering to a charter is akin to agreeing to a code of practice, and our data science community will serve our patients well by holding ourselves to high standards as we develop, train, select and deploy algorithms to support human decision-making.

The views and opinions expressed in this content or by commenters are those of the author and do not necessarily reflect the official policy or position of HIMSS or its affiliates.
