At a Northwell Health event on artificial intelligence (AI) in healthcare, two McKinsey partners gave a panel talk offering examples of how AI can be harnessed to address care inequities, while also warning about the technology's potential for bias.
Senior Partner Lucy Pérez and Associate Partner and former academic physician Melvin Mezue framed barriers to healthcare equity through the lenses of accessibility, affordability, quality of care, and supportive social context. They described the ways that AI could be leveraged to resolve challenges in all four areas, from personalizing the care experience to assisting care providers in determining when a patient is ready for discharge.
Pérez and Mezue suggested that providers begin by centering equity within the organization, prioritizing equity efforts and making them a core part of operations. They also suggested engaging with stakeholders affected by health inequity before designing an AI solution. Doing so can help avoid instances of bias like the recently discovered issues with pulse oximeters, which researchers determined were three times more likely to overestimate oxygen levels in severely ill Black COVID-19 patients.
Technology in general carries a strong potential for bias, because biased design choices can be unintentionally built in and then deployed at scale in sensitive applications. Predictive policing algorithms and automated job-application screening tools, for instance, have both been found vulnerable to bias introduced by flawed training data and data sampling.
According to Mezue, organizations also need to carefully analyze their population data and develop governance frameworks as part of a deliberate plan to uncover and rectify disparities.
“That’s where things start to get a little bit more difficult,” he said. Ingesting and understanding data is not always straightforward and can get expensive. But “there are things you can do without a lot of money,” Mezue stressed in a post-panel talk with Fierce Healthcare, like asking the right questions and designing the right solutions.
In the panel discussion, the pair described what asking the right questions entails: considering how AI could be biased in serving specific populations, verifying that the technology supports patient consent and ownership of health data, aligning AI use with the organizational mission, establishing boundaries for its use along with a dedicated oversight group, and setting up risk controls.
As with many new technologies, early adopters face both opportunities and risks, but with strong planning and intentional development and deployment, healthcare organizations can ensure that AI helps while doing no harm.