McKinsey Study Advises On Prevention Of AI Bias

Artificial intelligence is irrefutably playing a growing role in the technological revolution of the 21st century. Many companies look to AI and machine learning for data correlations and predictions that can give important insight on a plethora of processes. However, if left unchecked, human bias has a tendency to transfer to the objects of our creation, and AI is not immune to this phenomenon.

According to a new report from consulting group McKinsey, entitled Booting Out Bias: How To Derisk Advanced Analytics Models In The Public Sector, there are steps public agencies should take to minimize bias in AI and ML. McKinsey states that although all AI and ML models carry the potential to malfunction due to the complexity of the underlying algorithms, agencies must remain deliberate about how software is operated, since misuse and infrequent re-evaluation of outcomes can lead to degradation.


Because of how data has been collected over the years, and because marginalized communities are under-represented in that data, AI malfunction is likely to negatively affect the most vulnerable subsets of society. This has, unfortunately, already occurred in many instances of AI use in the public sector. Advanced analytics models have sentenced people of color more harshly, accused low-income and immigrant families of fraud at a much higher rate, and assigned lower grades to students from under-resourced neighborhoods.

To reduce the risk of bias, McKinsey stated, public agencies should outline a model for accountability when dealing with AI. This entails designating a senior leader who is in charge of risk management and who bears enhanced accountability for the final results of advanced analytics outcomes. Appointing an ombudsman to act as a spokesperson for external stakeholders may also be a valuable step, according to the consulting group.

Developing clear and efficient guidelines on standards and practices is also critical. These should include protocols for handling certain outcomes, peer and empirical review processes for detecting bias, and a diverse team of individuals to address problems from different perspectives. Agencies should also convene algorithm review panels that meet regularly to examine software discrepancies and take collective accountability when problems do arise.

However, one of the biggest steps the public sector should take, according to McKinsey, is building transparency and education agency-wide rather than confining them to specific areas. The more employees are versed in AI, the less likely they are to let a mistake slip by. With proper utilization, AI has the potential to advance our understanding of society without inhibiting its growth.