Mitigating Machine Bias

May 6, 2021

Artificial intelligence has the power to transform our society. It can be used by school administrators to identify at-risk students and intervene to help. It can help doctors detect diseases and cancers. It could even help alleviate caregiver shortages for seniors with dementia. Yet such benefits do not come without risk. In its 2018 report, Artificial Intelligence: A Roadmap for California, the Little Hoover Commission assessed the potential advantages and pitfalls of AI, and recommended a series of steps the state should take to develop strategic leadership. Now, a new report from the Greenlining Institute, a California-based non-profit focused on advancing economic opportunity and empowerment for people of color, echoes many of the Commission’s concerns, and highlights the continued importance of our recommendations.

Simply put, AI empowers machines to think like people. While this enables AI to detect dangers and identify solutions beyond human capabilities, it also means that machines utilizing AI are prone to bias. Engineers building AI systems – many of whom are male, educated, and highly compensated – can inadvertently pass along their own prejudices, assumptions, and preconceived notions about the very topic their AI program will analyze. Absent a rich diversity among AI innovators, the Commission wrote in its report, the technology could fail to achieve the economic, social, and environmental benefits it otherwise could.

According to the Greenlining Institute’s new report, AI biases across a number of policy areas – healthcare, education, finance, housing and development, employment – disproportionately impact the marginalized communities that need these services the most. The results are devastating, as the Institute highlights in its report:

  • An algorithm designed to help a hospital system identify patients needing additional medical attention recommended White patients for resources while failing to do so for equally sick Black patients.  

  • Online banking algorithms designed to combat racial discrimination in mortgage lending charge White borrowers less than Black and Brown borrowers. Each year, borrowers of color are overcharged by $765 million compared to equally qualified White borrowers.

  • Algorithms used to classify and map neighborhoods by market strength and investment value encouraged disinvestment in distressed city areas, leading Detroit and Indianapolis to withhold much-needed investment from poorer neighborhoods of color.  

Mitigating the harmful effects of AI bias begins with greater transparency and accountability of AI algorithms, the Institute concluded, along with allowing algorithms to access data on race and gender. Further, algorithms should be optimized for equity, a strategy that “requires a shift away from designing algorithms that maximize profit and return on investment and towards algorithms that optimize for shared prosperity, racial justice and resilient ecosystems.”

The Little Hoover Commission expressed similar concerns in Artificial Intelligence: A Roadmap for California. Algorithmic bias can amplify racial discrimination and harmful stereotypes about marginalized communities, the Commission found, reflecting prejudices deeply embedded in our society. To encourage transparency and accountability of AI programs, the Commission urged California to take targeted action to improve AI outcomes.  

First, policymakers must be educated about AI algorithms – how they are created, tested, and used. Home to the nation’s leading research universities, California must stimulate research and development into AI technologies within an ethical framework that promotes AI for economic, social, and environmental good.

Second, California should improve its data collection, particularly data on jobs and skills for future workers, and share this data across all parts of state government, provided privacy is assured. Many experts in private industry have called for the use of broader, more reliable government data in AI algorithms as a way to mitigate biased outcomes. Doing so will improve the likelihood that state policies focus on equity, mitigate income inequality, and end the unfair distribution of government benefits, funding, and services based on impermissible discrimination.

In addition to these recommendations related to equity, the Commission also made a series of other recommendations, including the appointment of a cabinet-level gubernatorial adviser on the topic, the designation of a chief AI officer in each agency, and the development of strategic plans regarding AI.

As California increases investment in AI in the years to come, it must seek to maximize the advantages and mitigate the shortcomings, including inadvertent bias, of the algorithms that AI programs use. The Commission urges policymakers to consider these recommendations for unbiased, equity-focused AI that will benefit all of California.
