Researchers say it's time to crack open AI 'black boxes' and look for the biases inside

As algorithms make more critical decisions affecting our lives, it has become more difficult to understand and challenge how those decisions are made, a new report says.

ProPublica found last year that a proprietary algorithm used in the U.S. to predict the likelihood that a person who committed a crime would reoffend was biased against black offenders. (Mike Laanela/CBC)

Courts, schools and other public agencies that make decisions using artificial intelligence should refrain from using "black box" algorithms that aren't subject to outside scrutiny, a group of prominent AI researchers says.

The concern is that, as algorithms become increasingly responsible for making critical decisions affecting our lives, it has become more difficult to understand and challenge how those decisions — which in some cases have been found to carry racist or sexist biases — are made.

It's one of a handful of recommendations from New York University's AI Now Institute, which examines the social impact of AI on areas such as civil liberties and automation. The group — which counts researchers Kate Crawford of Microsoft and Meredith Whittaker of Google among its members — released its second annual report on Wednesday afternoon.

AI Now is part of an increasingly vocal group of academics, lawyers and civil liberties advocates that has been calling for greater scrutiny of systems that rely on artificial intelligence — especially where those decisions involve "high stakes" fields such as criminal justice, health care, welfare and education.

Given the growing role algorithms play in so many parts of our lives — such as those used by Facebook, one of its data centres pictured here — we know incredibly little about how these systems work. (Jonathan Nackstrand/AFP/Getty Images)

In the U.S., for example, there are already automated decision-making systems being used to decide who to promote, who to loan money to and which patients to treat, the report says.

"The way that these systems work can lead to bias or replica the biases in the status quo, and without critical attention they can do as much harm if not more harm in trying to be, supposedly, objective," says Fenwick McKelvey, an assistant professor at Concordia University in Montreal who researches how algorithms influence what people see online.

Obscuring inequalities

McKelvey points to a recent Canadian example involving the risk assessments used on prisoners up for parole: a Métis inmate is going before the Supreme Court to argue that the assessments discriminate against Indigenous offenders.

Were such a system to ever be automated, there's a good chance it would amplify such a bias, McKelvey says. That was what ProPublica found last year, when a proprietary algorithm used in the U.S. to predict the likelihood that a person who committed a crime would reoffend was shown to be biased against black offenders.

"If we allow these technical systems to stand in for some sort of objective truth, we mask or obfuscate the kind of deep inequities in our society," McKelvey said.

Part of the problem, says AI Now, is that although algorithms are often seen as neutral, some have been found to reflect the biases within the data used to train them — which can reflect the biases of those who create the data sets. 

"Those researching, designing and developing AI systems tend to be male, highly educated and very well paid," the report says. "Yet their systems are working to predict and understand the behaviours and preferences of diverse populations with very different life experiences.

"More diversity within the fields building these systems will help ensure that they reflect a broader variety of viewpoints."

Going forward, the group would like to see more diverse experts from a wider range of fields — and not just technical experts — involved in determining the future of AI research, and working to mitigate bias in how AI is used in areas such as education, health care and criminal justice.

There have also been calls for public standards for auditing and understanding algorithmic systems, the use of rigorous trials and tests to root out bias before the systems are deployed, and ongoing efforts to monitor those systems for bias and fairness after release.
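The report doesn't spell out what such an audit would look like in code, but a minimal sketch can illustrate one common check: comparing how often people in each group who did not reoffend were nonetheless flagged as high risk, the measure at the centre of ProPublica's analysis. The data, group names and threshold below are hypothetical placeholders, not the actual tool or its inputs.

```python
# Illustrative sketch only: one simple way an auditor might check a
# risk-scoring system for the kind of disparity ProPublica reported,
# by comparing false positive rates across groups. All values here are
# hypothetical placeholders.

from collections import defaultdict

# Each record: (group, risk_score, reoffended) -- hypothetical audit data.
records = [
    ("group_a", 8, False), ("group_a", 7, False), ("group_a", 9, True),
    ("group_a", 3, False), ("group_b", 8, True),  ("group_b", 2, False),
    ("group_b", 4, False), ("group_b", 9, True),  ("group_b", 3, False),
]

HIGH_RISK_THRESHOLD = 7  # scores at or above this count as "high risk"

def false_positive_rate_by_group(rows, threshold):
    """Among people who did NOT reoffend, how often were they labelled high risk?"""
    flagged = defaultdict(int)   # non-reoffenders labelled high risk
    total = defaultdict(int)     # all non-reoffenders
    for group, score, reoffended in rows:
        if not reoffended:
            total[group] += 1
            if score >= threshold:
                flagged[group] += 1
    return {g: flagged[g] / total[g] for g in total if total[g]}

if __name__ == "__main__":
    rates = false_positive_rate_by_group(records, HIGH_RISK_THRESHOLD)
    for group, rate in sorted(rates.items()):
        print(f"{group}: false positive rate = {rate:.0%}")
    # A large gap between groups is one signal of the bias the report warns about.
```

A real audit would go well beyond this single metric, but the basic idea is the same: the system's outputs have to be measurable and open to outside scrutiny before disparities like this can even be detected.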

ABOUT THE AUTHOR

Matthew Braga

Senior Technology Reporter

Matthew Braga is the senior technology reporter for CBC News, where he covers stories about how data is collected, used, and shared. You can contact him via email at [email protected]. For particularly sensitive messages or documents, consider using Secure Drop, an anonymous, confidential system for sharing encrypted information with CBC News.