EU citizens might get a 'right to explanation' about the decisions algorithms make

Algorithms discriminate. It’s not their fault; they’re strings of math, but people program them. Software used to predict future criminals is biased against black people. Google Image searches suggest CEOs are men, and Google’s ad system shows women listings for lower-paying jobs. Our generally racist society leads to racist search results even when searching for something as simple as “three black teenagers.”

https://twitter.com/robinsloan/status/748917426985717761

A big step towards countering discriminatory algorithms is being able to understand them. That’s easier said than done: how proprietary algorithms work is usually a closely guarded secret, and in the U.S., outside research that probes them can even be treated as a crime under anti-hacking law.

Late last week, though, academic researchers laid out some potentially exciting news when it comes to algorithmic transparency: citizens of EU member states might soon have a way to demand explanations of the decisions algorithms make about them.

In April, the EU approved a data protection law called the General Data Protection Regulation (GDPR). While the GDPR is technically already on the books, replacing a law from 1995, EU member states don’t have to begin enforcing its new provisions until 2018. At that point things could change drastically.

In a new paper, sexily titled “EU regulations on algorithmic decision-making and a ‘right to explanation,'” Bryce Goodman of the Oxford Internet Institute and Seth Flaxman at Oxford’s Department of Statistics explain how a couple of subsections of the new law, which govern computer programs making decisions on their own, could create this new right.

These sections of the GDPR do a couple of things. First, they ban decisions “based solely on automated processing, including profiling, which produces an adverse legal effect concerning the data subject or significantly affects him or her.” In other words, algorithms and other programs aren’t allowed to make consequential negative decisions about people entirely on their own.

Second, they arguably establish the “right to explanation” itself. The law’s binding text doesn’t include language about any such right, but Goodman and Flaxman believe it’s present in a non-binding description of the law written by its authors. This description, known as a recital, “state[s] that a data subject has the right to ‘an explanation of the decision reached after [algorithmic] assessment,’” write Goodman and Flaxman.

A “right to know why an algorithm negged you” makes sense coming from the EU, as Europe has historically taken a pretty protective stance in its policies on citizens’ personal data and privacy. EU citizens have a “right to access” information held about them by corporations (a right one 20-something law student used very effectively against Facebook). EU courts have also granted a “right to be forgotten” in search results.

While the new provision may seem great at first glance, the word “solely” makes the situation a little more slippery, says Ryan Calo, a University of Washington law professor who focuses on technology. Calo explained over email how companies that use algorithms could pretty easily sidestep the new regulation.

“All a firm needs to do is introduce a human—any human, however poorly trained or informed—somewhere in the system,” Calo said. “[V]oila, the firm is no longer basing their decision ‘solely on automated processing.'”

Calo wonders if the nebulous phrasing of this “right” will make it easy to satisfy in a way that’s ultimately unhelpful.

“Is it so clear, even in this supporting documentation, that firms will have to walk data subjects through the exact inputs and processes that led to the decision?” said Calo. “Or could they provide a general explanation of how the system works, including the kinds of data the system took into account? That wouldn’t be so hard.”

So despite being a very appealing idea, the “right to explanation” might amount to bupkis. (If you want a sign that’s likely, consider that a coalition of telecom companies and internet services is already using the GDPR to argue for doing away with another privacy law.) Which is, to put it lightly, a huge bummer.

Both the authors of the paper and Calo agree that interpreting the decisions algorithms make is only going to get more difficult as the systems they rely on (e.g. neural networks) become more complex. A robust right to explanation could encourage taking that interpretability problem seriously not just at an academic level, but in how new decision-making algorithms are presented to the public. But the new regulation is still two years out, so for now we get to sit tight and wonder how European courts will interpret the new law.
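To see why complexity matters, it helps to look at what an explanation even is for a simple model: a linear classifier’s decision decomposes feature by feature, so you can show someone exactly what pushed their outcome one way or the other. Deep neural networks offer no such clean decomposition. Below is a minimal sketch of the linear case in Python with scikit-learn; the lending scenario, the feature names, and the data are all invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "years_at_address", "prior_defaults"]  # invented

# Synthetic records standing in for a lender's historical decisions.
X = rng.normal(size=(500, 3))
true_weights = np.array([1.5, 0.5, -2.0])
y = (X @ true_weights + rng.normal(size=500) > 0).astype(int)  # 1 = approve

model = LogisticRegression().fit(X, y)

# For a linear model, one applicant's "explanation" is just each
# feature's value times its learned coefficient.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, c in zip(feature_names, contributions):
    print(f"{name:>16}: {c:+.2f}")
decision = "approve" if model.predict(applicant.reshape(1, -1))[0] else "deny"
print(f"{'decision':>16}: {decision}")

Nothing analogous falls out of a deep network, which is exactly the gap researchers worry a weak “right to explanation” would paper over.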

Correction: On Twitter, Paul Olivier Dehaye points out that the GDPR has been voted on since Goodman and Flaxman’s article was written, and isn’t subject to change. Pardon the error.

Ethan Chiel is a reporter for Fusion, writing mostly about the internet and technology. You can (and should) email him at [email protected]

 