What might someone else do in five years, when they’re sitting in your chair?

Gretchen Greene is a policy advisor, computer vision scientist, and lawyer who works to develop best practices for AI and ethics in government, universities, and industry. She is an AI and Governance Affiliate Researcher at the MIT Media Lab and the Harvard Berkman Klein Center.

 

When you’re advising a government on the ethics of new technology, what’s one of the first steps that you ask them to take?

To stop and think. I’ll give you an example. A lot of cities in the last five years have been replacing street lights with LEDs for efficiency. One city I talked with had relaxed their RFP for new lights, and they got some proposals that said, “While we’re up there, would you like to consider other technologies? Would you like to detect smog? Monitor air quality? Listen for the sound of gunshots? Would you like to put up cameras and see what people are doing, for safety?” And they thought, “Oh, we should stop and think about this.”

So that’s one piece of advice: stop and think. What are the unintended consequences? Beyond that, can you foresee what someone else might do in five years when they’re sitting in your chair? And is the positive thing you’re trying to do compelling enough to raise the acceptable risk of a negative?

 

We tend to think of modern technology as raising new kinds of ethical questions, but is that actually the case? Are there historical examples where governments wrestled with ethical issues and unintended consequences?

If you look at credit scores, the FICO score was invented in the late 1950s. It has a huge impact on people, in an automated way, but it’s regulated in a way that other algorithms generally are not. It’s designed to say whether or not you’re a good candidate to lend money to for a mortgage. But it can also determine whether or not you can get an apartment, because now landlords look at your credit score. And your employer might look, too. It wasn’t designed to do that.

 

When you’re considering these kinds of impacts, it raises the question: do you think that governments want to behave ethically?

At government agencies, there are people proactively trying to do the right thing. I’m optimistic that government is trying to do the right thing. But with automation, and with the scale, speed, and uniformity that computers make possible today, it doesn’t actually matter whether you can make accurate predictions of bad outcomes. If a system is put into place that is unchallengeable, whether it involves computers or not, it’s concerning.

The Federal Sentencing Guidelines are a good example of a precedent for today’s computer-generated risk scores used in courts. They are exactly the same idea as using an algorithm to calculate risk: they consider certain factors and produce a sentence. The idea behind the guidelines was that judges are unfair; they have conscious and unconscious biases. That is presumably also a motivation behind using recidivism risk scores in bail and sentencing decisions today. But there’s a big difference in transparency between the publicly available federal sentencing guidelines and the black-box algorithms of private vendors. If a black-box recidivism risk score makes the difference between a five-year and a ten-year sentence, you can’t see how the score was made, and you can’t tell whether the decision was fair. The guidelines show that decades ago, we were already trying to design things to overcome existing societal biases, which is good. But now we’re at a tipping point for being able to do that at scale and at much lower cost than we ever could, and there’s danger in that.

 

If you’re working in government and you deem a project to be unethical or creepy, what can you do? Can you say you don’t want to work on a project?

There are jobs that I don’t pursue because of ethical concerns. I guess, if you’re asked to work on a specific project, is it for the individual to question the ethics of it? Or for the team, or the government? Or all of them? I remember one conversation I had at a company where the employees didn’t want to work on certain projects. And in the context that we were talking about, it was the company’s main business. And so, I also think the employer, whether it’s the government or just a company, has the right to turn around and say, “This is actually what we do. If you want to walk out, the door is there.” To the extent that you can question things, great. But you can’t say you won’t do something foundational and stay at the organization. What if Google workers had gone on strike and said, “We’re not going to work on anything that has to do with the ad business,” for example? That is 95% or 97% of Google’s revenue. “Great, you’re all fired, we’ll get some more engineers.”

 

If you don’t have qualms with the design of a system, what else can technologists do to ensure their system works ethically and fairly?

I once saw a panel where an ACLU lawyer talked about a successful class action suit against a government agency in Idaho. Medicaid and Medicare were using a black-box algorithm to determine benefits for a class of disabled citizens, and it reduced benefits for people who had had stable benefits for decades. They ultimately found out that it had been miscoded. So if they had just tested it against their old system, they would have noticed it was erroneously reducing benefits by 25%.

They could have rolled it out in a more staggered way, or tested it out on smaller groups, or at least been prepared for the calls saying, “What’s happened to my benefits?”

The case established due process requirements for notifying people about procedures and changes in benefits, but we should extrapolate beyond the constitutional minimum. Once we do, we can say, “If important decisions are being made about people, there should be ways for them to meaningfully understand those decisions and to challenge them.”