Ethics is a commitment to struggle with the murk

Nishant Shah is a product manager who formerly worked for the U.S. Digital Service.


As a member of the U.S. Digital Service’s Department of Homeland Security team from 2016-2017, I had a chance to work on projects – like refugee admissions and disaster response – that were straightforward applications of USDS’s principle, “do the most good for the most people.” A common approach we took to dive deep into problems was an intense 2-3 week “discovery sprint,” looking holistically at the underlying people, processes, and technologies. One of these sprints was with Customs & Border Protection, evaluating the agency’s route to piloting the use of biometrics – and specifically, facial recognition – at airports to track visa overstays by foreign nationals.

For context: Congress had called for an automated system to record the arrivals and departures of non-US citizens since 1996, and creating a “biometric exit” (BioEx) at airports had been a congressional mandate since 2004. If you’ve traveled much abroad, you know that many industrialized democracies have had this capability for years, with passport checks both when you enter and when you leave. BioEx was required by law, and if implemented effectively, it could serve as the backbone of a reimagined airport experience – think no lines or clearance frustrations. Trust in the federal government was low, partly because of the broken user experience at common interaction points like airport screenings, and a significant improvement in airports could have real knock-on effects on perceptions of government.

Despite the potential benefits, I wrestled with the ambiguity of the project’s ethical implications.

My thought process went something like this: “OK, facial recognition is a powerful tool that can make life easier for many millions of travelers if accompanied by strong safeguards and implemented with a healthy belief in privacy protections. It is also an obvious example of technology as a double-edged sword, with dystopian potential and proven reliability issues. It has dubious efficacy in deterring visa overstays, an unclear route to really strengthening national security, and doesn’t fit neatly with a vision of America I believe in – ‘give me your tired, your poor, your huddled masses yearning to breathe free…’ As the son of immigrants, and having worked with refugees for years, I find the idea of systems – maybe? perhaps? over the fullness of time and after a few wrong turns? – that could be leveraged to deport folks who mean no harm, back to a tougher situation, anathema.”

Add in the background noise of the time: this was shortly after the presidential transition in early 2017 and the first executive order, there were protests on a massive scale over a refugee ban, and many were frightened by the President’s hateful rhetoric. All of this added up to an environment of distrust about how a nascent technology might be used in the future, despite its limited application and intent at the time. And yet – yet! – if the project was moving forward anyway, my logic was: is it not better for someone with this mindset to be part of the work than separate from it? Spoiler alert: I’m still not sure. I’m not an ethicist, so I’ll just note some observations and questions gained with the benefit of hindsight:

  • My concerns during the project were vague, almost embarrassingly so. Taking the time to map out the exact mechanism of harm (playing out the pathways, in detail, at a system and human level, for what would need to happen to lead to harm) would have helped make any worry concrete. This is hard…
  • …because bureaucracies can be opaque. There’s a story about blind mice who each come across a different body part of an elephant. What they feel determines what they believe is true about the elephant, and of course they’re all right – and all wrong. This is the way with complex institutions. Perhaps the difference between “IT staff” and “public interest technologist” is an understanding of self, with the latter working across silos and seeking to discern the whole elephant. Taking the time to comprehend the broad landscape of actors & systems at an institution can help map out the aforementioned mechanism of harm.
  • The concept of “two-way doors” would have been useful to keep top of mind. Policy is set by lawmakers; agencies are executors. While policy won’t change overnight, a relevant question during execution is: for a given approach, is there some way to undo damage and go back if things go awry? At both a micro level and a macro level? If so, it might be worth the risk. If not, more understanding of the risks is probably needed.
  • I hadn’t developed my own red lines in enough depth. But even if I had, BioEx wouldn’t have crossed them. If I find myself in a similar situation in the future, I would ask: at what point would I need to recuse myself from the work? What would I need to discover? And prior to that, what are the institutional safeguards I could influence (is there a privacy ombudsman to run recommendations by, for example)?

After the completion of the sprint, I didn’t continue with the project because I felt I didn’t understand well enough how the technology would be used as it was scaled. But that ambiguity is the point: builders of technology & services in government operate at a particularly interesting, difficult, important, weird moment in American history. For those facing these questions now and building the services that fulfill our government’s promises, I can think of no easy approaches, only a commitment to struggle with the murk, keep people at the center, and make decisions that first do no harm.