Predicting Technology Futures by Examining Our Past

As a former technologist, I had long believed that it was impossible to foresee how a new technology might affect society. But through my academic research on facial recognition in schools, I learned that we can make surprisingly nuanced and accurate predictions.

Take, for example, facial recognition. While law enforcement has used facial recognition in public settings, schools have only just begun installing these systems. Schools in Australia, France, Sweden, and China have piloted facial recognition systems for a variety of uses, including automated attendance, security, and behavioral control. Here in the U.S., schools are jumping in, too, using facial recognition primarily for security purposes. However, just because a technology exists doesn’t mean it should be disseminated widely.

In the case of facial recognition, the GDPR can effectively be used to ban it in Europe. Here in the U.S., the New York State Assembly recently passed a moratorium that will go into effect immediately if the governor signs the bill. However, no country in the world regulates the use of the technology at the national level. While its proponents argue that the technology helps with school security and is a good way to make the most of limited school resources, those arguments ignore its potential to create or exacerbate other problems for teachers, parents, and students.

With the University of Michigan Science, Technology, and Public Policy (STPP) program’s Technology Assessment Project (TAP) report, Cameras in the Classroom, we set out to develop a method that would give us a more complete and nuanced understanding of what may happen with facial recognition if it is introduced to schools. In this case, our research led us to believe that the only appropriate way to deal with the technology is an outright ban, but in other cases the method may lead to guidance on how a technology should be used and which aspects of it deserve investment during development. By performing our analysis earlier in the technology’s lifecycle, we hoped to feed back into nonlinear development cycles and provide analysis that policymakers can use to actively shape or curb the sociotechnical system that emerges around the technology, rather than struggling to control it down the line. Momentum is hard to change. It is better to reach policymakers while they are still asking, “Should we use this technology at all, and how can we use it safely?” rather than “How can we control facial recognition?”

Expanding this approach to other technologies can help move our community closer to capturing the promise of public interest technology while reducing opportunities for unchecked harm. 

How to Evaluate the Potential for Harm

With a proactive methodology for technology assessment, technologists and policymakers can prevent harm and cultivate social benefit more effectively than if they wait for a traditional, retrospective analysis. We used the facial recognition case as a pilot to develop and apply an analogical case comparison approach that helps researchers study new technologies and anticipate how they will be implemented and regulated. Using this method, we were able to make policy recommendations largely before facial recognition became entrenched in school systems. With an even earlier-stage technology, a similar analysis could lead to development changes that shape the product itself.

In principle, our analogical case comparison approach is straightforward: identify cases with characteristics similar to the target technology, study how they were regulated and implemented, and examine how those lessons might apply to the new technology.

In practice, we spent months systematizing this process to make it robust and rigorous, while leaving room for our evolving understanding of facial recognition in schools to both grow from and inform our case study choices. We drew heavily on work from the field of Science and Technology Studies, which has long acknowledged that technologies are fundamentally tied to social systems and are more predictable than policymakers and scientists tend to believe.

We chose facial recognition in schools because, while there is a long history of facial recognition use in other settings, this is a relatively new application. From there, the steps in our method are more feedback loop than simple checklist. Each cycle complicated the problem further, raising new questions to explore in the following round. As we learned more about our analogical cases, they illuminated new aspects of facial recognition, which led us to new questions and new cases.

This method constantly challenged us to find connections between seemingly unrelated technologies and social contexts, and to consider how those technologies could have played out with a few altered characteristics. Scheduling time to cycle through the steps several times, and to explore cases that may not turn out to be helpful, is critical to developing a deep enough understanding to anticipate potential future outcomes with confidence. The method’s robustness comes from collecting overlapping cases that show similar trends, or that illuminate which characteristics are critical to changing a technology’s trajectory, such that other researchers would likely reach the same conclusions even if they identified different cases.

It is therefore helpful to have a team of creative researchers with different backgrounds who can identify as many cases as possible for each aspect of the technology you are studying. Our team included expertise in education and school security policy, but also in biotechnology, intellectual property, and big data consumer technology. We ended up exploring case studies covering topics from metal detectors in schools to biobanks and breathalyzers, along with nine more core cases and several additional technologies that appeared only once or twice, such as Sensorvault, Google’s database of consumer information.

At our Technology Assessment Clinic, we also wanted to create a report that was actionable for policymakers and stakeholders, so in addition to our research conclusions, we performed a landscape analysis of regulations governing the use of facial recognition in schools internationally (there are none) and developed a set of policy recommendations for the national, state, and local levels. Based on our findings, we primarily advocated for a blanket ban, but we also provided guidance to mitigate concerns if a community decides to move forward anyway.

Mapping Essential Elements of a Technology

Our method begins by breaking down the technology you are researching into a list of essential elements and characteristics. We considered two categories of essential elements: the functions of facial recognition in schools, and its moral and social implications. Function can be broadly defined to include both the roles the technology plays and how it works. While you start with your target technology, you will also have to break down the case studies in order to understand the ways in which they are similar to, and different from, your technology of interest. In both cases, the list will expand over time.
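To make the idea of an element map concrete, here is a minimal sketch of how such a breakdown might be recorded. The structure and the specific element names are illustrative only; they are a simplified subset of what appears in our report, not an authoritative schema.

```python
from dataclasses import dataclass, field


@dataclass
class ElementMap:
    """A running list of a technology's essential elements, grouped by category."""
    technology: str
    functions: set[str] = field(default_factory=set)     # roles it plays and how it works
    moral_social: set[str] = field(default_factory=set)  # moral and social implications


# A simplified starting point for the target technology; the list expands over time.
facial_recognition_in_schools = ElementMap(
    technology="facial recognition in schools",
    functions={
        "automated attendance",
        "security screening",
        "training-image database selection",
        "operator training and maintenance",
    },
    moral_social={
        "loss of privacy",
        "collection of biometric data",
        "normalizing surveillance",
    },
)
```

Breaking down each case study the same way makes it easier to see where it lines up with the target technology and where it diverges.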

How facial recognition works seems relatively straightforward at first. First, developers train an algorithm using a database of pictures. Then the algorithm looks for elements of faces and either matches them against an entry in a collection of many faces or determines whether or not a face matches a single face. However, we learned from tracing the evolution of our case studies that we needed to look at other elements, including how the database of training images was selected, who was analyzing results, how those results were confirmed, what kinds of maintenance are required, how operators learn to use the system, how system integration happens at the school level, and what kinds of schools were most likely to implement the technology. Similarly, we expanded our list of the roles facial recognition might play in schools from the formal ones cited by proponents to include informal roles, drawing on how school security officers and fingerprinting systems have been used in schools, among other cases. Moral and social elements arose for facial recognition, too. For example, we identified loss of privacy and the collection of biometric data early in the project and chose case studies like CCTV in UK schools and India’s Aadhaar biometric identification system as an entry point to research on these subjects.
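For readers who want to see the two matching modes spelled out, here is a highly simplified sketch of 1:1 verification versus 1:N identification over face embeddings (numeric feature vectors produced by a trained model). It illustrates the narrow technical abstraction only; the embedding model, the 0.6 threshold, and the gallery are all hypothetical stand-ins, not references to any real deployment.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in the range [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """1:1 matching: is the probe face the same person as one enrolled face?"""
    return cosine_similarity(probe, enrolled) >= threshold


def identify(probe: np.ndarray, gallery: dict[str, np.ndarray], threshold: float = 0.6) -> str | None:
    """1:N matching: return the best match in a collection of many faces, or None."""
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

Notice how much of the real system falls outside this sketch: who curated the training data, who reviews a match, and what happens to a student who is flagged are all invisible at this level of abstraction, which is exactly why the narrow technical description is misleading.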

Collecting Case Studies to Cover Possibilities

As you continue to map technology elements, you will need to find cases that are sufficiently similar to learn something applicable. Some technologies may be very similar – we were able to learn a lot from predictive policing – while others may only overlap at one or two points and so should be analyzed in conjunction with other technologies to establish that they are relevant and instructive.
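As a loose illustration of what “overlapping at one or two points” means in terms of the element map sketched earlier, the snippet below compares a few of our cases against the target technology using simplified, hypothetical element tags. In practice these comparisons were qualitative judgments made by researchers, not computed scores.

```python
# Illustrative only: the tags are simplified, and real comparisons were qualitative.
target = {"use in schools", "biometric data", "surveillance", "institutionalized inaccuracy"}

cases = {
    "predictive policing": {"surveillance", "algorithmic bias", "policing"},
    "breathalyzers": {"institutionalized inaccuracy", "forensic evidence"},
    "CCTV in UK schools": {"use in schools", "surveillance", "normalization"},
}

for name, elements in cases.items():
    overlap = target & elements
    print(f"{name}: overlaps on {', '.join(sorted(overlap)) or 'nothing directly'}")
```

A case that touches the target at only a single point, like the breathalyzer example here, is still worth studying, but its lessons carry more weight when they are corroborated by other cases that overlap elsewhere.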

For our pilot, we chose to limit our cases to technologies that appeared in academic literature. In a few instances, we started with recent investigative journalism as a source, like a New York Times investigation for our breathalyzer case study, but we always tried to use those facts as a basis for uncovering new literature. However, the rapid pace of technology development means that future iterations of this methodology may benefit from incorporating cases that are too new to have been published on. 

We investigated the development, regulation, and outcomes of each technology identified through news articles, academic literature, and some primary source material such as company websites. It was an iterative process that we returned to for each case as we learned more. Over time, we began to identify patterns in the histories and outcomes of our cases. Specifically, we organized our report into sections on racism, normalizing surveillance, defining the acceptable student, commodifying data, and institutionalizing inaccuracy, with subsections exploring specific findings from cases and anticipating how they might apply to facial recognition.

Getting to the Root Cause

An anticipatory study also has the potential to change how stakeholders and the public imagine a technology, which can shift the framing and regulatory options that are available to policymakers. Companies and advocates tend to present the most optimistic version of a technology that best serves the case for using it. Their narratives can become entrenched in people’s minds long before the technology becomes ubiquitous, even when they bear little resemblance to how the complete sociotechnical system will function. Complicating that story immediately, and fully situating the technology in its social context, at least for regulators, may be more successful than doing so in reaction to an already dominant narrative.

One popular story about facial recognition, for example, has been that any problems current versions have with race can be fixed by further development. This narrative divorces the hardware and software from the people who build and operate it, treating it instead as a stand-alone object. Our research showed that once we expand the concept of what a technology consists of to include human factors, “Could facial recognition be accurate across races in the future?” is no longer the relevant question, because an extensive history of similar technologies demonstrates that even if such accuracy were technically feasible, the kind of development and maintenance it requires is unlikely to ever happen. Instead, we should start from an understanding that, given our social systems, facial recognition may improve but will remain racist, and we should regulate based on that reality. This will require changing the popular conception of what the essential characteristics of a facial recognition system in a school are.

The fact that we were able to assess previous work to inform future work should not be lost on the technology community. By following our lead, you can identify ways your technology may contribute to societal problems and mitigate issues, including potential racism, in your applications, systems, and hardware. Remember, in some cases this method will also reveal unexpected benefits from a new technology. By shifting away from technologies and features that will create harm, and toward those that will bring benefits, you can better align your technologies with your values before expending resources on new projects.

To learn more about ethical considerations in the PIT world, please check out these two stories from past issues, What might someone else do in five years, when they’re sitting in your chair? and Ethics is a commitment to struggle with the murk. When you’re done reading, head over to our new LinkedIn group to discuss these issues — and more.

Hannah Rosenfeld is a Master of Public Policy student at the University of Michigan, where she is in the Science, Technology, and Public Policy and Diversity, Equity, and Inclusion graduate certificate programs. She previously worked in the tech and tech regulation industry for more than seven years on education technology, medical devices, and personal security. She formerly led the New York City chapter of the LGBTQ+ non-profit Out in Tech before becoming the Head of Diversity, Inclusion, and Belongingness for the international organization. Connect with her on Twitter @hannahrosenfeld.