Currently more than 72% of American adults are on social media, sharing more personal information and data than ever before. While the internet has had significant benefits for society, including democratizing access to information, it has also been exploited by nefarious groups and actors including terrorist organizations, foreign state-backed operatives, cyber criminals, and online trolls.
To investigate these national security threats, a growing subset of public interest technologists, commonly referred to as open source researchers, are using quantitative skills to analyze data across major news, social media, and public information archives. This budding community of researchers has helped keep the public informed and shine a light on malicious attempts to disrupt election integrity in the digital age.
Beginnings of Open Source Research through U.S. Intelligence Agencies
Open source research is an information-gathering practice used by governments, independent researchers, cybersecurity groups, and even private companies to collect open source information (OSINF) from public sources, including print and televised media, public government data, academic research, social media, and beyond.
Open source research tactics trace back to open source intelligence (OSINT), which the U.S. has employed since the creation of the Foreign Broadcast Monitoring Service in the 1940s. Following the September 11 terrorist attacks, these tactics saw even more widespread growth. The National Commission on Terrorist Attacks Upon the United States (also known as the 9/11 Commission) recommended the creation of an open source intelligence agency, a major catalyst for the formation of the Director of National Intelligence's Open Source Center. This center spurred a new breed of intelligence analysts focused on understanding trends in the open source information ecosystem.
Investment in open source research was stimulated by counterterrorism funding from defense-related agencies, especially as terrorist groups including the Islamic State of Iraq and the Levant (ISIS) leveraged social media for recruitment and radicalization. As demand for intelligence on homegrown extremism and counterterrorism increased, private analytics companies (e.g., SITE Intelligence Group, Moonshot CVE), academia (the George Washington University Program on Extremism), and nonprofits (Tech Against Terrorism) began to hire open source researchers to track the online activity of growing extremist organizations. Many of these researchers have been critical in helping major social media companies report and take down illicit content from terrorist groups, an almost textbook example of public interest tech.
Pivoting Open Source Research into Election Security
In late 2016 and early 2017, following revelations of Russian interference in the U.S. elections and the Cambridge Analytica scandal, many open source researchers previously focused on countering violent extremism turned their attention to online disinformation and election security, including studying the tactics of the Internet Research Agency (IRA), a state-sponsored group involved in online influence operations on behalf of Russian business and political interests. Open source researchers such as Renee DiResta of Stanford's Internet Observatory and Camille François of the social media analytics company Graphika were pivotal in the space. Without their work, the research of institutions like Oxford, and the contributions of OSINT researchers to the Intelligence Community Assessment, it would have been much more difficult for the Senate Intelligence Committee to conduct an equivalent investigation alone.
Unlike funding for defense-related open source research, which was born from government-linked grants and contracts, funding for open source research related to election security included heavy participation by non-governmental philanthropic funders. Since 2016, philanthropies including Open Society, the Hewlett Foundation, the Knight Foundation, Craig Newmark Philanthropies, Democracy Fund, Omidyar Network, and more have invested steadily in efforts around information integrity online. Furthermore, as social media companies faced public scrutiny for their lack of response to election-related disinformation and the misuse of personal data for campaign targeting, companies also began funding open source research efforts. Besides recruiting former open source analysts into corporate trust and safety teams, companies have also funded programs such as the Digital Forensic Research Lab (DFRL), based at the Atlantic Council think tank, to publish open source research analyzing elections around the world.
Cybersecurity companies have also helped support election-related open source reporting. Since 2016, the cybersecurity company FireEye has conducted open source reporting on election-related information operations, and in 2018 the company's reporting helped Facebook identify and remove 652 fake accounts and pages linked to Iranian influence operations.
While some open source researchers got their footing in the cybersecurity and counterterrorism space, others have honed their craft through citizen journalism. The organization Bellingcat started its investigations in 2014 with a team of eight volunteers. Today, the website publishes open source investigations about war zones, human rights abuses, and the criminal underworld. Public interest technology practitioners who want to get involved need only look as far as the organization's website, which lists courses and bimonthly workshops, all virtual at the moment. Other efforts include the German Marshall Fund's Alliance for Securing Democracy (ASD), which offers research tools, including the Hamilton 68 Dashboard, and tracks the narratives and topics promoted by Russian, Chinese, and Iranian government officials and state-funded media on Twitter, YouTube, and state-sponsored news websites. Another group, Code for Democracy, builds tools to automate the analysis of open data in order to understand the relationship between campaign finance and political narratives.
The growing community of open source researchers providing election-related analysis has had an enormous impact on election integrity over the last four years. Ahead of the 2020 Presidential election, the DFRL, along with the Stanford Internet Observatory and Program on Democracy and the Internet, Graphika, and the University of Washington's Center for an Informed Public, helped launch the Election Integrity Partnership (EIP), a coalition of research entities focused on supporting real-time information exchange between the research community, election officials, government agencies, civil society organizations, and social media platforms. As the 2020 Presidential election results were being announced, the EIP provided daily debunking of election security claims, ranging from voting machine malfunctions to fake live streams of election results, which helped media outlets amplify accurate news.
With these efforts underway, there is a growing need to bolster open source capabilities and practices to keep the public informed and shine a light on nefarious actors in the information age. However, there remains limited standardization among open source researchers on how to conduct research, which has made it difficult for media outlets, policymakers, governments, and business executives to discern how credible their analyses are. In 2019, for example, Safeguard Cyber and Symantec each published reports that were subsequently amplified across mainstream news sources. Long-time independent open source researchers criticized the methods used in both reports and questioned whether these cybersecurity companies could accurately verify that the accounts identified were associated with a foreign influence campaign. This reporting also raised three important questions: 1) whether private cybersecurity companies or OSINT researchers can attribute campaigns without direct access to more infrastructural data from social media companies; 2) whether the profit motive of certain cyber or threat intelligence companies to publish exaggerated or erroneous reports to market their products would undermine overall public faith in open source research; and 3) whether there is a way to better assess and standardize open source research so that the improper tactics used by dishonest players can be better explained to the public. In response to these training gaps, think tanks such as the DFRL leverage events like the 360/Open Source conference to bring together journalists, activists, innovators, and leaders to develop better hands-on training for open source research.
In the four years between the 2016 and 2020 Presidential elections, the number of public interest technologists engaging in open source election security research has expanded rapidly, including participation from university students passionate about open source research. For example, Facebook's Social Science One partnership, housed at Harvard University, was launched between corporate and academic researchers in an effort to study Facebook's effect on elections. In early 2020, Georgetown University began offering a course on disinformation, information operations, and influence in the digital age. Additionally, private companies including Facebook, Twitter, and Google have been rapidly hiring open source, OSINT, and election security researchers to support their traditional cybersecurity and trust and safety work. And as foundations look to refresh their funding strategies in 2020, many are pivoting away from broader disinformation research and instead investing in solutions to counter the growing and evolving online tactics of foreign adversaries. While there is room for improvement in open source analysis, there is no doubt that this growing community of analysts and researchers will continue to play a frontline role in exposing disingenuous online activities that subvert the ability of citizens to participate in a fair and secure election process.
Going forward, it’s important to note that open source research around elections doesn’t stop just because the 2020 elections are in the books. The need for open source researchers will only increase over time, especially as misinformation and disinformation continue to circulate. The good news is that open source is an inclusive community, and election-related research is a growing field, with more data being released from elections than ever before. PIT practitioners who are interested in jumping in can easily get involved: barriers to entry are low, support from others in the community is high, and even more backing is available from philanthropic foundations, think tanks, and educational institutions. All you need to do is raise your hand.
Clara Tsao is an online disinformation expert and civic tech entrepreneur who recently co-founded the Trust & Safety Professional Association and the Trust & Safety Foundation to support the global community of professionals who develop and enforce principles and policies that define acceptable behavior and content online. Clara is also a non-resident senior fellow at the Atlantic Council’s Digital Forensic Research Lab and the German Marshall Fund’s Alliance for Securing Democracy. Her previous roles include CTO at the US Department of Homeland Security’s Countering Foreign Influence Task Force and the interagency US Countering Violent Extremism Task Force, and Senior Advisor for Emerging Technology at the Cybersecurity and Infrastructure Security Agency. She has spent a decade working in the technology industry across global teams at Microsoft, Apple, Sony PlayStation, and AT&T, and also as a Google and Mozilla Technology Policy Fellow. Clara is also the Board Chair and President of the White House Presidential Innovation Fellows Foundation and a Senior Advisor at Tech Against Terrorism.