It seems like everywhere you look these days, someone is screaming “fake news.” In many cases, they are right. Over the past few years, as the amount of dubious content flooding the internet has grown, our trust in the media and what we read has plummeted. According to one 2017 Pew Research study, 62 percent of people surveyed said they felt there was a “fair extent of or great deal of fake news on online websites and platforms,” and it’s only gotten worse. New America’s Open Technology Institute (OTI), which examines and explains the intersection of technology and policy to ensure that every community has equitable access to digital technology and its benefits, recently published several reports about misinformation and disinformation, including one that focuses on election misinformation and another on how platforms are addressing mis/disinformation about COVID-19. The Commons editor-in-chief Karen Bannan interviewed Koustubh “K.J.” Bagchi, OTI’s senior policy counsel, about what PIT practitioners need to know about disinformation and misinformation. Here are his answers.
What’s the difference between misinformation and disinformation, and which is more dangerous to society?
The important distinction between these two terms is intent. Misinformation is false information spread regardless of intent, while disinformation is false information that is deliberately created or shared to mislead. While disinformation can appear more dangerous, the truth is that in our current environment, where factual information is critical for dealing with a global pandemic or knowing a state’s rules around voting in an election, even misinformation can have devastating consequences.
Who is responsible for combating misinformation and disinformation? What role should online platforms themselves play?
Every player in the digital ecosystem where misinformation and disinformation flourish has some role to play, including users, platforms, and even lawmakers. We, as users, have to be careful not to play a part in spreading false information. Being vigilant and drawing on a diversity of legitimate information sources can help us ensure we aren’t falling prey to disinformation or sharing false information ourselves.
Given their position, though, platforms play an outsized role in stopping the spread of harmful information. We have seen many examples of how information spread online can shape our national discourse and delay the public’s willingness to accept certain facts as true. In our reports, we’ve listed a number of recommendations that call on platforms to do the following:
- Provide users access to and uplift authoritative and legitimate information
- Introduce and enforce clear content moderation and content curation policies
- Introduce and enforce advertising policies
- Provide adequate transparency and accountability around these efforts
What do we need from policymakers?
We have seen the consequences of elected leaders spreading disinformation, and how the initial reluctance of larger platforms to label misinformation or disinformation harmed our society: members of the public refusing to accept election results, and conspiracy theories around COVID-19 flourishing online.
Our reports outline a number of steps policymakers should take, but I would argue these three recommendations are among the most important:
- Having government agencies and representatives ensure they post only verified information and do not spread unproven or debunked claims
- Having policymakers fund vetted fact-checking organizations around the world to ensure that fact-checking efforts can adequately tackle the growing volume of misinformation and disinformation
- Having policymakers enact rules requiring greater transparency from online platforms
What do we still not know about disinformation and how to combat it? How can PIT practitioners and researchers help?
Identifying the source of disinformation is still a challenge. While platforms have become savvier at identifying disinformation in certain subject areas (e.g., COVID-19-related or election-related disinformation) and shutting down purveyors of false information, not all subject areas of disinformation have sources that are easily identified. What’s even more concerning is that the sources spreading this harmful information are not confined to the borders of any one country.
For example, after the 2016 elections, intelligence agencies uncovered foreign actors playing a major role in trying to disrupt our electoral processes, but observers have noted that, during the election cycle that just passed, there was an uptick in false information being deliberately spread by domestic actors as well. There’s a role for researchers here in helping identify sources of disinformation. We also need third parties to research and understand the effectiveness and implementation of the policies platforms put in place to combat disinformation.
What can PIT practitioners do today to educate themselves and gain the skills that are needed to combat mis/disinformation?
Whether you work at a social media platform, you’re in policy, or you’re a researcher, reviewing the work of organizations and researchers who dig into these questions is a must. There are also numerous civil society coalitions dedicated to combating misinformation and disinformation, so engaging with them is an important starting point. For example, OTI is a member of a cohort of organizations that focus on election-related disinformation. In this coalition, we provide feedback on inoculation tools and messaging designed to preempt and debunk misinformation and disinformation.
What skills do the next generation of PIT practitioners need to help combat the problems?
Platforms obviously have a huge role to play here. We need people at these companies who are not only building the right tools and policies to combat the spread of misinformation and disinformation, but who also understand the value of transparency and accountability. Still, that’s really only one piece of the solution. We also need practitioners, both inside and outside of these companies, to advocate for these tools and policies to be implemented, enforced, and improved along the way.
Koustubh “K.J.” Bagchi is senior policy counsel at New America’s Open Technology Institute, focusing on platform accountability and privacy issues. He has more than ten years of experience in public policy and legislative strategy at the local, state, and federal levels. In addition to advising members of the Washington State Senate, he worked as legislative counsel for a D.C. city councilmember and for a member of Congress who served on the influential House Appropriations Committee. Prior to joining OTI, he served as senior counsel for telecommunications and technology issues at Asian Americans Advancing Justice | AAJC, a national civil rights organization. His major work there included advocating on behalf of Asian American and other underserved communities for beneficial technology policies, such as robust digital inclusion and consumer privacy protections, before executive branch agencies and Congress.