NSF grants nearly $7.5 million to universities developing anti-'misinformation' tools
The National Science Foundation has awarded nearly $7.5 million for projects at ten universities developing 'tools and techniques' against misinformation.
One project at UW-Madison will test social media fact-checking tools on platforms like Twitter and Facebook to counter “misinformation” about COVID-19 vaccine hesitancy and election integrity.
The National Science Foundation (NSF) is funding nearly $7.5 million in grants to 10 universities for various "Trust & Authenticity in Communication Systems" initiatives countering "misinformation."
“Modern life and economic growth are dependent on access to communications systems that offer trustworthy and accurate information,” the NSF announcement states. “The overarching track goal is to address the urgent need for tools and techniques to help the nation effectively prevent, mitigate and adapt to critical threats to communication systems.”
One project at the University of Wisconsin-Madison aims to create social media fact-checking tools to fight misinformation in “two democratic and public health crises facing the U.S.”
In an email to Campus Reform, UW–Madison Engineering Professor Michael Wagner, principal grant investigator for the project, defined misinformation as “a deceptive message or messages that may cause harm without the disseminators’ knowledge.”
Wagner's grant proposal outlines COVID-19 vaccine hesitancy and election integrity as key areas where misinformation is spread.
His project also includes plans for developing "intervention" techniques that provide communities deemed susceptible to misinformation with "pre-exposure inoculation and post-exposure correction." Interventions will be field-tested through "a combination of ad-purchasing, automated bots, and online influencers."
Wagner also confirmed that this testing would be done on public platforms.
“The lab-based interventions developed to counter misinformation would most likely be tested in the field (the world outside research labs) on platforms like Twitter and Facebook,” he said.
An NSF spokesperson told Campus Reform that the NSF “does not partner directly” with platforms like Twitter and Facebook for these projects but leaves it up to the institutions awarded grants.
“Once the awards are issued, Convergence Accelerator awardee institutions are responsible for establishing the parameters of involvement and some may establish partnerships with these platforms,” she said.
The NSF additionally approved a University of Michigan grant application to explore how human juries can render judgments about whether a particular piece of information is misleading and how the data from these decisions can be used to inform a platform’s misinformation procedures, like marking articles or limiting distribution.
Paul Resnick, the principal grant investigator, told Campus Reform that determining who the jurors should be, what content they will review, and how they review it will be the first step of the process.
“A major initial activity for this project is to try to determine the conditions of legitimacy for the juries who would provide the gold-standard judgments about whether particular content items are misleading enough to deserve some enforcement action from platforms,” he said.
Another project at the University of Washington, called the “Verified Information Exchange,” will develop tools to help people “understand the trustworthiness of the information they receive.”
When asked what method would be used to determine whether a source is trustworthy, principal grant investigator and UW Professor Scott David told Campus Reform the work will be done from “a human perspective” to help understand how different communities verify information.
“We will seek to identify and generalize different patterns and practices of information verification in human interactions in different communities and cultures and across business, operating, legal, technical and social (BOLTS) domains to invite consideration of the many variables that affect verified information (and its subset, identity-related information) beyond just technical considerations,” he said.
Some grants will also go toward education on digital literacy, such as one at the State University of New York at Buffalo. That project's abstract compares digital literacy to an “immunization” strategy against the spread of “corrupted and misleading information.”
“Just as a vaccine inoculates individuals from a virus, we want to inoculate media consumers from disinformation. Inoculated users form the first line of defense against the spread of corrupted and misleading information,” UB professor and grant principal investigator Siwei Lyu said in a news release.
Lyu told Campus Reform that the main aspect of the project is the creation of a learning platform called “DRange,” which he said will help “improve disinformation awareness and build resilience among different demographic collectives and diverse communities.”
“Inclusive, age-appropriate, and culturally responsive” curriculum and materials will accompany the platform, Lyu explained, stating those will first be deployed for “underrepresented 8th and 9th-grade students and senior citizens.”
Two other grants were given to private organizations, Hacks/Hackers and Meedan. The Meedan grant focuses on developing a solution to “racially targeted misinformation and hate speech.”
Follow the author of this article on Twitter @katesrichardson.