
Cornell professor awarded grant by Facebook to research intent and misinformation.

August 30, 2019

Facebook Awards Cornell Professor $1.77 Million to Study Malicious Intent on Social Media


Facebook announced this summer that it would invest $7.5 million in new research partnerships with academics at three universities: Cornell University, the University of California, Berkeley and the University of Maryland.

Prof. Serge Belongie, computer science, an associate dean at Cornell Tech, received a three-year, $1.77 million grant as part of the project. His work will develop technologies for identifying content posted with malicious intent.

The partnership comes in response to the use of Facebook to cause harm and spread hate, most notably following the terrorist attack in Christchurch, New Zealand, last March, when the shooter live-streamed the massacre from his Facebook account.

In response, Facebook is now searching for new methods to improve the detection of fake news and misinformation campaigns, as well as ways to more quickly identify and remove harmful posts when the volume of content becomes too overwhelming for humans to inspect and moderate, according to its press release.

Belongie’s work focuses on understanding the reasoning behind social media posts — why people share what they share and what impact it has.

“It’s a sort of taxonomy of intent,” Belongie explained. “Imagine you did a study of your 20 closest friends on social media, so you look at their [posts]. If you were to collect all this data, and look at the things they post, [you’d ask], ‘why did they post it?’”

Belongie said that reasons for posting vary widely, from casual observations or seeking information to garnering sympathy or bragging.

While Belongie’s broader work examines why people share what they do, Facebook is specifically concerned with cases where the intent is to deceive or cause harm, he said.

Facebook already has an extensive network of human annotators who label content with its likely intent, according to Belongie. But the volume of content is too vast for manual review alone, so part of the team’s work is to contribute to a labeled dataset.

The goal is eventually to create an artificial intelligence system that can use the dataset to predict intent and flag problematic posts. Belongie and his graduate students will be at the forefront of this effort, working with Facebook to develop new technologies to protect the public from misinformation.