As more people get news from social media, the spread of misinformation is a risk. (Illustration/Michelle Henry)

The growing popularity of social media raises all sorts of questions about online security. According to a recent Twitter SEC filing, approximately 8.5 percent of all Twitter users are bots — fake accounts that produce automated posts. While some of these accounts serve commercial purposes, others are influence bots used to shape opinion on particular topics.

Concerned about the potential misuse of fake social media accounts, the Defense Advanced Research Projects Agency’s Social Media in Strategic Communication program held a four-week challenge in February in which several teams competed to identify a set of influence bots on Twitter.

A USC team of faculty and graduate students received first place for accuracy and second place for timing. Aram Galstyan, a research associate professor in the USC Viterbi School of Engineering’s Department of Computer Science and a project leader at the USC Information Sciences Institute, led the victorious Trojans.

“Spamming behavior has evolved,” Galstyan said. “Current bots tend to be more human-like, and people have realized that they can be used for propagating certain kinds of information, possibly influencing discussions on specific topics.”

A threat to society?

Bots represent a threat to society, according to experts. As more and more people get their news from social media, the spread of misinformation through these channels becomes an increasingly serious risk.

Bots can also be used for political purposes, and some organizations have taken advantage of this. For example, terrorist groups such as ISIS are known to have used social media to reach younger audiences and persuade them to join their cause. Examples of this persuasive behavior include the use of hashtags to focus group messaging and the creation of multiple accounts that flood users’ feeds with tweets, pictures and links.

During the Cold War, magazines were the main vehicles for propaganda. Today, the propaganda war takes place online.

“People normally trust online content,” said Farshad Kooti, a Ph.D. candidate at USC Viterbi who worked with Galstyan. “Unfortunately, this introduces an opportunity to spread misinformation by using automated bots that are very hard to detect.”

Three steps toward detection

For the DARPA competition, Galstyan’s team created a bot detection method that he said has proven 100 percent accurate. The overall process can be divided into three steps: initial bot detection, clustering and classification.

During initial bot detection, the team uses linguistic cues, behavior and profile inconsistencies as features to uncover a first set of bots. These cues include unusual grammar, the number of tweets posted and stock photos used as profile pictures.
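As a rough illustration of this first pass, a simple scoring heuristic might combine such cues; the feature names, thresholds and weights below are hypothetical assumptions for the sketch, not the team’s actual model.

```python
# Hypothetical first-pass screening: combine simple linguistic,
# behavioral and profile cues into a rough suspicion score.
# All features, thresholds and weights are illustrative.

accounts = [
    {"handle": "@user_a", "grammar_error_rate": 0.22,
     "tweets_per_day": 180, "profile_photo_is_stock": True},
    {"handle": "@user_b", "grammar_error_rate": 0.03,
     "tweets_per_day": 12, "profile_photo_is_stock": False},
]

def bot_suspicion_score(account):
    """Return a rough bot-suspicion score between 0 and 1."""
    score = 0.0
    if account["grammar_error_rate"] > 0.15:   # unusual grammar
        score += 0.3
    if account["tweets_per_day"] > 100:        # abnormally high posting volume
        score += 0.4
    if account["profile_photo_is_stock"]:      # stock photo as profile picture
        score += 0.3
    return score

# Accounts above a chosen cutoff form the initial seed set of bots.
seed_bots = [a["handle"] for a in accounts if bot_suspicion_score(a) >= 0.6]
print(seed_bots)  # ['@user_a']
```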

The second step focuses on identifying the accounts to which the first set of bots is linked. Studies suggest that most bot developers create clusters in which bots are connected to each other to increase retweets.
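A toy sketch of that expansion step, assuming a retweet/follower graph is already available; the handles, edges and the choice of the networkx library are illustrative assumptions, not the team’s published pipeline.

```python
# Hypothetical cluster expansion: any account connected to a seed bot
# becomes a candidate for closer inspection. Edges and handles are
# invented for illustration.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("@user_a", "@user_c"),   # seed bot retweeted by @user_c
    ("@user_c", "@user_d"),
    ("@user_b", "@user_e"),   # likely-human neighborhood
])

seed_bots = {"@user_a"}

# Flag every connected component that contains at least one seed bot.
candidate_clusters = [
    component for component in nx.connected_components(g)
    if component & seed_bots
]
print(candidate_clusters)  # the component holding @user_a, @user_c, @user_d
```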

Finally, once a sufficient number of bots has been found, machine-learning algorithms can be trained on those examples to classify the remaining accounts.
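To make the final step concrete, here is a minimal, hypothetical classification pass using scikit-learn; the two features and the tiny training set are invented for illustration and stand in for whatever labeled examples the earlier steps would actually produce.

```python
# Hypothetical final step: train a classifier on accounts already
# labeled bot/human by the earlier steps, then label the rest.
from sklearn.ensemble import RandomForestClassifier

# Each row: [tweets_per_day, grammar_error_rate]
X_train = [[180, 0.22], [150, 0.30], [12, 0.03], [8, 0.01]]
y_train = [1, 1, 0, 0]  # 1 = bot (from earlier steps), 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Classify accounts the first two steps did not cover.
X_unknown = [[95, 0.18], [5, 0.02]]
print(clf.predict(X_unknown))  # e.g. [1 0]
```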

While bot detection methods have become more accurate, bot creators are refining their techniques as well. The number of influence bots, as well as their degree of sophistication, will likely increase in the future, Galstyan said. We can expect new, more complex bots to engage in advertising and political influence campaigns.

“The overall message of the DARPA challenge is that we need to watch out for what may be coming ahead,” he noted. “This is a dynamically evolving problem, and the solutions that work today may become ineffective tomorrow. I believe that the next step is to refine our methods and test them by analyzing specific campaigns.”

via news.usc.edu