
Twitter’s Bane: Bad Bots

Are you a human, bot or cyborg?

This question was posed by researcher Zi Chu, along with Haining Wang, Steven Gianvecchio, and Sushil Jajodia, in a paper published in 2012.

Two years later, it appears that little has changed for users of online social networks, specifically Twitter, when it comes to telling apart accounts that are run by a human, a bot, or a cyborg, a new study recently revealed.

We will find out why shortly, but first, a few definitions of terms.

A bot is an automated program created to do tasks.

Since it’s found in online social networks, we call it a socialbot, and it’s capable of posting updates and interacting with contacts on behalf of its human master, the account owner.

On Twitter, bots that merely tweet news from various sources, effectively turning an account into a one-stop RSS feed, are deemed good.

On the other hand, bots that post spam and links that would likely lead to dodgy, third-party sites are deemed bad.

A cyborg, as defined by Chu, can either be one of these two: (1) a bot-assisted human or (2) a human-assisted bot. Tasks can be done, in essence, by both man and machine.

A Twitter account is a cyborg if the owner uses automated programs to post on his/her behalf whenever he/she is unavailable. The owner then replies or reacts to posts from contacts freely and, when available, without any interference from a bot.
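The bot-assisted-human setup described above can be sketched as a simple scheduler: prepared updates are drained automatically only while the owner is away, and anything the owner writes while present is posted directly. This is a minimal illustrative sketch, not real Twitter automation; the class and method names (`CyborgAccount`, `queue_update`, `tick`) are hypothetical and no actual Twitter API is called.

```python
from collections import deque

class CyborgAccount:
    """Hypothetical sketch of a 'bot-assisted human' (cyborg) account."""

    def __init__(self):
        self.queued_updates = deque()   # updates the bot may post for the owner
        self.timeline = []              # what actually gets published
        self.owner_available = True

    def queue_update(self, text):
        # Owner prepares content in advance for the bot to post later.
        self.queued_updates.append(text)

    def owner_post(self, text):
        # A human-written post; only happens while the owner is present.
        if self.owner_available:
            self.timeline.append(("human", text))

    def tick(self):
        # One scheduler step: the bot posts only when the owner is away.
        if not self.owner_available and self.queued_updates:
            self.timeline.append(("bot", self.queued_updates.popleft()))

account = CyborgAccount()
account.queue_update("Scheduled post #1")
account.owner_post("Hello from the actual human")
account.owner_available = False   # owner steps away
account.tick()                    # the bot fills the gap
print(account.timeline)
# → [('human', 'Hello from the actual human'), ('bot', 'Scheduled post #1')]
```

The key property of a cyborg, as opposed to a pure bot, is visible in `tick()`: automation only acts in the owner's absence, while human activity passes through untouched.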

Socialbots may be deemed as more disruptive and, in most cases, more threatening than cyborgs. Sophisticated bot masters with ill-intent can program their bots to be human enough that real people can’t tell that they’re interacting with a computer program.

Such was the case of Carina Santos. “She” wasn’t only human-like, she was also influential and, at some point, became almost as popular as the Dalai Lama and Conan O’Brien.

This new study, entitled “Reverse Engineering Socialbot Infiltration Strategies in Twitter”, isolated socialbots from the human and cyborg accounts on Twitter, aiming to answer its own set of questions:

  • Can socialbots really infiltrate Twitter so easily?
  • What are the characteristics of socialbots that would enable them to evade current Twitter defences?
  • What strategies could be more effective to gain followers and influence?
  • What automatic posting patterns could be deployed by socialbots without being detected?

All of which, in turn, reveals how social metrics services can be misled, how influential bots can sway public opinion and damage reputations, and how easily Twitter users can be fooled into believing these bots are one of us.

Here are other findings by the researchers of this study:

  • Other than what was stated above, socialbots can also be programmed to do the following: conduct a Sybil attack, farm links, pollute content of the platform, and spam search results.
  • The more a socialbot exhibits characteristics of being human (in terms of visibility, interactivity, and contribution to the community it chose to belong to), the less likely Twitter is to flag it as a bot. Retweeting can make a bot appear more human online; thus, it is an effective infiltration strategy.
  • The informal and grammatically incoherent styles in tweets generated by these bots – a flaw, actually – ironically served them well given that general Twitter users appear to talk this way online.

You can read more about the technicalities of this study here.

Jovi Umawing

