
You Are Known by Your Friends: Leveraging Network Metrics for Bot Detection in Twitter

  • David M. Beskow
  • Kathleen M. Carley
Chapter
Part of the Lecture Notes in Social Networks book series (LNSN)

Abstract

Automated social media bots have existed almost as long as the social media platforms they inhabit. Although efforts to detect and characterize these autonomous agents are long-standing, they have redoubled in recent months following the sophisticated deployment of bots by state and non-state actors. This research will study the differences between human and bot social communication networks by conducting an account snowball data collection, and will then evaluate network, content, temporal, and user features derived from this communication network in several bot detection machine learning models. We will compare this model to the other models in the bot-hunter toolbox as well as to current state-of-the-art models. As part of this evaluation, we will also explore and assess relevant training data. Finally, we will demonstrate the application of the bot-hunter suite of tools on Twitter data collected around the 2018 Swedish national elections.
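The following Python sketch (ours, not the authors' bot-hunter implementation) illustrates the core idea described in the abstract: summarize a snowball-sampled communication network around an account with a handful of network metrics and feed them, together with a label, to a supervised classifier. The specific metrics, the toy ego networks, and the random-forest choice are illustrative assumptions only.

```python
# Minimal sketch of network-feature-based bot detection, assuming an
# undirected "communication network" (retweets/mentions) around an account
# has already been collected via snowball sampling.
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def network_features(ego_graph):
    """Summarize an ego communication network with a few standard metrics."""
    n = ego_graph.number_of_nodes()
    if n < 2:
        return [0.0, 0.0, 0.0, 0.0]
    return [
        nx.density(ego_graph),                               # how tightly alters interact
        nx.transitivity(ego_graph),                          # global clustering coefficient
        float(np.mean(list(dict(ego_graph.degree()).values()))),  # mean degree
        len(list(nx.connected_components(ego_graph))) / n,   # fragmentation of the ego network
    ]


# Toy example: two hypothetical ego networks, one dense and clustered
# (more human-like), one star-shaped (often seen around automated amplifiers).
human_ego = nx.erdos_renyi_graph(30, 0.3, seed=1)
bot_ego = nx.star_graph(29)

X = np.array([network_features(human_ego), network_features(bot_ego)])
y = np.array([0, 1])  # 0 = human, 1 = bot (labels are illustrative)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X))
```

In the chapter's setting, the feature vector would also include content, temporal, and user features, and the model would be trained on labeled bot and human accounts rather than on the two synthetic graphs used here.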

Notes

Acknowledgements

This work was supported in part by the Office of Naval Research (ONR) Multidisciplinary University Research Initiative Award N000140811186 and Award N000141812108, the Army Research Laboratory Award W911NF1610049, the Defense Threat Reduction Agency Award HDTRA11010102, and the Center for Computational Analysis of Social and Organizational Systems (CASOS). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the ONR, ARL, DTRA, or the U.S. government.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. School of Computer Science, Carnegie Mellon University, Pittsburgh, USA
