UBC Theses and Dissertations


Security analysis of malicious socialbots on the web Boshmaf, Yazan 2015


Full Text

Security Analysis of Malicious Socialbots on the Web

by

Yazan Boshmaf

B. Computer Engineering, Jordan University of Science and Technology, 2005
M. Information Technology, University of Stuttgart, 2008

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

Doctor of Philosophy

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES
(Electrical and Computer Engineering)

The University of British Columbia
(Vancouver)

May 2015

© Yazan Boshmaf, 2015

Abstract

The open nature of the Web, online social networks (OSNs) in particular, makes it possible to design socialbots—automation software that controls fake accounts in a target OSN, and has the ability to perform basic activities similar to those of real users. In the wrong hands, socialbots can be used to infiltrate online communities, build up trust over time, and then engage in various malicious activities.

This dissertation presents an in-depth security analysis of malicious socialbots on the Web, OSNs in particular. The analysis focuses on two main goals: (1) to characterize and analyze the vulnerability of OSNs to cyber attacks by malicious socialbots, social infiltration in particular, and (2) to design and evaluate a countermeasure to efficiently and effectively defend against socialbots.

To achieve these goals, we first studied social infiltration as an organized campaign operated by a socialbot network (SbN)—a group of programmable socialbots that are coordinated by an attacker in a botnet-like fashion. We implemented a prototypical SbN consisting of 100 socialbots and operated it on Facebook for 8 weeks.
Among various findings, we observed that some users are more likely to become victims than others, depending on factors related to their social structure. Moreover, we found that traditional OSN defenses are not effective at identifying automated fake accounts or their social infiltration campaigns.

Based on these findings, we designed Íntegro—an infiltration-resilient defense system that helps OSNs detect automated fake accounts via a user ranking scheme. In particular, Íntegro relies on a novel approach that leverages victim classification for robust graph-based fake account detection, with provable security guarantees. We implemented Íntegro on top of widely used, open-source distributed systems, in which it scaled nearly linearly. We evaluated Íntegro against SybilRank—the state of the art in graph-based fake account detection—using real-world datasets and a large-scale, production-class deployment at Tuenti, the largest OSN in Spain with more than 15 million users. We showed that Íntegro significantly outperforms SybilRank in ranking quality, allowing Tuenti to detect at least 10 times more fake accounts than their current abuse detection system.

Preface

“Акъылым уасэ иIэкъым, гъэсэныгъэм гъунэ иIэкъым”
“Intellect is priceless, education has no limit” — Circassian proverb [59]

This research was the product of a fruitful collaboration between the author of the dissertation and the following people: Ildar Muslukhov, Konstantin Beznosov (co-advisor), and Matei Ripeanu (co-advisor) from the University of British Columbia, Dionysios Logothetis and Georgos Siganos from Telefónica Research, and Jorge Rodriguez Lería and Jose Lorenzo from Tuenti, Telefónica Digital.

It is worth mentioning that the work presented herein consists of research studies that have been published or are under review in peer-reviewed international conferences, workshops, and journals.
In particular, the characterization study presented in Chapter 2, and partly discussed in Chapter 4, led to the following publications:

• Y. Boshmaf, I. Muslukhov, K. Beznosov, and M. Ripeanu. The Socialbot Network: When Bots Socialize for Fame and Money. In Proceedings of the 27th Annual Computer Security Applications Conference (ACSAC ’11), pp. 93–102, Orlando, FL, USA, 2011 (best paper award, 20% acceptance rate).

• Y. Boshmaf, I. Muslukhov, K. Beznosov, and M. Ripeanu. Key Challenges in Defending against Malicious Socialbots. In Proceedings of the 5th Annual USENIX Workshop on Large-Scale Exploits and Emergent Threats (LEET ’12), San Jose, CA, USA, 2012 (18% acceptance rate).

• Y. Boshmaf, I. Muslukhov, K. Beznosov, and M. Ripeanu. Design and Analysis of a Social Botnet. Computer Networks, Elsevier, 57(2), pp. 556–578, 2013 (1.87 impact factor).

The results related to the system design part, which are reported in Chapter 3, are presented in the following refereed publications:

• Y. Boshmaf, K. Beznosov, and M. Ripeanu. Graph-based Sybil Detection in Social and Information Systems. In Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM ’13), pp. 466–473, Niagara Falls, ON, Canada, 2013 (best paper award, 13% acceptance rate).

• Y. Boshmaf, D. Logothetis, G. Siganos, J. R. Lería, J. Lorenzo, M. Ripeanu, and K. Beznosov. Íntegro: Leveraging Victim Prediction for Robust Fake Account Detection in OSNs. In Proceedings of the 2015 Annual Network and Distributed System Security Symposium (NDSS ’15), San Diego, CA, USA, 2015 (16% acceptance rate).

• Y. Boshmaf, D. Logothetis, G. Siganos, J. R. Lería, J. Lorenzo, M. Ripeanu, and K. Beznosov. Thwarting Fake Accounts in Online Social Networks by Predicting their Victims. Under review (submitted in March 2015).

The discussion in Chapter 4 is partially influenced by ideas and findings which led to publications at the following refereed workshops and conferences:

• S-T. Sun, Y. Boshmaf, K. Hawkey, and K. Beznosov. A Billion Keys, but Few Locks: The Crisis of Web Single Sign-On. In Proceedings of the 2010 Workshop on New Security Paradigms (NSPW ’10), Concord, MA, USA, 2010 (32% acceptance rate).

• H. Rashtian, Y. Boshmaf, P. Jaferian, and K. Beznosov. To Befriend or Not? A Model for Friend Request Acceptance on Facebook. In Proceedings of the 11th Symposium On Usable Privacy and Security (SOUPS ’14), Menlo Park, CA, USA, 2014 (27% acceptance rate).

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgments
Dedication

1 Introduction
1.1 Problem Overview
1.1.1 What is the Threat?
1.1.2 What is at Stake?
1.1.3 Who is Liable?
1.1.4 Why is the Threat Hard to Mitigate?
1.2 Goals and Methodology
1.3 Research Summary
1.3.1 Threat Characterization
1.3.2 Countermeasure Design

2 Social Infiltration in OSNs
2.1 Background and Related Work
2.1.1 Online Social Networks
2.1.2 Sybil Attacks and Social Infiltration
2.1.3 Social Engineering, Automation, and Socialbots
2.1.4 Online Attacks in a Web of Scale
2.1.5 The Cyber-Criminal Ecosystem for OSN Abuse
2.2 Vulnerabilities in OSN Platforms
2.2.1 Ineffective CAPTCHAs
2.2.2 Fake Accounts and User Profiles
2.2.3 Crawlable Social Graphs
2.2.4 Exploitable Platforms and APIs
2.3 Design of a Social Botnet
2.3.1 Overview
2.3.2 Threat Model
2.3.3 Requirements
2.3.4 Socialbots
2.3.5 Botmaster
2.3.6 C&C Channel
2.4 Empirical Evaluation
2.4.1 Ethics Consideration
2.4.2 Methodology
2.4.3 Implementation on Facebook
2.4.4 Experimentation
2.4.5 Analysis and Discussion
2.5 Economic Feasibility
2.5.1 Methodology
2.5.2 Model and Assumptions
2.5.3 Scalability of Social Infiltration
2.5.4 Profit-Maximizing Infiltration Strategies
2.5.5 Case Study: Social Infiltration in Facebook
2.5.6 Discussion
2.6 Summary

3 Infiltration-Resilient Fake Account Detection in OSNs
3.1 Background and Related Work
3.1.1 Threat Model
3.1.2 Fake Account Detection
3.1.3 Abuse Mitigation and the Ground-truth
3.1.4 Analyzing Victim Accounts
3.2 Intuition, Goals, and Model
3.2.1 Intuition
3.2.2 Design Goals
3.2.3 System Model
3.3 System Design
3.3.1 Overview
3.3.2 Identifying Potential Victims
3.3.3 Ranking User Accounts
3.3.4 Selecting Trusted Accounts
3.3.5 Computational Cost
3.3.6 Security Guarantees
3.4 Comparative Evaluation
3.4.1 Compared System
3.4.2 Methodology
3.4.3 Datasets
3.4.4 Implementation
3.4.5 Victim Classification
3.4.6 Ranking Quality
3.4.7 Sensitivity to Seed-targeting Attacks
3.4.8 Deployment at Tuenti
3.4.9 Scalability
3.5 Discussion
3.5.1 Robustness of User Ranking
3.5.2 Maintenance and Impact
3.5.3 Limitations
3.6 Summary

4 Discussion and Research Directions
4.1 Challenges for Preventive Countermeasures
4.1.1 Web Automation
4.1.2 Identity Binding
4.1.3 Usable Security
4.2 Account Admission Control in OSNs
4.2.1 Vouching
4.2.2 Service Provisioning
4.2.3 User Education and Security Advice
4.3 Leveraging Victim Prediction
4.3.1 User-Facing Security Advice
4.3.2 Honeypots and User Sampling

5 Impact and Conclusion

Bibliography

A Security Analysis of Íntegro
A.1 Background
A.2 Mathematical Proofs

B Evaluating Sybil Node Detection Algorithms with SyPy
B.1 Framework
B.1.1 Graphs and Regions
B.1.2 Networks
B.1.3 Detectors
B.2 Benchmarks
B.3 Extensibility

List of Tables

Table 2.1 Generic operations supported by the socialbot malware
Table 2.2 Master commands executed by socialbots
Table 2.3 Estimates for analyzing infiltration profitability in Facebook
Table 3.1 Low-cost features extracted from Facebook and Tuenti

List of Figures

Figure 1.1 Research questions and contributions
Figure 2.1 Components required for OSN abuse in Facebook
Figure 2.2 An overview of a socialbot network (SbN)
Figure 2.3 Seeding social infiltration in Facebook
Figure 2.4 Demographics of contacted users
Figure 2.5 User susceptibility to infiltration on Facebook (CI=95%)
Figure 2.6 Real-world samples of user interactions with socialbots
Figure 2.7 Users with accessible private data
Figure 2.8 Infiltration performance on Facebook
Figure 3.1 System model
Figure 3.2 Trust propagation example
Figure 3.3 Victim classifier tuning
Figure 3.4 Victim classification using the RF algorithm
Figure 3.5 Ranking quality under each infiltration scenario (CI=95%)
Figure 3.6 Ranking sensitivity to seed-targeting attacks (CI=95%)
Figure 3.7 Preprocessing for system deployment
Figure 3.8 Deployment results at Tuenti
Figure 3.9 System scalability on distributed computing platforms
Figure B.1 SyPy network visualization
Acknowledgments

First and foremost, I would like to thank my kind advisors, Konstantin Beznosov and Matei Ripeanu, for giving me the opportunity to venture into different topics and disciplines, and for patiently guiding me through this journey.

Second, this work would have been no fun without my fabulous collaborators and supportive friends. In no particular order, I give my special thanks to you all: Dionysios Logothetis, Georgos Siganos, Jorge Rodriguez Lería, Jose Lorenzo, Ildar Muslukhov, Pooya Jaferian, San-Tsai Sun, Hootan Rashtian, Primal Wijesekera, Ivan Cherapau, Tareq AlKhatib, Bader AlAhmad, Hussam Ashab, Antone Dabeet, Daniel Lopez, Nima Fartash, Mohammad Shamma, and Alex Dauphin.

Third, I would like to thank all members of LERSSE and NetSysLab for their feedback and constructive discussions. I am greatly thankful to Tim Hwang from Pacific Social, Chris Sumner from the Online Privacy Foundation, Ulf Lindqvist from SRI International, Hasan Cavusoglu from Sauder School of Business, Joseph McCarthy from University of Washington–Bothell, the Alexander von Humboldt Foundation, and the Natural Sciences and Engineering Research Council (NSERC) of Canada for their external support and feedback during my studies. I am vastly indebted to the University of British Columbia for a generous doctoral fellowship.

Fourth, salutes to my colleagues at Facebook, Telefónica, Microsoft, and Sophos for enriching internship experiences: Kyle Zeeuwen, David Cornell, Dmitry Samosseiko, Yuchun Tang, Matt Jones, Nikolaos Laoutaris, and Mihai Budiu.

Last but not least, I would like to thank my family. Words cannot express how thankful I am for your constant support and love over these years.

Dedication

To my parents who never stopped believing in me

Chapter 1

Introduction

Our personal and professional lives have gone digital. We live, work, and play in cyberspace. We use the Web every day to talk, email, text, and socialize with family, friends, and colleagues.
Among a plethora of social Web services, online social networks (OSNs), such as Facebook¹ and Twitter², have had the highest popularity and impact on the way we engage with the Web on a daily basis [11].

With more than one billion active users [30, 119], OSNs have attracted third parties who exploit them as effective online social media to reach out to and potentially influence a large and diverse population of web users [24, 62]. For example, OSNs were heavily employed by Obama’s 2012 campaign team, which raised $690 million online, up from $500 million in 2008 [100]. In addition, it has been argued that OSNs were one of the major enablers of the recent Arab Spring in the Middle East [89]. This pervasive integration of OSNs into everyday life is rapidly becoming the norm, and arguably, it is here to stay. Today’s social Web, however, is not exclusive to human beings. In fact, online personas and their social interactions can be programmed and fully automated. As Hwang et al. put it [56]: “digitization drives botification; the use of technology in a realm of human activity enables the creation of software to act in lieu of human.”

¹ http://facebook.com
² http://twitter.com

In this dissertation, we analyze the threat of malicious socialbots—automated personas that infiltrate OSNs like virtual con men, build up trust over time, and then exploit it to promote personalized messages for adversarial objectives.³

1.1 Problem Overview

In what follows, we first define the threat of malicious socialbots (Section 1.1.1), explain why it is important to defend against it (Section 1.1.2), discuss who is legally liable in case of harm or damage (Section 1.1.3), and then elaborate on why mitigating this threat is a hard socio-technical problem (Section 1.1.4).

1.1.1 What is the Threat?

A new breed of computer programs, called socialbots, is emerging, and these bots can influence users online [79].
A socialbot is automation software that controls a fake account in a target OSN and has the ability to perform basic activities similar to those of real users, such as posting messages and sending friend requests.

What makes a socialbot different from self-declared bots, such as Twitter bots that post weather forecasts, or from spambots, which massively distribute unsolicited messages to non-consenting users, is that it is designed to pass itself off as a human being. This is achieved either by simply mimicking the actions of a real OSN user or by simulating such a user using artificial intelligence, just as in social robotics [56].

In the wrong hands, a socialbot can be used to infiltrate users in a target OSN and reach an influential position, that is, to adversely manipulate the social graph of the OSN by connecting with a large number of its users. We discuss the threat model in more detail in Sections 2.3.2 and 3.1.1.

³ Use of the plural pronoun is customary even in solely authored research work. However, given the subject of this dissertation, its use herein is particularly ironic, following Douceur’s seminal paper [22] on multiple-identity or Sybil attacks in distributed systems.

1.1.2 What is at Stake?

First of all, a malicious socialbot can pollute the target OSN with a large number of non-genuine social relationships. This means that it is no longer safe to treat the infiltrated OSN as a network of trust, which goes against the long-term health of the OSN ecosystem [11]. Put differently, third-party applications and websites would have to identify and remove fake user accounts before integrating with such OSNs, which has been shown to be a non-trivial task [137].

Second, once a socialbot infiltrates the target OSN, it can exploit its new position in the network to spread misinformation and bias public opinion [113], perform online surveillance [95], and even influence algorithmic trading [8]. For example, Ratkiewicz et al.
[98] described the use of Twitter bots to run astroturf campaigns during the 2010 U.S. midterm elections. Moreover, a socialbot can exploit its position in the network to distribute malicious content, such as malware and botnet executables [132]. For example, the Koobface botnet [112] propagates by hijacking user accounts on infected machines, after which it uses these accounts to send messages containing a malicious link to other OSN users. When clicked, the link points to a compromised website that attempts to infect its visitors with the Koobface malware using drive-by download attacks [132].

Third and last, as a socialbot infiltrates the target OSN, it can harvest private user data such as email addresses, phone numbers, and other personally identifiable information that has monetary value. For an attacker, such data are valuable and can be used for online profiling and for large-scale email spam and phishing campaigns [58]. It is thus not surprising that similar socialbots are being offered for sale at underground markets, along with other commodities for OSN abuse [114].

1.1.3 Who is Liable?

It is often the case that OSN users or third parties are liable for any damage caused by OSN platforms or services, not OSN operators [78]. For example, Facebook’s Terms of Service explicitly state under §15, entitled “Disputes,” that [33]:

“We [Facebook and subsidiaries] try to keep Facebook up, bug-free, and safe, but you [the end user] use it at your own risk. We are providing Facebook as is without any express or implied warranties. [...] We do not guarantee that Facebook will always be safe, secure or error-free. [...]
Facebook is not responsible for the actions, content, information, or data of third parties, and you release us, our directors, officers, employees, and agents from any claims and damages, known and unknown, arising out of or in any way connected with any claim you have against any such third parties.”

From a privacy law perspective, the American courts, for example, lack a coherent and consistent methodology for determining whether an individual has a reasonable expectation of privacy in a particular fact that has been shared with one or more persons in social networks [106]. In online settings, this methodological weakness can have severe implications, as OSNs like Facebook end up facilitating peer-to-peer privacy violations in which users harm each other’s privacy interests [43]. While policymakers cannot make such OSNs completely safe, they can help people use them safely by introducing policy interventions, such as a strengthened public-disclosure tort and a right to opt out completely [43]. It is also in the interest of OSN operators to keep their platforms as safe and secure as possible by preventing or detecting malicious activities, including automated fake accounts [17, 33]. This commitment is evident in the large investments made by OSNs to hire teams of security engineers who develop, deploy, and maintain sophisticated defense mechanisms [15, 104, 135].

1.1.4 Why is the Threat Hard to Mitigate?

Recently, a number of feature-based detection techniques were proposed to automatically identify OSN bots, or automated fake accounts, based on their abnormal behavior [108, 127, 135]. By extracting features that describe certain user behaviors from recent activities (e.g., the frequency of friend requests), a fake account classifier is trained using various machine learning techniques. For example, Stein et al.
[104] presented Facebook’s “immune system”—an adversarial machine learning system that performs real-time checks and classifications on every read and write action against Facebook’s database, based on features extracted from user accounts and their activities. Accordingly, such an adversarial learning system is expected to be effective at identifying and blocking spambots. Socialbots, however, are more deceptive than spambots, as they are designed to behave “normally” by posing as average OSN users [56]. Armed with today’s artificial intelligence advancements, it is feasible to orchestrate a network of socialbots that sense, think, and act cooperatively, just like human beings. An attacker, for example, can use adversarial classifier reverse engineering techniques [74] to learn sufficient information about the employed security defenses and then evade detection by constructing an adversarial strategy that minimizes the chance of a socialbot being classified as abnormal or fake, sometimes down to zero.

Graph-based detection techniques [2, 123, 137], as an alternative to feature-based detection, analyze the social graph in order to partition it into two regions separating real accounts from fakes. Such techniques, however, are expected to be less effective at identifying malicious socialbots, as they make strong assumptions about the capabilities of online attackers that do not hold in practice. For example, it is often assumed that the two regions are sparsely connected, that is, that automated fake accounts cannot establish many social relationships with real users, as this would require exceptional social engineering [137].
This assumption does not hold when malicious socialbots are used, because each bot is expected to gradually infiltrate and integrate into the online community it targets, resembling the scenario in which a new user joins an OSN and starts connecting with others [56].

To this end, detecting malicious socialbots in OSNs is expected to result in an arms race that will keep both defenders and attackers busy, depending on their available resources. The threat of socialbots could be mitigated by eliminating the factors that make their automated operation feasible in the first place. Doing so, however, involves solving a number of hard socio-technical challenges, which we identify later in Section 4.1. Alternatively, an OSN can enforce strict account admission policies that limit the service offered to newly registered users. As we discuss in Section 4.2, this can negatively impact user experience.

1.2 Goals and Methodology

Our ultimate goal is twofold: (1) to understand and characterize what makes OSNs vulnerable to cyber-attacks by malicious socialbots, to social infiltration in particular, and (2) to design a new countermeasure to effectively and efficiently defend against malicious socialbots. In order to achieve these goals, we divided our investigation into two parts: threat characterization, which is presented in Chapter 2, and countermeasure design, which is presented in Chapter 3.

As for our research methodology, we followed the standard iterative process in improving computer security, which starts with vulnerability analysis, followed by security requirements specification and countermeasure design, and ends with security assurance [4]. We used quantitative and analytical research methods to characterize social infiltration in OSNs and to design a robust defense mechanism against it with provable security guarantees.
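As context for the graph-based techniques discussed in Section 1.1.4, ranking schemes of this family rest on propagating trust from a set of known-real seed accounts through the social graph for a small number of steps, then ranking accounts by the resulting scores. The following is a purely illustrative pure-Python sketch of such a short, early-terminated power iteration; the toy graph, seed choice, and iteration count are all hypothetical and not taken from this dissertation:

```python
# Toy undirected social graph as adjacency sets (hypothetical example).
graph = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "carol"},
    "carol": {"alice", "bob", "sybil1"},  # single attack edge to the fakes
    "sybil1": {"carol", "sybil2"},
    "sybil2": {"sybil1"},
}

def rank_by_trust(graph, seeds, iterations=3):
    """Propagate trust from known-real seed accounts for a small, fixed
    number of iterations, then degree-normalize the resulting scores."""
    trust = {v: (1.0 / len(seeds) if v in seeds else 0.0) for v in graph}
    for _ in range(iterations):
        nxt = {v: 0.0 for v in graph}
        for v, friends in graph.items():
            share = trust[v] / len(friends)  # split trust evenly over edges
            for u in friends:
                nxt[u] += share
        trust = nxt
    # Normalize by degree so well-connected real users are not unduly favored.
    return {v: trust[v] / len(graph[v]) for v in graph}

scores = rank_by_trust(graph, seeds={"alice", "bob"})
ranking = sorted(scores, key=scores.get, reverse=True)
```

Because trust is injected only at the seeds and the iteration stops early, little of it crosses the few attack edges into the fake region, so the fakes land at the bottom of the ranking. This is exactly the property that breaks down when socialbots infiltrate many real users: each successful infiltration adds an attack edge through which trust leaks.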
These methods included controlled experiments, statistical analysis, simulations, mathematical modeling and analysis, and real-world system deployment. We provide a more detailed description of our research methodology in Sections 2.4.2 and 3.4.2.

1.3 Research Summary

While the motivations for operating socialbots and the technical mechanisms that enable them remain rich areas of research, we focus our investigation around four research questions (RQs) covering the threat characterization part (Section 1.3.1) and the countermeasure design part (Section 1.3.2). Figure 1.1 summarizes the research questions and contributions of this dissertation.

[Figure 1.1: Research questions and contributions]

1.3.1 Threat Characterization

In Chapter 2, we consider profit-driven attackers who infiltrate OSNs using malicious socialbots. In particular, the following questions guide our characterization of social infiltration in OSNs:

• RQ1: How vulnerable are OSNs to social infiltration?

• RQ2: What are the security and privacy implications of social infiltration?

• RQ3: What is the economic rationale behind social infiltration at scale?

To address these questions, we studied social infiltration as an organized campaign run by a network of socialbots.
In particular, we adopted the design of web-based botnets [101] and defined what we call a socialbot network (SbN): a group of programmable socialbots that are coordinated by an attacker using a software controller called the botmaster. In our design, the botmaster follows simple infiltration strategies that exploit known social behaviors of users in OSNs in order to increase the scale of infiltration and its success rate.

Main Findings

We implemented an SbN consisting of 100 socialbots and a single botmaster. We operated this SbN on Facebook for 8 weeks in early 2011. During that time, the socialbots sent a total of 9,646 friend requests to attacked users, out of which 3,439 requests were accepted by victims. This resulted in an average success rate of 35.7%. We recorded all data related to user behavior, along with all accessible user profile information.

The main findings of this part of the dissertation are the following:

Finding 1 (under RQ1): OSNs such as Facebook suffer from inherent vulnerabilities that enable an attacker to automate social infiltration on a large scale.

We analyzed the vulnerability of Facebook to malicious automation. We found that OSNs such as Facebook employ ineffective CAPTCHAs, allow multiple accounts to be created by the same user, hide the social graph but permit any user to crawl it, and provide social APIs and web platforms that are relatively easy to exploit or reverse engineer. Along with poorly designed end-user privacy controls, these vulnerabilities represent the enabling factors that make operating socialbots feasible in the first place.

Finding 2 (under RQ1): Some users are more likely to become victims than others, which partly depends on factors related to their social structure.

We found that OSN users are not careful when accepting connection requests, especially when they share mutual connections with the sender.
This behavior can be exploited to achieve a large-scale infiltration with a success rate of up to 80%, when the bots shared at least 10 mutual friends with the victims. Moreover, we found that the more friends a user has, the more likely the user is to accept friend requests sent by fakes posing as strangers, regardless of their gender or number of mutual friends.

Finding 3 (under RQ2): An SbN can seriously breach the privacy of users, where personally identifiable information is compromised.

Subject to user privacy settings, we showed that an attacker can gain access to 2.6 times more private user data by running a social infiltration campaign. These data include birth dates, email addresses, phone numbers, and other personally identifiable information. This privacy breach includes the private data of "friends of friends," that is, users who have not been infiltrated by socialbots but are friends with their victims.

Finding 4 (under RQ2): Traditional OSN defenses are not effective at identifying automated fake accounts or their social infiltration campaigns.

Even though Facebook uses a sophisticated feature-based abuse detection system that relies on various machine learning techniques to identify fake accounts [104], we found that only 20% of the fakes controlled by socialbots were correctly identified and suspended. However, these suspended accounts were in fact manually flagged by concerned users, rather than detected by Facebook itself. In addition, while the average Facebook user had approximately 100 friends [39], we found that 50% of the socialbots infiltrated more than 35 victims and up to 120, which means fakes are not always loosely connected to real accounts in the social graph.
This, as a result, renders graph-based fake account detection systems, which aim to find sparse cuts to separate real accounts from fakes, ineffective in practice.

Finding 5 (under RQ3): For an SbN to be scalable in terms of number of attacked users, it ought to have a fixed size in terms of number of socialbots.

We developed an economic model for profit-driven social infiltration. Using cost-volume-profit (CVP) analysis, we found that when operating an SbN using a growing number of socialbots, there is no benefit—if not a loss—from scaling the SbN and attacking even more users. Due to linear cost dependence, the attacker makes the same profit, at best, whether 1K or 100K users are attacked, for example.

Finding 6 (under RQ3): Operating an SbN at scale is expected to be profitable, but it is not particularly attractive as an independent business.

We derived two business models for social infiltration under which an attacker can make non-zero profit. In the first model, the attacker sells harvested user data through infiltrating as many users as possible, following a scalable data-driven infiltration strategy. In the second model, the attacker receives a lump-sum payment through infiltrating a predefined number of users, following a non-scalable target-driven infiltration strategy. However, due to poor market incentives, our analysis indicates that scalable social infiltration is not sustainable as an independent business, and a rational botherder would utilize an SbN as a monetization platform for more profitable underground commodities.

Contributions

In summary, this part of the dissertation makes the following main contributions:

Contribution 1: Demonstrating the feasibility of social infiltration in OSNs.

We performed the first comprehensive study that shows the feasibility of social infiltration in OSNs such as Facebook.
In particular, we designed and evaluated a social botnet on Facebook, which resulted in new or confirmatory findings related to platform vulnerabilities, user susceptibility to social infiltration, data breaches, and the effectiveness of fake account detection mechanisms against malicious socialbots that are capable of social infiltration at scale (Findings 1–4).

Contribution 2: Economic analysis of social infiltration at scale.

We developed a mathematical model to analyze the scalability of social infiltration from an economic context. We used this model to derive profit-maximizing infiltration strategies that satisfy different scalability requirements. We also evaluated the economic feasibility of social infiltration under these strategies using data collected from Facebook, which resulted in new findings related to the dynamics of social infiltration as a monetization platform for OSN abuse (Findings 5–6).

1.3.2 Countermeasure Design

In Chapter 3, we consider attackers who can run a social infiltration campaign at a large scale using a set of automated fake accounts, or socialbots. Specifically, each fake account can perform social activities similar to those of real users, including befriending other real users. Motivated by Finding 4, our design of an infiltration-resilient fake account detection mechanism tackles the following question:

• RQ4: How can OSNs detect automated fakes that infiltrate on a large scale?

To address this question, we designed Íntegro—a robust and scalable defense system that helps OSNs detect automated fake accounts via a user ranking scheme. The system is suitable for OSNs whose users declare bidirectional social relationships (e.g., Tuenti, RenRen, LinkedIn, Facebook), with the ranking process being completely transparent to users. While the ranking scheme is graph-based, the graph is preprocessed first and annotated with information derived from feature-based detection techniques, similar to those employed by Facebook's immune system.
This approach of integrating user-level activities into graph-level structures positions Íntegro as the first feature-and-graph-based detection mechanism. (In Spanish, the word "íntegro" means integrated, which suits this approach. The OSNs mentioned above are at http://tuenti.com, http://renren.com, and http://linkedin.com.)

Design Overview

Our design directly follows from Finding 2 and is based on the observation that victims—real accounts whose users have accepted friend requests sent by fakes—are useful for designing robust fake account detection mechanisms. In particular, Íntegro uses simple account features (e.g., gender, number of friends, time since last update), which are cheap to extract from user-level activities, to train a victim classifier in order to identify potential victims in the OSN. As attackers have no control over victim accounts or their activities, a victim classifier is inherently more resilient to adversarial attacks than a similarly-trained fake account classifier. Moreover, as victims are directly connected to fakes in the graph, they represent a natural "borderline" that separates real accounts from fakes.

Íntegro makes use of this observation by consistently assigning lower weights to edges incident to potential victims than to other edges, after which it ranks user accounts based on the landing probability of a short, supervised random walk that starts from a known real account. The random walk is "short," as it is terminated early before it converges. The walk is "supervised," as it is biased towards traversing nodes that are reachable through higher-weight paths. Therefore, this modified random walk is likely to stay within the subgraph consisting of real accounts, and thus most real accounts receive higher ranks than fakes. Unlike SybilRank [15], which is the state-of-the-art in graph-based fake account detection, we do not assume sparse connectivity between real and fake accounts.
This makes Íntegro the first fake account detection system that is robust against social infiltration.

Given an OSN consisting of n user accounts, Íntegro takes O(n log n) time to complete its computation. For attackers who randomly establish a set Ea of edges between victim and fake accounts, Íntegro guarantees that at most O(vol(Ea) log n) fakes are assigned ranks similar to or higher than real accounts in the worst case, where vol(Ea) is the sum of weights on edges in Ea. In fact, this bound on ranking quality represents an improvement factor of O(|Ea| / vol(Ea)) over SybilRank. In addition, even with a uniformly random victim classifier that labels each account as a victim with 0.5 probability, Íntegro ensures that vol(Ea) is at most equal to |Ea|, resulting in the same asymptotic bound offered by SybilRank [15].

Main Results

We evaluated Íntegro against SybilRank using real-world datasets and a large-scale deployment at Tuenti—the largest OSN in Spain with about 15 million users. We picked SybilRank because it was shown to outperform known contenders [15], including EigenTrust [60], SybilGuard [138], SybilLimit [139], SybilInfer [19], Mislove's method [122], and GateKeeper [118]. In addition, as SybilRank relies on a user ranking scheme that is similar to ours, albeit on an unweighted version of the graph, evaluating against SybilRank allowed us to clearly show the impact of leveraging victim classification on fake account detection.

Our evaluation results show that Íntegro consistently outperforms SybilRank in ranking quality, especially as the fakes infiltrate an increasing number of victims, that is, as Ea grows large. In particular, Íntegro resulted in up to 30% improvement over SybilRank in its ranking area under the ROC curve (AUC), which represents the probability that a random real account is ranked higher than a random fake account [122].
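To make the ranking and its evaluation concrete, the following is a minimal sketch of a short, supervised random walk on a toy weighted graph, together with the AUC computed as the probability that a random real account outranks a random fake one. This illustrates the general scheme only, not Íntegro's actual implementation; the graph, edge weights, and step count are hypothetical.

```python
def short_supervised_walk_ranks(adj, weights, trusted, steps):
    """Rank nodes by the landing probability of a short random walk that
    starts from known real (trusted) accounts and is biased towards
    higher-weight edges. The walk is terminated early, before it converges."""
    p = {v: 0.0 for v in adj}
    for v in trusted:
        p[v] = 1.0 / len(trusted)
    for _ in range(steps):
        nxt = {v: 0.0 for v in adj}
        for u in adj:
            total = sum(weights[(u, v)] for v in adj[u])
            for v in adj[u]:
                nxt[v] += p[u] * weights[(u, v)] / total
        p = nxt
    # Degree-normalize the landing probabilities, as ranking schemes of
    # this kind typically do.
    return {v: p[v] / max(len(adj[v]), 1) for v in adj}

def auc(real_ranks, fake_ranks):
    """Probability that a random real account is ranked higher than a
    random fake account (ties count as one half)."""
    wins = sum(1.0 if r > f else 0.5 if r == f else 0.0
               for r in real_ranks for f in fake_ranks)
    return wins / (len(real_ranks) * len(fake_ranks))

# Toy graph: real accounts 0-2 form a triangle; fake account 3 has
# infiltrated the potential victim 2, so edge (2, 3) gets a low weight.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
weights = {}
for u, v, w in [(0, 1, 1.0), (0, 2, 1.0), (1, 2, 1.0), (2, 3, 0.1)]:
    weights[(u, v)] = weights[(v, u)] = w
ranks = short_supervised_walk_ranks(adj, weights, trusted=[0], steps=3)
# The fake account (node 3) ends up with the lowest rank.
```

In this toy setting the fake is perfectly separated, so the AUC between the real ranks and the fake rank is 1.0; with many victim edges of higher weight, mass leaks to the fake region and the AUC degrades, which is the effect the bounds above quantify.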
In fact, Íntegro achieved an AUC greater than 0.92 as |Ea| increased, while SybilRank resulted in an AUC as low as 0.71 under the same setting.

In practice, the deployment of Íntegro at Tuenti resulted in up to an order of magnitude higher precision in fake account detection, where ideally fakes should be located at the bottom of the ranked list. In particular, for the bottom 20K low-ranking users, Íntegro achieved 95% precision, as compared to 43% by SybilRank, or 5% by Tuenti's current user-based abuse reporting system. More importantly, the precision significantly decreased as we inspected higher ranks in the list, which means Íntegro consistently placed most of the fakes at the bottom of the list, unlike SybilRank. The only requirement for Íntegro to outperform SybilRank is to train a victim classifier that is better than random. This requirement can be easily satisfied during the cross-validation phase by deploying a victim classifier with an AUC greater than 0.5. In our deployment, the victim classifier was 52% better than random, with an AUC of 0.76, although it was trained using low-cost features.

We implemented Íntegro on top of Mahout (http://mahout.apache.org) and Giraph (http://giraph.apache.org), which are widely deployed, open-source distributed machine learning and graph processing systems, respectively. Using a synthetic benchmark of five OSNs consisting of up to 160M users, Íntegro scaled nearly linearly with the number of users in the graph. In particular, for the largest graph with 160M nodes, it took Íntegro less than 30 minutes to finish its computation on a cluster of 33 commodity machines.

Contributions

In summary, this part of the dissertation makes the following contributions:

Contribution 3: Leveraging victim classification for fake account detection.

We designed and analyzed Íntegro—a fake account detection system that relies on a novel technique for integrating user-level activities into graph-level structures.
Íntegro uses supervised machine learning with features extracted from user-level activities in order to identify potential victims in the OSN. By weighting the graph such that edges incident to potential victims have lower weights than other edges, Íntegro guarantees that most real accounts are ranked higher than fakes. These ranks are derived from the landing probability of a modified random walk that starts from a known real account. To our knowledge, Íntegro is the first detection system that is robust against adverse manipulation of the graph, where fakes follow an adversarial strategy to befriend a large number of accounts, real or fake, in order to evade detection.

Contribution 4: Open-source implementation and evaluation framework.

We implemented Íntegro on top of open-source distributed systems which run on commodity machines. We publicly released Íntegro as part of two projects: (1) SyPy (http://boshmaf.github.io/sypy), our single-machine comparative evaluation framework for graph-based fake account detection algorithms, and (2) GrafosML (http://grafos.ml), a library and tools for machine learning and graph analytics. We evaluated Íntegro against SybilRank using real-world datasets and a large-scale deployment at Tuenti. Íntegro has allowed Tuenti to detect at least 10 times more fake accounts than their current user-based abuse reporting system, in which reported users are not ranked. With an average of about 16K flagged accounts a day [15], Íntegro has saved Tuenti hundreds of man-hours in manual verification by robustly ranking user accounts.

Chapter 2

Social Infiltration in OSNs

Online social networks (OSNs) have attracted more than a billion active users and have become an integral part of today's Web. In the wrong hands, however, OSNs can be used to harvest private user data [6], distribute malware [132], control botnets [63], perform surveillance [95], spread misinformation [113], and even influence algorithmic trading [8].
Online attackers start their abuse by running a social infiltration campaign using automated fake accounts, where fakes are used to connect with a large number of users in the target OSN. Such an infiltration is required because isolated fake accounts cannot directly interact with or promote content to most users in the OSN [28].

While online attackers, also called cyber-criminals, were generally thought to be less financially driven in the past, a number of recent studies showed that such criminals are moving towards profitable business models [14, 38, 86]. For example, spammers that promote generic pharmaceuticals and knock-off designer products generate an estimated $12–92 million in revenue each year [77], while similar criminals that trick victims into installing ineffectual software, such as fake anti-virus tools, pull in $5–116 million [105]. The threats these cyber-criminals pose to OSNs are aggravated by the emergence of an underground economy—a digital network of criminals who buy and sell goods that directly enable the abuse of OSNs for a small price. These goods include fake accounts [88], compromised machines [14], CAPTCHA-solving services [87], and IP address proxies [53].

In this chapter, we consider profit-driven attackers who infiltrate OSNs using malicious socialbots—automation software that controls fake accounts and performs social activities similar to those of legitimate users.
In particular, we aim to address the following RQs, as introduced in Section 1.3.1:

• RQ1: How vulnerable are OSNs to social infiltration?
• RQ2: What are the security and privacy implications of social infiltration?
• RQ3: What is the economic rationale behind social infiltration at scale?

2.1 Background and Related Work

We next present required background and contrast related work to ours, focusing on socialbots, social infiltration, scalability of online attacks, and the underground market for OSN abuse.

2.1.1 Online Social Networks

An online social network (OSN) is a centralized web platform that facilitates and mediates users' social activities online. A user in such a platform owns an account and is represented by a profile that describes her social attributes, such as name, gender, interests, and contact information. We use the terms "account," "profile," and "user" interchangeably, but make the distinction when deemed necessary. A social relationship between two users can be either bilateral, such as friendships in Facebook, or unilateral, such as followerships in Twitter.

An OSN can be modeled as a social graph G = (V, E), where V represents a set of users and E represents a set of social connections (i.e., relationships) among the users. For every user v_i ∈ V, the neighborhood of v_i is the set that contains all users in V with whom v_i has social connections. For the users in the neighborhood of v_i, the union of their neighborhoods is the extended neighborhood of v_i. In Facebook, for example, the neighborhood and the extended neighborhood of a user represent the "friends" and the "friends of friends" of that user, respectively.

2.1.2 Sybil Attacks and Social Infiltration

The Sybil attack [22] represents the situation where an attacker controls multiple identities, each called a Sybil, and joins a targeted system under these identities in order to subvert a particular service.
Accordingly, we define social infiltration in OSNs as an instance of the Sybil attack, where an attacker employs automation software, which is scalable enough to control many attacker-owned fake accounts, in order to connect with a large number of legitimate users in the target OSN.

Recent research indicates that social infiltration in OSNs is possible and represents an emerging threat [57, 91, 96]. For example, Bilge et al. [6] were among the first to demonstrate that users in OSNs are not cautious when accepting friend requests. In particular, the authors performed an experiment to evaluate how willing users are to accept friend requests sent by forged user accounts of people who were already in their friends list as confirmed contacts. They also compared that with users' response to friend requests sent by people who they do not know, that is, fake accounts posing as strangers. In their experiment, they found that the acceptance rate for forged accounts was over 60%, and about 20% for the fakes. Unlike their targeted attack, we do not expect the attacker to steal identities and forge user accounts, as this makes the attack non-scalable and more susceptible to detection [104]. Moreover, we aim to characterize descriptive user behaviors that are important to improve today's OSN security defenses, and to evaluate the corresponding security and privacy implications, all under the threat model of an attacker who is capable of social infiltration on a large scale.

2.1.3 Social Engineering, Automation, and Socialbots

The concept of social engineering is usually defined as the art of gaining unauthorized access to secure objects by exploiting human psychology, rather than using hacking techniques [4]. Social engineering has become more technical and complex; social engineering attacks are being computerized and fully automated, and have become adaptive and context-aware [6, 13, 57, 58, 95]. Huber et al.
[54] presented one of the first frameworks for automated social engineering in OSNs, where a new breed of bots can be used to automate traditional social engineering attacks for many adversarial objectives. Under this framework, a malicious socialbot represents an automated social engineering tool that allows an attacker to infiltrate online communities like a virtual con man, build up trust over time, and then exploit it to elicit information, sway opinions, and call to action. In fact, automation has a strong economic rationale behind it. Herley [49] showed that for an online attack to be scalable, it ought to be automated without manual per-user adjustments. If not, there are no economic incentives for a rational (i.e., profit-driven) attacker to scale the attack, and an attack that cannot scale is typically undesirable from an adversarial standpoint.

Socialbots can be used for non-adversarial objectives as well [56]. For example, the Web Ecology Project (http://www.webecologyproject.org) envisions the design of benign socialbots that have a positive impact on online communities by advocating awareness and cooperation among human users on civic or humanitarian issues. Soon after, this objective was extended towards realizing social architecture [92], where "intelligent" socialbots are used to interact with, promote, and provoke online communities towards desirable behaviors, including large-scale restructuring of social graphs.

2.1.4 Online Attacks in a Web of Scale

The distinction between online attacks that are scalable and those that are not has long been recognized.
Dwork and Naor [23] suggested that forcing a linear cost dependence makes certain attacks financially unattractive. Similarly, Herley [49] distinguished between scalable attacks, in which costs are almost independent of the number of attacked users, and non-scalable or targeted attacks, which involve per-user effort resulting in a higher than average cost. To compensate, the non-scalable attacker must target users with higher than average value, as low-value users negatively affect the reward. To accomplish this, the non-scalable attacker needs value to be both visible and very concentrated, with few users having very high value while most have little. In this, the attacker is fortunate: power-law, long-tail distributions that describe the distributions of wealth, fame, and other phenomena are extremely concentrated. However, in these distributions, only a small fraction of the population has above-average value. For example, fewer than 2% of people have above-average wealth in the US [20]. Thus, when attacking assets where value is concentrated, the non-scalable attacker ignores the vast majority of users. By contrast, the scalable attacker lives in a "costs nothing to try" world and attacks everyone indiscriminately, whether they have high or low value, ending up with an average value per user. As a result, scalable attacks reach orders of magnitude more users than non-scalable ones. This, to some extent, explains why only a few users ever get targeted with sophisticated attacks, while almost all users receive spam emails at some point in time [49].

The scale of an attack, interestingly, can lead to its own demise. Florêncio et al. [35] observed that there is an enormous gap between the potential and actual harm of scalable online attacks; the majority of Web users appear unharmed each year. The authors explained that a scalable attacker faces a sum-of-effort rather than a weakest-link defense, leading to an increased cost per user.
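The scalable-versus-targeted trade-off above can be illustrated with a toy simulation; the value distribution, costs, and cutoff below are hypothetical, chosen only to mirror the argument.

```python
import random

def expected_returns(values, cost_scalable, cost_targeted, k):
    """Toy comparison of a scalable and a targeted (non-scalable) attacker
    over the same user population, in the spirit of Herley's analysis.
    values: per-user value; both costs are per attacked user."""
    # Scalable attacker: near-zero per-user cost, attacks everyone
    # indiscriminately, and so earns the average value per user.
    scalable = sum(values) - cost_scalable * len(values)
    # Targeted attacker: per-user effort is expensive, so only the k
    # highest-value users (where value is concentrated) are attacked.
    top_k = sorted(values, reverse=True)[:k]
    targeted = sum(top_k) - cost_targeted * k
    return scalable, targeted

random.seed(7)
# Pareto-like long tail: most users have little value, a few have a lot.
values = [random.paretovariate(1.2) for _ in range(100_000)]
scalable, targeted = expected_returns(values, cost_scalable=0.01,
                                      cost_targeted=50.0, k=100)
```

Even when the targeted attacker skims only the most valuable 0.1% of users, the scalable attacker reaches a thousand times more users; note also that the scalable return shrinks linearly in the per-user defense cost, which is the lever the next paragraph turns on.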
As scalable attacks must be profitable in expectation, not merely in particular scenarios, many attacks can be rendered non-profitable and non-scalable by marginally increasing the cost to subvert the defenses deployed by the service provider or its users, even when many profitable targets exist.

We analyze social infiltration in OSNs from an economic perspective, focusing on the scale of infiltration, in terms of the number of attacked users, and its effect on profitability. We extend Herley's analysis and assume the role of an attacker who derives profit-maximizing infiltration strategies under different scalability requirements. As we discuss next, social infiltration and other online attacks, scalable or not, integrate into a cyber-criminal ecosystem for OSN abuse, with a thriving underground market pulling in hundreds of millions of dollars in revenue.

Figure 2.1: Components required for OSN abuse in Facebook. [Figure omitted: it shows three layered components. Credentials (the abuse ingress point): fake accounts and compromised accounts. Engagement: friend spam, messaging spam, photo-tag spam, and page/event spam. Monetization (the abuse egress point): external URLs leading to knock-off products, fake software, click-fraud, and credit card numbers; information and user data; fame; external abuse; and the underground infrastructure.]

2.1.5 The Cyber-Criminal Ecosystem for OSN Abuse

As OSNs dominate the daily activities of Web users, cyber-criminals have adapted their monetization strategies to engage users within these walled gardens. Attacks targeting OSNs require three components [114]: (1) access to account credentials, (2) a mechanism to engage with legitimate users within the network (i.e., the victims that will be exploited to realize a profit), and (3) some form of monetizable content. With respect to Facebook, Figure 2.1 shows the underpinnings of each component, which we use to guide the upcoming discussion.

As the figure shows, an attacker can use fake or compromised accounts in order to get access to Facebook.
Shortly after that, to draw an audience, the attacker can engage users by running a social infiltration campaign (i.e., by sending friend request spam). If user privacy settings are set to "public to everyone," which is not usually the case [46, 68], the attacker can engage users through other kinds of public communication, such as messaging, photo-tagging, and page/event requests. In order to monetize a victim, users are typically directed away from Facebook via a URL to another website, which generates a profit via knock-off products, fake software, click-fraud, banking information, or malware that converts a victim's machine or assets (e.g., credentials, private data) into a commodity for the underground market (e.g., zombie machines, compromised accounts, email addresses).

At the heart of this for-profit cyber-criminal ecosystem is an underground market that connects attackers with parties selling a range of specialized products and services, including spam hosting [3], fake accounts [88, 114], compromised zombie machines [14], CAPTCHA-solving services [87], IP address proxies [53], and exploit kits [42]. Even simple services such as generating favorable reviews or writing webpage content are for sale [126]. Revenue generated by criminals participating in this market varies widely based on business strategy, with spam affiliate programs generating $12–92 million [77] and fake anti-virus scammers pulling in $5–116 million [105] over the course of their operations.

Within this cyber-criminal ecosystem for OSN abuse, the analysis we present in Section 2.5 suggests that social infiltration at scale is not financially attractive as an independent business. However, it facilitates a non-zero profit monetization campaign for underground market commodities, as illustrated in Figure 2.1.

2.2 Vulnerabilities in OSN Platforms

We discuss four vulnerabilities found in today's OSNs that allow an attacker to run a large-scale social infiltration campaign.
Collectively, along with poorly designed end-user privacy controls [73], these vulnerabilities represent the enabling factors that make operating socialbots feasible in the first place.

2.2.1 Ineffective CAPTCHAs

Typically, OSNs employ CAPTCHAs [124], which are a type of challenge-response test used in computing to determine whether or not the user is human, in order to prevent automated bots from abusing their platforms. Attackers, however, can often circumvent this countermeasure using different techniques, such as automated analysis using optical character recognition and machine learning [6], exploiting botnets to trick victims into manually solving CAPTCHAs [25], reusing session identifiers of known CAPTCHAs [52], or even hiring cheap human labor [87].

Let us consider the use of human labor to solve CAPTCHAs, a phenomenon known as the CAPTCHA-solving business. Motoyama et al. [87] showed that companies involved in such a business are surprisingly efficient: (1) they have high service quality, with a success rate of up to 98% in solving CAPTCHAs, (2) they charge less than $1 per 1K successfully solved CAPTCHAs, and (3) they provide software APIs to automate the whole process. Thus, even the most sophisticated CAPTCHA that only humans could solve can be effectively circumvented with a small investment from an attacker. In such a situation, the attacker is considered a rational agent who invests in such businesses if the expected return on investment is considerably high. This also allows researchers to study online attacks from an economic context, and to define cost structures that measure when it is economically feasible for an attacker to mount scalable attacks that involve, for instance, solving CAPTCHAs by employing cheap human labor [49].
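A back-of-envelope calculation makes the point, using the roughly $1 per 1K rate cited above; the account volume below is an arbitrary example.

```python
def captcha_cost(num_accounts, price_per_1k=1.0):
    """Cost of outsourcing CAPTCHA solving during bulk account creation,
    at the roughly $1 per 1,000 solved CAPTCHAs rate cited above."""
    return num_accounts * price_per_1k / 1000.0

# Registering 100,000 fake accounts costs on the order of $100 in
# CAPTCHA-solving labor alone.
cost = captcha_cost(100_000)  # 100.0
```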
We provide such an analysis for social infiltration at scale in Section 2.5.

2.2.2 Fake Accounts and User Profiles

Creating a new user account on an OSN involves three tasks: (1) providing an active email address, (2) creating a user profile, and (3) solving a CAPTCHA if required. Each user account maps to one profile, but many user accounts can be owned by the same person or organization using different email addresses, which represents a potential Sybil attack. In what follows, we discuss how an attacker can fully automate the account creation process in order to create a group of fake accounts, where each account is represented by a fake user profile. In fact, this automation is not new, as similar tools, such as FriendBomber (http://www.friendbomber.com), are used for online marketing. The attacker can write customized software to create fake accounts or buy OSN accounts in bulk from underground markets [88].

Fake Accounts

When creating a new user account, an email address is required to validate and then activate the account. Usually, the OSN validates the account by associating it with the owner of the email address. After account validation, the owner activates the account by following an activation link that is emailed by the OSN. Accordingly, an attacker has to overcome two hurdles when creating a new fake account: (1) providing a working email address which is under his control, and (2) performing email-based account activation.
To tackle the first hurdle, the attacker can maintain many email addresses, either by using "temp" email addresses that are obtained from providers that do not require registration, such as 10MinuteEmail (http://10minutemail.com), or by creating email addresses using email providers that do not limit the number of created email accounts per browsing session or IP address, such as MailRu (http://mail.ru). As for the second hurdle, an attacker can write a simple script that downloads the activation email and then sends an HTTP request to the activation URL, which is typically included in the downloaded email.

Fake User Profiles

Creating a user profile is a straightforward task for legitimate users, as they just have to provide the information that describes their social attributes (e.g., name, age, gender, interests). For an attacker, however, the situation is subtly different. The objective of the attacker is to create profiles that are "socially attractive." We consider a purely adversarial standpoint concerning social attractiveness, where attackers aim to exploit certain social attributes that have been shown to be effective in getting users' attention. Such attributes can be inferred from recent social engineering attacks. Specifically, using a profile picture of a good-looking woman or man has had the greatest impact [6]. Thus, an attacker can use publicly available personal pictures for the newly created user profiles, along with the corresponding gender and age range information. The attacker can use already-rated personal pictures from websites like HotOrNot (http://hotornot.com), where users publicly post their personal pictures for others to rate their "hotness." In fact, such websites also provide categorization of the rated personal pictures based on gender and age range.
It is thus possible for an attacker to automate the collection of required profile information in order to populate a fake user profile by crawling, or scavenging, the Web.

2.2.3 Crawlable Social Graphs

The social graph of an OSN is usually hidden from public access in order to protect its users' privacy. An attacker, however, can reconstruct parts of the social graph, or a complete version of it, by first logging in to the OSN platform using one or many fake accounts, and then traversing through linked user profiles starting from a seed profile. In the second task, web crawling techniques can be used to download profile pages and then scrape their content. This allows the attacker to parse the connection lists of user profiles, such as the "friends list" in Facebook, along with their profile information. After that, the attacker can gradually construct the corresponding social graph with accessible social attributes using an online search algorithm, such as breadth-first search [80]. The attacker can either build a customized web crawler for this task or resort to cheap commercial crawling services that support social-content crawling, such as 80Legs (http://80legs.com).

2.2.4 Exploitable Platforms and APIs

Most OSNs provide software APIs that enable the integration of their platforms into third-party software systems. Facebook's Graph API [67], for example, enables third parties to read from and write data into Facebook's platform. It also provides a simple and consistent view of Facebook's social graph by uniformly representing objects (e.g., profiles, photos, posts) and the connections between them (e.g., friendships, likes, tags). An attacker, however, can use such APIs to automate the execution of social activities online.
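The graph reconstruction described in Section 2.2.3 is, at its core, a breadth-first traversal; a minimal sketch follows, where `fetch_friends` is a hypothetical helper that downloads a profile page and scrapes its friends list.

```python
from collections import deque

def crawl_social_graph(seed, fetch_friends, max_profiles):
    """Breadth-first reconstruction of a social graph from a seed profile.
    Returns a profile -> friends-list mapping for the crawled region."""
    graph = {}
    queue = deque([seed])
    while queue and len(graph) < max_profiles:
        profile = queue.popleft()
        if profile in graph:
            continue  # already scraped
        graph[profile] = fetch_friends(profile)
        for friend in graph[profile]:
            if friend not in graph:
                queue.append(friend)
    return graph
```

A real crawler would additionally throttle requests and rotate fake accounts to stay under rate limits; those operational details are omitted here.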
If an activity is not supported by the API, the attacker can scrape the content from the platform's website, and record the exact HTTP requests that are used to carry out such an activity in order to create request templates. In particular, sending connection requests is often not supported, and is usually protected against automated usage by CAPTCHAs. This is also the case if a user sends too many requests in a short time period. An attacker, however, can always choose to reduce the frequency at which the bots send the requests in order to avoid CAPTCHAs. Another technique is to inject artificial connection requests into non-encrypted OSN traffic at the HTTP level, so that it would appear as if users have added the socialbot as a contact [55].

5 http://hotornot.com
6 http://80legs.com

2.3 Design of a Social Botnet

In what follows, we present the design of a web-based social botnet. In particular, we start with an overview of the socialbot network (SbN) concept and define its threat model. This is followed by a detailed discussion of the SbN design and its main components.

2.3.1 Overview

We studied large-scale infiltration in OSNs as an organized campaign run by a network of malicious socialbots. We followed the design of web-based botnets [101] and defined a socialbot network (SbN): a group of malicious socialbots that are orchestrated by an attacker called the botherder.

Figure 2.2: An overview of a socialbot network (SbN)

As shown in Figure 2.2, an SbN consists of a set of socialbots, a botmaster, and a command & control (C&C) channel. Each socialbot controls a fake account in a target OSN, and is capable of executing commands that result in operations related to social interactions (e.g., posting a message) or the social structure (e.g., sending a friend request). These commands are either sent by the botmaster or defined locally on each socialbot.
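At the HTTP level, a command that is not supported by the API can be realized with the request-template technique of Section 2.2.4: record the exact request a browser sends once, keep it as a template with placeholders, and fill it in per target. A sketch, with an illustrative endpoint and field names that do not correspond to any actual OSN:

```python
# A recorded friend-request as a template; {bot_id} and {target_id} are
# placeholders filled in when the command is executed.
FRIEND_REQUEST_TEMPLATE = {
    "method": "POST",
    "url": "https://osn.example/requests/add",
    "data": {"from_id": "{bot_id}", "to_id": "{target_id}", "source": "profile"},
}

def render_template(template, **params):
    """Fill a recorded request template with concrete parameter values."""
    data = {k: v.format(**params) for k, v in template["data"].items()}
    return {"method": template["method"], "url": template["url"], "data": data}
```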
All data collected by the socialbots are called the botcargo, and are always sent back to the botmaster. A botmaster is a software controller with which the botherder interacts in order to define commands that are sent through the C&C channel. We designed the botmaster to exploit known user behaviors in OSNs, such as the triadic closure principle [24], in order to improve the success rate of the infiltration.7 The C&C channel is a communication medium that facilitates the transfer of the botcargo and the commands between the socialbots and the botmaster, including any heartbeat signals, which are used to check whether a socialbot is online and operational.

7 The triadic closure principle states that if two people have a friend in common, then there is an increased likelihood that they will become friends themselves in the future.

In Figure 2.2, each node in the OSN represents a user account. Fake accounts, which are controlled by socialbots, are marked in black. Victim accounts, which are real accounts controlled by legitimate users but have been infiltrated by fakes, are marked in grey. Edges between nodes represent bilateral social connections. The dashed arrow represents a connection request. Small arrows represent social interactions, such as posting a message. As we explain next, the SbN can be part of an existing botnet, where compromised machines are also infected by the socialbot malware that controls fake accounts in the target OSN.

2.3.2 Threat Model

We assume a global passive attacker who is capable of designing and operating a fully or semi-automated SbN on a large scale. This involves exploiting all of the vulnerabilities presented in Section 2.2, which collectively enable the operation of an SbN in the target OSN. We also assume the attacker is capable of deploying the SbN as part of an existing botnet, and thus, we treat the SbN as a distributed network of compromised "zombie" machines acting cooperatively.
Accordingly, we believe it is fair to assume that the defenders, consisting of OSNs and Internet service providers, are able to cooperate, and therefore have a global view of the communication traffic. Finally, we assume that botnet infections are not easily detected; that is, just like any other botnet, an SbN cannot tolerate a 100% clean-up of all infected machines. We expect, however, an SbN to tolerate random losses of a large number of compromised machines, because a single machine is sufficient to host all of the socialbots, as we show in Section 2.4.

As for the adversarial objectives, the botherder designs and operates an SbN to (1) carry out a social infiltration campaign in the target OSN, and (2) harvest private user data. The first objective involves connecting with a large number of either random or targeted OSN users for the purpose of establishing an influential position, which can be exploited to promote malicious content or spread misinformation. In the second objective, however, the botherder aims to generate profit by collecting personally identifiable information that has monetary value as goods in underground markets. In addition, these data can be used to craft personalized messages for subsequent spam, phishing, or astroturfing campaigns.

2.3.3 Requirements

Ideally, an SbN has to be automated and highly scalable in order to control hundreds of socialbots, which is achieved by following the design of web-based botnets.
In order to be effective, however, an SbN has to meet three challenging requirements: (1) each socialbot has to be designed in such a way that hides its true face, that is, to pass itself off as a human, not a bot, (2) the botmaster has to implement strategies that enable large-scale social infiltration in the target OSN, and (3) the traffic in the C&C channel has to look benign in order to avoid detection of the botmaster.

Operation      | Type        | Description
read(o,p)      | Interaction | Reads object o from account p and returns its value v as botcargo
write(v,o,p)   | Interaction | Writes value v to object o on account p
connect(b,p)   | Structure   | Sends or accepts a connection request between accounts b and p
break(b,p)     | Structure   | Breaks the social connection between accounts b and p

Table 2.1: Generic operations supported by the socialbot malware

We decided to use a simplistic design in order to meet each one of these requirements. As we describe next, we used techniques that have been shown to be both feasible and effective. We acknowledge, however, that more sophisticated techniques, which use machine learning algorithms [74], are possible. We refrained from using such techniques because our goal is to evaluate the threat of social infiltration and characterize user behaviors, rather than to optimize the performance of an SbN. We discuss the details of the techniques we used in what follows.

2.3.4 Socialbots

A socialbot consists of two components: (1) a fake user account on a target OSN, and (2) the socialbot software.
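The generic operations of Table 2.1 can be sketched against an in-memory stand-in for the OSN platform. This is an illustrative sketch only; in a real SbN each method would map to an API call or a templated HTTP request.

```python
# In-memory stand-in for an OSN platform exposing the four generic
# operations of Table 2.1 (read, write, connect, break).
class FakeOSN:
    def __init__(self):
        self.objects = {}       # (account, object) -> value
        self.connections = set()

    def read(self, o, p):
        """Read object o from account p; the value is returned as botcargo."""
        return self.objects.get((p, o))

    def write(self, v, o, p):
        """Write value v to object o on account p."""
        self.objects[(p, o)] = v

    def connect(self, b, p):
        """Create a bilateral social connection between accounts b and p."""
        self.connections.add(frozenset((b, p)))

    def break_connection(self, b, p):  # `break` is a reserved word in Python
        """Break the social connection between accounts b and p."""
        self.connections.discard(frozenset((b, p)))
```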
As we design the socialbot software in an adversarial setting, we regard this software as being malicious and refer to it as malware. We next present the primitives required for the socialbot malware to mimic real user behavior in OSNs (e.g., posting messages, befriending others).

Each socialbot malware supports two types of generic operations in any given OSN: (1) social interaction operations that are used to read and write social content, and (2) social structure operations that are used to modify the social graph. A detailed description of these operations is shown in Table 2.1.

We define a set of commands, each of which consists of a sequence of generic operations. A command is used to mimic a real user action that relates to social content generation and networking, such as posting a status update or joining a community of users. A command is either native or master, depending on its origin. A native command is defined locally on each socialbot, while a master command is sent by the botmaster to the socialbot through the C&C channel. For example, we define a single native command called update(b), which is executed by each socialbot as follows: at arbitrary times, each socialbot b generates a message m and executes the operation write(m,o,b), where o is the object that maintains messages for the account b. This command resembles a user posting a status update message on her profile, and is executed at arbitrary times in order to avoid creating detectable patterns. More sophisticated commands can be defined that, for example, allow socialbots to comment on each other's status updates.

Each socialbot employs a native controller: a two-state finite-state machine (FSM) that enables the socialbot to either socialize with others by executing commands or stay dormant. A state transition occurs whenever a native or master command needs to be executed.
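The two-state native controller and the update(b) native command might be sketched as follows. The quote list and the `osn_write` callback are hypothetical stand-ins for the message source and the write(m,o,b) operation.

```python
import random

QUOTES = ["Stay hungry.", "Carpe diem.", "Less is more."]

class NativeController:
    """Two-state FSM: the socialbot is either dormant or socializing."""
    DORMANT, SOCIALIZING = "dormant", "socializing"

    def __init__(self, bot_id, osn_write):
        self.bot_id = bot_id
        self.osn_write = osn_write   # stands in for write(m, o, b)
        self.state = self.DORMANT

    def update(self):
        """Native command: post a status update, then return to dormant."""
        self.state = self.SOCIALIZING
        message = random.choice(QUOTES)
        self.osn_write(message, "status", self.bot_id)
        self.state = self.DORMANT
        return message
```

In the deployed prototype this command would be scheduled at arbitrary times, as described above, rather than called directly.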
A native controller can also enable advanced social interaction capabilities by integrating existing chatter bots [54] or hijacking online human-to-human conversations in a man-in-the-middle fashion [69].

2.3.5 Botmaster

A botmaster is automation software that orchestrates the overall operation of an SbN. The botmaster software consists of a botworker, a botupdater, and a C&C engine. The botworker maintains socialbots in the SbN by creating fake accounts and delegating each account's credentials, the user name and password, to the socialbot's malware in order to get full control over the account. The botupdater pushes software updates to socialbots using the C&C channel, including new native commands, modified HTTP-request templates, and improved CAPTCHA-solving capabilities. The C&C engine manages a repository of master commands and deploys a master controller: a many-state FSM that is the core control component of the SbN. The botherder interacts with the C&C engine to define a set of master commands, which are dispatched when needed by the master controller and then sent to the socialbots. We next present a set of master commands that define a social infiltration strategy which exploits known user behaviors in OSNs.

Command      | Description
cluster(b,k) | Connects socialbot b with at most k other socialbots
seed(b,k)    | Connects socialbot b with k user profiles that are picked at random
decluster(b) | Breaks the social connections between socialbot b and other socialbots
collect(b)   | Returns profile information of users in the neighborhoods of socialbot b
exploit(b)   | Connects socialbot b with users in its extended neighborhood

Table 2.2: Master commands executed by socialbots

These master commands, which are summarized in Table 2.2, are issued by the master controller in three super-states, or phases, as follows:

Setting up the SbN

In the beginning, each socialbot has no connections and is isolated from the rest of the users in the targeted OSN, which is not favorable for social infiltration.
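Taken together, the three phases described next amount to the master controller dispatching the commands of Table 2.2 in order. A minimal sketch, where `send` stands in for the C&C channel and commands are plain (name, args) pairs:

```python
def master_phases(bot_ids, k_cluster, k_seed, send):
    """Issue the setup, seeding, and exploitation master commands in order."""
    for b in bot_ids:                        # phase 1: setting up the SbN
        send(b, ("cluster", {"k": k_cluster}))
    for b in bot_ids:                        # phase 2: seeding the infiltration
        send(b, ("seed", {"k": k_seed}))
        send(b, ("decluster", {}))
        send(b, ("collect", {}))
    for b in bot_ids:                        # phase 3: exploiting triadic closure
        send(b, ("exploit", {}))
```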
In particular, Tong et al. [116] showed that the social attractiveness of a profile in an OSN is highly correlated with its neighborhood size, where the highest attractiveness is observed when the neighborhood size is close to the OSN's average. Therefore, in an attempt to increase the social attractiveness of the socialbots, we define a master command cluster(b,k), which orders the socialbot b to connect with at most k other socialbots. This initial clustering of socialbots can be helpful in evading OSN security defenses that keep track of how many rejected connection requests each new user has after joining the OSN, which is usually used as an indication of automated activity by fake accounts [135].

Seeding the Infiltration

After the initial setup, the socialbots start their social infiltration campaign. In order to bootstrap the infiltration with some victims, the seeds, we define a master command seed(b,k), which orders each socialbot b to connect with k user profiles that are picked at random from the targeted OSN. Because the clique structure among the socialbots can be easily detected [137], we define a master command decluster(b) that orders the socialbot b to break the social connections with all other socialbots. One can define a similar master command that orders each socialbot to break one social connection with another socialbot for every new victim, and thus decluster gradually.

To satisfy the second objective in the threat model, we define the master command collect(b), which orders the socialbot b to collect accessible user profile information in the direct and extended neighborhoods of its victims, and then return it as botcargo. This command is issued whenever a new user is infiltrated.

Exploiting Triadic Closure

It has been widely observed that if two users have a friend in common, then there is an increased chance that they become friends themselves in the future [71].
This property, which is also known as the triadic closure principle [24], was first observed in real-world social networks. Nagle et al. [91] showed that the likelihood of accepting a friend request on Facebook is three times higher given the existence of some number of mutual friends. Accordingly, in order to increase the yield of social infiltration in the target OSN, we define a master command exploit(b), which orders the socialbot b to connect with users with whom it has some mutual connections, that is, users in its extended neighborhood.

2.3.6 C&C Channel

The communication model of an SbN consists of the C&C channel and the OSN channel. The OSN channel carries only OSN-specific API calls over HTTP traffic, which are the end product of executing a command by a socialbot. From the OSN side, this traffic may originate either from an HTTP proxy, in the case of high activity, or from a normal user machine. It is therefore quite difficult to identify a socialbot solely based on the traffic it generates in the OSN channel.

As for the C&C channel, we recall that detecting the botmaster from the C&C traffic is as hard as it is in a traditional botnet, as the botherder can rely on an existing botnet infrastructure and deploy the SbN as part of the botnet, as discussed in Section 2.3.2. Alternatively, the botherder can exploit the OSN platform itself for the C&C infrastructure [63]. For example, Nagaraja et al. [90] showed that a botherder can establish a probabilistically unobservable C&C channel by building a covert OSN botnet that, for example, uses image steganography to hide the communication as part of the photo sharing behavior of OSN users.

2.4 Empirical Evaluation

To evaluate how vulnerable OSNs are to social infiltration, we implemented an SbN according to the discussion in Section 2.3. We picked Facebook as the target OSN because it is the largest OSN today, consisting of more than one billion active users [30].
Moreover, unlike other OSNs, Facebook is mostly used to connect with real-world friends and family, not with strangers [27, 68]. Finally, Facebook employs a state-of-the-art "immune system" that detects thousands of fake accounts a day using various machine learning techniques [104]. Therefore, the success of social infiltration on an OSN such as Facebook represents a serious threat that needs to be addressed.

2.4.1 Ethics Consideration

Given the nature of social infiltration, one legitimate concern is whether it is ethically acceptable and justifiable to conduct such a research experiment. As computer security researchers, we believe that controlled, minimal-risk, and realistic experiments are the only way to reliably estimate the feasibility of cyber attacks in the real world. These experiments allow us and the wider research community to gain a genuine insight into the ecosystem of online attacks, which is useful in understanding how similar attacks may behave and how to defend against them.

We carefully designed our experiment in order to reduce any potential risk on the user side [9]. In particular, we followed known practices and received the approval of our university's behavioral research ethics board (BREB).8 In addition, we strongly encrypted and secured all collected data.

8 This study has been approved by BREB application #H10-01439.

As part of our code of ethics, we communicated the details of our experiment to Facebook before any publication, and accordingly, we decided not to include specific technicalities about a set of vulnerabilities we discovered in Facebook's platform. We believe that these platform vulnerabilities can be exploited by cyber criminals to mount different kinds of online attacks.
We reported these vulnerabilities to Facebook through its platform's vulnerability reporting tool [31].

2.4.2 Methodology

Our main research objective is to characterize users' response to a social infiltration campaign in OSNs, along with the corresponding security and privacy implications. We implemented an SbN prototype targeting Facebook for the reasons outlined above, and operated this SbN for 8 weeks during the first quarter of 2011. The duration of the experiment was informed by how many data points we needed to properly capture user behavior, and accordingly, we took the SbN down once we stopped observing new trends. We report only the results we observed during the length of the experiment.

We used a single machine and two types of IP addresses at different stages of the experiment. The first IP address was assigned by the university, and the second IP address was assigned by a commercial Internet service provider. We also implemented a simple HTTP proxy on the machine in order to make the traffic appear as if it originated from multiple clients having different browsers and operating systems. Even though the university-assigned IP address might have diluted Facebook's immune system, we believe that it is unsafe to completely white-list university IP addresses.9 In fact, today's botnet owners struggle over who has the largest number of "high-quality" infected machines, including university, corporate, and even government machines [101].

9 While operating the SbN under the university-issued IP address, we observed that some operations were identified as malicious, and the IP address was temporarily blocked, especially during fake account creation. This supports the argument that even university IP addresses were audited by Facebook, and they were not fully white-listed.

2.4.3 Implementation on Facebook

In our prototype's implementation, each socialbot ran the same malware and was equipped with one native command, namely, update.
We implemented the generic operations described in Table 2.1 using API calls and HTTP-request templates, as follows. First, we used Facebook's Graph API [67] to carry out social interaction operations. The API, however, requires the user, the socialbot in this case, to be logged in to Facebook at the time of any API call. As login sessions had timeouts, we instead developed a Facebook application that fetched permanent OAuth 2.0 access tokens in order to allow each socialbot to send API calls without the need to log in.10 Second, for social structure operations, we used prerecorded HTTP-request templates that enabled each socialbot to send friend requests to other users, as if these requests were sent from a browser. To generate status update messages, we used an API provided by iHeartQuotes11 to pull random quotes and blurbs.

For the botmaster software, our implementation integrates the botworker with the following websites: DeCaptcher,12 a CAPTCHA-solving business; HotOrNot, a photo-sharing website; and MailRu, a web email provider. The implementation also supports on-demand updates through the botupdater, including HTTP-request templates and native commands. Finally, we implemented each master command described in Table 2.2.

Let us consider the random sampling used in seed(b,k). On Facebook, each user profile has a unique identifier (ID) represented by a 64-bit integer, which is assigned at account creation time. In order to obtain a uniformly random sample of Facebook users, we used a sampling technique called rejection sampling [99], as follows. First, we randomly generated 64-bit integers from known ID ranges used by Facebook [39]. After that, we checked whether each newly generated ID mapped to an existing user profile by probing the corresponding profile page.
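The rejection-sampling procedure might be sketched as follows. `profile_exists` and `is_isolated` are hypothetical callbacks standing in for probing the profile page; the non-isolation inclusion criterion is described next.

```python
import random

def sample_users(id_ranges, profile_exists, is_isolated, n, rng=random):
    """Return n IDs accepted by the existence and non-isolation criteria."""
    sample = []
    while len(sample) < n:
        lo, hi = rng.choice(id_ranges)       # pick a known ID range
        candidate = rng.randint(lo, hi)      # draw a candidate ID uniformly
        if profile_exists(candidate) and not is_isolated(candidate):
            sample.append(candidate)         # accept; otherwise reject and redraw
    return sample
```

Because accepted IDs are drawn uniformly from the ranges and rejected independently of their value, the accepted set is a uniform sample of the eligible profiles.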
If the profile existed, we included the user in the random sample only if the profile was not isolated. We define an isolated user profile as a profile that does not have friends or does not share its friends list on Facebook.

10 The application was called "Desktop Connector," and was hosted on the same machine.
11 http://iheartquotes.com
12 http://de-captcher.com

We implemented simple native and master controllers that follow the three-phase infiltration strategy. We note, however, that a more resourceful attacker can employ adversarial classifier reverse engineering [74] techniques to derive an infiltration strategy that has a higher chance of evading automated detection by defense systems similar to those deployed by Facebook.

2.4.4 Experimentation

We operated the SbN prototype for 8 weeks, from 28 January to 23 March, 2011. The socialbots sent 9,646 friend requests, out of which 3,439 were accepted by legitimate users, resulting in an average acceptance rate of 35.7%. We refer to such infiltrated users who accepted friend requests sent by malicious socialbots as victims. We divide the upcoming discussion according to the three-phase social infiltration strategy.

Phase 1 – Setup

Our SbN prototype, which was physically hosted on a single commodity machine, consisted of 100 socialbots and a single botmaster.13 Even though we could have automatically created fake accounts for the socialbots, we decided not to financially support the CAPTCHA-solving business. In total, we manually created 49 socialbots that had male user profiles and 51 socialbots that had female user profiles. The socialbots clustered into a 100-clique structure, representing a tightly-knit, cohesive community of users, as discussed in Section 2.3.5.

13 We note that the original study [10] involved two more socialbots.
As these socialbots did not participate in the infiltration but were rather used to check the status of the SbN, we decided to exclude them from our analysis herein.

Figure 2.3: Seeding social infiltration in Facebook. (a) Degree distribution of the user sample (power law with R² = 0.75). (b) Victims' response time to requests (58% of victims accepted requests in a day or less).

Phase 2 – Seeding

The socialbots generated a random sample of 5,053 valid user profile IDs. These unique IDs passed the inclusion criteria we presented in Section 2.4.3. Figure 2.3a shows the degree distribution of this uniform sample.14

Each socialbot sent 25 friend requests per day in order to avoid CAPTCHAs. Accordingly, it took the bots 2 days to send friend requests to the sampled users, where each user received exactly one request. In total, 2,391 requests were sent from "male" socialbots and 2,662 from "female" socialbots. We kept monitoring the status of each request for 6 days after being sent. Overall, 976 requests were accepted, resulting in an average acceptance rate of 19.3%. In particular, out of all accepted requests, 381 were sent from male socialbots (15.9%) and 595 were sent from female socialbots (22.3%). The difference in the average acceptance rate was statistically significant (χ² = 32.8702, p = 9.852 × 10⁻⁹, CI=95%), where female socialbots outperformed male socialbots by 6.4%, on average.15 Moreover, 58% of the victims accepted the request during the same day of being sent, as shown in Figure 2.3b. In our implementation, the socialbots gradually broke the 100-clique structure as they infiltrated more victims.

14 The degree of a node is the size of its neighborhood. The degree distribution is the probability distribution of these degrees over all nodes included in the sample.
15 Using a two-sample test for equality of proportions.
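The gender comparison above can be reproduced with a two-sample test for equality of proportions, i.e., a 2×2 chi-squared test. The reported statistic (χ² = 32.8702) is consistent with Yates' continuity correction, which the sketch below applies (clamped at zero, as is conventional).

```python
def chi_squared_2x2(success_a, total_a, success_b, total_b):
    """Chi-squared statistic, with continuity correction, for two proportions."""
    observed = [success_a, total_a - success_a, success_b, total_b - success_b]
    n = total_a + total_b
    p = (success_a + success_b) / n  # pooled acceptance rate
    expected = [total_a * p, total_a * (1 - p), total_b * p, total_b * (1 - p)]
    return sum(max(abs(o - e) - 0.5, 0) ** 2 / e
               for o, e in zip(observed, expected))

# 381 of 2,391 "male" and 595 of 2,662 "female" requests were accepted.
chi2 = chi_squared_2x2(381, 2391, 595, 2662)
```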
The SbN spent 2 weeks in this seeding phase. For most of that time, however, the SbN was sitting idle.

Phase 3 – Exploitation

We kept the SbN running for another 6 weeks on Facebook. During this time, the socialbots sent another 4,593 friend requests to users in their extended neighborhoods. Overall, 2,463 requests were accepted by victims, resulting in an average acceptance rate of 53.6%. The difference in the average acceptance rate between the two phases was statistically significant (χ² = 1429.9, p = 2.2 × 10⁻¹⁶, CI=95%), where exploiting triadic closure resulted in a 34.3% higher average acceptance rate than targeting users at random.

2.4.5 Analysis and Discussion

In what follows, we analyze and discuss the results presented in Section 2.4.4. We focus on characterizing user behaviors, the infiltration performance, and its implications for user privacy and fake account detection mechanisms.

User Behavior

As the infiltration was seeded at random, the 9,646 users who received friend requests sent by socialbots represented a diverse sample of Facebook's user population. In particular, the users were 51.3% male and 48.7% female, lived in 1,983 cities across 127 countries, spoke 43 languages, and had used Facebook for 5.4 years on average, as shown in Figure 2.4.

The main observation of this study is that some users are more likely to become victims than others. First, in the seeding phase, we found that the more friends a user had, the more likely the user was to accept friend requests sent by socialbots posing as strangers, regardless of their gender or mutual friends, as shown in Figure 2.5a.
Second, in the exploitation phase, we found that the more mutual friends a user had with a socialbot, the more likely the user was to accept friend requests sent by this socialbot, as shown in Figure 2.5b.

Figure 2.4: Demographics of contacted users. (a) Language: the top-5 languages from locales were English, Spanish, Indonesian, Italian, and French. (b) Experience: 32% of users had 5 or more years of experience on Facebook. (c) Location.

Figure 2.5: User susceptibility to infiltration on Facebook (CI=95%). (a) Seeding at random: acceptance rate versus number of friends (Pearson's r = 0.85). (b) Exploiting triadic closure: acceptance rate versus number of mutual friends (Pearson's r = 0.85).

Figure 2.6: Real-world samples of user interactions with socialbots. (a) A victim's comment on a status update posted by a socialbot. (b) A personalized phishing message sent to a socialbot.

Most of the victims maintained a social history with the socialbots. In particular, the socialbots received 73 personal messages, 112 "likes," and 164 "wall" posts from their victims. The bots also received 331 friend requests from the friends of their victims, that is, from users in their extended neighborhoods. Interestingly, the socialbots received 2 messages and 8 wall posts from users who were not in their friends lists.
Based on manual inspection, we found that all of them were in fact malicious, and were sent by fake or hijacked user accounts.16 Figure 2.6 shows a sample of user interactions with socialbots.

16 This was possible because the accounts controlled by socialbots had public user profiles.

Profile Info   | Direct Before (%) | Direct After (%) | Extended Before (%) | Extended After (%)
Birth Date     | 3.5  | 72.4 | 4.5  | 53.8
Email Address  | 2.4  | 71.8 | 2.6  | 4.1
Gender         | 69.1 | 69.2 | 84.2 | 84.2
Home City      | 26.5 | 46.2 | 29.2 | 45.2
Current City   | 25.4 | 42.9 | 27.8 | 41.6
Phone Number   | 0.9  | 21.1 | 1.0  | 1.5
School Name    | 10.8 | 19.7 | 12.0 | 20.4
Postal Address | 0.9  | 19.0 | 0.7  | 1.3
IM Account ID  | 0.6  | 10.9 | 0.5  | 0.8
Married To     | 2.9  | 6.4  | 3.9  | 4.9
Worked At      | 2.8  | 4.0  | 2.8  | 3.2
Average        | 13.3 | 34.9 | 15.4 | 23.7

Table 2.3: Percentage of users with accessible private data
Figure 2.7: Users with accessible private data

Collected Data

The socialbots harvested large amounts of user data through the collect master command. By the end of the 8th week, the SbN had generated a total of 250GB of inbound and 3GB of outbound network traffic between our machine and Facebook. We were able to collect news feeds, user profile information, and wall posts. In other words, practically everything shared on Facebook by the victims, which could be used for large-scale user surveillance [95]. Even though adversarial surveillance, such as online profiling [44], is a serious threat to user privacy, we decided to focus on user data that have monetary value in underground markets, such as personally identifiable information (PII).

After excluding the remaining friendships among the bots, the size of their direct neighborhoods was 3,439 users. The size of all extended neighborhoods, however, was as large as 1,085,785 users. In Figure 2.7a, we compare data revelation before and after operating the SbN in terms of the percentage of users with accessible private data, PII in particular.
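The average exposure ratios reported next follow directly from the Before/After averages in Table 2.3, and can be checked with a few lines of arithmetic:

```python
# Average percentage of users with accessible PII, from Table 2.3.
direct_before, direct_after = 13.3, 34.9
extended_before, extended_after = 15.4, 23.7

direct_ratio = direct_after / direct_before       # roughly 2.62x more PII exposed
extended_ratio = extended_after / extended_before # roughly 1.54x more
```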
On average, 2.62 times more PII was exposed in direct neighborhoods after the infiltration, and 1.54 times more in extended neighborhoods. While one would expect the averages of direct and extended neighborhoods to be approximately the same before the infiltration, there is a small difference of 2.1%. This minor artifact can be attributed to the relatively smaller sample size of direct neighborhoods and the non-uniform sampling method of extended neighborhoods.

Figure 2.8: Infiltration performance on Facebook. (a) Infiltration over time: attack edges established over the days of operation, for requests with at least 1 and at least 11 mutual friends. (b) Infiltration per socialbot: 50% of the bots had more than 35 attack edges.

Infiltration Performance

The socialbots infiltrated users over a period of 55 days, starting January 28, 2011. During this time, the bots established 3,439 friendships with victim users, where each friendship, or attack edge, connects exactly one victim to a socialbot, as shown in Figure 2.8a. The figure also illustrates the effect of triadic closure. In particular, the infiltration rapidly increased after the first 2 weeks, when each socialbot had at least one friend in common with the user to which it sent a friend request.

Another observation of this study is that attack edges are generally easy to establish. As shown in Figure 2.8b, an attacker can establish enough attack edges such that fake accounts, which are controlled by socialbots, are strongly connected to real accounts in the social graph. This observation has serious implications for the performance of graph-based fake account detection systems such as EigenTrust [60], SybilLimit [139], SybilInfer [19], GateKeeper [118], and SybilRank [15].
In particular, such systems often assume that fake accounts can establish only a small number of attack edges, ideally at most one per fake [137], so that the cut which crosses over attack edges is sparse.¹⁷ Given this assumption, these systems attempt to find such a sparse cut with formal guarantees in order to separate fakes from real accounts. To our knowledge, we are the first to provide empirical evidence that invalidates this assumption. We systematically evaluate the implications of this finding on detection accuracy in Chapter 3.

Finally, we found that only 20 fake accounts were blocked by Facebook (20%), and curiously, all of these accounts represented female user profiles. After further investigation, we found that these accounts were blocked because concerned Facebook users flagged them as spam.¹⁸ In fact, we did not observe any evidence that Facebook’s immune system detected the social infiltration campaign, other than relying on user feedback. We also noticed that accumulating social history using the cluster and update commands was in fact beneficial. In particular, when we created 15 fake accounts for a pilot study in which the socialbots did not use these commands, 93.3% of the fakes were blocked by Facebook in less than a week.

2.5 Economic Feasibility

In what follows, we analyze the economic feasibility of social infiltration at scale. We consider a rational botherder who runs a social infiltration campaign using a socialbot network (SbN) in a target OSN. Unlike a botherder who is motivated by fun, self-fulfillment, or proof of skills, a rational botherder makes decisions about the SbN operation and its infiltration strategy based on whether the expected reward exceeds the total cost.
In particular, we seek to understand how the scale of a social infiltration, that is, the number of attacked users who received friend requests from fakes, affects its profit and infiltration strategies.

Our analysis suggests that social infiltration at scale, while not financially attractive by itself, is a crucial part of the larger cyber-criminal underground market for OSN abuse. In particular, it plays the role of a market enabler of the actual “money-makers,” which include social spam and malware, as confirmed by various empirical studies on Twitter [109, 114].

¹⁷ A cut is a partition of the nodes of a graph into two disjoint subsets. Visually, it is a line that cuts through or crosses over a set of edges in the graph.
¹⁸ Based on the content of a pop-up message that Facebook showed when we manually logged in using the blocked accounts.

2.5.1 Methodology

We developed a simple economic model using cost-volume-profit (CVP) analysis (Section 2.5.2). We chose CVP analysis because it is typically used for short-run decisions, which suits the underground market for OSN abuse, a market that suffers from cheating, dishonesty, and uncertainty [50]. Moreover, CVP analysis assumes that the behavior of costs and rewards does not change during the infiltration campaign, which we believe is a reasonable assumption given the lack of information about volume dynamics and discounts in underground markets.

We used this economic model to define and analyze the scale of a social infiltration campaign, quantified by the number of attacked users, from an economic perspective. In particular, we analyzed the cost structure of an SbN as the infiltration grows arbitrarily large in scale, focusing on its effect on profitability (Section 2.5.3). Using this model, we derived profit-maximizing infiltration strategies that implement two business models under different scalability requirements (Section 2.5.4).
We applied these strategies in practice and found that they lead to a non-zero profit on Facebook when operated as part of the underground market for OSN abuse (Sections 2.5.5 and 2.5.6).

2.5.2 Model and Assumptions

We assume the threat model described in Section 2.3.2. In particular, we consider a rational botherder who operates an SbN with scale n and size m. The scale represents the number of attacked users, that is, users who receive connection requests from socialbots. The size represents the number of socialbots in the SbN.

The objective of the botherder is to maximize the profit by operating the SbN under the best profit-maximizing infiltration strategy. Each socialbot has a fixed average cost c̄ associated with it, which represents the estimated cost of creating a new fake account. Moreover, each socialbot can have up to k social connections, where k is an OSN-specific parameter (e.g., 5K in Facebook). Each user profile in the target OSN has an associated average extracted value v̄, which represents, for example, an estimate of the monetary value of the profile’s information.

We denote the reward and total cost of running a social infiltration campaign with scale n by R(n) and C(n), respectively. Operating an SbN is profitable only if the reward exceeds the total cost, that is, whenever the profit P(n) is non-zero (i.e., positive), as follows:

P(n) = R(n) − C(n) > 0,   (2.1)

where the botherder breaks even whenever n = n₀ > 0 such that P(n₀) = 0.

Scalability of Social Infiltration

Informally, we say that an SbN is scalable if the botherder has an economic incentive to attack ever more users in a single infiltration campaign, that is, whenever P(αn) > αP(n) for a scaling factor α > 1. This means that operating a scalable SbN accrues an increasing profit margin.

In what follows, we formally define the scalability of a social infiltration campaign from an economic perspective.
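The break-even condition above can be sketched numerically. A minimal Python sketch, using integer units to avoid floating-point artifacts; the reward per attacked user and the total cost below are placeholders, not estimates from this study:

```python
def profit(n, v_per_user, total_cost):
    """P(n) = R(n) - C(n), with a simple linear reward R(n) = n * v_per_user."""
    return n * v_per_user - total_cost

# Placeholder parameters in integer hundredths of a cent (illustrative only).
v_per_user = 2         # 0.02 cents of reward per attacked user
total_cost = 120_000   # $12.00 of total campaign cost

# Break-even scale n0: the smallest n with a strictly positive profit.
n0 = total_cost // v_per_user + 1
assert profit(n0, v_per_user, total_cost) > 0
assert profit(n0 - 1, v_per_user, total_cost) <= 0
```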
After that, we develop the concepts needed to relate scalability to profitability.

Scalability in Economic Context

We generalize the scalability definition of online attacks presented by Herley [49], and adapt it to an arbitrarily large social infiltration campaign. Formally, we call an SbN scalable if C(n) grows more slowly than R(n) as n → ∞. In other words, an SbN is scalable when C(n) = o(R(n)) asymptotically, which is true if the following equality holds:

lim_{n→∞} C(n)/R(n) = 0.   (2.2)

Otherwise, we call the SbN non-scalable. A scalable SbN is favorable as it has a desirable effect on profitability, where the following equality now holds:

lim_{n→∞} P(n)/R(n) = 1,   (2.3)

which means that given a scalable SbN, P(n) grows as fast as R(n), or similarly, P(n) ≈ R(n) as n → ∞, where C(n) has a diminishing effect on profit.

To this end, in order to judge the scalability of a given SbN, we need to precisely define its total cost C(n) and reward R(n), after which we can test the scalability condition defined in Equation 2.2.

The Cost Structure

For an SbN with scale n and size m, let Cs(n) and Co(n) be the setup and operation costs of the SbN, respectively. The setup cost is the cost incurred due to creating m fake accounts before operating the SbN, and the operation cost is the cost incurred due to running the SbN for a particular period of time. The operation cost includes a rental cost Cr(n) for operating the SbN in a botnet, and a detection cost Cd(n) for creating new fake accounts in place of detected ones. As described in the threat model, we assume the SbN is deployed as part of an existing botnet, with at least one compromised machine available. In other words, the rental cost is fixed at Cr(n) = r̄, where r̄ is the average rental cost of a single machine in a botnet for the duration of the campaign. As for the detection cost Cd(n), let p̄ be the probability that a socialbot is detected during a single run of the campaign; we therefore have Cd(n) = p̄ × Cs(n).
Accordingly, we define the total cost C(n) as follows:

C(n) = Cs(n) + Co(n)
     = Cs(n) + Cr(n) + Cd(n)
     = Cs(n)(1 + p̄) + r̄.   (2.4)

As outlined in Section 2.5.2, each socialbot is limited to at most k social connections. This means m = ⌈n/k⌉ bots are needed, and Cs(n) is linearly dependent on n whenever n > k. Specifically, we define the setup cost Cs(n) as follows:

Cs(n) = c̄ if n ≤ k, and Cs(n) = c̄ × ⌈n/k⌉ if n > k.   (2.5)

From Equations 2.4 and 2.5, the total cost C(n) = O(1) for n ≤ k, as the botherder needs to create only a fixed number of socialbots m = m̂ = O(1). In this case, the following inequality holds for any scaling factor α > 1, as long as αn ≤ k:

C(αn) < αC(n).   (2.6)

Accordingly, we say the SbN achieves an economy of scale: the cost of operating an SbN decreases per attacked user as the number of users to attack increases. For n > k, however, we have C(n) = O(n), and the complement of Equation 2.6 holds for any scaling factor α > 1:

C(αn) ≥ αC(n).   (2.7)

In this case, each attacked user adds at least as much cost as the previous one, and the SbN has a growing size m = O(n), which is defined by both n and k.

The Basic Reward Model

As mentioned in Section 2.5.2, there is no clear pricing structure for SbN goods. Therefore, we assume R(n) grows at least linearly as n → ∞, that is, R(n) = Ω(n). Moreover, this assumption is usually made upon entering new markets that do not offer short-term feedback [121], in which case the simplest way to perform a valuation is to consider the average extracted value v̄ and quote it for each unit of goods. Such a simple linear reward model, however, does not accommodate the network effect, nor the market’s pricing fluctuations or trends.¹⁹ Accordingly, this model is useful for basic, short-term analysis, which is suitable for today’s underground economy, especially in the presence of dishonesty, cheating, and uncertainty [50].

To make things concrete, let ȳ be the yield of infiltration, which represents the average acceptance rate of a connection request sent by a socialbot.
For now, we define the reward R(n) of a social infiltration campaign with scale n as follows:

R(n) = n × ȳ × v̄ = O(n).   (2.8)

Scalability vs. Profitability

The definition of scalability (Equation 2.2) under the cost and reward definitions (Equations 2.5 and 2.8) leads to the following basic result: for n ≤ k, the SbN is scalable and has a fixed size m̂ = O(1), but for n > k, the SbN is non-scalable and has a growing size m = O(n).

To give a meaningful interpretation of scalability, we now derive an economic implication of the result presented above. Let us consider the profit when we scale the SbN operation by a factor α > 1. For αn ≤ k, we have:

P(αn) = R(αn) − C(αn)
      > αR(n) − αC(n)
      > αP(n),   (2.9)

which means that the botherder earns more profit by attacking, say, 100n users as opposed to n users when using a fixed number of socialbots. There is a clear economic incentive for the attacker to use the bots to go as large in scale as possible.

¹⁹ The network effect is the effect a user of a product has on the value of the product to others. When present, the value of a product is dependent on the number of others using it [121].

When αn > k, however, the situation changes as follows:

P(αn) = R(αn) − C(αn)
      ≤ αR(n) − αC(n)
      ≤ αP(n),   (2.10)

which means that when operating an SbN using a growing number of socialbots, there is no benefit, if not a loss, from scaling the SbN and attacking even more users. The botherder makes the same profit, at best, whether n or 100n users are attacked. This leads to the following undesirable situation: the botherder has an incentive to scale the infiltration using a fixed number of socialbots, but has no incentive to infiltrate more users by operating a growing network of socialbots. In other words, for a rational botherder in a web of scale, social infiltration evidently suffers from the plight of non-scalability.
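The two cost regimes can be checked numerically. A minimal sketch of Equations 2.4 and 2.5 in Python, assuming the estimates for c̄, p̄, r̄, and k that appear later in Table 2.3: the average cost per attacked user falls while one bot suffices (n ≤ k) and then flattens near the floor c̄(1 + p̄)/k once the fleet must grow.

```python
import math

c_bar, p_bar, r_bar, k = 0.1, 0.2, 0.35, 5000  # estimates from Table 2.3

def setup_cost(n):
    """Cs(n): one fake account per k connections (Equation 2.5)."""
    return c_bar if n <= k else c_bar * math.ceil(n / k)

def total_cost(n):
    """C(n) = Cs(n) * (1 + p_bar) + r_bar (Equation 2.4)."""
    return setup_cost(n) * (1 + p_bar) + r_bar

# Economy of scale holds only while one bot suffices: the average cost per
# attacked user keeps falling, then flattens near c_bar * (1 + p_bar) / k.
avg = lambda n: total_cost(n) / n
assert avg(100) > avg(1_000) > avg(5_000)
assert abs(avg(10**7) - c_bar * (1 + p_bar) / k) < 1e-6
```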
In what follows, we derive two profit-maximizing social infiltration strategies that implement different business models under different scalability requirements.

2.5.4 Profit-Maximizing Infiltration Strategies

The main result from Section 2.5.3 is that for a social infiltration campaign to be scalable, it ought to use a fixed number of socialbots. In other words, the size of the SbN should be fixed at m = m̂ = O(1), while its scale should be as large as possible. However, scalable infiltration reaches only m̂ × k users. Ideally, one would like the reach to be unbounded for a scalable campaign, while remaining profitable.

In what follows, we show how the botherder can make a profit by following a profit-maximizing infiltration strategy that implements one of two business models. In the first model, the botherder makes a profit by selling user data harvested through infiltrating as many users as possible, following a scalable data-driven infiltration strategy. In the second model, the botherder makes a profit by receiving a lump-sum payment for infiltrating a predefined number of users, following a non-scalable target-driven infiltration strategy.

Scalable Data-driven Infiltration

In order to achieve scalability, we need to upper bound the total cost C(n) by o(n) instead of O(n) whenever n > k. The botherder can do so by artificially removing the limit k on each socialbot, and thus making the cost independent of n. One way to achieve this is by breaking one or more social connections between each socialbot and attacked users after they become victims, or whenever the limit k is reached.²⁰ Doing so means the SbN can have a fixed size m̂ = O(1), and therefore the total cost becomes independent of n as follows:

C(n) = r̄ + Cs(n)(1 + p̄)
     = r̄ + (c̄ × m̂)(1 + p̄).   (2.11)

Accordingly, we have C(αn) < αC(n) for a scaling factor α > 1.
In other words, the SbN is now scalable and achieves an economy of scale: the botherder has clear incentives to scale its operation to infiltrate as many OSN users as possible.

We call this infiltration strategy data-driven, since the botherder seeks to harvest as much data from users as possible by following a simple infiltrate-collect-break strategy. In this strategy, however, the yield should be refined to include the average data access ratio per victim d̄, as follows:

ŷ = ȳ × d̄.   (2.12)

We are interested in finding the scale n that maximizes the profit earned by operating the SbN under this data-driven infiltration strategy. Formally, we aim to solve the following profit-maximization problem:

maximize_n   P(n) = n × ȳ × d̄ × v̄ − (r̄ + (c̄ × m̂)(1 + p̄))
subject to   n ≥ 1,

where the first term is the reward R(n) and the second is the total cost C(n).

²⁰ Recall that a socialbot collects user data immediately after an attacked user becomes a victim.

In order to maximize the profit, the botherder needs to maximize the reward or minimize the cost. As the total cost is fixed, the optimal solution is to set n as large as possible, which means there is no closed-form solution for n. Therefore, as n → ∞, the profit P(n) → ∞ as well, which is an expected result given the scalability of this infiltration strategy.

Non-Scalable Target-driven Infiltration

Another strategy is to keep the social connections with the victims, but to attack as many users as required in order to meet a contractually agreed number of victims n̂ ≥ 1, after which the botherder receives a lump-sum payment R(n) = ρ from a third party. We call such a non-scalable strategy target-driven: the botherder operates the SbN on a limited scale to victimize n̂ users on average.
The scale and size of the SbN are derived by solving the following profit-maximization problem:

maximize_{n,m}   P(n, m) = n × ȳ × v̄ − (r̄ + (c̄ × m)(1 + p̄))
subject to       m ≥ ⌈n/k⌉,  n ≥ n̂,  and  R(n) ≤ ρ,

where the first term is the reward R(n) and the second is the total cost C(n). The optimal solution is to find the minimal size m that is large enough to earn the lump-sum payment, with which the botherder is expected to victimize n̂ users. By Equation 2.8, if the botherder attacks n users, then the expected number of victims is n × ȳ. Therefore, for a target of n̂ victims, the optimal solution is to set the scale to n = ⌈n̂/ȳ⌉ and the size to m = ⌈n/k⌉ = ⌈⌈n̂/ȳ⌉/k⌉.

2.5.5 Case Study: Social Infiltration in Facebook

We next assume the role of a botherder who plans to run a social infiltration campaign on Facebook. In particular, we apply the analysis from Section 2.5.4 to decide whether it is economically feasible for the botherder to operate an SbN under each of the profit-maximizing infiltration strategies.

  Parameter | Description                                   | Estimate | Source
  p̄         | Probability of detecting a fake account       | 0.2      | Boshmaf et al. [10]
  ȳ         | Average friend request acceptance rate        | 0.536    | Boshmaf et al. [10]
  r̄         | Average rental cost of a machine in a botnet  | $0.35    | Goncharov [41]
  c̄         | Average cost of creating a fake account       | $0.1     | Thomas et al. [114]
  d̄         | Average email address access ratio            | 0.718    | Boshmaf et al. [10]
  v̄         | Average extracted value per email address     | ¢0.03    | Fossi et al. [37]
  k         | Maximum number of friends per user            | 5,000    | Boshmaf et al. [10]
  n̂         | Number of users to befriend and victimize     | 38,462   | Motoyama et al. [88]
  ρ         | Lump-sum payment for befriending n̂ users      | $1,000   | Motoyama et al. [88]

Table 2.3: Estimates for analyzing infiltration profitability in Facebook

Campaign Setup

Let us consider a botherder who wants to operate an SbN constructed as described in Section 2.4. Being rational, the botherder wants to decide whether it would be economically feasible to operate the SbN in the first place. The botherder targets Facebook, where the average acceptance rate of a friend request is ȳ = 0.536 after seeding.
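Under the assumption that the Table 2.3 estimates hold, the closed-form target-driven solution above can be evaluated directly. This is a quick numerical sketch, not part of the original analysis:

```python
import math

# Estimates from Table 2.3.
y_bar, k = 0.536, 5_000          # acceptance rate, friends cap
n_hat, rho = 38_462, 1_000       # target victims, lump-sum payment ($)
c_bar, p_bar, r_bar = 0.1, 0.2, 0.35

n = math.ceil(n_hat / y_bar)     # scale: users to attack
m = math.ceil(n / k)             # size: socialbots needed
profit = rho - (r_bar + (c_bar * m) * (1 + p_bar))

assert (n, m) == (71_758, 15)
assert round(profit, 2) == 997.85
```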
There is also a probability p̄ = 0.2 for each fake account to be detected.

To operate the SbN, the botherder needs to create fake accounts and rent at least one machine in a botnet for the duration of the campaign. Thomas et al. [114] reported that 1K phone-verified fake accounts cost $100 (c̄ = $0.1), with merchants like BuyAccs²¹ making millions of dollars in revenue each year. Moreover, a recent study [41] by TrendMicro on Russian underground markets reported that the average rental cost of 2K hijacked machines in a botnet is $200 (r̄ = $0.35), while buying a similar botnet costs as little as $700. Table 2.3 provides a summary of these and other campaign parameters.

²¹ http://buyaccs.com

Economic Feasibility

In what follows, we apply the strategies derived in Section 2.5.4 in practice, and show that operating an SbN on Facebook at scale is expected to be profitable, but is not financially attractive as an independent business.

Collecting and Selling Email Addresses

Let us start with a data-driven infiltration campaign in which the botherder operates m̂ = 100 socialbots to collect and sell email addresses at underground markets.²²

As presented in Section 2.4, the average data access ratio of email addresses is d̄ = 0.718 after infiltration. A recent study [37] by Symantec on underground markets reported that 1MB of email addresses is worth $0.3–$40 at IRC underground markets, with an average of $20.15 per MB. Assuming an email address is 15 ASCII characters long, or 15 bytes, an email address has an average value of v̄ = ¢0.03. A more recent study [115] by Fortinet on the anatomy of botnets reported a similar value of $100 for a million email addresses.

We can now calculate the profit in dollars for an SbN with scale n by substituting the values from Table 2.3 into Equations 2.1, 2.11, and 2.12, leading to:

P(n) = R(n) − C(n)
     = $0.0001154544n − $12.35.

The botherder breaks even after attacking n₀ ≈ 107K users.
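The profit line above can be re-derived from the Table 2.3 estimates; this is a quick numerical check, not part of the original analysis:

```python
# Estimates from Table 2.3, for m_hat = 100 socialbots.
y_bar, d_bar, v_bar = 0.536, 0.718, 0.0003   # yield, data access ratio, $/email
c_bar, p_bar, r_bar, m_hat = 0.1, 0.2, 0.35, 100

reward_per_user = y_bar * d_bar * v_bar             # $0.0001154544 (Eq. 2.12)
fixed_cost = r_bar + (c_bar * m_hat) * (1 + p_bar)  # $12.35 (Eq. 2.11)

assert abs(reward_per_user - 0.0001154544) < 1e-12
assert abs(fixed_cost - 12.35) < 1e-9

# Break-even scale is roughly 107K attacked users, matching the text.
n0 = fixed_cost / reward_per_user
assert 106_000 < n0 < 108_000
```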
To make $1K in profit, the botherder has to attack nearly 8.7 million users. In fact, the botherder can earn at most $155,851 by attacking each one of the 1.35 billion users on Facebook [30]. While unrealistic, this figure represents an optimistic estimate of how much user data, email addresses in particular, are worth in underground markets, even in the unlikely case that the botherder manages to run such an extreme-scale campaign.

The main result here is that running a large-scale social infiltration campaign is expected to incur neither high costs nor high profits. It facilitates, however, a non-zero-profit monetization campaign for more profitable underground commodities.

²² We chose to restrict the collected data to email addresses in order to avoid over-estimating the extracted value of user profile information.

Befriending Users for Profit

We now consider a target-driven infiltration campaign, in which the figures look more promising for the botherder. Motoyama et al. [88] recently reported that one can earn ρ = $1,000 for befriending n̂ = 38,462 users in online freelance markets such as Freelancer.²³ The authors refer to this task as OSN linking, where a freelance worker creates enough accounts in the target OSN to befriend a particular number of users, after which the worker receives a specified lump-sum payment. Given the limit of k = 5,000 friends per user account on Facebook [10], the botherder attacks n = ⌈n̂/ȳ⌉ = 71,758 users using m = ⌈n/k⌉ = 15 socialbots. By directly applying Equations 2.1 and 2.5, where R(n) = ρ, the botherder is expected to make a profit of P(71,758) = $1,000 − $2.15 = $997.85.

To this end, the botherder can make more profit by operating a non-scalable social infiltration campaign. Compared to the earlier strategy, the botherder attacks nearly 120 times fewer users and makes about the same profit of $1K, which makes the latter strategy more financially attractive. In fact, Motoyama et al.
[88] found that particular freelance tasks, such as OSN linking and creating fake accounts, were finished in less than a minute, suggesting the use of automation software to undertake these non-scalable tasks.

2.5.6 Discussion

We now discuss the implications of social infiltration for underground markets. In particular, we make the case that even if a social infiltration campaign is profitable, it is still more reasonable for a botherder to treat it as a monetization campaign for more profitable commodities, rather than as an independent business by itself.

Implications for Underground Markets

Using a scalable data-driven SbN means its operation ought to be automated and non-adaptive to individual users, while delivering commodity goods as a result.

²³ http://freelancer.com/

These implications are attributed to the economy of scale the strategy achieves, and are discussed in depth by Herley [49]. In particular, adding per-user personalization, such as responding to individual messages requesting introductions, or adapting to per-user countermeasures, such as breaking CAPTCHAs to send more connection requests, violates the cost structure of the strategy and would result in potential losses. Moreover, as the operation of the SbN is automated, other copycat attackers will have an incentive to operate a similar SbN as well. As Herley nicely puts it [49]: “once a clever exploit is scripted, then it can be used by many.” Consequently, this automation will increase the demand for large-scale social infiltration campaigns but decrease their profit, resembling the effect of the tragedy of the commons: a dilemma arising from a situation in which multiple individuals, acting independently and rationally, ultimately deplete a shared limited resource, even when it is clear that doing so is in no one’s long-term interest [121]. This has the implication of driving the average extracted value v̄ down to zero.

In a non-scalable target-driven SbN, the situation is different.
As the objective of the botherder is to reach a target number of victims, rather than to attack users at scale to collect their data, it is more reasonable to target users who are expected to have a higher yield ȳ. As shown in Section 2.4, such users have a higher than average number of friends on Facebook. Fortunately for the botherder, the degree distribution of users in OSNs is power-law and highly concentrated [80]. However, this also means the botherder will ignore the majority of users, because only a small fraction of users have a higher than average number of friends.

Scalable Social Infiltration as a Business

Even though the profit estimates in Section 2.5.5 might be optimistic, it is safe to assume that operating a data-driven SbN is expected to make a non-zero profit. In what follows, we argue that such an SbN is not attractive as an independent and sustainable business, and that a rational botherder would utilize the SbN as a monetization tool for subsequent, more profitable campaigns.

First, the analysis we presented in Section 2.5.1 is a cost-volume-profit (CVP) analysis, which is useful for short-run decisions and has to be re-evaluated regularly. Even if the botherder is able to observe market trends over time, which would allow planning a pricing structure, using more predictive economic models, and forecasting future profits, the underground market is still unfriendly. Herley et al. [50] used simple economic arguments to show that such markets suffer from cheating, dishonesty, and uncertainty. The authors also showed that these markets represent a classic example of a market for lemons [121]: a situation in which sellers have better information than buyers about the quality of their goods, which leads to an adverse selection where the bad drives out the good.²⁴ These markets drive “high-quality” businesses out of existence and result in a diminishing valuation of goods.
After all, “nobody sells gold for the price of silver” [50].

Second, maintaining a business requires durability. Operating an SbN at scale, however, is expected to become more costly to maintain over time. This is due to the fact that OSN operators update their deployed defenses to mitigate online attacks, especially large-scale ones. With adversarial learning systems in place, such as Facebook’s immune system [104], this will result in an arms race between the OSN operator and the botherder, in which the more resourceful party will eventually win. Based only on the profit the botherder is expected to make from operating a data-driven SbN, the odds are small that the investment in maintaining and updating the SbN will ever pay back, especially as this would force the SbN to become non-scalable due to the increased cost. As demonstrated by Florêncio et al. [35], the botherder faces a sum-of-effort rather than a weakest-link defense, which means even a slight improvement in the defenses employed by the OSN or its users can render the SbN non-scalable and non-profitable.

Finally, Kanich et al. [61] showed that the underground market for spam-advertised businesses is particularly attractive, where such businesses make hundreds of thousands of dollars a month in revenue. The botherder is thus better off using an SbN as a tool for monetizing subsequent, personalized e-mail spam campaigns as part of a larger underground affiliation. In other words, large-scale social infiltration can be considered a market enabler for the actual “money-makers,” as confirmed by other empirical studies on Twitter [109, 114].

²⁴ A lemon is an American slang term for a car that is found to be defective only after it has been bought. The market for lemons concludes that owners of good cars will not place their cars on the used-car market. This is sometimes summarized as “the bad driving out the good” in the market.

2.6 Summary

From a computer security perspective, the concept of socialbots is both interesting and disturbing. The threat is no longer from a human controlling or monitoring a computer, but from exactly the opposite.

This study presented empirical evidence showing that OSNs such as Facebook are vulnerable to social infiltration by socialbots: automated fake accounts that mimic real user behavior. In particular, we found that such socialbots make it difficult for OSN security defenses, such as Facebook’s immune system, to detect or stop social infiltration as it occurs. This has resulted in alarming privacy breaches and serious implications for graph-based fake account detection mechanisms, which assume attackers who are not capable of social infiltration.

We also analyzed the economic feasibility of social infiltration at scale. The analysis suggested that large-scale social infiltration, when operated as part of a socialbot network (SbN), is expected to be profitable but is not particularly attractive as an independent business. Social infiltration at scale, however, plays the role of a market enabler for more profitable commodities, which are monetized through social spam and malware in a multi-million-dollar underground market. In addition, we showed that for an SbN to be scalable in terms of the number of attacked users, it ought to have a fixed size in terms of the number of its socialbots. Otherwise, there would be a linear cost dependence, which forces the botherder to limit the scale of the SbN in order for it to be more financially attractive.

While this study showed that social infiltration in OSNs is both profitable and difficult to detect, it also made the observation that victims of socialbots have predictive characteristics.
As shown in Chapter 3, this insight has practical implications for the design of robust defense mechanisms that lead to a safer Web for billions of active OSN users.

Chapter 3
Infiltration-Resilient Fake Account Detection in OSNs

In its 2014 earnings report, Facebook estimated that 15–16 million (1.2%) of its monthly active users are in fact “undesirable,” representing fake accounts that are used in violation of the site’s terms of service [32]. For such OSNs, the existence of fakes leads advertisers, developers, and investors to distrust their reported user metrics, which negatively impacts their revenues [17]. Moreover, as we discussed in Chapter 2, attackers create and automate fake accounts, or socialbots, for various malicious activities, starting with social infiltration. Therefore, it is important for OSNs to detect fake accounts as quickly and accurately as possible.

Most OSNs employ defense systems that automatically flag fake accounts by analyzing either user-level activities or graph-level structures. Because automated account suspension is inapplicable in practice, flagged accounts are pooled for manual verification by experienced analysts, who maintain a ground-truth for fake and real accounts [15, 104].

Fake account detection, in general, can be divided into two main approaches. In the first approach, unique features are extracted from recent user activities, such as the frequency of friend requests and the fraction of accepted requests, after which they are applied to a fake account classifier that has been trained offline using machine learning techniques [104]. In the second approach, an OSN is modeled as a graph, with nodes representing user accounts and edges representing social relationships (e.g., friendships). Given the assumption that fake accounts can befriend only a few real accounts, the graph is partitioned into two subgraphs, or regions, separating real accounts from fakes, with a narrow passage between them [2, 137].
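The narrow-passage assumption behind the graph-based approach can be illustrated on a toy graph. The nodes and edges below are hypothetical; the point is only that the cut separating real accounts from fakes crosses exactly the attack edges, and graph-based systems assume this cut is sparse:

```python
# Toy social graph: nodes r1..r3 are real, f1..f3 are fake (hypothetical).
edges = [
    ("r1", "r2"), ("r2", "r3"), ("r1", "r3"),   # real region
    ("f1", "f2"), ("f2", "f3"), ("f1", "f3"),   # fake region
    ("f1", "r1"),                               # a single attack edge
]
fakes = {"f1", "f2", "f3"}

def cut_size(edges, fakes):
    """Count edges crossing the real/fake partition (the attack edges)."""
    return sum((u in fakes) != (v in fakes) for u, v in edges)

assert cut_size(edges, fakes) == 1   # sparse cut: the detection assumption holds
```

Social infiltration grows this cut: each new victim adds an attack edge, which is exactly what undermines the assumption in practice.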
While both approaches are effective against naïve attacks, various studies have shown that they are inaccurate in practice and can be easily evaded [6, 10, 29, 125]. In Chapter 2, we showed that an attacker can cheaply create fake accounts that resemble real users, circumventing feature-based detection, or use simple social engineering tactics to befriend and infiltrate many real users, invalidating the assumption behind graph-based detection (Finding 4).

In this chapter, we consider attackers who can run a large-scale social infiltration campaign using a set of automated fake accounts, or socialbots. Specifically, each fake account can perform social activities similar to those of real users, including befriending other users. Under this stronger threat model, we aim to tackle the following question in our design of Íntegro, as introduced in Section 1.3.2:

• RQ4: How can OSNs detect fakes that infiltrate users on a large scale?

3.1 Background and Related Work

We first review the threat model we assume in this work. We then present required background and related work on fake account detection, abuse mitigation, maintaining a ground-truth, and analyzing victim accounts in OSNs.

3.1.1 Threat Model

We focus on large OSNs such as Tuenti, RenRen, and Facebook, which are open to everyone and allow users to declare bilateral relationships (i.e., friendships).

Capabilities

As presented in Chapter 2, we consider attackers who are capable of creating and automating fake accounts on a large scale. Each fake account, also referred to as a socialbot, can perform social activities similar to those of real users [56]. This includes sending friend requests and posting social content. We do not consider attackers who are capable of hijacking real accounts, as there are existing detection systems that tackle this threat, such as COMPA [26].
We focus on detecting fake accounts that can befriend a large number of benign users in order to mount subsequent attacks, as we describe next.

Objectives

The objectives of an attacker include distributing social spam and malware, misinforming the public, and collecting private user data on a large scale. To achieve these objectives, the attacker has to infiltrate the target OSN by using the fakes to befriend many real accounts. Such an infiltration is required because isolated fake accounts cannot directly interact with or promote content to most users in the OSN [28]. This is also evident in a thriving underground market for OSN abuse, including social infiltration. For example, attackers can have their fake accounts befriend 1K users on Facebook for $26 or less [88].

Victims

We refer to accounts whose users have accepted friend requests sent by fake accounts as victims. We refer to friendships between victim and fake accounts as attack edges. Victim accounts are a subset of real accounts, which are accounts created and controlled by benign users who socialize with others in a non-adversarial setting. Moreover, we refer to accounts whose users are more susceptible to social infiltration and are likely to become victims as potential victims. We use the terms “account,” “profile,” and “user” interchangeably, but make the distinction when deemed necessary.

3.1.2 Fake Account Detection

From a systems design perspective, most of today’s fake account detection mechanisms are either feature-based or graph-based, depending on whether they utilize machine learning or graph analysis techniques to identify fakes. Next, we discuss each of these approaches in detail.

Feature-based Detection

This approach relies on user-level activities and account details (i.e., user logs and profile pages). By identifying discriminating features of an account, one can classify each account as fake or real using various machine learning techniques.
For example, Facebook employs an "immune system" that performs real-time checks and classification for each read and write action on its database, which are based on features extracted from user accounts and their activities [104].

Yang et al. used ground-truth provided by RenRen to train an SVM classifier in order to detect fake accounts [134]. Using simple features, such as frequency of friend requests, fraction of accepted requests, and per-account clustering coefficient, the authors were able to train a classifier with 99% true-positive rate (TPR) and 0.7% false-positive rate (FPR).

Stringhini et al. utilized honeypot accounts in order to collect data describing various user activities in OSNs [108]. After analyzing the collected data, they were able to assemble a ground-truth for real and fake accounts, with features similar to those outlined before. The authors trained two random forests (RF) classifiers to detect fakes in Facebook and Twitter, ending up with 2% FPR and 1% false-negative rate (FNR) for Facebook, and 2.5% FPR and 3% FNR for Twitter.

Wang et al. used a click-stream dataset provided by RenRen to cluster user accounts into "similar" behavioral groups, corresponding to real or fake accounts [127]. The authors extracted both session and click features, including average clicks per session, average session length, and the percentage of clicks used to send friend requests, visit photos, and share content. With these features, the authors were able to calibrate a cluster-based classifier with 3% FPR and 1% FNR, using the METIS clustering algorithm [64].

Cao et al.
observed that fake accounts tend to perform loosely synchronized actions in a variety of OSN applications, all from a limited set of IP addresses [16]. The authors extracted simple user action features, such as timestamp, target application, and IP address, in order to cluster user accounts according to the similarity of their actions using a scalable implementation of the single-linkage hierarchical clustering algorithm. Through a deployment at Facebook, the authors were able to calibrate a cluster-based classifier and detect more than two million fake accounts, which acted similarly at about the same time for a sustained period of time.

Even though feature-based detection scales to large OSNs, it is still relatively easy to circumvent. This is the case as it depends on features describing the activities of known fakes in order to identify unknown ones. In other words, attackers can evade detection by adversely modifying the content and activity patterns of their fakes, which leads to an arms race [74, 120]. In addition, feature-based detection does not provide any formal security guarantees and often results in a high FPR in practice. This is partly attributed to the large variety and unpredictability of behaviors of users in adversarial settings [15].

With Íntegro, we use feature-based detection to identify potential victims in a non-adversarial setting. In particular, the dataset used for training a victim classifier includes features of only known real accounts that have either accepted or rejected friend requests sent by known fakes. Because real accounts are controlled by benign users who are not adversarial, a feature-based victim account classifier is much harder to circumvent than a similarly-trained fake account classifier.
As we discuss in Section 3.3, we require the victim classification to be better than random in order to outperform the state-of-the-art in fake account detection.

Graph-based Detection

As a response to the lack of formal security guarantees in feature-based detection, the state-of-the-art in fake account detection utilizes a graph-based approach instead. In this approach, an OSN is modeled as a graph, with nodes representing user accounts and edges between nodes representing social relationships. Given the assumption that fakes can establish only a small number of attack edges, the subgraph induced by the set of real accounts is sparsely connected to fakes, that is, the cut over attack edges is sparse.¹ Graph-based detection mechanisms make this assumption, and attempt to find such a sparse cut with formal guarantees [123, 137]. For example, Tuenti employs SybilRank to rank accounts according to their perceived likelihood of being fake, based on structural properties of its graph [15].

Yu et al. were among the first to analyze the social graph for the purpose of identifying fake accounts in OSNs [138, 139]. The authors developed a technique that labels each account as either fake or real based on multiple, modified random walks. This binary classification is used to partition the graph into two smaller subgraphs that are sparsely interconnected via attack edges, separating real accounts from fakes. They also proved that in the worst case O(|Ea| log n) fakes can be misclassified, where |Ea| is the number of attack edges and n is the number of accounts in the network. Accordingly, it is sufficient for the attacker to establish Ω(n/log n) attack edges in order to evade this detection scheme with 0% TPR.

Viswanath et al. employed community detection techniques to identify fake accounts in OSNs [122].
In general, community detection decomposes a given graph into a number of tightly-knit subgraphs that are loosely connected to each other, where each subgraph is called a community [36, 72]. By expanding a community starting with known real accounts [82], the authors were able to identify the subgraph which contains mostly real accounts. Recently, however, Alvisi et al. showed that such a local community detection technique can be easily circumvented if fake accounts establish sparse connectivity among themselves [2].

As binary classification often leads to high FPR [122], Cao et al. proposed user ranking instead, such that almost all fake accounts are ranked lower than real accounts [15]. The authors developed SybilRank, a fake account detection system that assigns each account a rank describing how likely it is to be fake based on a modified random walk, in which a lower rank means the account is more likely to be fake. They also proved that O(|Ea| log n) fakes can outrank real accounts in the worst case, given the fakes establish |Ea| attack edges with victims at random.

¹ A cut is a partition of nodes into two disjoint subsets. Visually, it is a line that cuts through or crosses over a set of edges in the graph (see Figure 3.1).

While graph-based detection provides provable security guarantees, real-world social graphs do not conform with the main assumption on which it depends. In particular, various studies confirmed that attackers can infiltrate OSNs on a large scale by deceiving users into befriending their fakes [6, 29, 125], which renders graph-based fake account detection ineffective in practice.

With Íntegro, we do not assume that fake accounts are limited by how many attack edges they can establish. Instead, we identify potential victims and leverage this information to carefully weight the graph. After that, through a user ranking scheme, we bound the security guarantee by the aggregate weight on attack edges, vol(Ea), rather than their number, |Ea|.
In particular, by assigning lower weights to edges incident to potential victims than to other edges, we can upper bound the value of vol(Ea) by |Ea|, as we show in Section 3.3.

3.1.3 Abuse Mitigation and the Ground-truth

Due to the inapplicability of automated account suspension, OSNs employ abuse mitigation techniques, such as CAPTCHA challenges [104] and photo-based social authentication [136], in order to rate-limit accounts that have been automatically flagged as fake. These accounts are pooled for manual inspection by experienced analysts who maintain a ground-truth for real and fake accounts along with their features, before suspending or removing verified fakes [15, 104, 128, 134].

While maintaining an up-to-date ground-truth is important for retraining deployed classifiers and estimating their effectiveness in practice, it is rather a time-consuming and non-scalable task. For example, on an average day, each analyst at Tuenti inspects 250–350 accounts an hour, and for a team of 14 employees, up to 30K accounts are inspected per day [15]. It is thus important to rank user accounts in terms of how likely they are to be fake in order to prioritize account inspection by analysts. Íntegro offers this functionality and leads to a faster reaction against potential abuse by fakes, benefiting both OSN operators and their users.

3.1.4 Analyzing Victim Accounts

While we are the first to leverage victim classification to separate fakes from real accounts in the graph, other researchers have analyzed victim accounts as part of the larger cyber criminal ecosystem in OSNs [109].

Wagner et al. were among the first to develop predictive models in order to identify users who are more susceptible to social infiltration in Twitter [125].
They found that susceptible users, or potential victims, tend to use Twitter for conversational purposes, are more open and social since they communicate with many different users, use more socially welcoming words, and show higher affection than non-susceptible users.

Yang et al. studied the cyber criminal ecosystem on Twitter [133]. They found that victims fall into one of three categories. The first are social butterflies who have large numbers of followers and followings, and establish social relationships with other accounts without careful examination. The second are social promoters who have large following-follower ratios, larger following numbers, and a relatively high URL ratio in their tweets. These victims use Twitter to promote themselves or their business by actively following other accounts without consideration. The last are dummies who post few tweets but have many followers. In fact, these victims are dormant fake accounts at an early stage of their abuse.

3.2 Intuition, Goals, and Model

We now introduce Íntegro, a fake account detection system that is robust against social infiltration. In what follows, we first present the intuition behind our design, followed by its goals and model.

3.2.1 Intuition

We start with the premise that some users are more likely to be victims than others. If we can train a classifier to identify potential victims with high probability, we may be able to use this information to find the cut which separates fakes from real accounts in the graph. As victims are benign users, the output of this classifier represents reliable information which we can integrate into the graph.

To find such a cut, we can define a graph weighting scheme that assigns edges incident to potential victims lower weights than others, where weight values are calculated from classification probabilities. In a weighted graph, the sparsest cut is the cut with the smallest volume, which is the sum of weights on edges across the cut.
Given an accurate victim classifier, this cut is expected to cross over some or all attack edges, effectively separating real accounts from fakes, even if the number of attack edges is large. We aim to find such a cut using a ranking scheme that ideally assigns higher ranks to nodes on one side of the cut than the other, as one way to separate real accounts from fakes. This ranking scheme is inspired by similar graph partitioning algorithms proposed by Spielman et al. [103], Yu [137], and Cao et al. [15].

3.2.2 Design Goals

Íntegro is designed to help OSNs detect fake accounts, which are capable of large-scale social infiltration, through a user ranking scheme. In particular, Íntegro has the following design goals:

Effectiveness: High-Quality User Ranking

The system should ideally assign higher ranks to real accounts than fakes. If not, it should limit the number of fakes that might rank similar to or higher than real accounts. In practice, the ranking should have an area under the ROC curve (AUC) that is greater than 0.5 and closer to 1, where the AUC represents the probability of a random real account ranking higher than a random fake account [122].

[Figure 3.1: System model — the real and fake regions are connected by attack edges between victim and fake accounts; node types: real, trusted, victim, fake; an example feature vector of node B is shown (Gender: Male, #Friends: 3, …, #Posts: 10).]

Also, the system should be robust against social infiltration under real-world attack strategies. Given a ranked list of users, a high percentage of the users at the bottom of the list should be fake. This percentage, which represents the precision of detection, should significantly decrease as we move to higher ranks in the list.

Efficiency: Scalability and Easy Deployment

The system should have a practical computational cost that allows it to scale to large OSNs. In other words, it should scale nearly linearly with the number of user accounts in the OSN, and deliver ranking results in only a few minutes.
The system should be able to extract useful, low-cost features and process large graphs on commodity machines, so that OSN operators can deploy it on their existing computer clusters.

3.2.3 System Model

As illustrated in Figure 3.1, we model an OSN as an undirected graph G = (V, E), where each node vi ∈ V represents a user account and each edge {vi, vj} ∈ E represents a bilateral social relationship between vi and vj. In the graph G, there are n = |V| nodes and m = |E| edges.

Attributes

Each node vi has a degree deg(vi), which is equal to the sum of weights on edges incident to the node. In addition, vi has a feature vector A(vi), where each entry aj ∈ A(vi) describes a feature or an attribute of the account. Each edge {vi, vj} ∈ E has a weight w(vi, vj) ∈ (0, 1], which is initially set to 1.

Regions

The node set V is divided into two disjoint sets, Vr and Vf, representing real and fake accounts, respectively. We refer to the subgraph induced by Vr as the real region Gr, which includes all real accounts and the friendships between them. Likewise, we refer to the subgraph induced by Vf as the fake region Gf. The regions are connected by a set of attack edges Ea between victim and fake accounts. We assume the OSN operator is aware of a small set of trusted accounts Vt, also called seeds, which are known to be real accounts and are not victims.

3.3 System Design

We now describe the design of Íntegro. We start with a short overview of our approach, after which we proceed with a detailed description of system components.

3.3.1 Overview

In the beginning, Íntegro trains a victim classifier using low-cost features extracted from user-level activities. This feature-based binary classifier is used to identify potential victims in the graph, each with some probability (Section 3.3.2). After that, Íntegro calculates new edge weights from the probabilities computed by the victim classifier in such a way that edges incident to potential victims have lower weights than others.
Íntegro then ranks user accounts based on the landing probability of a modified random walk that starts from a trusted account, or a seed node, picked at random from all trusted accounts (Section 3.3.3).

The random walk is "short" because it is terminated after O(log n) steps, early before it converges. The walk is "supervised," as it is biased towards traversing nodes which are reachable via higher-weight paths. This modified random walk has a higher probability to stay in the real region of the graph, because it is highly unlikely to escape into the fake region in a few steps through low-weight attack edges. As a result, Íntegro ranks most real accounts higher than fakes, given a seed selection strategy that considers the existing community structures in the real region (Section 3.3.4).

Íntegro takes O(n log n) time to complete its computation (Section 3.3.5). In addition, it formally guarantees that at most O(vol(Ea) log n) fake accounts can have the same or higher ranks than real accounts in the worst case, given the fakes establish |Ea| attack edges at random (Section 3.3.6).

3.3.2 Identifying Potential Victims

For each user vi, Íntegro extracts a feature vector A(vi) from its user-level activities. A subset of the feature vectors is selected to train a binary classifier in order to identify potential victims using supervised machine learning, as follows:

Feature Engineering

Extracting and selecting useful features from user-level activities can be a challenging and time-consuming task. To be efficient, we seek low-cost features which could be extracted in O(1) time per user account. One possible location for extracting such features is the profile page of user accounts, where features are readily available (e.g., a Facebook profile page). However, low-cost features are expected to be statistically weak, which means they may not strongly correlate with the label of a user account (i.e., victim or not).
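For illustration, the profile-based feature vector of node B in Figure 3.1 (gender, number of friends, number of posts) could be assembled along the following lines; the dictionary keys are hypothetical placeholders rather than an actual profile schema.

```python
# Sketch of low-cost feature extraction from an already-fetched profile page.
# Every field is read directly from the profile, so building one feature
# vector costs O(1) time per account. Key names are illustrative.

def feature_vector(profile):
    return (
        profile.get("gender", "unknown"),
        profile.get("num_friends", 0),
        profile.get("num_posts", 0),
    )

# Feature vector of node B from Figure 3.1: male, 3 friends, 10 posts.
A_B = feature_vector({"gender": "male", "num_friends": 3, "num_posts": 10})
print(A_B)  # ('male', 3, 10)
```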
As we explain later, we require the victim classifier to be better than random in order to deliver robust fake account detection. This requirement, fortunately, is easy to satisfy. In particular, we show in Section 3.4 that an OSN can train and cross-validate a victim classifier that is up to 52% better than random, using strictly low-cost features.

Supervised Machine Learning

For each user vi, Íntegro computes a vulnerability score p(vi) ∈ (0, 1) that represents the probability that vi is a victim. Given an operating threshold α ∈ (0, 1) with a default value α = 0.5, we say vi is a potential victim if p(vi) ≥ α. To compute vulnerability scores, Íntegro uses the random forests (RF) learning algorithm [12] in order to train a victim classifier, which, given A(vi) and α, decides whether the user vi is a potential victim with a vulnerability score p(vi). We picked the RF learning algorithm because it is both efficient and robust against model over-fitting [47]. Íntegro takes O(n log n) time to extract n feature vectors and train an RF-based victim classifier. It also takes O(n) time to compute vulnerability scores for all users, given their feature vectors and the trained victim classifier.

Robustness

As attackers do not control victims, a victim classifier is inherently more resilient to adversarial attacks than a similarly-trained fake account classifier. Let us consider one concrete example. In the "boiling-frog" attack [120], fake accounts can force a classifier to tolerate abusive activities by slowly introducing similar activities to the OSN. Because the OSN operator has to retrain all deployed classifiers in order to capture new behaviors, a fake account classifier will learn to tolerate more and more abusive activities, up until the attacker can launch a full-scale attack without detection [10]. When identifying potential victims, however, this is only possible if the real accounts used to train the victim classifier have been compromised.
This situation can be avoided by verifying the accounts, as described in Section 3.1.3.

Generalization

While machine learning-based victim classification is imperfect and will typically result in false positives, it is highly scalable and requires only a small ground-truth to generalize the classification to the entire user-base, as we show in Section 3.4.5. This ability of a methodically trained and validated classifier to generalize, so that it performs accurately on previously unseen feature vectors, is key for the predictive power of machine learning [47]. Otherwise, an OSN has to collect the missing ground-truth by directly testing whether each user is likely to be a victim, possibly via a benign social infiltration campaign that is run against the entire user-base. Such a brute-force approach to identifying potential victims is infeasible for three main reasons. First, it has a considerably high cost to the OSN and its users, as it needs to be administered regularly against all accounts to capture changing user behaviors. Second, flooding the network with friend requests at random is highly inefficient, as only a small fraction of users are expected to become victims. Third and last, forcing users to participate in the test introduces its own ethical concerns [9], as it could result in risky user behaviors such as emotional attachment, in addition to the corresponding negative impact on user experience.

3.3.3 Ranking User Accounts

To rank users, Íntegro computes the probability of a modified random walk to land on each user vi after k steps, where the walk starts from a trusted user account picked at random. For simplicity, we refer to the probability of a random walk to land on a node as its trust value, so the probability distribution of the walk at each step can be modeled as a trust propagation process [45]. In this process, a weight w(vi, vj) represents the rate at which trust may propagate from either side of the edge {vi, vj} ∈ E.
We next describe this process in detail.

Trust Propagation

Íntegro utilizes the power iteration method to efficiently compute trust values [40]. This method involves successive matrix multiplications where each element of the matrix is the transition probability of the random walk from one node to another. Each iteration computes the trust distribution over nodes as the random walk proceeds by one step. Let Tk(vi) denote the trust collected by each node vi ∈ V after k iterations. Initially, the total trust, denoted by τ ≥ 1, is evenly distributed among the trusted nodes in Vt:

\[
T_0(v_i) =
\begin{cases}
\tau / |V_t| & \text{if } v_i \in V_t, \\
0 & \text{otherwise.}
\end{cases}
\tag{3.1}
\]

The process then proceeds as follows:

\[
T_k(v_i) = \sum_{\{v_i, v_j\} \in E} T_{k-1}(v_j) \cdot \frac{w(v_i, v_j)}{\deg(v_j)},
\tag{3.2}
\]

where in iteration k, each node vi propagates its trust Tk−1(vi) from iteration k−1 to each neighbour vj, proportionally to the ratio w(vi, vj)/deg(vi). This is required so that the sum of the propagated trust equals Tk−1(vi). The node vi then collects the trust propagated similarly from each neighbour vj and updates its trust Tk(vi). Throughout this process, τ is preserved such that for each iteration k ≥ 1 we have:

\[
\sum_{v_i \in V} T_{k-1}(v_i) = \sum_{v_i \in V} T_k(v_i) = \tau.
\tag{3.3}
\]

Our goal here is to ensure that most real accounts collect higher trust than fake accounts. In other words, we want to strictly limit the portion of τ that escapes the real region Gr and enters the fake region Gf. To achieve this propagation property, we make the following modifications.

Adjusted Propagation Rates

In each iteration k, the aggregate rate at which τ may enter Gf is strictly limited by the sum of weights on the attack edges, which we denote by the volume vol(Ea). Therefore, we aim to adjust the weights in the graph such that vol(Ea) ∈ (0, |Ea|], without severely restricting trust propagation in Gr.
We accomplish this by assigning smaller weights to edges incident to potential victims than to other edges. In particular, each edge {vi, vj} ∈ E keeps the default weight w(vi, vj) = 1 if vi and vj are both not potential victims. Otherwise, we modify the weight as follows:

\[
w(v_i, v_j) = \min\bigl\{1,\ \beta \cdot (1 - \max\{p(v_i), p(v_j)\})\bigr\},
\tag{3.4}
\]

where β is a scaling parameter with a default value of β = 2. Now, as vol(Ea) → 0, the portion of τ that enters Gf reaches zero, as desired.

For proper degree normalization, we introduce a self-loop {vi, vi} with weight w(vi, vi) = (1 − deg(vi))/2 whenever deg(vi) < 1. Notice that self-loops are considered twice in degree calculation.

Early Termination

In each iteration k, the trust vector Tk(V) = ⟨Tk(v1), ..., Tk(vn)⟩ describes the distribution of τ throughout the graph. As k → ∞, the vector converges to a stationary distribution T∞(V), as follows [5]:

\[
T_\infty(V) = \left\langle \tau \cdot \frac{\deg(v_1)}{\mathrm{vol}(V)},\ \ldots,\ \tau \cdot \frac{\deg(v_n)}{\mathrm{vol}(V)} \right\rangle,
\tag{3.5}
\]

where the node-set volume vol(V) in this case is the sum of degrees of nodes in V. In particular, Tk(V) converges after k reaches the mixing time of the graph, which has been shown to be much larger than O(log n) iterations for various kinds of social networks [21, 72, 83]. Accordingly, we terminate the propagation process early, after ω = O(log n) iterations and before it converges.

Degree Normalization

As described in Equation 3.5, trust propagation is influenced by individual node degrees. As k grows large, the propagation is expected to bias towards high-degree nodes. This implies that high-degree fake accounts may collect more trust than low-degree real accounts, which is undesirable for effective user ranking. To eliminate this node degree bias, we normalize the trust collected by each node by its degree. We assign each node vi ∈ V after ω = O(log n) iterations a rank value T′ω(vi) that is equal to its degree-normalized trust:

\[
T'_\omega(v_i) = T_\omega(v_i) / \deg(v_i).
\tag{3.6}
\]
Finally, we sort the nodes by their ranks in a descending order.

Example

Figure 3.2 depicts trust propagation in a graph. In this example, we assume each account has a vulnerability score of 0.05, except the victim E, which has a score of p(E) = 0.95. The graph is weighted using α = 0.5 and β = 2, and a total trust τ = 1000 is initialized over the trusted nodes {C, D}. Each value is rounded to its nearest natural number. The values with parentheses represent the corresponding degree-normalized trust (i.e., rank values).

In Figure 3.2b, after ω = 4 iterations, all real accounts {A, B, C, D, E} collect more trust than fake accounts {F, G, H, I}. Moreover, the nodes receive the correct ranking of (D, A, B, C, E, F, G, H, I), as sorted by their degree-normalized trust. In particular, all real accounts have higher rank values than fakes, where the smallest difference is T′4(E) − T′4(F) > 40. Notice that real accounts that are not victims have similar rank values, where the largest difference is T′4(D) − T′4(C) < 12. These sorted rank values, in fact, could be visualized as a stretched-out step function that has a significant drop near the victim's rank value.

If we allowed the process to converge after k > 50 iterations, the fakes collect similar or higher trust than real accounts, following Equation 3.5, as shown in Figure 3.2c. In either case, the attack edges Ea = {{E, G}, {E, F}, {E, H}} have a volume of vol(Ea) = 0.3, which is 10 times lower than its value if the graph had unit weights, with vol(Ea) = 3.
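The weighting and ranking pipeline walked through above (Equations 3.1, 3.2, 3.4, and 3.6) can be sketched in a few lines of Python. This is a simplified, in-memory sketch on a small hypothetical graph (not the one in Figure 3.2), and it omits the self-loop adjustment for very low-degree nodes.

```python
import math

def edge_weight(p_i, p_j, alpha=0.5, beta=2.0):
    # Equation 3.4: down-weight an edge if either endpoint is a potential victim.
    if max(p_i, p_j) < alpha:
        return 1.0
    return min(1.0, beta * (1.0 - max(p_i, p_j)))

def rank_accounts(nodes, edges, scores, trusted, total_trust=1000.0):
    w = {e: edge_weight(scores[e[0]], scores[e[1]]) for e in edges}
    deg = {v: 0.0 for v in nodes}                      # weighted degrees
    for (u, v), wt in w.items():
        deg[u] += wt
        deg[v] += wt
    # Equation 3.1: trust starts evenly divided among the trusted seeds.
    trust = {v: total_trust / len(trusted) if v in trusted else 0.0
             for v in nodes}
    # Early termination after O(log n) iterations, before convergence.
    for _ in range(math.ceil(math.log2(len(nodes))) + 1):
        nxt = {v: 0.0 for v in nodes}
        # Equation 3.2: each node spreads its trust over its incident edges,
        # proportionally to each edge's share of the node's weighted degree.
        for (u, v), wt in w.items():
            nxt[u] += trust[v] * wt / deg[v]
            nxt[v] += trust[u] * wt / deg[u]
        trust = nxt
    # Equation 3.6: degree-normalize to remove the bias toward high degrees.
    return {v: trust[v] / deg[v] for v in nodes}

# Toy graph: A-D are real (A and B trusted), E is a victim, F and G are fakes.
nodes = list("ABCDEFG")
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"),
         ("C", "E"), ("D", "E"), ("E", "F"), ("E", "G"), ("F", "G")]
scores = {v: 0.05 for v in nodes}
scores["E"] = 0.95  # E is a potential victim
ranks = rank_accounts(nodes, edges, scores, trusted={"A", "B"})
```

On this toy graph, every real account (including the victim E) ends up with a higher degree-normalized trust than either fake, since the low-weight attack edges throttle how much of τ can escape into the fake region before the walk is terminated.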
As we show in Section 3.4, early termination and propagation rate adjustment are essential for robustness against social infiltration.

[Figure 3.2: Trust propagation example — (a) initialization, (b) after 4 iterations, (c) stationary distribution; edge weights: high = 1.0, complementary = 0.25, low = 0.1.]

3.3.4 Selecting Trusted Accounts

Íntegro is robust against social infiltration, as it limits the portion of τ that enters Gf by the rate vol(Ea), regardless of the number of attack edges, |Ea|. For the case when there are few attack edges so that Gr and Gf are sparsely connected, vol(Ea) is already small, even if one keeps w(vi, vj) = 1 for each attack edge {vi, vj} ∈ Ea. However, Gr is likely to contain communities [71, 72], where each represents a dense subgraph that is sparsely connected to the rest of the graph. In this case, the propagation of τ in Gr becomes restricted by the sparse inter-community connectivity, especially if Vt is contained exclusively in a single community. We therefore seek a seed selection strategy for trusted accounts, which takes into account the existing community structure in the graph.

Seed Selection Strategy

We pick trusted accounts as follows. First, before rate adjustment, we estimate the community structure in the graph using a community detection algorithm called the Louvain method [7]. Second, after rate adjustment, we exclude potential victims and pick small samples of nodes from each detected community at random. Third and last, we inspect the sampled nodes in order to verify they correspond to real accounts that are not victims.
We initialize the trust only between the accounts that pass manual verification by experts.

In addition to coping with the existing community structure in the graph, this selection strategy is designed to also reduce the negative impact of seed-targeting attacks. In these attacks, fake accounts befriend trusted accounts in order to adversely improve their own ranking, as the total trust τ is initially distributed among trusted accounts. By choosing the seeds at random, however, the attacker is forced to guess the seeds among a large number of nodes. Moreover, by choosing multiple seeds, the chance of correctly guessing the seeds is further reduced, while the amount of trust assigned to each seed is lowered. In practice, the number of seeds depends on available resources for manual account verification, with a minimum of one seed per detected community.

Community Detection

We chose the Louvain method because it is both efficient and outputs high-quality partitions. This method iteratively groups closely connected communities together to greedily improve the modularity of the partition [93], which is one measure of partition quality. In each iteration, each node represents one community, and well-connected neighbors are greedily combined into the same community. At the end of the iteration, the graph is reconstructed by converting the resulting communities into nodes and adding edges that are weighted by inter-community connectivity. Each iteration takes O(m) time, and only a small number of iterations is required to find the community structure which greedily maximizes the modularity.

While one can apply community detection to identify fake accounts [122], doing so hinges on the assumption that fakes always form tightly-knit communities, which is not necessarily true [134]. This also means fakes can easily evade detection if they establish sparse connectivity among themselves [2]. With Íntegro, we do not make such an assumption.
In particular, we consider an attacker who can befriend a large number of real or fake accounts, without any formal restrictions.

3.3.5 Computational Cost

For an OSN with n users and m friendships, Íntegro takes O(n log n) time to complete its computation, end-to-end. We next analyze the running time in detail. First, recall that users have a limit on how many friends they can have (e.g., 5K in Facebook, 1K in Tuenti), so we have O(m) = O(n). Identifying potential victims takes O(n log n) time, where it takes O(n log n) time to train an RF classifier and O(n) time to compute vulnerability scores. Also, weighting the graph takes O(m) time. Detecting communities takes O(n) time, where each iteration of the Louvain method takes O(m) time, and the graph rapidly shrinks in O(1) time. Propagating trust takes O(n log n) time, as each iteration takes O(m) time and the propagation process iterates for O(log n) times. Ranking and sorting users by their degree-normalized trust takes O(n log n) time. So, the running time is O(n log n).

3.3.6 Security Guarantees

In the following analysis, we consider attackers who establish attack edges with victims uniformly at random. For analytical tractability, we assume the real region is fast mixing. This means that it takes O(log |Vr|) iterations for trust propagation to converge in the real region. We refer the interested reader to Appendix A for a complete analysis, including theoretical background and formal proofs.

Sensitivity to Mixing Time

The ranking quality of Íntegro does not rely on the absolute value of the mixing time in the real region. Instead, it only requires that the whole graph has a longer mixing time than the real region. Under this condition, the early-terminated propagation process results in a gap between the degree-normalized trust of fakes and real accounts. Ideally, the number of iterations that Íntegro performs is set equal to the mixing time of the real region.
Regardless of whether the mixing time of the real region is O(log n) or larger, setting the number of iterations to this value results in an almost uniform degree-normalized trust among the real accounts [15]. If the mixing time of the real region is larger than O(log n), the trust that escapes to the fake region is further limited. However, we also run the risk of starving real accounts that are loosely connected to trusted accounts. This risk is mitigated by placing trusted accounts in many communities, and by dispersing multiple seeds in each community, thereby ensuring that the trust is initiated somewhere close to those real accounts, as described in Section 3.3.4.

Bound on Ranking Quality

The main security guarantee offered by Íntegro is captured by the following theoretical result:

Theorem 1: Given a social graph with a fast mixing real region and an attacker who randomly establishes attack edges, the number of fake accounts that rank similar to or higher than real accounts after O(log n) iterations is O(vol(Ea) log n).

Proof sketch. Consider a graph G = (V, E) with a fast mixing real region Gr. As weighting a graph changes its mixing time by a constant factor [102], Gr remains fast mixing after rate adjustment.

After O(log n) iterations, the trust vector Tω(V) does not reach its stationary distribution T∞(V). Since trust propagation starts from Gr, the fake region Gf gets only a fraction f < 1 of the aggregate trust it should receive in T∞(V). On the other hand, as the trust τ is conserved during the propagation process (Equation 3.3), Gr gets c > 1 times higher aggregate trust than it should receive in T∞(V).

As Gr is fast mixing, each real account vi ∈ Vr receives an approximately identical rank value of T′ω(vi) = c · τ/vol(V), where τ/vol(V) is the degree-normalized trust value in T∞(V) (Equations 3.5 and 3.6). Knowing that Gf is controlled by the attacker, each fake vj ∈ Vf receives a rank value T′ω(vj) that depends on how the fakes inter-connect to each other.
However, since the aggregate trust in Gf is bounded, each fake receives on average a rank value of T′ω(vj) = f · τ/vol(V), which is less than that of a real account. In the worst case, an attacker can arrange a set Vm ⊂ Vf of fake accounts in Gf such that each vk ∈ Vm receives a rank value of T′ω(vk) = c · τ/vol(V), while the remaining fakes receive a rank value of zero. Such a set cannot have more than (f/c) · vol(Vf) = O(vol(Ea) log n) accounts, as otherwise, f would not be less than 1 and Gf would receive more trust than it should in Tω(V).

Improvement over SybilRank

Íntegro shares many design traits with SybilRank. In particular, modifying Íntegro by setting w(vi, vj) = 1 for each (vi, vj) ∈ E will result in a ranking similar to that computed by SybilRank [15]. It is indeed the incorporation of victim classification into user ranking that differentiates Íntegro from other proposals, giving it the unique advantages outlined earlier.

As stated by Theorem 1, the bound on ranking quality relies on vol(Ea), regardless of how large the set Ea grows. As we weight the graph based on the output of the victim classifier, our bound is sensitive to its classification performance. We next prove that if an OSN operator uses a victim classifier that is uniformly random, which means each user account vi ∈ V is considered equally vulnerable with p(vi) = 0.5, then Íntegro is only as good as SybilRank in terms of ranking quality:

Corollary 1: For a uniformly random victim classifier, the number of fakes that rank similar to or higher than real accounts after O(log n) iterations is O(|Ea| log n).

Proof. This classifier assigns each user account vi ∈ V a score p(vi) = 0.5. By Equation 3.4, each edge {vi, vj} ∈ E is assigned a unit weight w(vi, vj) = 1, where α = 0.5 and β = 2.
By Theorem 1, the number of fake accounts that rank similar to or higher than real accounts after ω = O(log n) iterations is O(vol(Ea) log n) = O(|Ea| log n).

By Corollary 1, Íntegro can outperform SybilRank in its ranking quality by a factor of O(|Ea|/vol(Ea)), given that the used victim classifier is better than random. This condition can be validated during the cross-validation phase of the victim classifier, which we thoroughly describe and evaluate in what follows.

3.4 Comparative Evaluation

We now present a comprehensive comparative evaluation of Íntegro and SybilRank. First, we explain our choice of SybilRank (Section 3.4.1), after which we outline a summary of the evaluation methodology (Section 3.4.2). This is followed by a detailed description of the used datasets (Section 3.4.3) and system implementation (Section 3.4.4). Next, we evaluate Íntegro for victim classification (Section 3.4.5), and then systematically compare it to SybilRank in terms of ranking quality, sensitivity to seed-targeting attacks, and real-world deployment (Sections 3.4.6–3.4.8). Finally, we evaluate the scalability of Íntegro in terms of its execution time using a synthetic benchmark of large OSNs (Section 3.4.9).

3.4.1 Compared System

We chose SybilRank for two reasons. First, SybilRank uses a similar power iteration method, albeit on an unweighted version of the graph. This similarity allowed us to clearly show the impact of leveraging victim classification on fake account detection. Second, SybilRank was shown to outperform known contenders [15], including EigenTrust [60], SybilGuard [138], SybilLimit [139], SybilInfer [19], Mislove's method [122], and GateKeeper [118]. We next contrast these systems to both SybilRank and Íntegro.

SybilGuard [138] and SybilLimit [139] identify fake accounts based on a large number of modified random walks, where the computational cost is O(√(mn) log n) in a centralized setting like OSNs.
SybilInfer [19], on the other hand, uses Bayesian inference techniques to assign each user account a probability score of being fake in O(n (log n)²) time per trusted account. The system, however, does not provide analytical bounds on its ranking quality.

GateKeeper [118], a flow-based fake account detection approach, improves over SumUp [117]. GateKeeper relies on strong assumptions that require balanced graphs, and costs O(n log n) time per trusted account or “ticket source.”

Viswanath et al. employed Mislove's algorithm [82] to greedily expand a local community around a set of trusted accounts in order to partition the graph into two communities representing the real and fake regions [122]. This algorithm, however, costs O(n²) time, and its detection can be easily evaded if the fakes establish sparse connectivity among themselves [2].

Compared to these systems, SybilRank provides an equivalent or tighter security bound and is more computationally efficient, as it requires O(n log n) time regardless of the number of trusted accounts. Compared to SybilRank, Íntegro provides an O(|Ea|/vol(Ea)) improvement on its security bound, requires the same O(n log n) time, and is robust against social infiltration, unlike all other systems.

3.4.2 Methodology

Our objective is to empirically validate the system design from five different aspects: victim classification, ranking quality, sensitivity to seed-targeting attacks, large-scale deployment, and scalability. To achieve this, we implemented and evaluated both Íntegro and SybilRank using two real-world datasets recently collected from Facebook and Tuenti. We also compared both systems through a production-class deployment at Tuenti in collaboration with its “Site Integrity” team, which has 14 full-time account analysts and 10 full-time software engineers who fight spam and other forms of abuse. In order to evaluate system scalability, we tested Íntegro using a benchmark of five synthetic OSNs with an exponentially increasing number of users.
We provide more details on our methodology in the following sections.

3.4.3 Datasets

We used two real-world datasets from two different OSNs. The first dataset was collected from Facebook during the study described in Chapter 2, and contained public user profiles and two graph samples. The second dataset was collected by Tuenti from their production servers, and contained a day's worth of server-cached user profiles. We had to sign a non-disclosure agreement with Tuenti in order to access an anonymized, aggregated version of its user data, with the whole process being mediated by Tuenti's Site Integrity team.

The Ground-truth

For the Facebook dataset, we used the ground-truth of the original study, which we also re-validated for the purpose of this work, as we describe next. For the Tuenti dataset, all accounts were manually inspected and labeled by its account analysts. The inspection included matching user profile photos to the declared age and address, understanding natural language in user posts and messages, examining the user's friends, and analyzing the user's IP address and HTTP-related information.

Facebook

The dataset contained public profile pages of 9,646 real users who received friend requests from fake accounts. Since the dataset was collected in early 2011, we wanted to verify whether these users were still active on Facebook. Accordingly, we revisited their public profiles in June 2013. We found that 7.9% of these accounts were either disabled by Facebook or deactivated by their users. Therefore, we excluded these accounts, ending up with 8,888 accounts, out of which 32.4% were victims who accepted a single friend request sent by a fake posing as a stranger.

The dataset also included two graph samples of Facebook, which were collected using a stochastic version of the Breadth-First Search method called “forest fire” [70].
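The forest-fire idea is a breadth-first search in which each visited node "burns" only a geometrically distributed number of its unvisited neighbors, rather than all of them. The sketch below is a minimal stdlib illustration of that idea, not the exact sampler of [70] or the one used in the original study; the parameter names and defaults are illustrative.

```python
import random
from collections import deque

def forest_fire_sample(adj, seed, p_forward=0.7, max_nodes=1000, rng=None):
    """Forest-fire sampling: a stochastic BFS where each dequeued node
    burns a geometric number of its unvisited neighbors.

    adj: {node: set_of_neighbors}; returns the set of sampled nodes.
    """
    rng = rng or random.Random()
    visited = {seed}
    queue = deque([seed])
    while queue and len(visited) < max_nodes:
        u = queue.popleft()
        candidates = [v for v in adj[u] if v not in visited]
        rng.shuffle(candidates)
        # geometric number of links to follow, mean p_forward / (1 - p_forward)
        n_burn = 0
        while rng.random() < p_forward:
            n_burn += 1
        for v in candidates[:n_burn]:
            visited.add(v)
            queue.append(v)
    return visited
```

Unlike plain BFS, the sample keeps local clustering around the seed while still reaching distant parts of the graph, which is why it is a popular way to draw representative subgraphs from large OSNs.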
The first graph consisted of 2,926 real accounts with 9,124 friendships (i.e., the real region), 65 fakes with 2,080 friendships (i.e., the fake region), and 748 timestamped attack edges. The second graph consisted of 6,136 real accounts with 38,144 friendships, and represented the real region only.

Tuenti

The dataset contained profiles of 60K real users who received friend requests from fake accounts, out of which 50% were victims. The dataset was collected on February 10, 2014 from live production servers, where the data resided in memory and no expensive back-end queries were issued. For Tuenti, collecting this dataset was a low-cost and easy process, as it only involved reading cached user profiles of a subset of its daily active users, that is, users who logged in to Tuenti on that particular day.

3.4.4 Implementation

We developed two implementations for each of Íntegro and SybilRank. In the first implementation, we used the SyPy framework, our open-source Python package for single-machine comparative evaluation of graph-based Sybil account detection algorithms. We extended the framework by integrating SciKit-Learn (http://scikit-learn.org), a Python package for machine learning algorithms. We used this implementation for the evaluation presented in Sections 3.4.5–3.4.7. We refer the interested reader to Appendix B for more details on SyPy.

In the second implementation, we used Mahout and Giraph, which are widely-used, open-source distributed machine learning and graph processing systems. We also publicly released this implementation as part of GrafosML, which provides an open-source toolset for big data analytics on top of Mahout and Giraph. We used this implementation for the evaluation presented in Sections 3.4.8–3.4.9.

3.4.5 Victim Classification

We sought to validate the following claim: an OSN can identify potential victims with a probability that is better than random, using strictly low-cost features extracted from readily-available user profiles.
We note, however, that it is possible to achieve better classification performance, at the price of a higher computational cost, by using advanced learning algorithms with temporal activity features [47].

Features

As described in Table 3.1, we extracted features from both datasets to generate feature vectors. The only requirement for feature selection was to have each feature value available for all users in the dataset, so that the resulting feature vectors are complete. For the Facebook dataset, we were able to extract 18 features from public user profiles. For the Tuenti dataset, however, only 14 features were available, but they included user features that are not publicly accessible.

In Table 3.1, the RI score stands for the relative importance of the feature, which we define in the upcoming discussion. An RI score of “N/A” means the feature was not available for the corresponding dataset. A k-categorical feature is one that can take one value out of k unique categories. For example, boolean features are 2-categorical.

Classifier Tuning

The RF learning algorithm is an ensemble algorithm, where a set of decision trees is constructed at training time. When evaluating the classifier on new data (i.e., unlabeled feature vectors), the decisions from all trees are combined using a majority voting aggregator [12]. Each decision tree in the forest uses a small random subset of the available features in order to decrease the generalization error, which measures how well the trained classifier generalizes to unseen data [47]. As shown in Figure 3.3, we performed parameter tuning to calibrate the RF classifier.
In particular, we used the out-of-bag error estimates computed by the RF algorithm to numerically find the best number of decision trees and the best number of features for each tree, so that the prediction variance and bias are controlled across the trees. For the Facebook dataset, we used 450 decision trees, where each tree had 3 features picked at random out of the 18 features. For the Tuenti dataset, we used 500 decision trees, where each tree had 7 features picked at random out of the 14 features.

Feature | Brief description | Type | RI Facebook (%) | RI Tuenti (%)
User activity:
Friends | Number of friends the user had | Numeric | 100.0 | 84.5
Photos | Number of photos the user shared | Numeric | 93.7 | 57.4
Feed | Number of news feed items the user had | Numeric | 70.6 | 60.8
Groups | Number of groups the user was a member of | Numeric | 41.8 | N/A
Likes | Number of likes the user made | Numeric | 30.6 | N/A
Games | Number of games the user played | Numeric | 20.1 | N/A
Movies | Number of movies the user watched | Numeric | 16.2 | N/A
Music | Number of albums or songs the user listened to | Numeric | 15.5 | N/A
TV | Number of TV shows the user watched | Numeric | 14.2 | N/A
Books | Number of books the user read | Numeric | 7.5 | N/A
Personal messaging:
Sent | Number of messages sent by the user | Numeric | N/A | 53.3
Inbox | Number of messages in the user's inbox | Numeric | N/A | 52.9
Privacy | Privacy level for receiving messages | 5-categorical | N/A | 9.6
Blocking actions:
Users | Number of users blocked by the user | Numeric | N/A | 23.9
Graphics | Number of graphics (photos) blocked by the user | Numeric | N/A | 19.7
Account information:
Last updated | Number of days since the user updated the profile | Numeric | 90.77 | 32.5
Highlights | Number of years highlighted in the user's timeline | Numeric | 36.3 | N/A
Membership | Number of days since the user joined the OSN | Numeric | 31.7 | 100.0
Gender | User is male or female | 2-categorical | 13.8 | 7.9
Cover picture | User has a cover picture | 2-categorical | 10.5 | < 0.1
Profile picture | User has a profile picture | 2-categorical | 4.3 | < 0.1
Pre-highlights | Number of years highlighted before 2004 | Numeric | 3.9 | N/A
Platform | User disabled third-party API integration | 2-categorical | 1.6 | < 0.1

Table 3.1: Low-cost features extracted from Facebook and Tuenti

Figure 3.3: Victim classifier tuning. (a) Generalization error vs. number of decision trees. (b) Generalization error vs. number of features in each tree.

Validation Method

To evaluate the accuracy of the victim classifier, we performed 10-fold, stratified cross-validation [47] using the RF learning algorithm after parameter tuning. First, we randomly partitioned the dataset into 10 equally-sized sets, with each set having the same percentage of victims as the complete dataset. We next trained an RF classifier using 9 sets and tested it using the remaining set. We repeated this procedure 10 times (i.e., folds), with each of the sets being used once for testing. Finally, we combined the results of the folds by computing the mean of their true-positive rate (TPR) and false-positive rate (FPR).

Performance Metrics

The output of the classifier depends on its operating threshold, which is a cutoff value in the prediction probability after which the classifier identifies a user as a potential victim. In order to capture the trade-off between the TPR and FPR in a single curve, we repeated the cross-validation method under different threshold values using a procedure known as receiver operating characteristic (ROC) analysis. In ROC analysis, the closer the curve is to the top-left corner at point (0,1), the better the classification performance is.
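The threshold sweep behind ROC analysis can be made concrete with a short sketch. This is a stdlib illustration of the standard procedure, not the SciKit-Learn routine used in the actual evaluation; it assumes tie-free prediction scores and labels where 1 marks a victim.

```python
def roc_curve(labels, scores):
    """ROC points (FPR, TPR) obtained by sweeping the operating threshold
    over all prediction scores, plus the trapezoidal area under the curve.

    labels: 1 for victim, 0 for non-victim; scores: predicted probabilities.
    """
    pairs = sorted(zip(scores, labels), reverse=True)  # highest score first
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    # trapezoidal rule for the area under the curve (AUC)
    auc = sum((x2 - x1) * (y1 + y2) / 2
              for (x1, y1), (x2, y2) in zip(points, points[1:]))
    return points, auc
```

Lowering the threshold admits one account at a time, so each prediction moves the curve either up (a correctly flagged victim) or right (a false alarm); a classifier no better than random traces the diagonal with AUC 0.5.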
The quality of the classifier can be quantified with a single value by calculating the area under its ROC curve (AUC) [47], such that an AUC of 1 means a perfect classifier, while an AUC of 0.5 means a random classifier. The victim classifier has to be better than random, with an AUC > 0.5.

Figure 3.4: Victim classification using the RF algorithm. (a) ROC analysis (Tuenti AUC = 0.76, Facebook AUC = 0.7, random AUC = 0.5). (b) Sensitivity to dataset size.

We also recorded the relative importance (RI) of the features used for classification. The RI score is computed by the RF learning algorithm, and it describes the relative contribution of each feature to the predictability of the label, that is, being a victim or a non-victim, when compared to all other features [12].

Results

For both datasets, the victim classifier ended up with an AUC greater than 0.5, as depicted in Figure 3.4a. In particular, for the Facebook dataset, the classifier delivered an AUC of 0.7, which is 40% better than random. For the Tuenti dataset, on the other hand, the classifier delivered an AUC of 0.76, which is 52% better than random. Also, increasing the dataset size beyond 40K feature vectors did not significantly improve the AUC in cross-validation, as shown in Figure 3.4b. This means an OSN operator can train a victim classifier using a relatively small dataset, so fewer accounts need to be manually verified.

We also experimented with two other widely-used learning algorithms for victim classification, namely, Naïve Bayes (NB) and Support Vector Machines (SVM) [47]. However, both of these algorithms resulted in smaller AUCs on both datasets. In particular, for the Facebook dataset, the NB classifier achieved an AUC of 0.63 and the SVM classifier achieved an AUC of 0.57.
Similarly, for the Tuenti dataset, the NB classifier achieved an AUC of 0.64 and the SVM classifier achieved an AUC of 0.59. This result is not surprising, as ensemble learning algorithms, such as the RF algorithm, achieve better predictive performance when the individual classifiers are “weak,” meaning they have small AUCs but are still better than random [47].

3.4.6 Ranking Quality

We compared Íntegro against SybilRank in terms of their ranking quality under various attack scenarios, where ideally real accounts should be ranked higher than fake accounts. Our results are based on the average of at least 10 runs, with error bars reporting 95% confidence intervals (CI), when applicable. For this comparison, we picked the Facebook dataset because it included both feature vectors and graph samples.

Infiltration Scenarios

We considered two real-world attack scenarios which have been shown to be successful in practice. In the first scenario, attackers establish attack edges by targeting users with whom their fakes have mutual friends [10]. Accordingly, we used the first Facebook graph, which contained timestamped attack edges, allowing us to replay the infiltration by 65 socialbots (n = 2,991 and m = 11,952). We refer to this scenario as the targeted-victim attack.

In the second scenario, attackers establish attack edges by targeting users at random [15]. We designated the second Facebook graph as the real region. We then generated a synthetic fake region consisting of 3,068 fakes with 36,816 friendships using the small-world graph model [130], and added 35,306 attack edges at random between the two regions (n = 9,204 and m = 110,266). As suggested in related work [137], we used a relatively large number of fakes and attack edges in order to stress-test both systems under evaluation. We refer to this scenario as the random-victim attack.

Edge Weights

For each infiltration scenario, we deployed the previously trained victim classifier in order to assign new edge weights.
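The weighting step can be illustrated with a small sketch. The exact form of Equation 3.4 appears earlier in the chapter and is not reproduced in this excerpt, so the formula below is an assumption: it is chosen to be consistent with the Corollary 1 example, where a uniformly random classifier (p = 0.5) with β = 2 yields a unit weight on every edge, and with the design goal that edges incident to likely victims receive low weights. The floor parameter α of the full system is omitted here.

```python
def edge_weight(p_i, p_j, beta=2.0):
    """Assumed form of the edge-weighting rule (NOT the verbatim
    Equation 3.4): edges incident to likely victims get low weights,
    throttling the trust that can cross attack edges. With uniformly
    random victim scores p = 0.5 and beta = 2, every edge gets weight 1,
    matching the Corollary 1 example."""
    return min(1.0, beta * (1.0 - max(p_i, p_j)))
```

Under this form, an edge between two accounts the classifier considers safe keeps full weight, while an edge touching an account with a victim score of 0.9 drops to weight 0.2, which is what shrinks vol(Ea) relative to |Ea|.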
As we injected fakes in the second scenario, we generated their feature vectors by sampling each feature distribution of the fakes from the first scenario (we excluded the “friends” feature, as it can be computed from the graph). We also assigned edge weights using a victim classifier that simulates two operational modes. In the first mode, the classifier outputs the best possible victim predictions, with an AUC ≈ 1 and prediction probabilities greater than 0.95. In the second mode, the classifier outputs uniformly random predictions, with an AUC ≈ 0.5. We used this multi-mode classifier to evaluate the theoretical best-case and practical worst-case performance of Íntegro (see the legend in Figure 3.5).

Evaluation Method

To evaluate each system's ranking quality, we ran the system under both infiltration scenarios, starting with a single attack edge. We then added another attack edge, according to its timestamp if available, and repeated the experiment. We kept performing this process until there were no more edges to add. At the end of each run, we measured the resulting AUC of each system, as explained next.

Performance Metric

For the resulting ranked list of accounts, we performed ROC analysis by moving a pivot point along the list, starting from the bottom. If an account is behind the pivot, we marked it as fake; otherwise, we marked it as real. Given the ground-truth, we measured the TPR and the FPR across the whole list.
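The AUC of a ranked list has a direct interpretation: the probability that a randomly chosen real account is ranked higher than a randomly chosen fake. A stdlib sketch that computes it by counting such pairs in a single pass (account names are illustrative):

```python
def ranking_auc(ranked_list, fakes):
    """AUC of a ranked list (lowest rank first): the fraction of
    (real, fake) pairs in which the real account is ranked higher.
    Runs in O(n) via a running count of fakes already seen below."""
    fakes_seen = 0        # fakes ranked below the current position
    correct_pairs = 0
    total_fakes = sum(1 for v in ranked_list if v in fakes)
    for v in ranked_list:             # walk from the bottom upwards
        if v in fakes:
            fakes_seen += 1
        else:
            correct_pairs += fakes_seen
    total_reals = len(ranked_list) - total_fakes
    return correct_pairs / (total_reals * total_fakes)
```

An AUC of 1 means every fake sits below every real account, which is exactly the ideal ranking both systems aim for.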
Finally, we computed the corresponding AUC, which in this case quantifies the probability that a randomly selected real account is ranked higher than a randomly selected fake account.

Figure 3.5: Ranking quality under each infiltration scenario (CI = 95%). (a) Targeted-victim attack. (b) Random-victim attack.

Seeds and Iterations

In order to make the chance of guessing the seeds very small, we picked 100 trusted accounts that are non-victim, real accounts. We used a total trust equal to n, the number of nodes in the given graph. We also performed ⌈log2(n)⌉ iterations for both Íntegro and SybilRank.

Results

Íntegro consistently outperformed SybilRank in ranking quality, especially as the number of attack edges increased. Using the RF classifier, Íntegro resulted in an AUC which is always greater than 0.92, up to a 30% improvement over SybilRank in each attack scenario, as shown in Figure 3.5.

In each infiltration scenario, both systems performed well when the number of attack edges was relatively small. In other words, the fakes were sparsely connected to real accounts, and so the two regions were easily separated. As SybilRank limits the number of fakes that can outrank real accounts by the number of attack edges, its AUC degraded significantly as more attack edges were added to each graph, all the way down to 0.71. Íntegro, however, maintained its performance, with at most a 0.07 decrease in AUC, even when the number of attack edges was relatively large. Notice that Íntegro performed nearly as well as SybilRank when a random victim classifier was used, but performed much better when the RF classifier was used instead.
This clearly shows the impact of leveraging victim classification on fake account detection.

3.4.7 Sensitivity to Seed-targeting Attacks

Sophisticated attackers might obtain full or partial knowledge of which accounts are trusted by the OSN operator. As the total trust is initially distributed among these accounts, an attacker can adversely improve the ranking of the fakes by establishing attack edges directly with them. We next evaluate both systems under two variants of this seed-targeting attack.

Attack Scenarios

We focus on two main attack scenarios. In the first scenario, the attacker targets accounts that are k nodes away from all trusted accounts. This means that the length of the shortest path from any fake account to any trusted account is exactly k + 1, where k represents the distance between the seeds and the fake region. For k = 0, each trusted account is a victim and is located at a distance of 1 from the fake region. We refer to this scenario, which assumes a resourceful attacker, as the distant-seed attack.

In the second scenario, the attacker has only partial knowledge and targets k trusted accounts at random. We refer to this scenario as the random-seed attack.

Evaluation Method

To evaluate the sensitivity of each system to seed-targeting attacks, we used the first Facebook graph to simulate each attack scenario. We implemented this by replacing the real endpoint of each attack edge with a real account picked at random from a set of candidates. For the first scenario, a candidate account is one that is k nodes away from all trusted accounts. For the second scenario, a candidate account is simply any trusted account.
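Selecting the candidate set for the distant-seed simulation amounts to a multi-source breadth-first search from the trusted accounts. A stdlib sketch (the graph representation and names are illustrative, not the SyPy implementation):

```python
from collections import deque

def candidates_at_distance(adj, trusted, k):
    """Return accounts whose shortest-path distance to the nearest trusted
    account is exactly k, via multi-source BFS. An attack edge ending at
    such an account places the fakes k+1 hops away from the seeds."""
    dist = {v: 0 for v in trusted}
    queue = deque(trusted)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1   # first visit gives shortest distance
                queue.append(v)
    return {v for v, d in dist.items() if d == k}
```

Starting the BFS from all seeds at once yields, in one pass, the minimum distance to any trusted account, which is the quantity the distant-seed attack varies.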
We ran experiments for both systems using different values of k, and measured the corresponding AUC at the end of each run.

Figure 3.6: Ranking sensitivity to seed-targeting attacks (CI = 95%). (a) Distant-seed attack. (b) Random-seed attack.

Results

In the first attack scenario, both systems had a poor ranking quality when the distance was small, as illustrated in Figure 3.6a. Because Íntegro assigns low weights to edges incident to victim accounts, the trust that escapes to the fake region is less likely to come back into the real region. This explains why SybilRank had a slightly better AUC for distances less than 3. However, once the distance was larger, Íntegro outperformed SybilRank, as expected from the earlier results.

In the second attack scenario, the ranking quality of both systems degraded as the number of victimized trusted accounts increased, with Íntegro consistently outperforming SybilRank, as illustrated in Figure 3.6b. Notice that by selecting a larger number of trusted accounts, it becomes much harder for an attacker to guess which accounts are trusted, while the gained benefit per victimized trusted account is further reduced.

Figure 3.7: Preprocessing for system deployment. (a) User connectivity (number of friends vs. days since joining Tuenti). (b) Friendship growth over time (portion of expected friendships vs. months since joining Tuenti).

3.4.8 Deployment at Tuenti

We deployed both systems on a snapshot of Tuenti's daily active users graph taken on February 6, 2014. The graph consisted of several million nodes and tens of millions of edges.
We had to mask out the exact numbers due to a non-disclosure agreement with Tuenti. After an initial analysis of the graph, we found that 96.6% of the nodes and 94.2% of the edges belonged to one giant connected component (GCC). Therefore, we focused our evaluation on this GCC.

Preprocessing

Using a uniform random sample of 10K users, we found that new users have weak connectivity to others due to the short time they have been on Tuenti, as shown in Figure 3.7a. In particular, there was a positive correlation between the number of days since a user joined Tuenti and how well-connected the user is in terms of number of friends (Pearson's r = 0.36). In fact, 93% of all new users who joined Tuenti in the last 30 days had a weak connectivity of 46 friends or less, much smaller than the average of 254 friends. If these users were included in our evaluation, they would end up receiving low ranks, which would lead to false positives.

To overcome this hurdle, we estimated the period after which users accumulate at least 10% of the average number of friends in Tuenti. To achieve this, we used a uniformly random sample of 10K real users who joined Tuenti over the last 77 months. We divided the users in the sample into buckets representing how long they have been active members. We then calculated the average number of new friendships they made after every other month. As illustrated in Figure 3.7b, users accumulated 53% of their friendships during the first 12 months. In addition, 18.6% of friendships were made after one month of joining the network. Accordingly, we decided to defer the consideration of users who had joined in the 30 days prior to February 6, 2014, which represented only 1.3% of the users in the GCC.

Community Detection

We applied the Louvain method to the preprocessed GCC. The method finished quickly after just 5 iterations, with a high modularity score of 0.83, where a value of 1 corresponds to a perfect partitioning. In total, we found 42 communities, and the largest one consisted of 220,846 nodes.
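The reported modularity score can be computed directly from a partition using the standard Newman–Girvan definition Q = Σc (ec/m − (dc/2m)²), where m is the total number of edges, ec the number of edges inside community c, and dc the total degree of c. A stdlib sketch (not the Louvain implementation itself, which also performs the greedy optimization):

```python
def modularity(edges, community):
    """Newman-Girvan modularity of a partition.
    edges: iterable of undirected edges (u, v); community: {node: label}.
    Q sums, over communities, the fraction of intra-community edges minus
    the fraction expected under random degree-preserving rewiring."""
    m = 0
    intra = {}    # e_c: edges with both endpoints inside community c
    degree = {}   # d_c: total degree of community c
    for u, v in edges:
        m += 1
        cu, cv = community[u], community[v]
        degree[cu] = degree.get(cu, 0) + 1
        degree[cv] = degree.get(cv, 0) + 1
        if cu == cv:
            intra[cu] = intra.get(cu, 0) + 1
    return sum(intra.get(c, 0) / m - (degree[c] / (2 * m)) ** 2
               for c in degree)
```

A score near 1 indicates that almost all edges fall within communities, while putting every node in one community always gives Q = 0, which is why 0.83 signals a strong community structure.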
In addition, 15 communities were relatively large, containing more than 50K nodes each. Tuenti's account analysts verified 0.05% of the nodes in each detected community, and designated these nodes as trusted accounts for both systems.

Performance Metric

As the number of users in the processed GCC is large, it was infeasible to manually inspect and label each account, which means that we were unable to evaluate the systems using ROC analysis. Instead, we attempted to determine the percentage of fake accounts at equally-sized intervals in the ranked list. We accomplished this in collaboration with Tuenti's analysts by manually inspecting a user sample in each interval of the list. This percentage is directly related to the precision of fake account detection, a performance metric typically used to measure the ratio of relevant items among the top-k highest ranked items [51].

Evaluation Method

We utilized the previously trained victim classifier in order to weight a copy of the graph. We then ran both systems on the two versions of the graph (i.e., weighted and unweighted) for ⌈log2(n)⌉ iterations, where n is the number of accounts in the graph. After that, we examined the ranked list of each system by inspecting the one million lowest-ranked user accounts. We decided not to include the complete range for confidentiality reasons, because otherwise one could precisely estimate the actual number of fakes in Tuenti. We randomly selected 100 users out of each 20K-user interval for inspection, in order to measure the percentage of fakes in the interval, that is, the precision.

Results

As illustrated in Figure 3.8a, Íntegro achieved 95% precision in the lowest 20K ranking user accounts, as opposed to 43% by SybilRank and 5% by Tuenti's user-based abuse reporting system.
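The interval-based precision estimate can be sketched as follows. The interval size, sample size, and the `is_fake` oracle are stand-ins for the manual process: in the deployment, the oracle was a Tuenti analyst inspecting the sampled account.

```python
import random

def interval_precision(ranked_list, is_fake, interval=20000,
                       sample=100, rng=None):
    """Estimate the percentage of fakes in each fixed-size interval of a
    ranked list (lowest ranks first) by inspecting a random sample of
    accounts per interval.

    is_fake: oracle mapping an account to True/False (manual inspection
    in the deployment); returns one precision estimate per interval."""
    rng = rng or random.Random()
    estimates = []
    for start in range(0, len(ranked_list), interval):
        chunk = ranked_list[start:start + interval]
        inspected = rng.sample(chunk, min(sample, len(chunk)))
        estimates.append(100.0 * sum(map(is_fake, inspected)) / len(inspected))
    return estimates
```

Sampling 100 accounts per 20K-user interval keeps the manual workload fixed per interval while still giving a usable estimate of where the fakes concentrate in the ranking.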
The precision dropped dramatically as we moved up the list, which means our ranking scheme placed most of the fakes at the bottom of the ranked list, as shown in Figure 3.8b.

Let us consider SybilRank's ranking, shown in Figures 3.8a and 3.8c. The precision, starting at 43% for the first interval, gradually decreased until rising again at the 10th interval. This pattern repeated at the 32nd interval as well. We inspected the fake accounts at these intervals and found that they belonged to three different, large communities. In addition, these fakes had a large number of friends, much larger than the average of 254 friends. In particular, the fakes from the 32nd interval onwards had more than 300 friends, with a maximum of up to 539. Figure 3.8d shows the degree distribution for both verified fake and real accounts. This figure suggests that fakes tend to create many attack edges with real accounts, which confirms earlier findings on other OSNs such as Facebook [10]. This behavior also explains why Íntegro outperformed SybilRank in user ranking quality: these high-degree fakes received lower ranks, as most of their victims were identified by the classifier.

Figure 3.8: Deployment results at Tuenti. (a) Precision at lower intervals. (b) Precision over the inspected list. (c) Precision at higher intervals. (d) Node degree distribution (CDF of the number of friends for fake and real accounts).

SybilRank in Retrospect

SybilRank was initially evaluated on Tuenti, where it effectively detected a significant percentage of the fakes [15].
The original evaluation, however, pruned the excessive edges of nodes that had a degree greater than 800, which included a non-disclosed number of fakes that had highly infiltrated Tuenti. Also, the original evaluation was performed on the whole graph, which included many dormant accounts. In contrast, our evaluation was based on the daily active users graph, in order to focus on active fake accounts that could be harmful. While this change limited the number of fakes that existed in the graph, it evidently revealed the ineffectiveness of SybilRank under social infiltration. Additionally, the original evaluation showed that 10–20% of fakes received high ranks, a result we also attest to, due to the fact that these fake accounts had established many attack edges. In contrast, Íntegro places only 0–2% of the fakes at these high intervals, and so it delivers an order of magnitude better precision than SybilRank.

3.4.9 Scalability

We next evaluate the scalability of Íntegro in terms of its execution time as the number of users in the OSN increases. Accordingly, we used the distributed version of our software implementation (Section 3.4.4) and deployed it on a commodity computer cluster for benchmarking, as follows.

Benchmark

We deployed Íntegro on an Amazon Elastic MapReduce (http://aws.amazon.com/elasticmapreduce) cluster. The cluster consisted of one m1.small instance serving as a master node and 32 m2.4xlarge instances serving as slave nodes. We employed the small-world graph model [130] to generate 5 graphs with an exponentially increasing number of nodes. For each of these graphs, we used the Facebook dataset to randomly generate all feature vectors, with the same distribution for each feature. We then ran Íntegro on each of the generated graphs and measured its execution time.

Results

Íntegro achieved nearly linear scalability with the number of nodes in the graph, as illustrated in Figure 3.9.
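The benchmark graphs follow the small-world model of [130], i.e., the Watts–Strogatz construction: a ring lattice where each node links to its k nearest neighbors, with each link rewired to a random endpoint with probability p. A stdlib sketch (parameter defaults are illustrative, not the benchmark's actual settings):

```python
import random

def small_world_graph(n, k=4, p=0.1, rng=None):
    """Watts-Strogatz small-world graph: a ring lattice of n nodes, each
    linked to its k nearest neighbors (k even), with every link rewired
    to a random endpoint with probability p. Returns a set of edges."""
    rng = rng or random.Random()
    edges = set()
    for u in range(n):
        for j in range(1, k // 2 + 1):
            v = (u + j) % n             # clockwise lattice neighbor
            if rng.random() < p:
                v = rng.randrange(n)    # rewire to a random endpoint
            # ensure no self-loops or duplicate edges (in either direction)
            while v == u or (u, v) in edges or (v, u) in edges:
                v = rng.randrange(n)
            edges.add((u, v))
    return edges
```

The construction always yields n·k/2 edges; with p = 0 it is a pure ring lattice, and small p already produces the short paths and high clustering that make the model a reasonable stand-in for a tightly knit fake region.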
Excluding the time required to load the 160M node graph into memory, which is about 20 minutes for a non-optimized data format, it takes less than 2 minutes to train an RF victim classifier and compute vulnerability scores for nodes on Mahout. It also takes less than 25 minutes to weight the graph, rank nodes, and finally sort them on Giraph. This makes Íntegro computationally practical even for large OSNs such as Facebook.

4 http://aws.amazon.com/elasticmapreduce

[Figure 3.9: System scalability on distributed computing platforms. (a) Victim classification on Mahout; (b) user ranking on Giraph.]

3.5 Discussion

As presented in Section 3.3.6, Íntegro’s security guarantee is sensitive to the performance of the deployed victim classifier, which is formally captured by vol(Ea) in the bound O(vol(Ea) log n), and can be practically measured by its AUC.

3.5.1 Robustness of User Ranking

As illustrated in Figure 3.5, improving the AUC of the victim classifier from random with AUC ≈ 0.5, to actual with AUC = 0.7, and finally to best with AUC ≈ 1 consistently improved the resulting ranking in terms of its AUC. Therefore, a higher AUC in victim classification leads to a higher AUC in user ranking. This is the case because the ROC curve of a victim classifier monotonically increases, so a higher AUC implies a higher true positive rate (TPR). In turn, a higher TPR means more victims are correctly identified, and so more attack edges are assigned lower weights, which evidently leads to a higher AUC in user ranking.

Regardless of the used victim classifier, the ranking quality decreases as the number of attack edges increases, as shown in Figure 3.5.
This is the case because even a small false negative rate (FNR) in victim classification means more attack edges, which are incident to misclassified victims, are assigned high weights, leading to a lower AUC in user ranking.

3.5.2 Maintenance and Impact

While an attacker does not control real accounts nor their activities, it can still trick users into befriending fakes. In order to achieve a high-quality ranking, the victim classifier should be regularly retrained to capture new and changing user behavior in terms of susceptibility to social infiltration. This is, in fact, the case for supervised machine learning when applied to computer security problems [104]. Also, as the ranking scheme is sensitive to seed-targeting attacks, the set of trusted accounts should be regularly updated and validated in order to reduce the negative impact of these attacks, even if they are unlikely to succeed (Section 3.3.4).

By using Íntegro, Tuenti requires nearly 67 man hours to manually validate the 20K lowest ranking user accounts, and discover about 19K fake accounts instead of 8.6K fakes with SybilRank. With its user-based abuse reporting system that has a 5% hit rate, and assuming all fakes get reported, Tuenti would need 1,267 man hours instead to discover 19K fake accounts (at a 5% hit rate, roughly 19K/0.05 = 380K reported accounts would have to be inspected, i.e., 18.9 times more man hours). As such, this improvement has been useful to both Tuenti and its users.

3.5.3 Limitations

Íntegro is not a stand-alone fake account detection system. It is intended to complement existing detection systems and is designed to detect automated fake accounts that befriend many victims for subsequent attacks. In what follows, we outline two limitations which are inherited from SybilRank [15] and similar random walk-based ranking schemes [137].

Design is Limited to Undirected Social Graphs

In other words, OSNs whose users declare unilateral (directed) relationships are not expected to benefit from our proposal.
This is the case because directed graphs, in general, have a significantly smaller mixing time than their undirected counterparts [84], which means a random walk on such graphs will converge in a much smaller number of steps, rendering short random walks unsuitable for robust user ranking.

Deployment Delays the Consideration of New User Accounts

This means that an OSN operator might miss the chance to detect fakes early in their life-cycle. However, as shown in Figure 3.7a, only 7% of new users who joined Tuenti in the last month had more than 46 friends. To estimate the number of fakes in new accounts, we picked 100 accounts at random for manual verification. We found that only 6% of these accounts were fake, and the most successful fake account had 103 victims. In practice, the decision of whether to exclude these accounts is operational, and it depends on the actions taken on low-ranking users. For example, an operator can enforce abuse mitigation techniques, as discussed in Section 3.1.3, against low-ranking users, where false positives can negatively affect user experience but slow down fake accounts that have just joined the network. This is a security/usability trade-off which we leave to the operator to manage. Alternatively, the operator can use fake account detection systems that are designed to admit legitimate new users using, for example, a vouching process [131].

3.6 Summary

Detecting fake accounts protects both OSN operators and their users from various malicious activities.
Most detection mechanisms attempt to classify user accounts as real or fake by analyzing either user-level activities or graph-level structures. These mechanisms, however, are not robust against adversarial attacks in which fake accounts cloak their operation with patterns resembling real user behavior.

This work stemmed from Findings 2 and 4 in Chapter 2, and demonstrated that victims—real accounts whose users have accepted friend requests sent by fakes—form a distinct classification category that is useful for designing robust detection mechanisms. To start with, as attackers have no control over victim accounts and cannot alter their activities, a victim account classifier which relies on user-level activities to identify potential victims is relatively hard to circumvent. Moreover, as fakes are directly connected to victims, a graph-based fake account detection mechanism that leverages victim classification is robust against adversarial manipulations of the graph, social infiltration in particular.

To validate this new approach, we designed and evaluated Íntegro—a robust and scalable defense system that leverages victim classification to rank most real accounts higher than fakes, so that OSN operators can take actions against low-ranking fake accounts. In particular, Íntegro starts by identifying potential victims from user-level activities using supervised machine learning. After that, it annotates the graph by assigning lower weights to edges incident to potential victims than to other edges. Finally, Íntegro ranks user accounts based on the landing probability of a short random walk that starts from a known real account. As this walk is unlikely to traverse low-weight edges in few steps and land on fakes, Íntegro achieves the desired ranking.

We implemented Íntegro using widely-used, open-source distributed computing platforms, where it scaled nearly linearly.
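The ranking step described above can be sketched in a few lines of pure Python. This is a toy illustration, not the production system: the graph, the victim probabilities, the 0.5 threshold, and the low edge weight are all made-up values, and the walk length stands in for the O(log n) steps used in practice.

```python
# Sketch of the ranking step: weight edges incident to likely victims lower,
# run a short random walk from a trusted seed, then rank by the
# degree-normalized landing probability (SybilRank-style normalization).

def rank_users(edges, victim_prob, seed, steps=4, low=0.1):
    # Build a weighted adjacency map; edges incident to a likely victim
    # (classifier score >= 0.5, an assumed threshold) get a low weight.
    w = {}
    for u, v in edges:
        weight = low if max(victim_prob[u], victim_prob[v]) >= 0.5 else 1.0
        w.setdefault(u, {})[v] = weight
        w.setdefault(v, {})[u] = weight
    # Start all probability mass at the trusted seed.
    p = {u: 0.0 for u in w}
    p[seed] = 1.0
    for _ in range(steps):  # short walk; O(log n) steps in the actual scheme
        q = {u: 0.0 for u in w}
        for u, nbrs in w.items():
            total = sum(nbrs.values())
            for v, wt in nbrs.items():
                q[v] += p[u] * wt / total
        p = q
    # Degree-normalize so high-degree nodes are not unduly favored.
    return sorted(w, key=lambda u: p[u] / sum(w[u].values()), reverse=True)

# Toy graph: real accounts a-b-c; victim c bridges to fakes x-y.
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "x"), ("x", "y")]
victim_prob = {"a": 0.1, "b": 0.2, "c": 0.9, "x": 0.0, "y": 0.0}
print(rank_users(edges, victim_prob, seed="a"))
```

Because the attack edges through the predicted victim carry low weight, the short walk leaves little probability mass on the fake side of the cut, so the fakes end up at the bottom of the ranking.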
We also evaluated Íntegro against SybilRank, which is the state-of-the-art in fake account detection, using real-world datasets and a large-scale deployment at Tuenti—the largest OSN in Spain. We showed that Íntegro significantly outperforms SybilRank in user ranking quality, with the only requirement that the employed victim classifier is better than random. Moreover, the deployment of Íntegro at Tuenti resulted in up to an order of magnitude higher precision in fake account detection, as compared to SybilRank. At the moment, Tuenti uses Íntegro in production in order to thwart fake accounts in the wild, with at least 10 times better precision than their older detection method and hundreds of hours less time spent on manual account verification.

Chapter 4

Discussion and Research Directions

We now discuss different directions in defending against automated fake accounts in OSNs. We first elaborate on how preventing social infiltration boils down to solving a set of hard socio-technical challenges (Section 4.1). After that, we explore a new defense direction which involves “risky” account admission control (Section 4.2). Finally, we advocate that leveraging victim prediction in OSNs is a new security paradigm which can benefit different defense mechanisms (Section 4.3).

4.1 Challenges for Preventive Countermeasures

When social infiltration is operated as part of a socialbot network (SbN), an OSN can defend against it at a large scale by following at least one of three defense strategies: (1) preventing the SbN’s operation in the first place, (2) detecting its operation as early as possible, or (3) limiting the advantage an attacker gains from its operation.

Preventing SbN operation can be achieved by eliminating its enabling factors, that is, by fixing at least one of the vulnerabilities outlined in Chapter 2. In what follows, we demonstrate that doing so leads to a set of hard socio-technical challenges that relate to web automation, online-offline identity binding, and usable security.
This means that an OSN has more leverage in detecting and limiting the abusive operation of an SbN rather than preventing its operation in the first place.

4.1.1 Web Automation

In order to simulate a user browsing an OSN platform (e.g., Facebook’s desktop website), the attacker can employ web automation techniques, which include methods for solving CAPTCHAs, creating and populating multiple OSN accounts, crawling the social graph, and executing online social activities. Preventing this automation, however, requires solving at least one of the following challenges.

Challenge 1: Design a reverse Turing test that is usable and effective even against “illegitimate” human solvers.

A reverse Turing test, such as CAPTCHA [124], is a challenge-response test which is administered by a machine and is designed to tell humans and machines apart. A perfect test presents a problem that is easy enough for all humans to solve, but is still impossible for a machine or automation software to pass. Unfortunately, even a perfect test is ineffective if humans are exploited to solve the test in an illegitimate setting: the situation where human users are employed or tricked into solving reverse Turing tests that are not addressed to them. Under this setting, we refer to such human solvers as illegitimate.

Eliminating the economic incentives for underground businesses that employ illegitimate human solvers is a first step towards tackling this challenge [87], but it does not solve it, as legitimate users can be tricked and situated into illegitimate settings, which was the case for the Koobface botnet [112]. This demands the design of new reverse Turing tests that are resilient even to those illegitimate solvers, which we believe is generally difficult to achieve.

Fast-response CAPTCHAs, for example, require the test to be solved in a relatively shorter time, as opposed to typical implementations.
This makes it more difficult for automation scripts to pass the test, as they require extra time to relay the test, solve it, and respond back. Fast-response CAPTCHAs, however, are expected to put more pressure on legitimate users who require easy and fast access to online services, and could potentially repel them from using these services.

Alternatively, authenticating users via their social knowledge (e.g., whether they can identify their friends from photos) can be used as an effective test that is challenging for illegitimate users to solve [136]. Besides its usability issues, Kim et al. [65] show that it is relatively easy to circumvent such a social authentication scheme by either guessing, automated face recognition, or social engineering.

Challenge 2: Effectively limit large-scale Sybil crawls of OSNs without restricting users’ social experience.

A large-scale crawl is a malicious activity where an attacker manages to crawl large portions of a target OSN, including both the social graph and all accessible user attributes. Today, large-scale crawls are mitigated by employing a network-wide audit service, which limits the number of profiles a user can view per account or IP address in a given period of time [104]. This, however, can be circumvented by creating a set of fake accounts and then performing Sybil crawling on a large scale, typically using a botnet with multiple IP addresses [112].

To overcome this drawback, one can use knowledge about the social graph to effectively limit Sybil crawling. Genie [85], for example, is a system that models the trust between users in an OSN as a credit network, where a user can view the profile of another user only if the path between them in the social graph has enough credits to satisfy the operation.
If an attacker who controls many fake accounts attempts to crawl the OSN on a large scale, then Genie guarantees that the attacker will exhaust all the credits on the paths connecting the fakes to the rest of the network, thus limiting large-scale Sybil crawling. This approach, however, is based on the assumption that it would be hard for an attacker to establish a large number of social relationships with other users, which is not the case, as we showed in Section 2.4.5.

Challenge 3: Detect abusive and automated usage of OSN platforms and social APIs across the Web.

In concept, malicious automation represents the situation in which an attacker scripts his way of consuming a system’s resources in order to cause damage or harm to the system itself or its users. Abusive automation, on the other hand, is less severe because the attacker exploits the offered service in violation of the declared terms of service (ToS). From the OSN operator’s standpoint, all HTTP requests come from either a browser or through the social API, which is intentionally provided to support automation. Requests that are not associated with a browsing session, that is, those that do not append the required session cookies, can be easily detected and dealt with. With web automation, however, an attacker can simulate an OSN user and make all requests look as if they originate from a browser. Moreover, the patterns at which these requests are made can be engineered in such a way that makes them fall under the normal traffic category. In order to uncover adversarial campaigns, it is important to reliably identify whether such requests come from a human or a bot, along with means to distinguish patterns of abusive activities, even if the attacker has knowledge of the used classification techniques.

Looking for regularities in the times at which requests are made, for example, can be used to detect automation in OSNs [140].
This, however, can be easily circumvented by simply mimicking the times and irregularities at which a human user makes such requests.

4.1.2 Identity Binding

Most of the challenges we presented so far are difficult due to the capability of the attacker to mount the Sybil attack. This leads us to the following challenge:

Challenge 4: Guarantee an anonymous, yet credible, online-offline identity binding in open-access systems.

Douceur [22] showed that without a centralized trusted party that certifies online identities, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among participating entities. Thus, limiting the number of fake accounts by forcing a clean mapping between online and offline identities is widely recognized as a hard problem, especially given the scalability requirements of today’s open-access OSNs.

Arguably, one way to tackle this challenge is to rely on governments for online identity management, just as in offline settings. The open government initiative [111], for example, enables U.S. citizens to easily and safely engage with U.S. government websites using open identity technologies such as OpenID. This, however, requires creating open trust frameworks [111] that enable these websites to accept identity credentials from third-party identity providers, a task that involves solving challenging issues related to identity anonymity, scalability, security, technology incentives, and adoption [75, 110].

4.1.3 Usable Security

As part of computer security, usable security aims to provide users with security controls they can understand and privacy they can control [18]. In OSNs such as Facebook, there appears to be a growing gap between what the user expects from a privacy control and what this control does in reality [73].
Even if the most sophisticated OSN security defenses are in place, an OSN is still vulnerable to many threats, such as social phishing [58], if its users find it puzzling to make basic online security or privacy decisions. This gives us strong motives to study the human aspect of the OSN security chain, which is by itself a challenge.

Challenge 5: Develop OSN security and privacy controls that help users make informed decisions.

Designing security controls that better communicate the risks of befriending a stranger, for example, might be effective against automated social engineering. This, however, requires eliciting and analyzing the befriending behavior of users, including the factors that influence their befriending decisions, in order to inform a user-centered design for such controls.

Rashtian et al. [97] conducted qualitative and quantitative studies to explain how various factors influence users when accepting friend requests on Facebook. They found that a few factors significantly impact the user’s decision, namely, knowing the requester in the real world, having common hobbies or interests, having mutual friends, and the closeness of mutual friends. The study, however, does not evaluate how an OSN operator can utilize these factors to improve user controls in order to limit the success of social infiltration in OSNs.

4.2 Account Admission Control in OSNs

As presented in Chapter 3, fake account detection represents a reactive defense strategy, in which new accounts are admitted to the OSN and then classified.
On the other hand, admission control represents a proactive defense strategy, in which accounts are provisioned and controlled based on user or OSN defined criteria, all before being admitted to the OSN or given full access to its available services. Although admission control can lead to more resilient defense mechanisms, it is based on the presumption that “users are guilty until proven innocent,” which introduces serious concerns related to user experience and the growth of the OSN in terms of its user base [104]. In what follows, we review prominent mechanisms for admission control in OSNs, which can be deployed alongside other fake account detection mechanisms, including Íntegro.

4.2.1 Vouching

Xie et al. proposed to identify newly registered, real accounts as early as possible in order to grant them full access to available services [131]. The authors developed a vouching process that is resilient to attackers, where existing real accounts vouch for new accounts. By carefully monitoring vouching via social community structures, they were able to admit 85% of real accounts while reducing the percentage of admitted fake accounts from 44% to 2.4%, using a dataset provided by Hotmail and another one collected from Twitter.

4.2.2 Service Provisioning

Mislove et al. were among the first to dynamically limit available OSN services to unknown fake accounts by modeling lateral trust relationships between users as a credit network [81]. The authors developed a technique that assigns credit values to friendships such that an account is able to send a friend request, for example, only if there is a path with available credit from the sender to the receiver. Mondal et al. utilized this approach to limit large-scale crawls in OSNs [85].

4.2.3 User Education and Security Advice

Kim et al.
proposed to visualize the trust between users in order to help them better authenticate those who request their friendship in OSNs such as Facebook [66]. Based on prior social science research demonstrating that social tie strength is a strong indicator of trust, the authors developed a tool to visualize the social tie strength between the receiver and the sender of a friend request, based on features of their mutual friends such as their interaction frequency, communication reciprocity, recency, and length. To validate their approach, the authors conducted a survey with 93 participants who used their visualization tool. The participants found that the tool helped them make better befriending decisions, especially when they received requests from fake accounts.

Wang et al. employed known concepts from behavioral decision research and soft paternalism in order to design mechanisms that “nudge” users to reconsider the content and context of their online disclosures before committing them [129]. The authors evaluated the effectiveness of their approach with 21 Facebook users in a three-week exploratory field study and 13 follow-up interviews. Their results suggested that privacy nudges can be a promising way to prevent unintended disclosures when, for example, one befriends a fake or shares a post with the public.

4.3 Leveraging Victim Prediction

In Chapter 3, we introduced a new and complementary approach to thwart fake accounts in OSNs. In particular, we proposed to predict the potential victims of unknown fakes, and then leverage this information in existing defense mechanisms. In fact, Íntegro is one example of an application of this approach to graph-based fake account detection.

We herein take the position that leveraging victim prediction represents a new security paradigm by itself.
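As a toy illustration of this paradigm, a victim-prediction score can serve as a prioritization primitive: an intervention (e.g., targeted security advice or honeypot-driven sampling) is applied only to the most vulnerable fraction of users. All scores, names, and thresholds below are made up.

```python
# Hypothetical victim scores in [0, 1] produced by a classifier (made-up values).
victim_score = {
    "alice": 0.92, "bob": 0.15, "carol": 0.78,
    "dave": 0.05, "erin": 0.61, "frank": 0.33,
}

def target_top_fraction(scores, fraction):
    """Select the most vulnerable `fraction` of users for an intervention,
    relieving everyone else of the associated effort."""
    k = max(1, round(len(scores) * fraction))
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]

# Show security advice only to the top third instead of burdening all users.
print(target_top_fraction(victim_score, 1 / 3))  # → ['alice', 'carol']
```

The same selection can bias other defenses, e.g., sampling the activities of the friends of the selected users rather than sampling uniformly at random.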
To highlight the implications of this paradigm for existing defense mechanisms, we study two more security mechanisms that are widely used to defend against fake accounts in OSNs, namely, user-facing security advice (§4.3.1) and honeypots (§4.3.2). In what follows, we discuss how each mechanism can be improved by incorporating victim account prediction into its workflow.

4.3.1 User-Facing Security Advice

User education is the first line of defense against increasingly sophisticated social engineering attacks, especially in OSNs [58, 107]. While many studies showed that users tend to reject security advice because of low motivation and poor understanding of the involved threats [1, 76], others asserted that users do so because it is entirely rational from an economic viewpoint [34, 48]. In particular, the advice offers to protect users from the direct costs of attacks, but burdens them with increased indirect costs in the form of effort. When security advice is applied to all users, it becomes a daily burden whose benefit is the potential saving of direct costs to potential victims; the fraction that might become victims. When this fraction is small, designing security advice that is beneficial becomes very hard. For example, it is not feasible to burden 1.2 billion Facebook users with a daily task in order to spare 1% of them.

One way to increase the benefit of security advice is to make it more usable, which in effect reduces its indirect costs to users. This has been the focus of a growing community of usable security researchers who consider user education essential to securing socio-technical systems such as OSNs [18]. Another complementary way to reduce indirect costs is to display the security advice to only the fraction who might actually benefit from it.

We propose to achieve this reduction by displaying the security advice in an informed, targeted manner.
In particular, victim prediction provides an OSN with a robust way to quantify how vulnerable each real account is, as some users are more likely to become victims than others. The OSN can use this information to focus only on the most vulnerable population, and accordingly, educate them using security advice while relieving the rest of the population from the associated effort.

4.3.2 Honeypots and User Sampling

Honeypots are accounts specially created or controlled to sample the activities of user accounts, in particular, those who contact these honeypots by sending them friend requests or by sharing content [108]. The sampled activities are then analyzed and used to maintain an up-to-date ground truth for fake account detection. While honeypot accounts are often used by third parties, OSNs still perform similar sampling, albeit with direct access to user data [104]. Such a sampling technique, however, is inefficient, as it is opportunistic if not completely random. For example, Stringhini et al. used 300 honeypot accounts on Facebook to record user activities over 12 months [108]. The collected dataset, however, was very small relative to the sampling period, with only 3,831 friend requests (4.5% fake) and 72,431 messages (5.4% fake). To some extent, this also suggests that attackers do not necessarily target users at random.

Assuming that the percentage of fakes in the OSN is small, it is reasonable to sample the users at random and expect to collect mostly benign content originating from real accounts. The problem, however, arises when one samples for abusive content, as the sampling has to be biased towards unknown fake accounts. For example, Facebook has more than 600 million daily active users, and they perform billions of actions every day [94]. In contrast, the number of fake accounts involved in an attack campaign is often on the order of thousands [16].
It is thus desirable to have a reliable way to inform the user sampling process. As victims are connected to abusive fakes, we propose to achieve this property by identifying potential victims of unknown fakes and then sampling the activities of their friends. Along with uniform random sampling [99], an OSN can achieve a desirable benign-to-abusive content ratio, which is important for effective feature-based detection using machine learning techniques [47].

Chapter 5

Impact and Conclusion

The ease with which we adopt online personas and relationships has created a soft spot that cyber criminals are willing to exploit. Advances in artificial intelligence make it feasible to design “socialbots” that sense, think, and act cooperatively in social settings, just as in social robotics. In the wrong hands, these socialbots can be used to infiltrate online communities and carry out various malicious activities. From a computer security perspective, the concept of malicious socialbots is both interesting and disturbing, for the threat is no longer from a human controlling or monitoring a computer, but from exactly the opposite.

While the motivations for operating socialbots and the technical mechanisms that enable them remain rich areas of research, this dissertation focused on two research goals, which naturally divided the presentation into two parts:

1. Threat characterization: To understand and characterize what makes OSNs vulnerable to cyber attacks by malicious socialbots. In particular, we focus on social infiltration at scale, in which malicious socialbots are used to connect with a large number of legitimate users, in order to mount subsequent attacks in the target OSN.

2. Countermeasure design: To design a new countermeasure to effectively and efficiently defend against socialbots that infiltrate users on a large scale.
In particular, the countermeasure has to be robust against social infiltration and scale to large OSNs consisting of at least hundreds of millions of users.

In the first part, we considered profit-driven attackers who infiltrate OSNs using malicious socialbots. Our investigation was guided by the following RQs:

• RQ1: How vulnerable are OSNs to social infiltration?
• RQ2: What are the security and privacy implications of social infiltration?
• RQ3: What is the economic rationale behind social infiltration at scale?

To address these questions, we studied social infiltration as an organized campaign run by a network of socialbots. We adopted the design of web-based botnets and defined what we call a socialbot network (SbN)—a group of programmable socialbots that are orchestrated by an attacker. We implemented an SbN prototype consisting of 100 socialbots and operated it on Facebook for 8 weeks in early 2011. The main findings of this study are:

1. OSNs such as Facebook suffer from inherent vulnerabilities that enable an attacker to automate social infiltration on a large scale.
2. Some users are more likely to become victims than others, which partly depends on factors related to their social structure.
3. Operating an SbN can result in serious privacy breaches, where personally identifiable information is compromised.
4. Traditional OSN defenses are not effective at identifying automated fake accounts nor their social infiltration campaigns.
5. In an economic context, an SbN ought to have a fixed size in terms of the number of socialbots for it to be scalable in terms of the number of attacked users.
6. Operating an SbN at scale is expected to be profitable, but it is not particularly attractive as an independent business.

These findings resulted in timely public education about the threat [56], and encouraged other researchers to replicate and extend this study on other OSNs such as Twitter and LinkedIn [29, 125].
The main contributions of this part of the dissertation are the following:

1. Demonstrating the feasibility of social infiltration in OSNs.
2. Economic analysis of social infiltration at scale.

In the second part, we built on top of the earlier findings and considered attackers who can run a social infiltration campaign at a large scale using a set of automated fake accounts, or socialbots. We aimed to tackle the following question:

• RQ4: How can OSNs detect fakes that infiltrate users on a large scale?

To address this question, we designed Íntegro—a robust and scalable defense system that helps OSNs detect automated fake accounts via a user ranking scheme. Our design follows from previous findings, which shed light on the possibility of predicting potential victims of fakes in OSNs. In particular, Íntegro uses supervised machine learning with features extracted from user-level activities in order to identify potential victims in the OSN. By weighting the graph such that edges incident to potential victims have lower weights than other edges, Íntegro guarantees that most real accounts are ranked higher than fakes. These ranks are derived from the landing probability of a modified random walk that starts from a known real account. To our knowledge, Íntegro is the first detection system that is robust against social infiltration, where fakes follow an adversarial strategy to befriend a large number of accounts, real or fake, in order to evade detection.

We implemented Íntegro on top of widely-used distributed systems, in which it scaled nearly linearly. We also evaluated Íntegro against SybilRank using our open-source comparative evaluation framework under real-world datasets and a production-class deployment at Tuenti. Our evaluation results showed that Íntegro significantly outperforms SybilRank in ranking quality, with up to an order of magnitude better precision in fake account detection. This part of the dissertation makes the following contributions:

1.
Leveraging victim classification for fake account detection.

2. Open-source implementation and evaluation framework.

Íntegro is currently used in production at Tuenti to thwart fakes in the wild with at least 10 times higher precision, alongside a proprietary feature-based detection system and a user-based abuse reporting system. It is also publicly released as part of an open-source project that is used by OSNs such as Facebook and Twitter.

Finally, we demonstrated that trying to prevent the threat of malicious socialbots leads to a set of hard socio-technical challenges that relate to web automation, online-offline identity binding, and usable security. Therefore, an OSN has more leverage in detecting socialbots and limiting their abuse as early as possible, rather than preventing their operation in the first place. In terms of future research, we highlighted account admission control as one possible direction, albeit one that suffers from usability issues that discourage OSN operators from deploying it in practice. Instead, we advocated that leveraging victim prediction in OSNs is a new security paradigm, which can benefit different defense mechanisms, such as educating users with user-facing security advice or detecting fakes using OSN honeypots.

Bibliography

[1] A. Adams and M. A. Sasse. Users are not the enemy. Communications of the ACM, 42(12):40–46, 1999. → pages 108

[2] L. Alvisi, A. Clement, A. Epasto, S. Lattanzi, and A. Panconesi. SoK: The evolution of sybil defense via social networks. In Proceedings of the 34th IEEE Symposium on Security and Privacy, pages 382–396. IEEE, 2013. → pages 5, 58, 62, 76, 80

[3] D. S. Anderson, C. Fleizach, S. Savage, and G. M. Voelker. Spamscatter: Characterizing internet scam hosting infrastructure. In USENIX Security Symposium, 2007. → pages 21

[4] R. Anderson. Security engineering. John Wiley & Sons, 2008. → pages 6, 17

[5] E. Behrends. Introduction to Markov chains with special emphasis on rapid mixing, volume 228. Vieweg, 2000.
→ pages 72, 132

[6] L. Bilge, T. Strufe, D. Balzarotti, and E. Kirda. All your contacts are belong to us: automated identity theft attacks on social networks. In Proceedings of the 18th international conference on World Wide Web, pages 551–560. ACM, 2009. → pages 15, 17, 18, 21, 23, 58, 63

[7] V. Blondel, J. Guillaume, R. Lambiotte, and E. Lefebvre. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10), 2008. → pages 75

[8] J. Bollen, H. Mao, and X. Zeng. Twitter mood predicts the stock market. Journal of Computational Science, 2(1):1–8, 2011. → pages 3, 15

[9] N. Bos, K. Karahalios, M. Musgrove-Chávez, E. S. Poole, J. C. Thomas, and S. Yardi. Research ethics in the facebook era: privacy, anonymity, and oversight. In CHI'09 Extended Abstracts on Human Factors in Computing Systems, pages 2767–2770. ACM, 2009. → pages 32, 70

[10] Y. Boshmaf, I. Muslukhov, K. Beznosov, and M. Ripeanu. The socialbot network: when bots socialize for fame and money. In Proceedings of the 27th Annual Computer Security Applications Conference, pages 93–102. ACM, 2011. → pages 35, 51, 53, 58, 69, 87, 94

[11] D. M. Boyd and N. B. Ellison. Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication, 13(1):210–230, 2008. → pages 1, 3

[12] L. Breiman. Random forests. Machine learning, 45(1):5–32, 2001. → pages 69, 83, 86

[13] G. Brown, T. Howe, M. Ihbe, A. Prakash, and K. Borders. Social networks and context-aware spam. In Proceedings of the 2008 ACM conference on Computer supported cooperative work, pages 403–412. ACM, 2008. → pages 18

[14] J. Caballero, C. Grier, C. Kreibich, and V. Paxson. Measuring pay-per-install: The commoditization of malware distribution. In USENIX Security Symposium, 2011. → pages 15, 16, 21

[15] Q. Cao, M. Sirivianos, X. Yang, and T. Pregueiro. Aiding the detection of fake accounts in large scale social online services. In NSDI, pages 197–210, 2012.
→ pages 4, 12, 14, 41, 57, 61, 62, 63, 65, 77, 78, 79, 87, 95, 98

[16] Q. Cao, X. Yang, J. Yu, and C. Palow. Uncovering large groups of active malicious accounts in online social networks. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, CCS'14, pages 477–488. ACM, 2014. → pages 61, 109

[17] CBC. Facebook shares drop on news of fake accounts, Aug 2012. URL http://goo.gl/6s5FKL. → pages 4, 57

[18] L. F. Cranor. Security and usability: designing secure systems that people can use. O'Reilly Media, Inc., 2005. → pages 105, 108

[19] G. Danezis and P. Mittal. Sybilinfer: Detecting sybil nodes using social networks. In Proceedings of the 9th Annual Network & Distributed System Security Symposium. ACM, 2009. → pages 12, 41, 79

[20] J. B. Davies, A. Shorrocks, S. Sandstrom, and E. N. Wolff. The world distribution of household wealth. Center for Global, International and Regional Studies, 2007. → pages 19

[21] M. Dellamico and Y. Roudier. A measurement of mixing time in social networks. In Proceedings of the 5th International Workshop on Security and Trust Management, Saint Malo, France, 2009. → pages 72

[22] J. R. Douceur. The Sybil attack. In Peer-to-peer Systems, pages 251–260. Springer, 2002. → pages 2, 17, 104

[23] C. Dwork and M. Naor. Pricing via processing or combatting junk mail. In Advances in Cryptology—CRYPTO'92, pages 139–147. Springer, 1993. → pages 18

[24] D. Easley and J. Kleinberg. Networks, crowds, and markets: Reasoning about a highly connected world. Cambridge University Press, 2010. → pages 1, 26, 31

[25] M. Egele, L. Bilge, E. Kirda, and C. Kruegel. Captcha smuggling: hijacking web browsing sessions to create captcha farms. In Proceedings of the 2010 ACM Symposium on Applied Computing, pages 1865–1870. ACM, 2010. → pages 22

[26] M. Egele, G. Stringhini, C. Kruegel, and G. Vigna. Compa: Detecting compromised accounts on social networks. In NDSS, 2013. → pages 59

[27] N. B. Ellison, C. Steinfield, and C. Lampe.
The benefits of facebook "friends:" social capital and college students' use of online social network sites. Journal of Computer-Mediated Communication, 12(4):1143–1168, 2007. → pages 32

[28] N. B. Ellison et al. Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication, 13(1):210–230, 2007. → pages 15, 59

[29] A. Elyashar, M. Fire, D. Kagan, and Y. Elovici. Homing socialbots: intrusion on a specific organization's employee using socialbots. In Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pages 1358–1365. ACM, 2013. → pages 58, 63, 112

[30] Facebook. Investors relations: Annual earnings report. http://investor.fb.com, October 2014. → pages 1, 32, 52

[31] Facebook. Whitehat program: Reporting security vulnerabilities. https://facebook.com/whitehat/report, October 2014. → pages 33

[32] Facebook. Quarterly earning reports, Jan 2014. URL http://goo.gl/YujtO. → pages 57

[33] Facebook. Terms of service. https://www.facebook.com/legal/terms, May 2015. → pages 3, 4

[34] D. Florêncio and C. Herley. Where do security policies come from? In Proceedings of the Sixth Symposium on Usable Privacy and Security, page 10. ACM, 2010. → pages 108

[35] D. Florêncio and C. Herley. Where do all the attacks go? In Economics of Information Security and Privacy III, pages 13–33. Springer, 2013. → pages 19, 55

[36] S. Fortunato. Community detection in graphs. Physics Reports, 486(3):75–174, 2010. → pages 62

[37] M. Fossi, E. Johnson, D. Turner, T. Mack, J. Blackbird, D. McKinney, M. K. Low, T. Adams, M. P. Laucht, and J. Gough. Symantec report on the underground economy. Symantec Corporation, 2008. → pages 51, 52

[38] J. Franklin, A. Perrig, V. Paxson, and S. Savage. An inquiry into the nature and causes of the wealth of internet miscreants. In ACM conference on Computer and communications security, pages 375–388, 2007. → pages 15

[39] M. Gjoka, M. Kurant, C. T. Butts, and A. Markopoulou.
Walking in Facebook: A case study of unbiased sampling of OSNs. In Proceedings of the 2010 IEEE International Conference on Computer Communications, pages 1–9. IEEE, 2010. → pages 9, 34

[40] G. H. Golub and H. A. Van der Vorst. Eigenvalue computation in the 20th century. Journal of Computational and Applied Mathematics, 123(1):35–65, 2000. → pages 70, 130

[41] M. Goncharov. Russian underground 101. Trend Micro Incorporated Research Paper, 2012. → pages 51

[42] C. Grier, L. Ballard, J. Caballero, N. Chachra, C. J. Dietrich, K. Levchenko, P. Mavrommatis, D. McCoy, A. Nappa, A. Pitsillidis, et al. Manufacturing compromise: the emergence of exploit-as-a-service. In Proceedings of the 2012 ACM conference on Computer and communications security, pages 821–832. ACM, 2012. → pages 21

[43] J. Grimmelmann. Saving facebook. Iowa Law Review, 94:1137–1206, 2009. → pages 4

[44] R. Gross and A. Acquisti. Information revelation and privacy in online social networks. In Proceedings of the 2005 ACM workshop on Privacy in the electronic society, pages 71–80. ACM, 2005. → pages 40

[45] Z. Gyöngyi, H. Garcia-Molina, and J. Pedersen. Combating web spam with trustrank. In Proceedings of the Thirtieth international conference on Very large data bases, Volume 30, pages 576–587. VLDB Endowment, 2004. → pages 70

[46] E. Hargittai et al. Facebook privacy settings: Who cares? First Monday, 15(8), 2010. → pages 20

[47] T. Hastie, R. Tibshirani, and J. Friedman. The elements of statistical learning: Data mining, inference, and prediction, second edition. Springer, 2009. → pages 69, 70, 82, 83, 85, 86, 87, 109

[48] C. Herley. So long, and no thanks for the externalities: the rational rejection of security advice by users. In Proceedings of the 2009 workshop on New security paradigms workshop, pages 133–144. ACM, 2009. → pages 108

[49] C. Herley. The plight of the targeted attacker in a world of scale. In The 9th Workshop on the Economics of Information Security. ACM, 2010. → pages 18, 19, 22, 44, 54

[50] C. Herley and D.
Florêncio. Nobody sells gold for the price of silver: Dishonesty, uncertainty and the underground economy. In Economics of Information Security and Privacy, pages 33–53. Springer, 2010. → pages 43, 47, 55

[51] J. L. Herlocker, J. A. Konstan, L. G. Terveen, and J. T. Riedl. Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems (TOIS), 22(1):5–53, 2004. → pages 93

[52] C. J. Hernandez-Castro and A. Ribagorda. Remotely telling humans and computers apart: An unsolved problem. In iNetSec 2009: Open Research Problems in Network Security, pages 9–26. Springer, 2009. → pages 22

[53] T. Holz, C. Gorecki, F. Freiling, and K. Rieck. Detection and mitigation of fast-flux service networks. In Proceedings of the 15th Annual Network and Distributed System Security Symposium, 2008. → pages 16, 21

[54] M. Huber, S. Kowalski, M. Nohlberg, and S. Tjoa. Towards automating social engineering using social networking sites. In Computational Science and Engineering, 2009. CSE'09. International Conference on, volume 3, pages 117–124. IEEE, 2009. → pages 18, 29

[55] M. Huber, M. Mulazzani, and E. Weippl. Who on earth is "mr. cypher": Automated friend injection attacks on social networking sites. In Security and Privacy: Silver Linings in the Cloud, pages 80–89. Springer, 2010. → pages 25

[56] T. Hwang, I. Pearce, and M. Nanis. Socialbots: Voices from the fronts. interactions, 19(2):38–45, 2012. → pages 1, 2, 5, 18, 59, 112

[57] D. Irani, M. Balduzzi, D. Balzarotti, E. Kirda, and C. Pu. Reverse social engineering attacks in online social networks. In Detection of intrusions and malware, and vulnerability assessment, pages 55–74. Springer, 2011. → pages 17, 18

[58] T. N. Jagatic, N. A. Johnson, M. Jakobsson, and F. Menczer. Social phishing. Communications of the ACM, 50(10):94–100, 2007. → pages 3, 18, 105, 108

[59] A. Jaimoukha. Circassian Proverbs & Sayings. Sanjalay Book Press, 2009. → pages iv

[60] S. D. Kamvar, M. T. Schlosser, and H. Garcia-Molina.
The EigenTrust algorithm for reputation management in P2P networks. In Proceedings of the 12th international conference on World Wide Web, pages 640–651. ACM, 2003. → pages 12, 41, 79

[61] C. Kanich, N. Weaver, D. McCoy, T. Halvorson, C. Kreibich, K. Levchenko, V. Paxson, G. M. Voelker, and S. Savage. Show me the money: Characterizing spam-advertised revenue. In USENIX Security Symposium, 2011. → pages 55

[62] A. M. Kaplan and M. Haenlein. Users of the world, unite! the challenges and opportunities of social media. Business horizons, 53(1):59–68, 2010. → pages 1

[63] E. J. Kartaltepe, J. A. Morales, S. Xu, and R. Sandhu. Social network-based botnet command-and-control: emerging threats and countermeasures. In Applied Cryptography and Network Security, pages 511–528. Springer, 2010. → pages 15, 32

[64] G. Karypis and V. Kumar. Multilevel k-way partitioning scheme for irregular graphs. Journal of Parallel and Distributed computing, 48(1):96–129, 1998. → pages 61

[65] H. Kim, J. Tang, and R. Anderson. Social authentication: harder than it looks. In Financial Cryptography and Data Security, pages 1–15. Springer, 2012. → pages 103

[66] T. H.-J. Kim, A. Yamada, V. Gligor, J. Hong, and A. Perrig. Relationgram: Tie-strength visualization for user-controlled online identity authentication. In Proceedings of Financial Cryptography and Data Security Conference, pages 69–77. Springer, 2013. → pages 107

[67] M. N. Ko, G. P. Cheek, M. Shehab, and R. Sandhu. Social-networks connect services. Computer, 43(8):37–43, 2010. → pages 24, 34

[68] C. Lampe, N. B. Ellison, and C. Steinfield. Changes in use and perception of facebook. In Proceedings of the 2008 ACM conference on Computer supported cooperative work, pages 721–730. ACM, 2008. → pages 20, 32

[69] T. Lauinger, V. Pankakoski, D. Balzarotti, and E. Kirda. Honeybot, your man in the middle for automated social engineering. In Proceedings of the 3rd USENIX Workshop on Large-Scale Exploits and Emergent Threats. USENIX Association, 2010. → pages 29

[70] J.
Leskovec and C. Faloutsos. Sampling from large graphs. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 631–636. ACM, 2006. → pages 81

[71] J. Leskovec and E. Horvitz. Planetary-scale views on a large instant-messaging network. In Proceedings of the 17th international conference on World Wide Web, pages 915–924. ACM, 2008. → pages 31, 74

[72] J. Leskovec, K. Lang, A. Dasgupta, and M. Mahoney. Community structure in large networks: Natural cluster sizes and the absence of large well-defined clusters. Internet Mathematics, 6(1):29–123, 2009. → pages 62, 72, 74

[73] Y. Liu, K. P. Gummadi, B. Krishnamurthy, and A. Mislove. Analyzing facebook privacy settings: user expectations vs. reality. In Proceedings of the 2011 ACM SIGCOMM conference on Internet measurement conference, pages 61–70. ACM, 2011. → pages 21, 105

[74] D. Lowd and C. Meek. Adversarial learning. In Proceedings of the 11th ACM International conference on Knowledge Discovery in Data Mining, pages 641–647. ACM, 2005. → pages 5, 28, 35, 61

[75] E. Maler and D. Reed. The venn of identity. IEEE Security and Privacy, 6(2):16–23, 2008. → pages 105

[76] M. Mannan and P. C. van Oorschot. Security and usability: the gap in real-world online banking. In Proceedings of the 2007 Workshop on New Security Paradigms, pages 1–14. ACM, 2008. → pages 108

[77] D. McCoy, A. Pitsillidis, G. Jordan, N. Weaver, C. Kreibich, B. Krebs, G. M. Voelker, S. Savage, and K. Levchenko. Pharmaleaks: Understanding the business of online pharmaceutical affiliate programs. In Proceedings of the 21st USENIX Conference on Security Symposium. USENIX Association, 2012. → pages 15, 21

[78] T. Merrill, K. Latham, R. Santalesa, and D. Navetta. Social media: The business benefits may be enormous, but can the risks (reputational, legal, operational) be mitigated? ACE Group, 2011. → pages 3

[79] D. Misener. Rise of the socialbots: They could be influencing you online. http://goo.gl/TX7c1p, March 2011. → pages 2

[80] A.
Mislove, M. Marcon, K. P. Gummadi, P. Druschel, and B. Bhattacharjee. Measurement and analysis of online social networks. In Proceedings of the 7th ACM SIGCOMM conference on Internet measurement, pages 29–42. ACM, 2007. → pages 24, 54

[81] A. Mislove, A. Post, P. Druschel, and P. K. Gummadi. Ostra: Leveraging trust to thwart unwanted communication. In Proceedings of the 5th USENIX Symposium on Networked Systems Design and Implementation, pages 15–30. USENIX Association, 2008. → pages 106

[82] A. Mislove, B. Viswanath, K. P. Gummadi, and P. Druschel. You are who you know: inferring user profiles in online social networks. In Proceedings of the third ACM international conference on Web search and data mining, pages 251–260. ACM, 2010. → pages 62, 80

[83] A. Mohaisen, A. Yun, and Y. Kim. Measuring the mixing time of social graphs. In Proceedings of the 10th annual conference on Internet measurement, pages 383–389. ACM, 2010. → pages 72

[84] A. Mohaisen, H. Tran, N. Hopper, and Y. Kim. On the mixing time of directed social graphs and security implications. In Proceedings of the 7th ACM Symposium on Information, Computer and Communications Security, pages 36–37. ACM, 2012. → pages 98

[85] M. Mondal, B. Viswanath, A. Clement, P. Druschel, K. P. Gummadi, A. Mislove, and A. Post. Limiting large-scale crawls of social networking sites. ACM SIGCOMM Computer Communication Review, 41(4):398–399, 2011. → pages 103, 106

[86] T. Moore, R. Clayton, and R. Anderson. The economics of online crime. The Journal of Economic Perspectives, 23(3):3–20, 2009. → pages 15

[87] M. Motoyama, K. Levchenko, C. Kanich, D. McCoy, G. M. Voelker, and S. Savage. Re: Captchas-understanding captcha-solving services in an economic context. In Proceedings of the 2010 USENIX Security Symposium, volume 10, pages 4–1. USENIX Association, 2010. → pages 16, 21, 22, 102

[88] M. Motoyama, D. McCoy, K. Levchenko, S. Savage, and G. M. Voelker. Dirty jobs: The role of freelance labor in web service abuse.
In Proceedings of the 20th USENIX conference on Security, pages 14–14. USENIX Association, 2011. → pages 15, 21, 22, 51, 53, 59

[89] R. Mourtada and F. Salem. Civil movements: The impact of facebook and twitter. Arab Social Media Report, 1(2):1–30, 2011. → pages 1

[90] S. Nagaraja, A. Houmansadr, P. Piyawongwisal, V. Singh, P. Agarwal, and N. Borisov. Stegobot: a covert social network botnet. In Information Hiding, pages 299–313. Springer, 2011. → pages 32

[91] F. Nagle and L. Singh. Can friends be trusted? exploring privacy in online social networks. In Proceedings of the IEEE/ACM International Conference on Advances in Social Network Analysis and Mining, pages 312–315. IEEE, 2009. → pages 17, 31

[92] M. Nanis, I. Pearce, and T. Hwang. Pacific social architecting corporation: Field test report. http://pacsocial.com, November 2011. → pages 18

[93] M. E. Newman. Modularity and community structure in networks. Proceedings of the National Academy of Sciences, 103(23):8577–8582, 2006. → pages 75

[94] R. Nishtala, H. Fugal, S. Grimm, M. Kwiatkowski, H. Lee, H. C. Li, R. McElroy, M. Paleczny, D. Peek, P. Saab, D. Stafford, T. Tung, and V. Venkataramani. Scaling memcache at Facebook. In Proceedings of the 10th USENIX Conference on Networked Systems Design and Implementation, NSDI'13, pages 385–398. USENIX Association, 2013. → pages 109

[95] J. Nolan and M. Levesque. Hacking human: data-archaeology and surveillance in social networks. ACM SIGGROUP Bulletin, 25(2):33–37, 2005. → pages 3, 15, 18, 40

[96] R. Potharaju, B. Carbunar, and C. Nita-Rotaru. iFriendU: Leveraging 3-cliques to enhance infiltration attacks in online social networks. In Proceedings of the 17th ACM conference on Computer and communications security, pages 723–725. ACM, 2010. → pages 17

[97] H. Rashtian, Y. Boshmaf, P. Jaferian, and K. Beznosov. To befriend or not? a model of friend request acceptance on facebook. In Symposium on Usable Privacy and Security (SOUPS), 2014. → pages 105

[98] J. Ratkiewicz, M. Conover, M. Meiss, B.
Gonçalves, A. Flammini, and F. Menczer. Detecting and tracking political abuse in social media. In Proceedings of the 5th International AAAI Conference on Weblogs and Social Media, 2011. → pages 3

[99] C. P. Robert and G. Casella. Monte Carlo Statistical Methods. Springer-Verlag, 2005. → pages 34, 109

[100] M. Scherer. Obama's 2012 digital fundraising outperformed 2008. http://goo.gl/pvYiQh, November 2012. → pages 1

[101] S. S. Silva, R. M. Silva, R. C. Pinto, and R. M. Salles. Botnets: A survey. Computer Networks, 57(2):378–403, 2013. → pages 7, 26, 33

[102] A. Sinclair. Improved bounds for mixing rates of Markov chains and multicommodity flow. In Proceedings of the 1st Latin American Symposium on Theoretical Informatics, pages 474–487. Springer-Verlag, 1992. → pages 77, 131

[103] D. A. Spielman and S.-H. Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, pages 81–90. ACM, 2004. → pages 65

[104] T. Stein, E. Chen, and K. Mangla. Facebook immune system. In Proceedings of the 4th Workshop on Social Network Systems, page 8. ACM, 2011. → pages 4, 5, 9, 17, 32, 55, 57, 58, 60, 63, 98, 103, 106, 109

[105] B. Stone-Gross, R. Abman, R. A. Kemmerer, C. Kruegel, D. G. Steigerwald, and G. Vigna. The underground economy of fake antivirus software. In Economics of Information Security and Privacy III, pages 55–78. Springer, 2013. → pages 15, 21

[106] L. J. Strahilevitz. A social networks theory of privacy. The University of Chicago Law Review, pages 919–988, 2005. → pages 4

[107] K. Strater and H. R. Lipford. Strategies and struggles with privacy in an online social networking community. In Proceedings of the 22nd British HCI Group Annual Conference on People and Computers: Culture, Creativity, Interaction, Volume 1, pages 111–119. British Computer Society, 2008. → pages 108

[108] G. Stringhini, C. Kruegel, and G. Vigna. Detecting spammers on social networks.
In Proceedings of the 26th Annual Computer Security Applications Conference, pages 1–9. ACM, 2010. → pages 4, 60, 109

[109] G. Stringhini, G. Wang, M. Egele, C. Kruegel, G. Vigna, H. Zheng, and B. Y. Zhao. Follow the green: growth and dynamics in twitter follower markets. In Proceedings of the 2013 conference on Internet measurement conference, pages 163–176. ACM, 2013. → pages 43, 56, 64

[110] S.-T. Sun, Y. Boshmaf, K. Hawkey, and K. Beznosov. A billion keys, but few locks: the crisis of web single sign-on. In Proceedings of the 2010 workshop on New security paradigms, pages 61–72. ACM, 2010. → pages 105

[111] D. Thibeau and D. Reed. Open trust frameworks for open government: Enabling citizen involvement through open identity technologies. White paper, OpenID Foundation and Information Card Foundation, 2009. → pages 105

[112] K. Thomas and D. M. Nicol. The Koobface botnet and the rise of social malware. In Proceedings of the 5th International Conference on Malicious and Unwanted Software, pages 63–70. IEEE, 2010. → pages 3, 102, 103

[113] K. Thomas, C. Grier, and V. Paxson. Adapting social spam infrastructure for political censorship. In Proceedings of the 5th USENIX workshop on Large-Scale Exploits and Emergent Threats, pages 13–23. USENIX Association, 2012. → pages 3, 15

[114] K. Thomas, D. McCoy, C. Grier, A. Kolcz, and V. Paxson. Trafficking fraudulent accounts: The role of the underground market in twitter spam and abuse. In Proceedings of the 22nd USENIX Conference on Security, pages 195–210. USENIX Association, 2013. → pages 3, 20, 21, 43, 51, 56

[115] Threat landscape research lab. Fortinet report on the anatomy of a botnet. Fortinet Corporation, 2013. → pages 52

[116] S. T. Tong, B. Van Der Heide, L. Langwell, and J. B. Walther. Too much of a good thing? the relationship between number of friends and interpersonal impressions on Facebook. Journal of Computer-Mediated Communication, 13(3):531–549, 2008. → pages 30

[117] D. N. Tran, B. Min, J. Li, and L. Subramanian.
Sybil-resilient online content voting. In NSDI, volume 9, pages 15–28, 2009. → pages 80

[118] N. Tran, J. Li, L. Subramanian, and S. S. Chow. Optimal sybil-resilient node admission control. In INFOCOM, 2011 Proceedings IEEE, pages 3218–3226. IEEE, 2011. → pages 12, 41, 79, 80

[119] Twitter. Investors relations: Annual earnings report. http://investor.twitterinc.com, October 2014. → pages 1

[120] J. Tygar. Adversarial machine learning. IEEE Internet Computing, 15(5), 2011. → pages 61, 69

[121] H. R. Varian and W. Norton. Microeconomic analysis, volume 2. Norton New York, 1992. → pages 46, 47, 54, 55

[122] B. Viswanath, A. Post, K. P. Gummadi, and A. Mislove. An analysis of social network-based sybil defenses. In Proceedings of ACM SIGCOMM Computer Communication Review, pages 363–374. ACM, 2010. → pages 12, 13, 62, 65, 75, 79, 80

[123] B. Viswanath, M. Mondal, A. Clement, P. Druschel, K. P. Gummadi, A. Mislove, and A. Post. Exploring the design space of social network-based sybil defenses. In Proceedings of the 4th International Conference on Communication Systems and Networks, pages 1–8. IEEE, 2012. → pages 5, 62

[124] L. Von Ahn, M. Blum, N. J. Hopper, and J. Langford. Captcha: Using hard AI problems for security. In Advances in Cryptology—EUROCRYPT 2003, pages 294–311. Springer, 2003. → pages 21, 102

[125] C. Wagner, S. Mitter, C. Körner, and M. Strohmaier. When social bots attack: Modeling susceptibility of users in online social networks. In Proceedings of the WWW, volume 12, 2012. → pages 58, 63, 64, 112

[126] G. Wang, C. Wilson, X. Zhao, Y. Zhu, M. Mohanlal, H. Zheng, and B. Y. Zhao. Serf and turf: crowdturfing for fun and profit. In Proceedings of the 21st international conference on World Wide Web, pages 679–688. ACM, 2012. → pages 21

[127] G. Wang, T. Konolige, C. Wilson, X. Wang, H. Zheng, and B. Y. Zhao. You are how you click: Clickstream analysis for sybil detection. In Proceedings of the 22nd USENIX Conference on Security, pages 241–256. USENIX Association, 2013.
→ pages 4, 60

[128] G. Wang, M. Mohanlal, C. Wilson, X. Wang, M. Metzger, H. Zheng, and B. Y. Zhao. Social turing tests: Crowdsourcing sybil detection. In Proceedings of the 20th Annual Network & Distributed System Security Symposium. ACM, 2013. → pages 63

[129] Y. Wang, P. G. Leon, K. Scott, X. Chen, A. Acquisti, and L. F. Cranor. Privacy nudges for social media: an exploratory facebook study. In Proceedings of the 22nd international conference on World Wide Web companion, pages 763–770. International World Wide Web Conferences Steering Committee, 2013. → pages 107

[130] D. J. Watts and S. H. Strogatz. Collective dynamics of small-world networks. Nature, 393(6684):440–442, 1998. → pages 87, 96

[131] Y. Xie, F. Yu, Q. Ke, M. Abadi, E. Gillum, K. Vitaldevaria, J. Walter, J. Huang, and Z. M. Mao. Innocent by association: early recognition of legitimate users. In Proceedings of the 2012 ACM conference on Computer and communications security, pages 353–364. ACM, 2012. → pages 99, 106

[132] G. Yan, G. Chen, S. Eidenbenz, and N. Li. Malware propagation in online social networks: nature, dynamics, and defense implications. In Proceedings of the 6th ACM Symposium on Information, Computer and Communications Security, pages 196–206. ACM, 2011. → pages 3, 15

[133] C. Yang, R. Harkreader, J. Zhang, S. Shin, and G. Gu. Analyzing spammers' social networks for fun and profit: a case study of cyber criminal ecosystem on twitter. In Proceedings of the 21st international conference on World Wide Web, pages 71–80. ACM, 2012. → pages 64

[134] Z. Yang, C. Wilson, X. Wang, T. Gao, B. Y. Zhao, and Y. Dai. Uncovering social network sybils in the wild. In Proceedings of the 2011 ACM SIGCOMM Internet Measurement Conference, pages 259–268. ACM, 2011. → pages 60, 63, 76

[135] Z. Yang, C. Wilson, X. Wang, T. Gao, B. Y. Zhao, and Y. Dai. Uncovering social network sybils in the wild. ACM Transactions on Knowledge Discovery from Data (TKDD), 8(1):2, 2014. → pages 4, 30

[136] S. Yardi, N. Feamster, and A. Bruckman.
Photo-based authentication using social networks. In Proceedings of the first workshop on Online social networks, pages 55–60. ACM, 2008. → pages 63, 103

[137] H. Yu. Sybil defenses via social networks: a tutorial and survey. ACM SIGACT News, 42(3):80–101, 2011. → pages 3, 5, 30, 41, 58, 62, 65, 87, 98

[138] H. Yu, M. Kaminsky, P. B. Gibbons, and A. Flaxman. Sybilguard: defending against sybil attacks via social networks. ACM SIGCOMM Computer Communication Review, 36(4):267–278, 2006. → pages 12, 62, 79

[139] H. Yu, P. B. Gibbons, M. Kaminsky, and F. Xiao. Sybillimit: A near-optimal social network defense against sybil attacks. In Proceedings of IEEE Symposium on Security and Privacy, pages 3–17. IEEE, 2008. → pages 12, 41, 62, 79

[140] C. M. Zhang and V. Paxson. Detecting and analyzing automated activity on twitter. In Passive and Active Measurement, pages 102–111. Springer, 2011. → pages 104

Appendix A

Security Analysis of Íntegro

In what follows, we provide the required background on random walks, after which we analyze the main security guarantee of Íntegro.

A.1 Background

Let $G = (V,E)$ be an undirected graph with $n = |V|$ nodes and $m = |E|$ undirected edges. Also, let $w : E \to \mathbb{R}^+$ be a function that assigns each edge $(v_i, v_j) \in E$ a weight $w(v_i, v_j) > 0$. The transition matrix $P$ is an $n \times n$ matrix, where each entry $p_{ij} \in [0,1]$ represents the probability of moving from node $v_i \in V$ to node $v_j \in V$, as defined by:

$$p_{ij} := \begin{cases} \dfrac{w(v_i, v_j)}{\deg(v_i)} & \text{if } (v_i, v_j) \in E, \\[4pt] 0 & \text{otherwise.} \end{cases} \quad \text{(A.1)}$$

The transition matrix $P$ might not be symmetric, but it is right-stochastic, as $P$ is a square matrix of non-negative real numbers and for each node $v_i \in V$,

$$\sum_{(v_i, v_j) \in E} p_{ij} = 1. \quad \text{(A.2)}$$

The event of moving from one node to another in $G$ is captured by a Markov chain representing a random walk over $G$. In turn, a random walk $W = \langle v_1, \ldots, v_i \rangle$ of length $i \ge 1$ over $G$ is a sequence of nodes that starts at the initial node $v_1$ and ends at the terminal node $v_i$, following the transition probability defined in Equation A.1.
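The transition matrix in Equation A.1 is straightforward to sketch in code. The following minimal Python illustration is not from Íntegro's implementation (graph, weights, and function names are all hypothetical); it treats $\deg(v_i)$ as the weighted degree of $v_i$, which is what makes every row of $P$ sum to one, as required by Equation A.2:

```python
# Sketch of the transition matrix P from Equation A.1 for a small weighted,
# undirected graph. All names and values here are illustrative.

def transition_matrix(weighted_edges):
    """Build P[vi][vj] = w(vi, vj) / deg(vi), where deg(vi) is the
    weighted degree of vi (the sum of its incident edge weights)."""
    adj, deg = {}, {}
    for vi, vj, w in weighted_edges:
        assert w > 0  # edge weights are strictly positive
        for a, b in ((vi, vj), (vj, vi)):  # undirected: add both directions
            adj.setdefault(a, {})[b] = w
            deg[a] = deg.get(a, 0.0) + w
    return {vi: {vj: w / deg[vi] for vj, w in nbrs.items()}
            for vi, nbrs in adj.items()}

edges = [("a", "b", 1.0), ("b", "c", 0.5), ("a", "c", 2.0)]
P = transition_matrix(edges)

# Equation A.2: P is right-stochastic, so every row sums to 1.
for row in P.values():
    assert abs(sum(row.values()) - 1.0) < 1e-9

print(P["a"]["b"])  # w(a, b) / deg(a) = 1.0 / (1.0 + 2.0)
```

The sparse dict-of-dicts layout mirrors Equation A.6, which computes each entry of the marginal distribution from a node's incident edges rather than from a dense matrix product.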
The Markov chain is called ergodic if it is irreducible and aperiodic. In this case, the Markov chain has a unique stationary distribution to which the random walk converges as $i \to \infty$. The stationary distribution $\pi$ of a Markov chain is a probability distribution that is invariant to the transition matrix, that is, $\pi P = \pi$. The stationary distribution of the Markov chain over $G$ is a $1 \times n$ probability vector, and is defined by:

$$\pi := \left[ \frac{\deg(v_1)}{\mathrm{vol}(V)} \; \ldots \; \frac{\deg(v_n)}{\mathrm{vol}(V)} \right], \quad \text{(A.3)}$$

where $\pi(v_j)$ is the $j$th entry in $\pi$ and represents the landing probability of node $v_j \in V$, and $\mathrm{vol}(U)$ is the volume of a node set $U \subseteq V$, which is defined by:

$$\mathrm{vol}(U) := \sum_{v_j \in U} \deg(v_j). \quad \text{(A.4)}$$

The marginal distribution $\pi_i$ of the Markov chain over $G$ is a $1 \times n$ probability vector, where $\pi_i(v_j)$ is the landing probability of node $v_j \in V$ at step $i$ of the random walk. Given an initial distribution $\pi_0$, the marginal distribution $\pi_i$ can be iteratively defined by:

$$\pi_i := \pi_{i-1} P = \pi_0 P^i, \quad \text{(A.5)}$$

and accordingly, $\pi_i(v_j)$ can be computed by [40]:

$$\pi_i(v_j) = \sum_{(v_k, v_j) \in E} \pi_{i-1}(v_k) \cdot \frac{w(v_k, v_j)}{\deg(v_k)}. \quad \text{(A.6)}$$

The total variation distance $\|\pi_i - \pi\|_{TV}$ between the marginal and stationary distributions is a measure of how "close" these distributions are, and is defined by

$$\|\pi_i - \pi\|_{TV} := \frac{1}{2} \sum_{v_j \in V} |\pi_i(v_j) - \pi(v_j)|. \quad \text{(A.7)}$$

The mixing time $T(\varepsilon)$ of the Markov chain over $G$, when parametrized by a relative variation error $\varepsilon > 0$, is the minimal length of the random walk required for the marginal distribution to be $\varepsilon$-close to the stationary distribution in total variation distance, and is defined by

$$T(\varepsilon) := \min \left\{ i : \|\pi_i - \pi\|_{TV} \le \varepsilon \right\}. \quad \text{(A.8)}$$

It thus follows that if $i \ge T(\varepsilon)$, we have $\pi_i \approx \pi$.

A.2 Mathematical Proofs

We next start by proving that reassigning edge weights in an undirected graph changes its mixing time by only a constant factor. We subscript the used notation in order to differentiate between different graphs when necessary. For a given
For a givenOSN, we refer to its social graph after rate adjustment as the defense graph D.Lemma 1: Given a social graph G with a mixing time TG(ε), the correspondingdefense graph D after rate adjustment has a mixing time TD(ε) = O(TG(ε)).Proof. Recall that the mixing time of an undirected graph G = (V,E) is boundedby [102]:λ2(1−λ ) log( 12ε)≤ T (ε)≤log(n)+ log(1ε)1−λ , (A.9)where λ ∈ (−1,1) is the second largest eigenvalue of the transition matrix P ofthe graph G. For a social graph G and its defense graph D, we haveTD(ε)TG(ε)≤1−λG1−λD = O(1),131and thus, TD(ε) = O(TG(ε)).Given the bound in Equation A.9, a Markov chain over a graph G is fast mix-ing if T (ε) is polynomial in logn and log(1/ε). In the context of fake accountdetection, we consider the stricter case when ε = O(1/n), and so we call G fastmixing if T (ε) = O(logn).Let us refer to the landing probability pii(v j) of a node v j ∈V as its trust valueso that pii is the trust distribution in step i.1 Moreover, let the expansion χ(U) of anode set U ⊆V be defined byχ(U) := vol(∂ (U))vol(U) , (A.10)where ∂ (U) = {(vi,v j) ∈ E : vi ∈U, v j ∈ V \U} is the edge boundary of U . Inour threat model, we have ∂ (Vr) = ∂ (Vf ) = Ea, where edges in Ea are establishedat random.We now prove that during a random walk on the defense graph D = (V,E),where the walk starts from a known real node picked at random in the real region,the expected aggregate trust in the fake region D f monotonically increases bydiminishing increments until it converges to its stationary value.Lemma 2: Given a defense graph D with g = |Ea| randomly established attackedges, n0≥ 1 trusted real nodes, a total trust τ ≥ 1, and an initial trust distributionpi0(v j) =τ/n0 if v j is a trusted node,0 otherwise,the expected aggregate trust over the fake region in the (i+1)-th iterations in-creases by an amount of (χ(Vr) · τ)(1−χ(Vr)−χ(Vf ))i for each i≥ 0.Proof. We prove the lemma by induction. 
We use the iterative method described in Equation A.5 to compute the trust distribution for a random walk that starts from a trusted node. (In Chapter 3, we denoted the trust value $\pi_i$ by $T_i$; we use $\pi_i$ herein because it is the standard notation used in analyzing stochastic processes [5].) We first define some notation. Let $\pi_i(V_r)$ be the aggregate trust in the real region $D_r$ after iteration $i$, as defined by

$$\pi_i(V_r) := \sum_{v_j \in V_r} \pi_i(v_j). \quad \text{(A.11)}$$

Similarly, let $\pi_i(V_f)$ be the aggregate trust in $D_f$ after iteration $i$. As defined by $\pi_0$, initially, we have $\pi_0(V_r) = \tau$ and $\pi_0(V_f) = 0$. Moreover, the total trust $\tau$ is conserved during the iterations, that is, $\pi_i(V_r) + \pi_i(V_f) = \tau$ for each $i \ge 0$.

In each iteration $i$, the total trust is redistributed in the graph. Consider iteration $i+1$. For each $v_i, v_j \in V_r$, the edge $(v_i, v_j) \in E$ carries $w(v_i, v_j)(\pi_i(V_r)/\mathrm{vol}(V_r))$ trust on average. As the $|\partial(V_r)|$ attack edges are established at random, it is expected that $\mathrm{vol}(\partial(V_r))(\pi_i(V_r)/\mathrm{vol}(V_r))$ trust, that is, $\chi(V_r) \cdot \pi_i(V_r)$, is passed through these edges to the fake region. The same also holds for the fake region, which means we can model the trust exchange between $D_r$ and $D_f$ by

$$\pi_{i+1}(V_r) = \pi_i(V_r) + \chi(V_f) \cdot \pi_i(V_f) - \chi(V_r) \cdot \pi_i(V_r) \quad \text{and}$$

$$\pi_{i+1}(V_f) = \pi_i(V_f) + \chi(V_r) \cdot \pi_i(V_r) - \chi(V_f) \cdot \pi_i(V_f),$$

where the total trust $\tau$ is conserved throughout the process, as follows:

$$\pi_{i+1}(V_r) + \pi_{i+1}(V_f) = \pi_i(V_r) + \pi_i(V_f) = \tau.$$

We now consider the base case of this lemma. Initially, for iteration $i = 0$, we have $\chi(V_r) \cdot \pi_0(V_r) = \chi(V_r) \cdot \tau$ and $\chi(V_f) \cdot \pi_0(V_f) = 0$. Therefore,

$$\pi_1(V_f) - \pi_0(V_f) = \chi(V_r) \cdot \tau.$$

We next state the induction hypothesis.
For each i ≥ 1, let us assume the following statement is true:

    π_i(V_f) − π_{i−1}(V_f) = (χ(V_r) · τ)(1 − χ(V_r) − χ(V_f))^{i−1}.

Now, let us consider the trust exchange in iteration i+1:

    π_{i+1}(V_f) − π_i(V_f) = χ(V_r) · π_i(V_r) − χ(V_f) · π_i(V_f).

By substituting π_i(V_r) with π_{i−1}(V_r) + χ(V_f) · π_{i−1}(V_f) − χ(V_r) · π_{i−1}(V_r), and doing similarly for π_i(V_f), we get

    π_{i+1}(V_f) − π_i(V_f)
        = χ(V_r) · ((1 − χ(V_r)) · π_{i−1}(V_r) + χ(V_f) · π_{i−1}(V_f))
          − χ(V_f) · ((1 − χ(V_f)) · π_{i−1}(V_f) + χ(V_r) · π_{i−1}(V_r))
        = (χ(V_r) · π_{i−1}(V_r) − χ(V_f) · π_{i−1}(V_f)) · (1 − χ(V_r) − χ(V_f)).

We know that π_i(V_f) = π_{i−1}(V_f) + χ(V_r) · π_{i−1}(V_r) − χ(V_f) · π_{i−1}(V_f), and so

    π_{i+1}(V_f) − π_i(V_f) = (π_i(V_f) − π_{i−1}(V_f)) · (1 − χ(V_r) − χ(V_f)).

Finally, by the induction hypothesis, we end up with

    π_{i+1}(V_f) − π_i(V_f) = (χ(V_r) · τ)(1 − χ(V_r) − χ(V_f))^i,

which, by induction, completes the proof.

Corollary 2: In the i-th iteration, the expected increment of the aggregate trust in the fake region is upper bounded by (χ(V_r) · τ)(1 − χ(V_r))^i for each i ≥ 0.

We next bound the aggregate trust in the fake region π_i(V_f) after β iterations, where 1 ≤ β ≤ T(ε) − ∆ and ∆ > 1 is a positive natural number. We achieve this by directly comparing π_β(V_f) to its stationary value π_{T(ε)}(V_f), where T(ε) ≥ β + ∆. In fact, this result holds as long as there is at least a constant difference between the mixing time T(ε) and β, or in other words, whenever T(ε) − β = Ω(1) and T(ε) is not arbitrarily large.

Lemma 3: Given a defense graph D with a mixing time T(ε) ≥ 1 and a positive integer β ∈ [1, T(ε) − ∆] where ∆ > 1, the aggregate trust in the fake region π_β(V_f) after β iterations is a fraction f ∈ (0, 1) of that in the stationary distribution, that is, π_β(V_f) = f · τ · (vol(V_f)/vol(V)), where

    f = (β · Σ_{0≤i≤β−∆} (1 − χ(V_r) − χ(V_f))^i) / (T(ε) · Σ_{0≤i≤T(ε)−∆} (1 − χ(V_r) − χ(V_f))^i).    (A.12)

Proof. By Lemma 2, we know that the aggregate trust in the fake region monotonically increases with the number of iterations in the process defined by Equation A.5.
For iteration i = β, where 1 ≤ β ≤ T(ε) − ∆, we have

    π_β(V_f) = Σ_{0≤i≤β−∆} (χ(V_r) · τ)(1 − χ(V_r) − χ(V_f))^i
             = (β · χ(V_r) · τ) · Σ_{0≤i≤β−∆} (1 − χ(V_r) − χ(V_f))^i.

Similarly, for iteration i = γ, where γ = T(ε) ≥ β + ∆, we have

    π_γ(V_f) = Σ_{0≤i≤γ−∆} (χ(V_r) · τ)(1 − χ(V_r) − χ(V_f))^i
             = (γ · χ(V_r) · τ) · Σ_{0≤i≤γ−∆} (1 − χ(V_r) − χ(V_f))^i.

Now let us consider the ratio π_β(V_f)/π_γ(V_f). We have

    π_β(V_f) / π_γ(V_f) = ((β · χ(V_r) · τ) · Σ_{0≤i≤β−∆} (1 − χ(V_r) − χ(V_f))^i) / ((γ · χ(V_r) · τ) · Σ_{0≤i≤γ−∆} (1 − χ(V_r) − χ(V_f))^i).

By multiplying both sides by π_γ(V_f), we get

    π_β(V_f) = (β · Σ_{0≤i≤β−∆} (1 − χ(V_r) − χ(V_f))^i) / (γ · Σ_{0≤i≤γ−∆} (1 − χ(V_r) − χ(V_f))^i) · π_γ(V_f).    (A.13)

Now, recall that π_γ(v_j) = τ · π(v_j) = τ · (deg(v_j)/vol(V)) for each v_j ∈ V_f, where π is the stationary distribution of the graph D (see Equation A.3). Accordingly,

    π_β(V_f) = (β · Σ_{0≤i≤β−∆} (1 − χ(V_r) − χ(V_f))^i) / (γ · Σ_{0≤i≤γ−∆} (1 − χ(V_r) − χ(V_f))^i) · τ · (vol(V_f)/vol(V))
             = f · τ · (vol(V_f)/vol(V)).

Finally, as β ≤ γ − ∆, we have β/γ ≤ (γ − ∆)/γ. As γ is not arbitrarily large, β/γ < 1 holds. Therefore, f < 1.

As the total trust τ is conserved, Corollary 3 below directly follows.

Corollary 3: For a positive number ∆ > 1, if the aggregate trust in the fake region after 1 ≤ β ≤ T(ε) − ∆ iterations is a fraction f ∈ (0, 1) of that in the stationary distribution, then the aggregate trust in the real region during the same iteration is c > 1 times that in the stationary distribution, that is, π_β(V_r) = c · τ · (vol(V_r)/vol(V)), where c = 1 + (1 − f) · (vol(V_f)/vol(V_r)).

Given a fraction f < 1 and a multiplier c > 1, as defined by Lemma 3 and Corollary 3, the trust distribution over the nodes in D after β iterations is defined by

    π_β(v_j) = f · τ · (deg(v_j)/vol(V)) if v_j ∈ V_f (with f < 1), and
               c · τ · (deg(v_j)/vol(V)) if v_j ∈ V_r (with c > 1).    (A.14)

Moreover, let π̄_β(v_j) = π_β(v_j)/deg(v_j) be the degree-normalized trust of each v_j ∈ V, as derived from Equation A.14.
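The geometric decay of the increments in Lemma 2 is easy to check numerically. The sketch below simulates the two-region trust-exchange recurrence from the lemma's proof; the values of `chi_r`, `chi_f`, and `tau` are illustrative assumptions, not measurements from any particular graph.

```python
# Numerical sanity check of the trust-exchange recurrence in Lemma 2.
# chi_r = χ(V_r), chi_f = χ(V_f), tau = total trust (illustrative values).

chi_r, chi_f, tau = 0.05, 0.20, 100.0

pi_r, pi_f = tau, 0.0  # initially, all trust sits on the trusted real nodes
increments = []
for i in range(10):
    # one redistribution step: trust flows across the attack edges in both
    # directions, proportionally to each region's expansion
    new_pi_r = pi_r + chi_f * pi_f - chi_r * pi_r
    new_pi_f = pi_f + chi_r * pi_r - chi_f * pi_f
    increments.append(new_pi_f - pi_f)
    pi_r, pi_f = new_pi_r, new_pi_f

# Lemma 2 predicts the i-th increment equals (chi_r * tau) * (1 - chi_r - chi_f)**i
for i, inc in enumerate(increments):
    predicted = (chi_r * tau) * (1.0 - chi_r - chi_f) ** i
    assert abs(inc - predicted) < 1e-9

# the total trust tau is conserved throughout the process
assert abs(pi_r + pi_f - tau) < 1e-9
```

With these example parameters the fake region's aggregate trust climbs toward its stationary value of τ · χ(V_r)/(χ(V_r) + χ(V_f)) = 20 by strictly diminishing increments, exactly as the lemma predicts.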
We next prove that at most (f/c) · vol(V_f) fake nodes can have degree-normalized trust, or rank, values higher than or equal to (c · τ)/vol(V).

Lemma 4: Consider a defense graph D with a mixing time γ = T(ε), a fraction f ∈ (0, 1), and a multiplier c > 1 such that π_β(V_r) = c · π_γ(V_r) and π_β(V_f) = f · π_γ(V_f) after 1 ≤ β ≤ T(ε) − ∆ power iterations for some ∆ > 1. Regardless of how an attacker organizes the fake region, there can be only a set U ⊂ V_f of at most (f/c) · vol(V_f) fake nodes such that each v_j ∈ U has a degree-normalized trust π̄_β(v_j) ≥ (c · τ)/vol(V).

Proof. For an attacker, the optimal strategy to maximize the cardinality of the set U is to assign (c · τ)/vol(V) degree-normalized trust to as many fake nodes as possible, and to leave the rest of the fake nodes with zero trust.

We now prove by contradiction that |U| ≤ (f/c) · vol(V_f). Assume the opposite, where |U| > (f/c) · vol(V_f). Since each node v_j ∈ U is connected to at least one other node v_k ∈ U, as otherwise it would be disconnected with π̄_β(v_j) = 0, the aggregate degree-normalized trust π̄_β(V_f) in the fake region is

    π̄_β(V_f) = |U| · (c · τ)/vol(V)    (A.15)
             > (f/c) · vol(V_f) · (c · τ)/vol(V)    (A.16)
             = f · τ · (vol(V_f)/vol(V)),    (A.17)

which by Lemma 3 is a contradiction.

Finally, we prove an upper bound on (f/c) · vol(V_f), as follows.

Theorem 2: Given a social graph with a fast mixing real region and an attacker that randomly establishes attack edges, the number of fake nodes that rank similar to or higher than real nodes after β = O(log n) iterations is O(vol(E_a) · log n).

Proof. Consider a social graph G and its defense graph D. By Lemma 1, we know that re-weighting G changes its mixing time by only a constant factor. Therefore, we have T_D(ε) = O(T_G(ε)) and T_{D_r}(ε) = O(T_{G_r}(ε)). As G_r is fast mixing, we also have T_{G_r}(ε) = O(log n) by definition, and so T_{D_r}(ε) = O(log n). As V_f ≠ ∅, then by the bound in Equation A.9, we have T_D(ε) − T_{D_r}(ε) = Ω(1). So, T_D(ε) − β = Ω(1) for β = T_{D_r}(ε) = O(log n).
Finally, by Lemma 4, we know that at most (f/c) · vol(V_f) fake nodes can rank similar to or higher than real nodes. We now prove an upper bound on this quantity.

As the total trust τ is conserved after β = O(log n) iterations, we have

    f · τ · (vol(V_f)/vol(V)) + c · τ · (vol(V_r)/vol(V)) = τ.

That is,

    (f/c) · vol(V_f) = vol(V_r) / (vol(V)/(f · vol(V_f)) − 1).

By Lemma 3, we have

    (f/c) · vol(V_f) = vol(V_r) / ((τ/π_β(V_f)) − 1).    (A.18)

Now, by Lemma 2 and Corollary 2, we have

    π_β(V_f) = Σ_{0≤i≤β−1} (χ(V_r) · τ)(1 − χ(V_r) − χ(V_f))^i
             < Σ_{0≤i≤β−1} (χ(V_r) · τ)(1 − χ(V_r))^i
             = Σ_{0≤i≤β−1} τ · ((1 − χ(V_r))^i − (1 − χ(V_r))^{i+1})
             = τ · (1 − (1 − χ(V_r))^β).    (A.19)

By combining Equations A.18 and A.19, we get

    (f/c) · vol(V_f) < vol(V_r) · ((1 − χ(V_r))^{−β} − 1).

By replacing (1 − χ(V_r))^{−β} with its Maclaurin series 1 + β · χ(V_r) + o(χ²(V_r)), we end up with the following:

    (f/c) · vol(V_f) < vol(V_r) · ((1 − χ(V_r))^{−β} − 1)
                     = vol(V_r) · O(χ(V_r)) · β
                     = vol(V_r) · O(χ(V_r)) · O(log n)
                     = O(vol(∂(V_r)) · log n)
                     = O(vol(E_a) · log n),

which completes the proof.

Appendix B

Evaluating Sybil Node Detection Algorithms with SyPy

SyPy is a Python package for Sybil node detection in social and information networks. We designed SyPy to evaluate the effectiveness of graph-based Sybil node detection algorithms on a small scale, that is, on networks that consist of thousands of nodes and fit into a single commodity machine's memory. The package is not designed to benchmark the efficiency of these algorithms at a large scale, as there are existing distributed systems that better support large-scale evaluations.

B.1 Framework

SyPy provides an extensible framework to design, implement, and evaluate graph-based Sybil node detection algorithms. For example, SyPy includes 8 off-the-shelf algorithms for fake account detection in online social networks (OSNs), including Íntegro and SybilRank. In what follows, we describe the main abstractions defined in this framework.

B.1.1 Graphs and Regions

A region has a well-defined graph structure and is either Sybil (i.e., fake) or honest (i.e., real).
For example, one can create a Sybil region consisting of 100 nodes that are fully connected (i.e., a complete graph) as follows:

    import sypy

    sybil_region = sypy.Region(
        graph=sypy.CompleteGraph(num_nodes=100),
        name="SybilCompleteGraph",
        is_sybil=True
    )

SyPy supports many random graph models, such as scale-free and small-world graphs. For example, one can create an honest region that models a community of 5000 users as follows:

    honest_region = sypy.Region(
        graph=sypy.SmallWorldGraph(
            num_nodes=5000,
            node_degree=100,
            rewiring_prob=0.8
        ),
        name="HonestSmallWorldGraph"
    )

In order to specify known honest accounts, one can pick nodes at random from the honest region as follows:

    honest_region.pick_random_honest_nodes(num_nodes=10)

B.1.2 Networks

A network always consists of two regions: the honest region, positioned to the left, and the Sybil region, positioned to the right. The regions can have any graph structure, and initially they are disconnected. We can "stitch" the two regions together in any way that resembles how Sybils connect with honest nodes in real-world networks (e.g., using a random or targeted infiltration strategy). For example, one can use the regions defined above to create a network and connect ten randomly picked pairs of nodes, one node from each region in each pair, as follows:

    social_network = sypy.Network(
        left_region=honest_region,
        right_region=sybil_region,
        name="OnlineSocialNetwork"
    )

    social_network.random_pair_stitch(num_edges=10)

B.1.3 Detectors

After the network is created, one can run any of the supported graph-based Sybil node detectors and compute their detection performance.
For example, we can use SybilRank for fake account detection in OSNs as follows:

    detector = sypy.SybilRankDetector(social_network)
    results = detector.detect()

    print "sensitivity={0:.2f}, specificity={1:.2f}".format(
        results.sensitivity(),
        results.specificity()
    )

B.2 Benchmarks

SyPy offers a set of benchmarks to evaluate the effectiveness of detectors using ROC analysis, given a suitable operating threshold such as a pivot along a ranked list of nodes. Benchmarks also compute the curve's AUC, and are generally used to perform sensitivity analysis.

    ranking_benchmark = sypy.MultipleDetectorsBenchmark(
        detectors=[sypy.SybilRankDetector, sypy.IntegroDetector],
        network=social_network,
        thresholds=["pivot", "pivot"]
    )
    ranking_benchmark.run()
    ranking_benchmark.plot_curve(file_name="roc_analysis.pdf")

    for benchmark in ranking_benchmark:
        print "detector={0}, auc={1:.2f}".format(
            benchmark.detector,
            benchmark.auc
        )

SyPy also supports advanced benchmarks that evaluate detectors against a dynamic network, where the graph structure can change through, for example, the establishment of new attack edges. One can benchmark detectors under different numbers of attack edges as follows, where the output is a curve comparing the number of attack edges to the AUC of each detector:

    edges_benchmark = sypy.AttackEdgesDetectorsBenchmark(
        multi_benchmark=ranking_benchmark,
        values=[1, 10, 100, 1000, 10000]  # attack edges
    )
    edges_benchmark.run()
    edges_benchmark.plot_curve(file_name="edges_vs_auc.pdf")

B.3 Extensibility

SyPy is built on top of NetworkX,¹ and hence it supports most of NetworkX's functionality, which also makes it easily extensible. For example, one can visualize any region or network as follows:

    sybil_region.visualize()
    sybil_region.visualize(
        file_path="{0}.pdf".format(sybil_region.name)
    )

    social_network.visualize()
    social_network.visualize(
        file_path="{0}.pdf".format(social_network.name)
    )

¹http://networkx.github.io

Figure B.1: SyPy network visualization

Figure B.1 depicts what a network visualization in SyPy looks like.
Notice that SyPy color-codes the important elements of the network, along with a useful legend. In addition, one can interactively zoom in and out, move the graph left or right, and save the visualization in many formats.
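For readers who want to see the ranking scheme of Appendix A end to end without installing SyPy, the following is a minimal, self-contained sketch of SybilRank-style trust propagation: seed a total trust τ on a known honest node, run O(log n) power iterations over a toy two-region graph, and rank all nodes by degree-normalized trust. The toy graph, seed choice, and iteration count are illustrative assumptions for this sketch, not SyPy's actual implementation.

```python
import math

# Toy two-region graph: an honest clique {0..4}, a Sybil clique {5..9},
# and a single attack edge (4, 5). This graph is an illustrative assumption.
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)]
edges += [(i, j) for i in range(5, 10) for j in range(i + 1, 10)]
edges += [(4, 5)]  # the attack edge

adj = {v: [] for v in range(10)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

def propagate_trust(adj, seeds, tau, num_iters):
    """Power-iteration trust propagation: in every round, each node
    splits its current trust equally among its neighbors."""
    trust = {v: 0.0 for v in adj}
    for s in seeds:
        trust[s] = tau / len(seeds)
    for _ in range(num_iters):
        nxt = {v: 0.0 for v in adj}
        for v in adj:
            share = trust[v] / len(adj[v])
            for u in adj[v]:
                nxt[u] += share
        trust = nxt
    return trust

n = len(adj)
num_iters = int(math.ceil(math.log(n)))  # O(log n) early termination
trust = propagate_trust(adj, seeds=[0], tau=float(n), num_iters=num_iters)

# Rank nodes by degree-normalized trust; with early termination, the honest
# nodes should dominate the top of the ranking, as Theorem 2 suggests.
ranked = sorted(adj, key=lambda v: trust[v] / len(adj[v]), reverse=True)
top_half = set(ranked[:5])
```

Because propagation is cut off after O(log n) iterations, most of the trust is still confined to the fast-mixing honest region, so all five honest nodes land in the top half of the ranking while the Sybil clique, reachable only through the single attack edge, is starved of trust.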

