
UBC Theses and Dissertations


Goal-driven exploration for Android applications Lai, Duling 2019


Full Text

Goal-driven Exploration for Android Applications

by

Duling Lai

B.A.Sc, Simon Fraser University, 2016

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF APPLIED SCIENCE
in
THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES
(Electrical and Computer Engineering)

The University of British Columbia
(Vancouver)

July 2019

© Duling Lai, 2019

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, a thesis entitled:

Goal-driven Exploration for Android Applications

submitted by Duling Lai in partial fulfillment of the requirements for the degree of Master of Applied Science in Electrical and Computer Engineering.

Examining Committee:

Julia Rubin, Electrical and Computer Engineering (Supervisor)
Karthik Pattabiraman, Electrical and Computer Engineering (Supervisory Committee Member)
Philippe Kruchten, Electrical and Computer Engineering (Supervisory Committee Member)

Abstract

This thesis proposes a solution for automated goal-driven exploration of Android applications – a scenario in which a user, e.g., a security auditor, needs to dynamically trigger the functionality of interest in an application, e.g., to check whether user-sensitive information is only sent to recognized third-party servers. As the auditor might need to check hundreds or even thousands of apps, manually exploring each app to trigger the desired behavior is too time-consuming to be feasible. Existing automated application exploration and testing techniques are of limited help in this scenario as well, as their goal is mostly to identify faults by systematically exploring different app paths, rather than swiftly navigating to the target functionality.

The goal-driven application exploration approach proposed in this thesis, called GOALEXPLORER, automatically generates an executable test script that directly triggers the functionality of interest.
The core idea behind GOALEXPLORER is to first statically model the application UI screens and transitions between these screens, producing a Screen Transition Graph (STG). Then, GOALEXPLORER uses the STG to guide the dynamic exploration of the application to the particular target of interest: an Android activity, API call, or a program statement. The results of our empirical evaluation on 93 benchmark applications and the 95 most popular Google Play applications show that the STG is substantially more accurate than other Android UI models and that GOALEXPLORER is able to trigger a target functionality much faster than existing application exploration techniques.

Lay Summary

Goal-driven exploration of mobile applications is a scenario in which a user needs to promptly trigger a specific functionality of an application, to explore its behavior. Goal-driven exploration is useful in a variety of tasks, e.g., when a security auditor needs to inspect an application for compliance with security protocols. Existing application testing techniques are not suitable for goal-driven exploration because they cannot promptly navigate to the functionality of interest.

In this thesis, we present our approach to goal-driven exploration, called GOALEXPLORER. Our main insight is to first analyze the application without running it and construct a model of its user interface. Then, we run the application and use the constructed model for identifying a set of user gestures that navigate the execution towards the desired, user-selected functionality.

The evaluation results show that GOALEXPLORER performs substantially better than other existing approaches, providing an efficient and effective solution to goal-driven exploration.

Preface

The research project included in this thesis is the original intellectual product of its author, Duling Lai, done under the supervision of Prof. Julia Rubin (supervisor).

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
List of Acronyms
Acknowledgments
Dedication
1 Introduction
  1.1 Problem Statement
  1.2 Research Objective
  1.3 Research Contributions
  1.4 Thesis Organization
2 Background
  2.1 Android Applications
    2.1.1 App Components
    2.1.2 Component Lifecycle
    2.1.3 App UI Structure
3 Related Work
  3.1 Android Testing
    3.1.1 Existing Android UI Models
    3.1.2 Automated App Exploration
  3.2 Desktop and Web Testing
4 GOALEXPLORER
  4.1 Screen Transition Graph
  4.2 STG Extractor
    4.2.1 Topology Extractor
    4.2.2 Screen Builder
    4.2.3 Inter-Component Analyzer
  4.3 Target Detector
  4.4 Dynamic Explorer
    4.4.1 Action Picker
    4.4.2 Input Generator
    4.4.3 Actuator
5 Evaluation
  5.1 Experimental Setup
    5.1.1 Methodology
    5.1.2 Exploration Targets
    5.1.3 Subject Apps
    5.1.4 Metrics
    5.1.5 Environment
  5.2 Results
    5.2.1 RQ1 (Coverage)
    5.2.2 RQ2 (Performance)
    5.2.3 RQ3 (Accuracy)
    5.2.4 Evaluation Summary
6 Discussion
  6.1 Summary
  6.2 Limitations and Threats to Validity
    6.2.1 Limitations
    6.2.2 Threats to Validity
  6.3 Future Research Directions
  6.4 Conclusion
Bibliography

List of Tables

Table 4.1 Regular Expressions for Login Detection
Table 5.1 Coverage results for the benchmark and Google Play apps.
Table 5.2 Performance results for the benchmark and Google Play apps.

List of Figures

Figure 1.1 Example App - ZEDGE.
Figure 2.1 Activity Lifecycle Events.
Figure 2.2 Application Screens.
Figure 4.1 GOALEXPLORER Overview.
Figure 4.2 WTG (solid lines) and WTG-E (solid and dotted lines) of Zedge.
Figure 4.3 STG of Zedge.
Figure 5.1 Coverage results for the benchmark and Google Play apps.
Figure 5.2 Performance results for the benchmark and Google Play apps.

List of Acronyms

ADB: Android Debug Bridge
API: Application Programming Interface
APK: Android Package
ATG: Activity Transition Graph
FN: False Negative
FP: False Positive
ICC: Inter-Component Communication
IETF: Internet Engineering Task Force
MAC: Message Authentication Code
NLP: Natural Language Processing
OAuth: Open Authorization
RQ: Research Question
SDK: Software Development Kit
STG: Screen Transition Graph
UI: User Interface
WTG: Window Transition Graph

Acknowledgments

Throughout the years of my graduate study, I have received a great deal of support and assistance from my supervisor, Dr. Julia Rubin, whose expertise and patient guidance were invaluable in formulating the thesis topic and methodology in particular. This thesis would not have been completed without her continuous support. She introduced me to the research world, yet had a more profound influence on my life beyond research.

I would also like to thank my examination committee, Dr. Philippe Kruchten and Dr.
Karthik Pattabiraman, for reviewing this thesis and providing constructive comments.

I offer my enduring gratitude to my colleagues and supportive friends at UBC, who have inspired and supported me to work in this field. I would like to thank Junbin Zhang, in particular, for his insightful comments and generous help with the evaluation of the research project. I owe my gratitude to Khaled Ahmed for the inspiring discussions that helped devise the methodology. I also want to thank Yingying Wang for helping me formulate this thesis and providing me with canny advice on my study.

Special thanks are owed to my beloved parents, who have supported me throughout my years of education, both morally and financially. Their love and encouragement made me never feel alone in a country far from home.

Dedication

To my parents

Chapter 1
Introduction

1.1 Problem Statement

Mobile applications (apps) have grown from a niche field to the forefront of modern technology. Mobile phone users now have access to more than 5.5 million apps in the major online stores [64] and at least as many more in alternative stores [12]. As our daily life becomes more dependent on mobile apps, various stakeholders, including app developers, store owners, and security auditors, require efficient techniques for validating the correctness, performance, and security of the apps.

Dynamic execution is a popular technique used for auditing [34, 37, 66, 72] and validating [18, 45, 60, 33] mobile apps. For example, a security auditor often monitors the network traffic generated by an app to ensure that user-sensitive information is sent encrypted over the network [45, 60] and that it is sent to recognized third-party servers only [18, 33].
It is common for the auditor to know which part of the app is responsible for the functionality of interest, but dynamically triggering that functionality is still a challenging task.

Consider, for example, a popular mobile personalization app, Zedge [74], which provides access to a large collection of wallpapers, ringtones, etc., and has more than a hundred million installs on Google Play. A security auditor exploring this app can quickly determine that it uses the Facebook SDK [16] and allows the users to log in with their Facebook accounts. Such a determination can be made by simply browsing the app code: the Facebook login is triggered by a particular API from the Facebook SDK [15]. Now, the auditor wishes to check what information is transmitted during the login process and what information is granted by Facebook when the user logs into their account [59].

Figure 1.1: Example App - ZEDGE.

To perform this investigation, the auditor needs to generate a set of user gestures that navigate the app to the Facebook login screen. For Zedge, that entails a nontrivial sequence of steps shown in Figure 1.1: first, scroll all the way down in the menu of all possible operations in Figures 1.1(a-b) to the 11th menu item, then click the "Settings" button in Figure 1.1(b) to open the "Settings" view, then click "ZEDGE account" in Figure 1.1(c), and only then click on "Continue with Facebook" in Figure 1.1(d).

Such a manual exploration, especially when one needs to analyze hundreds or thousands of different apps, is time-consuming. Existing testing frameworks, such as Espresso [24], UIAutomator [28], and Robotium [61], are not helpful in this scenario, as they are designed to provide the user a way to manually script test cases and later repeatedly execute these test cases.

Automated app exploration techniques that exercise the app by generating user gestures and system events [1, 5, 10, 32, 35, 46, 47, 48, 54, 65] are of limited help as well.
The goal of these techniques is to detect faults through a thorough exploration of different app behaviors [42]. However, when there is a large number of feasible paths to explore, these techniques have difficulty quickly navigating to the functionality of interest. For the Zedge example in Figure 1.1, both of the most prominent automated app exploration and testing tools, Sapienz [48] and Stoat [65], failed to reach the Facebook login screen after two hours of execution, even when configured to prioritize previously unexplored paths. That is because these techniques lack knowledge of which of the unexplored paths is most likely to lead to the functionality of interest and thus end up exploring a large number of possibly unrelated paths.

1.2 Research Objective

In this thesis, we introduce a goal-driven app exploration approach and the associated tool, called GOALEXPLORER, which directly triggers a particular functionality of interest. GOALEXPLORER combines a novel approach for modeling an app User Interface (UI) as a Screen Transition Graph (STG) with a dynamic STG-driven app exploration technique. Given a particular target of interest – an Android activity, API call, or a code statement – GOALEXPLORER automatically builds an executable script that swiftly triggers the target.

We believe that our work would benefit stakeholders who are interested in validating the behaviors of Android apps on a large scale, e.g., app store owners and security auditors. The static UI model proposed in our work, the STG, can also be used by the research community to perform UI-related analyses, such as detecting spurious UI transitions [75], e.g., when clicking on the "Settings" button directs the user to the "Shopping" page.

1.3 Research Contributions

The idea of building a static UI model of an Android app is not new by itself. To the best of our knowledge, A3E [5] was the first to build a static model of the Android app UI, which focused only on Android activities.
Gator [73] further extended this model to capture information about the menu items and the widgets that facilitate transitions between the modeled UI components, e.g., activities and menus.

However, the main building blocks of both of these models are UI components rather than their composition into UI screens. As such, they do not accurately model the current Android UI framework [8], which allows composing screens from multiple components, such as fragments, menus, drawers, etc. Thus, they will misrepresent transitions originating in fragments, e.g., the transitions between the screens in Figures 1.1(b) and (c). Moreover, they do not model states and transitions triggered by background tasks and events (broadcast receivers, in Android terminology). For example, an app relying on the Facebook login mechanism can use a broadcast receiver to deliver login information to the interested app components, as done in the Zedge example in Figure 1.1: after the login is triggered in Figure 1.1(e), the app broadcasts the login status and, in the case of a successful login, additional meta-information, such as the user email address, to the Controller activity in Figure 1.1(f). Without modeling broadcast receivers, a goal-driven exploration tool will fail to reach that screen.

Our analysis shows that 63% of the top 100 apps from Google Play have at least one screen transition originating from a fragment and 34% have at least one transition originating from a background task or broadcast receiver. Modeling these behaviors is thus critical for the goal-driven exploration task, and we do so in our work.

In addition, we introduce the concept of modeling UI screens as compositions of UI components rather than individual UI components.
We empirically show that, for our task, such an approach is superior to the existing concepts of modeling app UI.

More specifically, our empirical evaluation of GOALEXPLORER on a large number of Android apps from existing benchmarks and the Google Play store shows that it is able to explore a substantially higher portion of each app when compared with the static model generated by Gator, even if we extend Gator's model to include additional elements, such as fragments, services, and broadcast receivers. When compared to state-of-the-art dynamic exploration and testing techniques, i.e., Sapienz [48] and Stoat [65], GOALEXPLORER is able to explore a similar portion of each app, which attests to the accuracy of its statically-built model of screens and transitions. However, GOALEXPLORER is able to reach the functionality of interest substantially faster than the dynamic exploration tools: it takes GOALEXPLORER 47 seconds on average to trigger a target statement, compared with 710 seconds and 861 seconds for the baseline tools, respectively. As such, GOALEXPLORER provides an efficient solution to the goal-driven exploration problem in mobile apps.

Interestingly, our analysis of the network traffic associated with Facebook login in Zedge helped reveal a potential security vulnerability in this app: after receiving the OAuth [58] authentication/bearer token from Facebook, the app openly sends it to its server. Such behavior is discouraged by the Internet Engineering Task Force (IETF) [30] because any party holding the token can use it to access the user's private information, without any further proof of possession. According to the IETF, when sent over the network, bearer tokens have to be secured using a digital signature or a Message Authentication Code (MAC) [30].
Zedge is not following these guidelines; thus, a man-in-the-middle can steal the token and access the private information of users logged into the app.

To summarize, this thesis makes the following major contributions:

• It defines the problem of goal-driven exploration for mobile apps: efficiently triggering a functionality specified by the user.
• It shows that existing approaches are insufficient for performing goal-driven exploration.
• It presents a technique for constructing a static model of an app UI, called the Screen Transition Graph (STG), and using the STG to generate an executable script that triggers a specific functionality of interest defined by the user.
• It implements the proposed technique in a tool named GOALEXPLORER and empirically shows its effectiveness on the 93 benchmark apps used in related work and 95 top Google Play apps. Our implementation of GOALEXPLORER and its experimental evaluation package are available online [2].

1.4 Thesis Organization

The remainder of this thesis is structured as follows:

• Chapter 2 provides the background on Android app component structure, lifecycle, and UI.
• Chapter 3 reviews the related work.
• Chapter 4 presents our approach and its implementation.
• Chapter 5 outlines our evaluation methodology and presents the evaluation results.
• Chapter 6 concludes the thesis, discusses the limitations of our approach and threats to its validity, and presents possible future research directions.

Chapter 2
Background

In this chapter, we give the background on app component structure, lifecycle, and UI.

2.1 Android Applications

2.1.1 App Components

Android apps are built from four main types of components: activities, services, broadcast receivers, and content providers [23]. Activities are the main UI building blocks that facilitate the interactions between the user and the app.
Services perform tasks that do not require a UI, e.g., prefetching files for faster access or playing music in the background after the user has switched to other activities. Broadcast receivers are components that respond to notifications. A broadcast can originate from (i) the system, e.g., to announce that the battery is low, (ii) another app, e.g., to notify that a file download is completed, or (iii) other app components, e.g., to notify that the login was completed successfully. Content providers manage access to data, e.g., for storing and retrieving contacts.

Figure 2.1: Activity Lifecycle Events.

Except for broadcast receivers, all app components must be declared in the manifest file; the declarations include the components' basic properties, such as their Java handler classes. Broadcast receivers can be declared either in the manifest file or dynamically in code by calling the system API Context.registerReceiver.

2.1.2 Component Lifecycle

Unlike Java programs, an Android app does not have a main method; instead, each app component implements its own lifecycle events. The activity lifecycle [20] is shown in Figure 2.1. Once an activity is launched, the system calls its onCreate method, followed by onStart and onResume. Developers specify the behavior of these methods, e.g., the UI setup is typically done in onCreate. The activity completes the initialization after onResume. It then enters a running state and begins handling callbacks, e.g., those triggered by user inputs or by system changes. The activity is paused when it loses focus, and can eventually be stopped and destroyed by the operating system.

The lifecycle of services is similar to that of activities, except for additional methods that handle service bindings, i.e., cases when other app components maintain a persistent connection to a service.
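The activity lifecycle in Figure 2.1 can be read as a small state machine over the callbacks named above. The plain-Java sketch below is an illustration of that state machine only: it models, rather than uses, the callbacks that a real activity inherits from android.app.Activity, and the transition methods (launch, hide, etc.) are invented names for the system-driven transitions in the figure.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java model of the activity lifecycle shown in Figure 2.1.
// Illustration only: real activities override onCreate/onStart/... and
// the Android system, not app code, drives the transitions.
public class ActivityLifecycle {
    public enum State { RUNNING, PAUSED, STOPPED, DESTROYED }

    private State state;
    public final List<String> callbackLog = new ArrayList<>();

    // Launch: onCreate -> onStart -> onResume, then the running state.
    public void launch() {
        callbackLog.add("onCreate");
        callbackLog.add("onStart");
        callbackLog.add("onResume");
        state = State.RUNNING;
    }

    public void loseFocus()   { callbackLog.add("onPause");  state = State.PAUSED;  }
    public void regainFocus() { callbackLog.add("onResume"); state = State.RUNNING; }
    public void hide()        { callbackLog.add("onStop");   state = State.STOPPED; }

    // A stopped activity is brought back via onRestart -> onStart -> onResume.
    public void restart() {
        callbackLog.add("onRestart");
        callbackLog.add("onStart");
        callbackLog.add("onResume");
        state = State.RUNNING;
    }

    public void destroy()     { callbackLog.add("onDestroy"); state = State.DESTROYED; }

    public State state() { return state; }

    public static void main(String[] args) {
        ActivityLifecycle activity = new ActivityLifecycle();
        activity.launch();     // user opens the app
        activity.loseFocus();  // activity loses focus
        activity.hide();       // user switches to another app
        activity.restart();    // user navigates back
        System.out.println(activity.callbackLog);
        // [onCreate, onStart, onResume, onPause, onStop, onRestart, onStart, onResume]
    }
}
```

Tracing the callback log this way mirrors what one observes when logging the real lifecycle methods of an activity.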
Binding is achieved by calling Context.bindService, followed by an invocation of the onBind lifecycle method of the service.

Broadcast receivers have only one lifecycle event, onReceive, to handle the received events. Content providers do not expose any lifecycle events.

The transition between Android components relies on Intents, which explicitly or implicitly specify the target component and the data to be passed to that component. An activity, service, or broadcast receiver can launch another activity or service by calling the corresponding Android API functions, such as startActivity and startService, and passing the Intent as a parameter.

Broadcast receivers are registered with intent-filters, which specify the types of events that the broadcast receiver handles. Once an event occurs, the system triggers the onReceive callback methods of all broadcast receivers registered for this event. For example, an app can register a broadcast receiver listening to changes of the network status and, in the onReceive callback, start a certain activity or service once the device has acquired access to the Internet.

2.1.3 App UI Structure

An application screen is represented by an activity "decorated" by additional UI components, such as fragments, menus, dialogs, and navigation drawers [23], which correspond to modular sections of the screen. Figure 2.2 depicts such a modular composition of screens. The solid box in Figure 2.2a highlights the container activity; its two hosted fragments are highlighted in Figure 2.2b.

A fragment is a re-usable UI component with its own lifecycle and event handling. A fragment can be instantiated in multiple activities and must always be hosted by at least one activity. Its lifecycle is directly affected by the owner activity, e.g., when the owner activity is destroyed, all of its hosted fragment instances are also destroyed.
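The registration-and-dispatch mechanism of Section 2.1.2, where receivers register with intent-filters and the system invokes onReceive on every matching receiver, can be sketched in plain Java. This is a simplified model for illustration, not the android.content API: the Intent, registerReceiver, and sendBroadcast names here are local stand-ins, and the action string is invented.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Consumer;

// Simplified model of broadcast dispatch: a receiver registers an
// intent-filter (a set of action strings); sending a broadcast invokes
// the onReceive callback of every receiver whose filter matches the
// intent's action.
public class BroadcastBus {
    public static class Intent {
        public final String action;
        public final Map<String, String> extras;
        public Intent(String action, Map<String, String> extras) {
            this.action = action;
            this.extras = extras;
        }
    }

    private static class Registration {
        final Set<String> intentFilter;
        final Consumer<Intent> onReceive;
        Registration(Set<String> filter, Consumer<Intent> onReceive) {
            this.intentFilter = filter;
            this.onReceive = onReceive;
        }
    }

    private final List<Registration> receivers = new ArrayList<>();

    public void registerReceiver(Consumer<Intent> onReceive, Set<String> intentFilter) {
        receivers.add(new Registration(intentFilter, onReceive));
    }

    // Returns the number of receivers the intent was delivered to.
    public int sendBroadcast(Intent intent) {
        int delivered = 0;
        for (Registration r : receivers) {
            if (r.intentFilter.contains(intent.action)) {
                r.onReceive.accept(intent);  // the onReceive callback
                delivered++;
            }
        }
        return delivered;
    }

    public static void main(String[] args) {
        BroadcastBus bus = new BroadcastBus();
        // A component interested in login results (action name is invented).
        bus.registerReceiver(
                i -> System.out.println("login status: " + i.extras.get("status")),
                new HashSet<>(List.of("com.example.LOGIN_COMPLETED")));
        Map<String, String> extras = new HashMap<>();
        extras.put("status", "success");
        bus.sendBroadcast(new Intent("com.example.LOGIN_COMPLETED", extras));
        // prints "login status: success"
    }
}
```

The Zedge login flow discussed in Chapter 1 follows this pattern: the login component broadcasts the login status, and the interested activity receives it through its registered receiver.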
Fragment instances can be added, replaced, or removed from the host activities through the FragmentTransaction APIs, forming different screens of the same activity. Multiple fragments can exist in a single screen, as in Figure 2.2b; a screen can have any number of fragments or no fragments at all.

Figure 2.2: Application Screens. (a) Activity; (b) Fragments; (c) Menu.

Menus and navigation drawers are UI components used primarily to provide additional options to the user. The former are typically used for system-wide events, such as adjusting settings, and the latter to ease access to other app components, e.g., launching new activities. Figure 2.2c shows the activity with a menu opened, which allows the user to access system-wide actions, such as settings and search. Figure 1.1(a) shows a drawer that occupies most of the screen and contains buttons for navigating to other parts of the app. The user activates and deactivates menus and drawers by interacting with their action buttons, such as the one in the top left corner of Figure 1.1(c) and the top right corner of Figure 2.2c.

Dialogs are small windows that prompt the user to take an action before the app can proceed with its execution, e.g., to agree to granting access to the device location. Menus, navigation drawers, and dialogs must be hosted by either an activity or a fragment.

UI components are composed of widgets, such as Buttons and TextViews, which are called Views in Android terminology. ViewGroups are invisible containers used for grouping the widgets and specifying their layout before placing them inside the UI components. Users mostly interact with UI widgets and are typically unaware of the app and UI components and their lifecycle.

Chapter 3
Related Work

In this chapter, we discuss related work on testing Android apps (Section 3.1).
We then review desktop and web application testing approaches related to goal-driven exploration (Section 3.2).

3.1 Android Testing

Our goal-driven exploration technique consists of two parts: a statically constructed UI model and a dynamic app exploration technique based on the model. Thus, Section 3.1.1 focuses on existing Android UI models and Section 3.1.2 discusses existing automated app exploration techniques.

3.1.1 Existing Android UI Models

Approaches that statically construct an Android UI model, such as A3E [5] and Gator [73], are the closest to our work. The Activity Transition Graph (ATG) introduced by Azim and Neamtiu in A3E [5] captures app activities and transitions between these activities. It does not model other app components and does not model actions that trigger the transitions, i.e., which user input triggers the transition between activities. The Window Transition Graph (WTG) introduced by Yang et al. in Gator [73] extends the ATG by addressing some of these limitations: it models the event handler for each transition and also incorporates menus and dialogs as distinct nodes in the model. However, it does not consider fragments, drawers, services, and broadcast receivers.

For these reasons, for the Zedge example in Figure 1.1, both the ATG and the WTG will include only three nodes that correspond to the app activities: "Controller", which corresponds to the activity in Figures 1.1(a-c) and (f), "Authenticator", for the activity in Figure 1.1(d), and "Facebook", for the activity in Figure 1.1(e), which is implemented inside the Facebook SDK. The ATG and WTG models will include no transitions, as all transitions in this app originate from fragments.
As such, none of these models will contain the path that reaches the Facebook activity. Moreover, the models do not account for broadcast receivers, and thus no activity that follows the Facebook login, e.g., the one in Figure 1.1(f), is reachable.

An even more severe drawback of these existing UI models is that they treat UI components, such as activities and menus, as separate nodes. These models do not accurately represent the composition of UI components into screens, which hinders their applicability for the goal-driven exploration scenario, as the representations do not include enough details to extract an executable path. In Chapter 4, we will discuss this limitation in more detail, outline alternative ways to model Android UI components and their relationships, and then present our proposed solution, the Screen Transition Graph.

A number of approaches build upon or extend the ATG and WTG [9, 69, 71, 79]. For example, StoryDroid [9] statically renders the UI of each activity in the ATG. The goal of this visualization of an app UI is to assist the app development process, e.g., to provide recommendations for UI design and layout code. Wang and Rountev [69] statically estimate the execution time of each WTG path based on the compute-intensive API invocations along the path. Wu et al. [71] adopt a similar approach to detect whether any WTG path is energy-inefficient by statically estimating the energy usage based on the energy-intensive API invocations along a WTG path. CHIME [79] considers information about activity launch modes, which determine how activities are placed in the stack: e.g., an activity launched in the "singleTask" mode is only allowed to have one instance, and the latest instance will kill all other instances of the same activity in the stack.
By extending the modeling of transitions in the ATG with launch modes, CHIME produces a more accurate graph than A3E. Yet, these approaches do not change the underlying design decisions related to the representation of UI elements in a model, thus sharing all the limitations of these models.

3.1.2 Automated App Exploration

The primary goal of automated app exploration techniques is to detect faults in apps. These techniques are typically used by app developers to discover issues before deployment [11]. Automated app exploration can also be used as the starting point of analyses performed by app store maintainers and auditors, e.g., to verify the security of apps published to the store. As discussed earlier, existing app exploration techniques are usually designed to test the whole app and are not optimized to test only the functionality of interest.

In the remainder of this section, we categorize the existing automated app exploration techniques based on their main test methodologies, dividing this section into random testing, model-based testing, and symbolic execution.

Random Testing. Random testing techniques generate random, independent input event sequences during the test session. Monkey [27] is the most famous of such approaches; it is part of the official Android SDK Tools [22]. Monkey randomly generates various input events, including user gestures and system events, such as adjusting the volume and capturing screenshots. Interestingly, to date, Monkey achieves largely the same app-level coverage for real-world apps as the most advanced state-of-the-art exploration techniques [68].

Dynodroid [46] is another pseudo-random test generation tool that extends Monkey with feedback that continuously adjusts the weights of possible input events.
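The feedback idea behind such pseudo-random tools can be illustrated as a weighted random choice over candidate input events, where feedback from the test run scales an event's weight up or down. The sketch below is a toy illustration of this general idea, not Dynodroid's actual algorithm; the event names and the multiplicative weighting scheme are invented.

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Toy illustration of feedback-driven random event selection: every
// input event starts with equal weight; feedback scales the weight of
// an event up (it led to new behavior) or down (it did not).
public class WeightedEventPicker {
    private final Map<String, Double> weights = new LinkedHashMap<>();
    private final Random random;

    public WeightedEventPicker(Collection<String> events, long seed) {
        for (String e : events) weights.put(e, 1.0);
        random = new Random(seed);
    }

    // Weighted random selection: pick events proportionally to weight.
    public String pick() {
        double total = 0;
        for (double w : weights.values()) total += w;
        double r = random.nextDouble() * total;
        for (Map.Entry<String, Double> e : weights.entrySet()) {
            r -= e.getValue();
            if (r <= 0) return e.getKey();
        }
        return weights.keySet().iterator().next();  // rounding fallback
    }

    // Feedback: multiply the event's weight, with a small floor so no
    // event is ever excluded entirely.
    public void feedback(String event, double factor) {
        weights.computeIfPresent(event, (k, w) -> Math.max(w * factor, 0.01));
    }

    public static void main(String[] args) {
        WeightedEventPicker picker = new WeightedEventPicker(
                List.of("tap", "scroll", "back", "rotate"), 42L);
        picker.feedback("tap", 4.0);  // tapping recently revealed new screens
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i < 1000; i++) counts.merge(picker.pick(), 1, Integer::sum);
        System.out.println(counts);   // "tap" dominates the other events
    }
}
```

The contrast with Monkey is visible in the weights: pure random testing keeps them uniform forever, while feedback-driven tools continuously reshape the distribution.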
DiagDroid [41] randomly exercises apps and tracks asynchronous executions to detect the performance issues related to them. All such tools explore the app in a random manner and are unable to rapidly trigger the exploration target in the goal-driven exploration scenario.

Model-based Testing. Model-based testing techniques automatically generate test cases based on a model of the app under test. According to Kong et al. [42], model-based testing is the most common Android testing methodology, and 63% of the techniques involve model-based testing steps. Most model-based app testing techniques rely on a dynamically constructed model to facilitate the testing process. For example, Stoat [65] builds a model at runtime that captures the application screen states and the input events that trigger changes in these states, e.g., button clicks and signal strength changes. Although Stoat also statically analyzes the apps, it uses static analysis only to identify all possible input events for the app, especially the system-level events that cannot be inferred during dynamic exploration. Stoat's dynamic model assigns probabilities to transitions; these probabilities are mutated during the model refinement stage to prioritize the generation of tests that optimize for higher test coverage and better fault detection.

Sapienz [48] is another state-of-the-art tool that seeks to maximize code coverage and minimize the length of the generated test sequences in order to reveal more faults in a shorter time. It combines random exploration with pre-defined event sequence patterns that capture complex interactions with the app. The dynamic UI model in Sapienz contains the widgets in the app UI; the set of model states is continuously extended with states collected when interacting with these widgets.

APE [31] represents the dynamic model as a decision tree and continuously tunes it with the feedback obtained during testing, constructing a finer-grained model than Stoat [65] and Sapienz [48].
APE's model contains attributes of all widgets, e.g., indicating whether a widget is clickable, whereas such attributes are ignored in Stoat and Sapienz. For example, Stoat assumes that all items in a list (ListView in Android terminology) behave equivalently, which is inaccurate when the attributes of these items differ, e.g., one is clickable and the others are not.

Although the models used in these model-based testing techniques are different, the basic idea is similar: dynamically build a model of the app and mutate it so that the generated tests achieve higher coverage and fault detection. Since these tools focus on systematically executing the whole app to detect faults rather than swiftly navigating to a particular functionality of interest, they are inefficient for goal-driven exploration. Specifically, our experiments demonstrate that while Stoat and Sapienz are able to achieve coverage similar to GOALEXPLORER, the time it takes these tools to reach the target is substantially higher. That is because without building an accurate static model and identifying the target upfront, it is difficult to determine which parts of the app to ignore and which to explore first.

Symbolic Execution. Another line of work explores approaches for generating semantically meaningful inputs through symbolic execution to facilitate app exploration. To perform symbolic execution on Android apps, several approaches [53, 39, 70] build a customized Dalvik virtual machine so that the app can be exercised in a controlled environment. For example, IntelliDroid [70] statically resolves the callgraph path constraints leading to the malicious behaviors of Android apps and then executes the subject app in a customized Dalvik virtual machine with inputs that force the execution along a specific callgraph path. Such approaches only apply to a specific version of Android, and the customized virtual machine has to be rebuilt whenever an updated version of Android becomes available.
In contrast, our approach does not require such modifications.

Instead of building a customized virtual machine, a number of approaches [38, 52, 62, 70, 19] instrument the application to force the execution of a given callgraph path. For example, Jensen et al. [38] proposed an approach that uses concolic execution to generate input event sequences that force the execution towards a target line of code in the callgraph. However, this approach only resolves the path constraints within a component. It requires a UI model of the app to handle the transitions between different components of the application and is thus complementary to our work on modeling UI screens and the transitions between them.

Moreover, approaches based on symbolic execution usually face the path explosion problem [6]. The main sources of path explosion are loops and function calls, and the event-driven nature of Android apps makes it even more challenging. The existing techniques make simplifying assumptions to overcome path explosion, which introduce inaccuracy in extracting the path constraints. In addition, the complexity of symbolic execution grows exponentially with the code size. Most of the symbolic execution approaches are evaluated on small open-source apps or malware samples. Our observation from running these techniques on real applications reveals that the majority of the constraints they manage to resolve are related to if-conditions rather than UI-related operations. Due to these limitations, the existing symbolic execution approaches are not applicable to goal-driven exploration.

3.2 Desktop and Web Testing

UI testing for desktop apps has been extensively studied, mainly for executing manually scripted test cases [49]. Techniques that generate UI models for desktop apps [63, 3, 78] are closest to our work. For example, Palus [78] uses static analysis to refine a dynamically constructed model, generating test cases that detect more bugs in Java applications.
Gazoo [3] statically constructs a model of a Java application, which is used to generate test cases that cover a larger portion of the application and detect faults in it. These UI testing approaches are not directly applicable to automated UI testing of mobile apps, as desktop apps normally possess a single entry point, whereas mobile apps have multiple entry points due to their event-driven nature. However, these approaches have demonstrated the benefit of using a statically constructed model to guide the exploration.

As for web apps, app UI models often cannot be generated through static analysis. Since the majority of web apps are JavaScript-based, static techniques, while working well for statically typed languages, are error-prone when analyzing highly dynamic JavaScript apps [56].

Given the difficulty of statically analyzing JavaScript applications, most of the existing work focuses on purely dynamic approaches [7, 13, 50, 51, 67, 17]. Guided test generation techniques that produce test suites achieving a specific goal [67, 17] are closest to our work in the area of web testing. For example, the goal of WATEG [67]
Moreover, allthese approaches generate app models solely from the data received during thedynamic exploration. They are similar to the model-based testing techniques forAndroid apps, which lack the knowledge of which unexplored path can lead tothe functionality of interest, hence not applicable to the goal-driven explorationproblem defined in this thesis.18Chapter 4GOALEXPLORERThis chapter introduces our approach for performing goal-driven exploration, whichis implemented in a tool called GOALEXPLORER [2]. The high-level overview ofGOALEXPLORER is presented in Figure 4.1. The shaded boxes are three majorsteps of GOALEXPLORER; the main components of each step are depicted in whiteboxes. The arrows indicate data or control flows between steps and components.GOALEXPLORER obtains two inputs: the Android Package (APK) file of theapp and the target functionality of interest, which can be provided as an app activity,an API call, or a code statement. For example, when monitoring the network trafficrelated to the Facebook login scenario in Zedge, we set as target a Facebook APIcall, as discussed in Chapter 1.In the first step, GOALEXPLORER statically constructs the Screen TransitionGraph (STG) of the app (the STG Extractor component in Figure 4.1). It thenmaps the target(s) to (possibly multiple) nodes of the graph (the Target Detectorcomponent). Finally, it uses the graph to guide the dynamic exploration to areachable target node, starting with the one that has the shortest path from theinitial screen (the Dynamic Explorer component). 
Due to the conservative nature of our static analysis, dynamic exploration can encounter infeasible paths; it then backtracks and tries the next shortest path to the same target, or chooses the next target if multiple targets are specified.

Figure 4.1: GOALEXPLORER Overview.

In what follows, we first introduce the STG and discuss our design choices. We then describe the STG Extractor (Section 4.2), Target Detector (Section 4.3), and Dynamic Explorer (Section 4.4) components.

4.1 Screen Transition Graph

As discussed in Chapter 2, both the Window Transition Graph (WTG) and its predecessor, the Activity Transition Graph (ATG), model UI components as distinct nodes in the graph [5, 73]. Moreover, they do not model fragments, drawers, services, and broadcast receivers. As such, for the example app in Figure 1.1, these graphs will only contain three activity nodes: elements #2, #3, and #4, presented with solid lines in Figure 4.2.

Figure 4.2: WTG (solid lines) and WTG-E (solid and dotted lines) of Zedge.

Our “direct” extension to this model, which we refer to as WTG-E, introduces
Adding drawers would also lead to creating transitionslabeled “open drawer” and “close drawer” between elements #1 and #2.• The transition labeled “settings” from element #1 to #2 would be contributedby an analysis of the drawer behavior that launches a new activity.• The “Account Manager” broadcast receiver, responsible for notifying otherapp components when Facebook authentication is completed, would berepresented by element #5, and its corresponding incoming and outgoingtransitions.• The fragments contributing to each activity could be represented by innerboxes inside elements #2 and #3, as only these two activities have fragmentsin this example. Representing the fragments would allow the model tocapture transitions from element #2 to #3 and #3 to #4, as these transitionsare triggered from fragments embedded in each corresponding activity.However, even the extended WTG-E model is sub-optimal for producing theexecution scenario that leads to the Facebook login screen in the Zedge example.That is because WTG-E’s main nodes are UI components rather than their com-position to screens. Thus, the model cannot represent the actual screens, omittingimportant information used for exploration.For example, from the WTG-E graph in Figure 4.2, one can conclude that“close drawer” action for element #1 that leads to element #2 – the “Controller”activity, can be followed by the “ZEDGE account” action to reach element #3 –the “Authenticator” activity. Such execution is not possible in practice as “ZEDGEaccount” action is only enabled when the “Controller” activity is opened with21the “settings“ fragment. However, when the app execution starts, the “Controller”activity is rather opened with the “browse“ fragment, thus there is no option topress the “ZEDGE account” button after closing the drawer. 
Without distinguishingbetween these different states of the “Controller” activity (element #2), one cannotsuccessfully explore the application.The Screen Transition Graph (STG) proposed in this work addresses thislimitation. It models application screens by representing the composition of thecontainer activity, hosted fragments, menus, navigation drawers, and dialogs oneach screen. Figure 4.3 shows the STG representation of the Zedge snippetin Figure 1.1. Unlike in the WTG-E model, the “Controller” activity here isrepresented by five different elements: #1-4 and #8. These elements correspondto different compositions of fragments and drawers hosted by the activity. Thisrepresentation clarifies that the only possible path to the Facebook login activity(element #6 in Figure 4.3) is to start from the “Controller” activity (element #1),open the drawer (arriving at element #2), then press the ‘settings” option (arrivingat element #3). Only after that one can transition to elements #4, #5, and #6.Authenticatorsettings ZEDGE accountAccountManagerReceiverDrawerControllerBrowseControllerBrowseControllerSettingsControllerFacebookControllerAccountcontinue w. Facebookopen drawerclose draweropen drawerclose drawer123 5 64 7 8Activity Fragment Broadcast Receiver Drawer EventTransitionDrawerSettingsloginaction logged inAuthFigure 4.3: STG of Zedge.Moreover, after the Facebook login is completed, the execution arrives atelement #8, which is another instance of the “Controller” activity, but with the“Account” fragment. It is clear from this representation that the user cannot press22“ZEDGE Account” again, as can mistakenly be concluded from the WTG-E repre-sentation in Figure 4.2.The example demonstrates that the STG representation is more accurate thanthe original WTG, and even WTG-E, for producing execution scripts in the goal-driven exploration scenario. We empirically evaluate the differences between theserepresentations in Chapter 5. 
Below, we give a more formal definition of the STG.

In the STG, the main nodes V_s represent app screens, and the supporting nodes V_r and V_b represent services and broadcast receivers, respectively. Each screen node in V_s consists of one container activity V_s^A, zero or more fragments V_s^F, zero or one menu/drawer V_s^M, and zero or one dialog V_s^D. The collection of all screen nodes corresponds to the possible screens allowed in the app. Intuitively, an activity with n fragments and one drawer could correspond to 2 × 2^n different screens (all combinations of fragments, each with and without the drawer). Of course, not all combinations are possible in a particular app, and we describe our screen extraction algorithm in Section 4.2.

Supporting nodes are required because transitions between screens can pass through these elements. The edges E in the STG correspond to transitions between the nodes V = V_s ∪ V_r ∪ V_b. Each edge e ∈ E has a label e_τ, which represents the event that triggers the transition: either a user action, e.g., pressing the “ZEDGE account” button to move from element #3 to #5, or an intent that triggers the broadcast receiver event, e.g., the notification received by the system for transitioning from element #7 to #8. Transitions that are triggered automatically, e.g., when a service opens a UI dialog without any notifications or clicks, have no event triggers in the model: e_τ = ∅.

4.2 STG Extractor

We now describe our approach for constructing the STG. The STG Extractor consists of three main components, outlined in Figure 4.1: Topology Extractor, Screen Builder, and Inter-Component Analyzer. These components contribute to collecting and refining STG elements.

4.2.1 Topology Extractor

Topology Extractor identifies the main app components, i.e., activities, services, and broadcast receivers, and their call graphs.
It relies on FlowDroid's implementation [4] to extract the components from both the app code and XML files, to associate each component with its entry points, i.e., lifecycle events and application-specific callback methods, and to build a call graph for each component that comprises the call graphs of its entry points. For example, an activity may register callbacks that get invoked when a button is pressed or a location update is received; the respective callback handler would then be linked to the activity's call graph and analyzed between the onResume() and onPause() lifecycle events of the activity.

App methods annotated with “@JavascriptInterface” can also be triggered by JavaScript-enabled WebViews [29]. As the invocation of these methods depends on the dynamic WebView content delivered at runtime and cannot be determined statically, Topology Extractor conservatively extends the call graph by assuming that any JavaScript-enabled method in a component can be called by any JavaScript-enabled component's WebView.

Services and broadcast receivers collected by Topology Extractor are supporting nodes in the STG and are added directly to V_r and V_b, respectively. App screens in V_s are built in the next step.

4.2.2 Screen Builder

A key insight of our approach is modeling the UI screens and the transitions between them. To this end, Screen Builder analyzes the call graph of each activity, collecting the hosted fragments, menus, dialogs, and navigation drawers.

Fragments can be defined statically in the activity layout file or dynamically in code. Screen Builder starts by extracting the layout files using AXMLPrinter [55] and then identifies the fragments they declare. To collect fragment declarations in code, Screen Builder scans the call graph of each activity, identifying fragment transaction methods that add, remove, or replace fragments. The full list of such
The full list of such24methods, collected from the Android Developer website [25], is available in ouronline appendix [2].The activity call graphs are scanned in multiple iterations. First, Screen Builderanalyzes the activity lifecycle events triggered when an activity is initialized: fromonCreate() to onResume(). Fragments collected from these lifecycle events willappear in the ’base’ screen of the activity, which is displayed when the activity islaunched.When collecting fragments, Screen Builder performs an inter-procedural flow-sensitive analysis to identify fragment transaction methods s that add, remove, orreplace fragments. It then performs a context-sensitive point-to analysis on thecalling path of s, starting from the lifecycle event, to identify the Java type of thefragment(s) handled in s. Once all fragments from the onCreate() to onResume()lifecycle events of an activity A are processed, Screen Builder creates the “base”screen node VAs in Vs for the activity, with VFs containing a (possibly empty) list ofthe identified fragments.Listing 4.1: Zedge code snippet.1 public class AuthActivity extends BaseActivity {2 protected void onCreate(Bundle bundle) {3 initMainFragment();4 }5 protected void initMainFragment() {6 FragmentManager manager = getSupportFragmentManager();7 Fragment mFragment = new AuthFragment();8 addFragment(manager, mFragment,;9 }10 static addFragment(FragmentManager manager, Fragment fragment, int id↪→ ) {11 FragmentTransaction transaction = manager.beginTransaction();12 transaction.add(id, fragment);13 transaction.commit();14 }15 }This process can be demonstrated on the simplified code snippet from Zedgein Listing 4.1. In lines 12-13, a fragment fragment is added to the activity25AuthActivity. The variable fragment can be traced back to the variablemFragement in line 8, which is defined in line 7 and has the type AuthFragment.This fragment addition is triggered from the onCreate lifecycle event in line3. 
Thus, Screen Builder will create a screen node for AuthActivity with one fragment of type AuthFragment.

Lifecycle methods issued when the activity is paused, stopped, or resumed, as well as callback methods, can further add, remove, or replace fragments in the “base” screen. Screen Builder thus analyzes the possible lifecycle method invocation chains, “onPause → onResume” and “onPause → onStop → onRestart → onResume” (see Chapter 2), to extract the fragments from each of these chains. If the fragments in a chain differ from the fragments in the “base” screen node V_s^A, it adds a new screen node V_s^A′ for the activity A, which contains the union of the fragments in V_s^A and those identified in the chain. It also adds a transition between V_s^A and V_s^A′ with an empty label, as this transition is triggered automatically by the system.

Fragments in callback methods are handled similarly. Since the order of callbacks cannot be predicted, Screen Builder assumes the callbacks can occur in any order; it thus creates a new screen V_s^A′′ for every possible order of callback methods that modify fragments of the “base” screen. For example, given two callbacks c1 and c2, Screen Builder will create four additional screens augmenting the “base” screen and the screens collected after analyzing the lifecycle events. That is, each of the already created screens s will be “cloned” to produce four new screens: after applying c1 only, c2 only, c1 then c2, and c2 then c1.

Screen Builder also creates transition edges between these newly created screens and sets the transition label e_τ to the action triggering the corresponding callback.
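The cloning step for callbacks can be sketched as enumerating every non-empty ordering of distinct callbacks, one cloned screen per ordering. The code below is our own illustration, not GOALEXPLORER's implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: enumerate every non-empty sequence of distinct
// callbacks; each sequence corresponds to one cloned screen.
class CallbackOrders {
    static List<List<String>> orderings(List<String> callbacks) {
        List<List<String>> result = new ArrayList<>();
        permute(callbacks, new ArrayList<>(), result);
        return result;
    }

    private static void permute(List<String> remaining, List<String> prefix,
                                List<List<String>> out) {
        if (!prefix.isEmpty()) out.add(new ArrayList<>(prefix)); // record non-empty sequence
        for (int i = 0; i < remaining.size(); i++) {
            List<String> rest = new ArrayList<>(remaining);
            String c = rest.remove(i);
            prefix.add(c);
            permute(rest, prefix, out); // extend the sequence with callback c
            prefix.remove(prefix.size() - 1);
        }
    }
}
```

For two callbacks this yields exactly the four orderings named above (c1; c2; c1 then c2; c2 then c1). The number of orderings grows quickly with the number of callbacks, which is why restricting the enumeration to callbacks that actually modify fragments matters.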
That is, for callbacks defined for UI widgets, e.g., Button.onClick(), Screen Builder sets the transition label to the label associated with the widget. For system event callbacks, e.g., LocationManager.onLocationChanged(), the transition label is empty.

After fragments are handled, Screen Builder analyzes each activity's call graph, as well as its associated fragment call graphs, to identify methods initializing menus, drawers, and dialogs. The full list of such methods is obtained from the Android Developer website [26] and is available online [2]. Once an initialization of a menu, drawer, or dialog is found, Screen Builder copies each relevant screen s of the activity A into s′, adds the found menu, drawer, or dialog to s′, adds s′ to the set of all screen nodes V_s, and creates a transition edge between s and s′. The transition label e_τ is set to the action that triggers the callback method.

4.2.3 Inter-Component Analyzer

After the previous step, we have collected all screen nodes and the transitions between screen nodes of the same activity but with different fragments, menus, drawers, and dialogs. The Inter-Component Analyzer collects the inter-component transitions between nodes corresponding to different Android components, e.g., screens of different activities, services, and broadcast receivers. These transitions are performed via Inter-Component Communication (ICC) methods that launch new app components, e.g., startActivity and bindService. The target of the transition is specified in the Intent parameter of these methods, either explicitly or implicitly.
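As a rough illustration of that distinction — a toy resolver with hypothetical names and a manifest simplified to a map, not Android's real resolution logic — an explicit intent names its target component directly, while an implicit one is matched against declared intent filters:

```java
import java.util.Map;

// Toy model of ICC target resolution; not real Android APIs.
class IntentTargetResolver {
    /**
     * @param explicitComponent target component named in the intent, or null
     * @param action            implicit intent action, used when no explicit target
     * @param intentFilters     hypothetical manifest map: action -> component
     * @return the resolved target component, or null if nothing matches
     */
    static String resolve(String explicitComponent, String action,
                          Map<String, String> intentFilters) {
        if (explicitComponent != null) {
            return explicitComponent;     // explicit intent: target named directly
        }
        return intentFilters.get(action); // implicit intent: match an intent filter
    }
}
```

Static ICC analyses approximate this kind of resolution at analysis time, without running the app.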
We rely on FlowDroid's integration with IccTA [43] to identify ICC links between app components.

For each link where both the source and target are services and/or broadcast receivers, Inter-Component Analyzer simply creates the corresponding transition edge e in the STG. If the target of the transition is a broadcast receiver, the transition label e_τ is set to the broadcast that triggers the event, as specified in the broadcast receiver's intent filter. If the target of the transition is a service, e_τ = ∅, as this transition is triggered “automatically”, without any user or system event.

Activities are represented by a number of screens and thus require a more nuanced treatment. If the source of an ICC link is an activity A, Inter-Component Analyzer identifies the activity entry point p from which the communication is issued. It then finds all screen nodes in the STG that correspond to A and can be reached from the “base” screen node of A via transitions associated with the action that triggers p. It adds a transition from all these nodes to the nodes that represent the targets, labeling it with the action that triggers p. When the target is an activity as well, Inter-Component Analyzer finds only the “base” screen node of that activity and creates a transition to that node. That is because when a new activity is launched, it starts in its initial screen; transitions between different screens of the target activity are already handled by Screen Builder.

4.3 Target Detector

When the exploration target is specified in the form of an activity, Target Detector simply traverses and marks all screen nodes that correspond to that activity. Reaching any of these nodes is considered reaching the target. For targets given as an API, Target Detector scans the app call graph to find all statements that invoke the given API.
For targets given as a single statement, Target Detector simply finds the statement in the app call graph.

Next, it maps the identified statements to STG nodes as follows: if a statement is identified in a lifecycle event of a component, it marks all nodes that correspond to that component as targets. If a statement is in a callback method, simply reaching the component is insufficient, as one also needs to invoke the callback method itself. Therefore, Target Detector identifies the STG transitions labeled by the action that triggers the callback and sets the destinations of these transitions as the targets.

4.4 Dynamic Explorer

Given the STG of the app and a set of targets, Dynamic Explorer performs goal-driven exploration guided by the STG to trigger at least one of these targets. It has three main components, outlined in Figure 4.1: Action Picker, Input Generator, and Actuator, executing in a loop until a target node is reached.

4.4.1 Action Picker

In each iteration, Action Picker first evaluates all possible actions enabled on the current screen, such as clicking a button or filling a textual input field. For each screen, the number of possible actions is determined by the number of widgets and the type of each widget. We refer to all possible actions of a screen collectively as the state of this screen. Action Picker retrieves the current activity and fragments using an Android Debug Bridge (ADB) [21] shell dumpsys command and reads the screen UI hierarchy using UIAutomator [28]. This information is used to map the current screen to the corresponding node in the STG and verify that the exploration arrived at the intended STG node.

If Action Picker is already “locked” on a particular target, it performs a breadth-first search on the STG to find all paths from the current screen node S to that target and sorts the paths from the shortest to the longest. It picks the next action from the shortest path that leads to the new screen node S′ and checks if the action
If so, it proceeds to the next step: Input Generator.Otherwise, it attempts to change the state of the current screen S by dynamicallyexploring the available actions until the transition to S′ is available. That is donebecause some user actions do not trigger transitions between screens that are thusabsent from STG; yet, they rather toggle the state of the screen, which might enablefuture transitions. For example, when adding an item to the shopping cart, anapplication stays in the same screen. This action is still required to move to thenext screen – the checkout, which is only enabled when there is at least one itemadded to the shopping cart.To activate the actions that can enable future transitions, Action Picker adoptsa weighted UI exploration strategy, which was shown to efficiently discover newstates of a screen [46, 65]. The exploration strategy assigns each action e anexecution weight W (e), which is defined as:W (e) =α ∗Te+β ∗Ceγ ∗Fe29where Te is the event’s type, e.g., click, scroll; Fe is the event’s history executionfrequencies that measure how many times an event has been executed duringthis exploration session. To avoid division by zero, Fe is initially set as 1 and itincrements each time the event has been executed. Ce is the number of unvisitedchildren widgets indicating how many of the widgets with Fe = 1 are in the screenafter the event e has been executed. α , β , γ are the weight parameters that Stoat [65]carefully tuned based on their observation, hence we adopt the same value in ActionPicker, and set α , β , and γ to 0.4, 0.2, and 0.4 respectively.If S′ cannot be reached within a certain number of attempts (currently set to 50iterations), Action Picker backtracks and proceeds to the next path. If no actionis available on any of the paths reaching to the current target or if there is no“working” target set yet, Action Picker performs a breadth-first search to find thenext available target and repeats the search for that target. 
It returns “unreachable” if no action leading to any of the targets is found.

4.4.2 Input Generator

If an action towards a target is picked, Input Generator checks whether specific textual inputs are necessary to successfully trigger the transition.

First, as many Google Play apps require the user to log in to access some app features, an exploration tool cannot perform well without handling login screens. We equip GOALEXPLORER with the ability to handle logins, assuming that the login credentials are supplied as an optional input.

To this end, Input Generator analyzes the UI hierarchy of the current screen to search for textual input fields, such as EditText. It further evaluates whether the textual inputs require login credentials. Similar to prior work [36], this is done by checking whether the widget ID, text, hint, label, or content description match the regular expressions associated with login and password strings, e.g., “username”, “user id”, etc. The exact patterns are listed in Table 4.1.

Table 4.1: Regular Expressions for Login Detection

Concept  | Regular Expression
username | (?i)(user|account|client|phone|card)[\s\_\-]*(name|id|number)
         | (?i)(log|sign)[\s\_\-]*in[\s\_\-]*(name|id|)
         | (?i)[e]mail
password | (?i)(pass|pin)[\s\_\-]*(word|code|)

If a match is found, Input Generator enters the credentials into the corresponding text fields. Otherwise, it feeds random strings to all textual input fields in the current screen.

4.4.3 Actuator

Actuator fires the action selected by the Action Picker. Its implementation extends that of Stoat, which provides an automated framework supporting click, touch, edit (entering random text), and navigation (e.g., back, scroll, menu) actions. The original Stoat only supports Android API level 19 (i.e., Android 4.4), and we extend it to support newer Android platforms.
Actuator also records all triggered actions, allowing us to produce an executable test script for reaching the target.

Chapter 5

Evaluation

We now describe our experimental setup and discuss the evaluation results. Our aim is to answer the following Research Questions (RQs):

• RQ1 (Coverage): What fraction of targets can GOALEXPLORER reach, and how does that compare with the baseline approaches?

• RQ2 (Performance): How fast can GOALEXPLORER reach the target, and how does that compare with the baseline approaches?

• RQ3 (Accuracy): What is the accuracy of the model constructed by GOALEXPLORER, and how well can dynamic UI exploration mitigate model inaccuracy?

– RQ3.1 (FN): What is the fraction of cases when the model does not contain transitions that exist in practice?

– RQ3.2 (FP-Dynamic): What is the fraction of cases when the model contains transitions that exist but cannot be triggered dynamically?

– RQ3.3 (FP): What is the fraction of cases when the model contains transitions that do not exist in practice?

5.1 Experimental Setup

5.1.1 Methodology

To answer our research questions, we perform two main experiments. First, we compare exploration based on the STG with exploration based on the WTG and WTG-E models described in Section 4.1. At the time of writing, the WTG model has been used as the foundation for practically all techniques that rely on a statically constructed UI model of a mobile app, e.g., [40, 76, 77, 69, 71]. As the original WTG model introduced by Gator [73] cannot be directly used for goal-driven app exploration, we extracted the model produced by Gator and plugged it into our Dynamic Explorer. Moreover, since Gator's implementation is not compatible with the Android API level that we use in our work (API level 23), Gator failed to run for most of the Google Play apps. We thus re-implemented the WTG extraction process for apps with API level 23.
We then extended WTG to handle fragments, services, broadcast receivers, etc., producing the WTG-E representation, as described in Section 4.1.¹ That is, we compare STG with three static models:

• WTG (Gator): the original implementation from Gator;
• WTG: our re-implementation of Gator’s model to handle modern apps;
• WTG-E: our extension of WTG to handle fragments, drawers, services, and broadcast receivers.

¹ Concurrent to our work, a similar extension was proposed in StoryDroid [9], but its implementation is not publicly available.

As a second line of experiments, we compare GOALEXPLORER with the state-of-the-art automated exploration techniques: Sapienz [48] and Stoat [65]. We run both tools with one hour of allocated execution time for each target. We choose that timeout following the tools’ own evaluation methodology: both Sapienz and Stoat used one hour of execution time when comparing test coverage to other existing testing techniques. We used the configuration parameters specified in the papers describing these tools and kept default values for parameters not described in the papers. We also experimented with modifying the configuration parameters to ensure the baseline is most favorable for the goal-driven exploration task, e.g., by increasing the sequence length and population size for Sapienz and prioritizing coverage over test diversity for Stoat. As these adjustments did not affect the results, we kept the default configurations.

Moreover, we extended both tools to handle login screens, as we did for GOALEXPLORER (see Section 4.4). Specifically, we added the same login logic that we used for our tool to that of Stoat. As the relevant source code of Sapienz is not open-sourced (the motif part is only available as a binary), we implemented a login detection component which runs in parallel to Sapienz and constantly checks if the execution has arrived at a login screen. When it does, we pause Sapienz and handle the login flow as we do for GOALEXPLORER and Stoat.
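The login-screen check shared by all three tools can be sketched as follows. The patterns below follow the spirit of Table 4.1 but are illustrative simplifications, as is the surrounding logic; they are not the tools' actual code:

```java
import java.util.regex.Pattern;

// Sketch of login-field detection: widget attributes (id, text, hint, label,
// content description) are matched against username/password patterns in the
// spirit of Table 4.1. Patterns and structure here are illustrative.
class LoginDetector {
    private static final Pattern USERNAME = Pattern.compile(
        "(?i)((user|account|client|phone|card)[\\s_\\-]*(name|id|number)"
        + "|(log|sign)[\\s_\\-]*in[\\s_\\-]*(name|id)?"
        + "|e-?mail)");
    private static final Pattern PASSWORD = Pattern.compile(
        "(?i)(pass|pin)[\\s_\\-]*(word|code)?");

    /** Returns "username", "password", or null given a widget's attributes. */
    static String classify(String... attributes) {
        for (String a : attributes) {
            if (a == null) continue;
            if (PASSWORD.matcher(a).find()) return "password";
            if (USERNAME.matcher(a).find()) return "username";
        }
        return null;
    }
}
```

When a field classifies as a login credential, the supplied username or password is entered; otherwise the field receives a random string.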
The execution time of this operation is negligible and is comparable to the handling of logins in GOALEXPLORER and Stoat.

5.1.2 Exploration Targets

We experiment with two scenarios. In the first one, we set each activity of the subject apps as the target, one at a time, and repeat the experiment for all activities in an app. That allows us to align our results with those commonly reported in related work. Then, we experiment with setting a code statement as a target, investigating a more nuanced, goal-driven exploration scenario which motivates our work. To this end, we pick a particular API, namely URL.openConnection, identify all occurrences of this API in the subject apps, and set them as targets, one at a time.

5.1.3 Subject Apps

The survey of automated testing tools by Choudhary et al. [11] used 68 open-source F-Droid [14] apps, which de facto became a standard benchmark for analyzing automated testing tools. Both Sapienz [48] and Stoat [65] were evaluated on that benchmark, and the Stoat authors further extended the benchmark with 25 additional F-Droid apps, producing a benchmark of 93 F-Droid apps in total. We evaluate our approach on the same 93 benchmark apps.

In addition, since F-Droid apps are easier targets, we downloaded the 100 most popular apps on the Google Play store as of July 31, 2018. This dataset allows us to evaluate our approach on real applications commonly used in practice. Of the 100 Google Play apps, five failed to run on the emulator (the decision to use emulators for the experiments is discussed in the Environment sub-section below). We thus excluded these five applications from further analysis, arriving at the set of 93 benchmark and 95 Google Play apps. A detailed description of these apps is available online [2].

5.1.4 Metrics

To answer RQ1 and RQ2, we measure the fraction of targets reachable by each tool per app and the average time it takes to reach a target, respectively.
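These per-app fractions are then aggregated across apps into the min/mean/median/max statistics reported in Section 5.2. A sketch of that aggregation (illustrative code, not part of any of the tools):

```java
import java.util.Arrays;

// Sketch of the coverage-metric aggregation: per app, the fraction of
// targets a tool reached (as a percentage); min/mean/median/max are then
// taken over the per-app values.
class CoverageStats {
    /** Fraction of targets reached in one app, as a percentage. */
    static double perApp(int reached, int totalTargets) {
        return 100.0 * reached / totalTargets;
    }

    static double mean(double[] perAppFractions) {
        return Arrays.stream(perAppFractions).average().orElse(0);
    }

    static double median(double[] perAppFractions) {
        double[] sorted = perAppFractions.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        return n % 2 == 1 ? sorted[n / 2]
                          : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
    }
}
```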
The goal of these experiments is to determine whether the design choices implemented in GOALEXPLORER allow us to cover a large fraction of app behaviors in a rapid manner.

For RQ3, we manually establish the ground truth of all paths leading to all reachable URL.openConnection targets in a subset of apps. Establishing ground truth for closed-source Google Play apps is unrealistic because these apps are largely obfuscated and manually tracking reachability of statements would not be reliable. We thus selected 36 open-source apps from our benchmark, which are also available on Google Play, to ensure we arrive at a representative and practically valuable subset. For these 36 apps, the author of this thesis manually collected the set of all reachable statements that invoke the URL.openConnection API and cross-validated the findings with another member of the research group. The set of analyzed apps and the constructed ground truth is available online [2].

We then analyze the main reasons preventing GOALEXPLORER from reaching some of these targets, i.e., cases when the correct path is missing in STG. We also analyze the cases where GOALEXPLORER encounters spurious STG paths that cannot be followed due to the over-approximation made by our model construction, i.e., cases when the tool has explored a wrong path and needs to backtrack. The goal of these experiments is to evaluate the accuracy of the model produced by our tool and the ability of our dynamic explorer to recover from failures.

5.1.5 Environment

We run all tools on a 64-bit Ubuntu 16.04.4 physical machine with a 20-core CPU (Intel Xeon), allocating 64GB of RAM for each tool. We run each app for one hour with each of the compared tools; the total evaluation time thus exceeds 500 hours to answer RQ1 and RQ2 alone.
Therefore, we run the experiments on Android emulators, which allows running multiple executions in parallel.

5.2 Results

5.2.1 RQ1 (Coverage)

Figure 5.1a shows the box-and-whisker plot representing the minimum, median, mean, and maximum percentage of activities reached by goal-driven exploration under four different models: WTG (Gator), WTG, WTG-E, and STG (GOALEXPLORER). It also shows the fraction of activities reachable by the state-of-the-art dynamic exploration tools: Sapienz and Stoat. The results are plotted separately for the 93 benchmark F-Droid apps and the 95 Google Play apps.

[Figure 5.1: Coverage results for the benchmark and GooglePlay apps. (a) Fraction of reachable activities in an app (higher is better). (b) Fraction of reachable openConnection statements in an app (higher is better).]

Table 5.1a summarizes Figure 5.1a, reporting the min, mean, median, and max percentage of target activities triggered. For the benchmark apps, GOALEXPLORER reaches between 17.9% and 100% of activities, with an average of 69.4% and a median of 66.7%. For comparison, WTG and WTG-E reach 50.7% and 55.3% of activities respectively, on average per app. The divergence between the results of WTG (Gator) and WTG is because Gator cannot process 5 of the benchmark applications, resulting in crashes. These apps are excluded from the statistics reported in the figure.

For Google Play apps, the differences between the UI models are much more substantial.
WTG (Gator) can only process nine out of the 95 Google Play apps, thus its overall activity-level coverage is very low: around 1%. Our re-implementation of Gator, WTG, can process all apps but covers around 9% of each app's activities on average. WTG-E increases the activity-level coverage to 12.9% on average, as it is able to deal with fragments, services, and broadcast receivers. Finally, GOALEXPLORER reaches the highest activity-level coverage of 23.2% on average, due to its accurate representation of app screens. This result demonstrates that merely extending existing static models with handling of fragments, services, and broadcast receivers, as we did in WTG-E, is less beneficial than the full set of solutions GOALEXPLORER applies.

Table 5.1: Coverage results for the benchmark and GooglePlay apps.

(a) Fraction of reachable activities in an app (higher is better).

Category     Metrics   Gator   WTG    WTG-E   GoalExplorer   Sapienz   Stoat
Benchmark    Min       0       0      14.3    17.9           17.9      17.9
             Mean      39.5    50.7   55.3    69.4           67.9      66.5
             Median    37.5    50     50      66.7           66.7      66.7
             Max       100     100    100     100            100       100
Google Play  Min       0       0.5    1.4     1.7            1.7       1.7
             Mean      1.4     9.4    12.9    23.2           18.7      19.1
             Median    0       7.4    11.1    19.6           17.2      16.7
             Max       16.7    66.7   66.7    75             66.7      66.7

(b) Fraction of reachable openConnection statements in an app (higher is better).

Category     Metrics   Gator   WTG    WTG-E   GoalExplorer   Sapienz   Stoat
Benchmark    Min       0       0      0       50             0         0
             Mean      17.4    21.3   51.7    85.9           73.4      79.5
             Median    0       0      50      100            91.7      100
             Max       100     100    100     100            100       100
Google Play  Min       0       0      0       0              0         0
             Mean      1.7     20.9   31.1    57.9           55.6      57.5
             Median    0       19     31.3    57.1           55.6      57.1
             Max       50      100    100     100            100       100

Notably, GOALEXPLORER achieves similar (and even 3% higher) activity-level coverage compared to the state-of-the-art dynamic exploration tools, for both the benchmark and the Google Play apps: for the benchmark apps, Sapienz and Stoat cover 67.9% and 66.5% of activities per app on average, respectively, while GOALEXPLORER covers 69.4%. For the Google Play apps, both tools reach around 19% of activities, on average, while GOALEXPLORER covers 23.2%.
That is an encouraging result for GOALEXPLORER, showing that the STG model it builds statically is accurate and comparable with the models constructed at runtime during dynamic exploration.

Unlike for the benchmarks, none of the tools is able to reach 100% activity-level coverage for the Google Play apps. This is not a surprising result, as most of the benchmark apps are simple utility apps with limited functionality and a relatively small number of activities (9 on average). In contrast, Google Play apps contain complex logic and a large number of activities (60 on average), some of which might not be reachable at all [68].

Figure 5.1b and Table 5.1b show similar data for the percentage of reachable URL.openConnection statements in an app. Here, GOALEXPLORER largely outperforms the other static UI models, triggering 85.9% of target API statements in benchmark apps, compared with 17.4%, 21.3%, and 51.7% for WTG (Gator), WTG, and WTG-E, respectively. It again performs better than the dynamic tools, triggering 12.5% more target statements on average than Sapienz (85.9% vs. 73.4% covered by Sapienz) and 6.4% more than Stoat (which covers 79.5%).

For GooglePlay apps, GOALEXPLORER also substantially outperforms the other UI models, triggering 57.9% of target statements, while the other models trigger only 21% and 31% of the targets. GOALEXPLORER performs comparably with the dynamic tools: Sapienz triggers 55.6% of the target statements on average and Stoat 57.4%.

To answer RQ1: Our experiments show that using the STG model allows GOALEXPLORER to trigger substantially more targets than the other statically-built Android UI models, namely WTG (Gator), WTG, and WTG-E.
Moreover, GOALEXPLORER is as effective as the dynamic tools in its ability to reach a certain target.

5.2.2 RQ2 (Performance)

To evaluate performance, we measure the minimum, mean, median, and maximum time, in seconds, it takes GOALEXPLORER to reach a target activity or statement. For a fair comparison with the dynamic tools, we run these tools for one hour each and measure the time it takes a tool to reach each new target: an activity or a statement. Such a comparison is more favorable for these tools, as we do not restart the tool from scratch for any new target. We only present times for reaching targets that are reachable by all of the compared tools, excluding the original Gator implementation from our analysis as it runs only on 9 out of 188 apps.

Table 5.2a and Table 5.2b present the results of the comparison for a target activity and statement, respectively. The results are also plotted in Figure 5.2, on a logarithmic scale.

[Figure 5.2: Performance results for the benchmark and GooglePlay apps. (a) Average time, in seconds, to reach an activity (lower is better). (b) Average time, in seconds, to reach a URL.openConnection statement (lower is better).]

Table 5.2: Performance results for the benchmark and GooglePlay apps.

(a) Average time, in seconds, to reach an activity (lower is better).

Category     Metrics   WTG    WTG-E   GoalExplorer   Sapienz   Stoat
Benchmark    Min       3      2.8     2.1            2         3
             Mean      12.1   12.2    13             453.7     634.3
             Median    9.3    8.9     8.4            235       362
             Max       40     46.2    85.4           2631      3539
Google Play  Min       5.3    4.4     5.3            2         2
             Mean      26.9   27.2    26.3           619.5     763.5
             Median    24.2   21.8    21.7           371       412.2
             Max       95.6   106.2   76             3196.8    3495.8

(b) Average time, in seconds, to reach a URL.openConnection statement (lower is better).

Category     Metrics   WTG     WTG-E   GoalExplorer   Sapienz   Stoat
Benchmark    Min       6.7     7.2     7.6            24.5      91.9
             Mean      16.5    17.4    17.6           480.7     632.4
             Median    15      16.1    16.5           205.2     563.4
             Max       35.1    38.3    33             1254.2    1282.2
Google Play  Min       2.4     2.4     2.3            2         2
             Mean      56      50.4    47.1           710.3     860.9
             Median    39.1    31      32.5           596.9     737.6
             Max       271.6   213.5   190.4          2720.9    2931.7

The results show that GOALEXPLORER can reach a target much faster than the dynamic tools: for the benchmark apps, it takes GOALEXPLORER 13 seconds on average to reach a target activity and 17.6 seconds to reach a target statement, compared with 453.7 and 480.7 seconds for Sapienz and 643.3 and 632.4 seconds for Stoat, respectively. For the Google Play apps, GOALEXPLORER reaches the target activities in 26.3 seconds and target statements in 47.1 seconds, on average, compared with 619.5 and 701.3 seconds for Sapienz and 763.5 and 860.9 seconds for Stoat, respectively. Not surprisingly, GOALEXPLORER performs better than the other static models: WTG (26.9 seconds for a target activity and 57.0 seconds for a statement) and WTG-E (27.2 seconds for a target activity and 50.9 seconds for a statement). Due to its more accurate representation of the Android UI, GOALEXPLORER encounters less backtracking and thus spends less time to trigger the targets.

Our experiments also show that it takes GOALEXPLORER around 93 seconds per app to build the static model.
Even given this number, it outperforms the dynamic app exploration tools: since our exploration is guided by a static model, GOALEXPLORER only performs the actions that lead to the target activity, while Sapienz and Stoat, by design, aim to trigger various functionalities, not necessarily those leading to the target.

To answer RQ2: GOALEXPLORER can trigger exploration targets substantially faster than dynamic exploration tools, which shows its efficiency for the goal-driven exploration scenario.

5.2.3 RQ3 (Accuracy)

To evaluate accuracy, the author of this thesis manually identified all paths leading to URL.openConnection statements in 36 apps, as discussed in Section 5.1. The analyzed apps contain 127 such statements in total (3.53 per app). Out of those, 102 are reachable (2.83 per app), 21 are in library methods that the apps do not use (which occurs in 8 apps), and the remaining 4 are in unused app methods (all in the same app).

We analyze the accuracy of GOALEXPLORER along three directions:

• RQ3.1 (FN): What is the fraction of cases when the model does not contain transitions that exist in practice? Such cases are False Negatives (FNs) of our STG model.

• RQ3.2 (FP-Dynamic): What is the fraction of cases when the model contains transitions that exist but cannot be triggered dynamically? Such cases are False Positives (FPs) not handled by dynamic exploration.

• RQ3.3 (FP): What is the fraction of cases when the model contains transitions that do not exist in practice? Such cases are False Positives (FPs) of the model.

In what follows, we discuss these cases one by one and also list their implications for the ability of GOALEXPLORER to reach the desired target.

RQ3.1 (FN): Model Does Not Contain the Correct Transitions

When a correct transition does not exist in the model, GOALEXPLORER cannot construct a complete path to the target and thus will not attempt to follow that path. In our sample, this happens in eight cases in total, in five apps.
The reasons for missing a transition (STG false negatives) are discussed below:

(I) Reflection: two transitions (in one app) are missing due to the limitation in handling complex reflective calls. GOALEXPLORER relies on the call graph construction implemented by FlowDroid, which only handles reflective call targets for constant string objects.

(II) Unmodeled Component: two transitions (in two apps) are missing due to unmodeled components, such as App Widgets, which are miniature app views that can be embedded in other apps, e.g., in the home screen. GOALEXPLORER does not model these components as they are placed outside of the main app.

(III) Intent Resolution: four transitions (in two apps) are missing due to failures in resolving the targets of intents, i.e., which activity is to be launched by the intent. GOALEXPLORER relies on a constant propagation technique from IC3 [57] to resolve the intents and identify the targets. When the constant propagation fails, the correct transitions are missing in STG.

GOALEXPLORER finds alternative paths to avoid missing the targets for four out of the eight missing transitions in STG; the remaining ones lead to four missed targets: two are due to reflection and two are consequences of unmodeled components. It should be noted that Sapienz and Stoat cannot trigger the latter two missed targets.

RQ3.2 (FP-Dynamic): Model Contains Correct Transitions That Cannot Be Triggered

GOALEXPLORER encounters 44 correct transitions in 21 apps that it fails to trigger dynamically (2.1 transitions per app). The main causes of such failures (false positives of dynamic exploration) are discussed below:

(I) Event Order: in nine cases (in five apps, 1.8 transitions per app), a transition is only possible under a certain order of events in a screen. For example, in the BookCatalogue app, the target can be reached only after the user marks certain electronic books as an anthology.
While the dynamic explorer is able to resolve many cases that require such interactions (17 cases were successfully resolved in our data set), the correct resolution is not always guaranteed.

(II) Semantic Inputs: in 17 cases (in 11 apps, 1.54 transitions per app), meaningful inputs (other than login credentials) are required to explore the path to the target. For example, a valid zip code is required to start the search in the Mileage app, and an .mp3 file has to be selected for upload in AnyMemo. Neither GOALEXPLORER nor the contemporary dynamic exploration techniques can generate such semantic inputs.

(III) Remote Triggers: in 13 cases (in eight apps, 1.63 transitions per app), the transitions can only be triggered given a certain response from the remote server. For example, in SyncToPix, a target can only be triggered when the app receives a certain reply from the server for syncing data. During our testing period, such a reply was never received, and none of the tools were able to trigger these transitions.

(IV) Widget Identification: in five cases (all from the same app), transitions cannot be triggered due to a failure in widget identification. GOALEXPLORER relies on UIAutomator [28] to identify the widgets where input events, e.g., clicks, should be applied. UIAutomator fails to locate a widget that does not have a resource identifier assigned. When that happens, GOALEXPLORER delegates to the dynamic UI exploration, which randomly interacts with the screen to hit the correct widget without locating it by its resource identifier.

Overall, the 44 transitions that GOALEXPLORER could not follow resulted in missing 11 targets in nine apps; GOALEXPLORER finds alternative paths to the remaining 33 targets. Sapienz and Stoat cannot trigger 10 of the missing targets either.

RQ3.3 (FP): Model Contains Transitions That Do Not Exist in Reality

GOALEXPLORER identifies 42 spurious transitions, in 27 apps (1.55 transitions per app), that do not exist in reality (STG false positives).
Below, we analyze the causes of these false positives:

(I) Callback Sequence: in 28 cases (in 17 apps, 1.64 transitions per app), incorrect transitions in STG are caused by over-approximation in modeling the sequence of callbacks. Since the order of callbacks cannot be predicted statically, GOALEXPLORER assumes that the callbacks can happen in any order (see Screen Builder in Section 4.2.2), producing screens in STG that are not feasible in practice.

(II) Fragment Properties: in 14 cases (in seven apps, 2.33 transitions per app), incorrect transitions are due to not analyzing detailed fragment properties. Apps can hide fragments by setting their properties, i.e., changing the visibility level to View.GONE. We only model the set of fragments but do not track the properties of each fragment in our model, over-approximating the set of fragments on a screen, some of which are invisible in practice.

While GOALEXPLORER over-approximates the set of possible transitions when constructing STG, resulting in transitions that do not exist in reality, all these transitions were “ignored” during exploration by backtracking and selecting another path. In fact, none of them impacted GOALEXPLORER’s ability to trigger the targets. However, such spurious transitions, as well as transitions that cannot be triggered (RQ3.2: false positives of dynamic exploration), prolong the exploration, as it takes time to resolve the false positives dynamically. Our results demonstrate that the execution time of the tool is not substantially affected by that, and GOALEXPLORER is still able to reach the target substantially faster than the dynamic tools.

Overall, GOALEXPLORER fails to reach 16 targets in 11 apps due to false negatives (missing correct transitions) and false positives of dynamic exploration (being unable to follow the correct transition). The baseline dynamic app exploration tools can reach two and four of these targets, for Sapienz and Stoat, respectively.
At the same time, GOALEXPLORER can reach 14 and 6 additional targets that these tools cannot reach, respectively. We thus believe the results demonstrate that the statically constructed model is accurate enough to facilitate the goal-driven exploration task.

5.2.4 Evaluation Summary

Overall, our experiments demonstrate that the combination of the static STG model and the run-time exploration techniques applied by GOALEXPLORER is valuable for the goal-driven exploration task. Using these techniques, the tool is able to reach substantially more targets than existing techniques based on static models of an app's UI. It is able to reach a comparable number of targets as the dynamic app exploration techniques, only substantially faster, which further attests to the quality and value of the statically-built model.

GOALEXPLORER is able to trigger 84.3% of all reachable targets, better than the state-of-the-art dynamic app exploration techniques (72.5% for Sapienz and 78.4% for Stoat). The main reasons for GOALEXPLORER's missed targets are the absence of a certain event sequence and the lack of meaningful inputs, e.g., files of a certain type. GOALEXPLORER encounters transitions that cannot be followed and backtracks 2.39 times per app, which does not substantially impact the exploration time; it still triggers the target substantially faster than the baseline dynamic tools.

Chapter 6

Discussion

6.1 Summary

In this thesis, we first defined the problem of goal-driven exploration for Android applications: directly triggering the functionality of interest in an application. We then discussed existing approaches, demonstrating that most of them focus on systematically exercising the whole application rather than swiftly navigating to the target functionality.
As these approaches often lack knowledge of the unexplored paths that are likely to lead to the functionality of interest, they are not applicable to the goal-driven exploration scenario.

We proposed a goal-driven exploration approach that statically constructs a UI model of the application to guide the dynamic exploration towards a particular target of interest: an Android activity, API call, or program statement. We analyzed the existing UI models of Android applications and found that they do not provide an accurate representation for goal-driven exploration; thus, we presented a new UI model, the Screen Transition Graph (STG). We implemented a tool, called GOALEXPLORER, that constructs the STG and uses it for goal-driven exploration.

We empirically evaluated GOALEXPLORER on 93 benchmark applications and the 95 most popular GooglePlay applications. The evaluation results showed that the STG is substantially more accurate than other Android UI models and that GOALEXPLORER is able to trigger a target functionality much faster than existing application exploration techniques.

6.2 Limitations and Threats to Validity

6.2.1 Limitations

The main limitation of our approach is that it does not accurately model combinations of events on the same screen required to issue a transition. Our static UI model over-approximates the set of possible transitions, thus some collected transition paths might not be feasible in practice due to such constraints, e.g., transitions requiring a certain combination of events or a specific system state. However, GOALEXPLORER incorporates dynamic exploration logic to handle such cases at runtime, which has proven to be effective in our evaluation of the tool.

Although we extended our implementation to handle login screens, GOALEXPLORER cannot handle screens that require other types of semantic inputs, such as a zip code or a specific type of file.
This limitation is common to many other automated exploration approaches and is left for future work.

Another limitation of our approach is tied to the weaknesses of static analysis: GOALEXPLORER is not able to generate accurate UI models for apps that use native code, dependency injection, and complex reflective calls. As GOALEXPLORER relies on existing static analysis tools, FlowDroid and Soot, for constructing the static UI model of the app, the generated model might be incomplete when these tools produce inaccurate results. Such a limitation is common to any static analysis tool. Although the model is not perfect, GOALEXPLORER demonstrates its usefulness by achieving even slightly higher coverage than existing dynamic automated testing techniques.

6.2.2 Threats to Validity

The external validity of our results might be affected by the selection of subject apps that we used, and our results may not necessarily generalize beyond our subjects. Moreover, the benchmark F-Droid apps we used are from several years ago, thus they may not represent the latest apps in functionality and design. We attempted to mitigate this threat by using a “standardized” set of benchmark apps and by extending this set to include the 95 top Google Play apps. As we used most-popular real-world apps of considerable size, we believe our results will generalize to other apps.

For internal validity, our static analysis relies on FlowDroid to construct the call graph and collect the callbacks. The validity of our results thus directly depends on the accuracy of that tool.

6.3 Future Research Directions

We envision two possible directions for future work:

(1) Tackling transitions that require specific semantic inputs, e.g., a zip code, is an important challenge for future work. Recent work [44] proposed a Natural Language Processing (NLP) approach that effectively generates semantic textual inputs based on the textual contents of the screen.
Integrating such learning algorithms could potentially improve GOALEXPLORER.

(2) Another possible direction for future work is to improve the accuracy of the statically-constructed STG. For example, GOALEXPLORER currently analyzes the set of fragments in each screen without modeling the properties of these fragments, leading to the incorrect transitions discussed in Section 5.2.3. Tracking the properties of fragments can help identify the fragments that are set invisible and exclude them from the model, hence producing a more accurate STG.

6.4 Conclusion

This thesis introduced GOALEXPLORER, a tool for goal-driven exploration of Android applications. Such a tool is useful in scenarios where a security auditor needs to automatically trigger a functionality of interest in an app. The main contributions of GOALEXPLORER are

(a) the static technique for constructing the STG, a model that represents UI screens and the transitions between the screens, and

(b) an engine that uses the STG to build an executable script that triggers the functionality of interest.

Our empirical evaluation shows that the STG model introduced in GOALEXPLORER is more accurate than other existing static UI models of Android applications. That is because it models advanced UI elements used in contemporary Android applications, including fragments, services, and broadcast receivers. Moreover, unlike earlier work that models UI components, the STG represents the composition of these components into UI screens, which makes it more appropriate for targeted-exploration executions. Compared to dynamic application exploration and testing tools, GOALEXPLORER relies on the statically-built model of an application, which allows it to reach the target substantially faster.

As future work, GOALEXPLORER can be extended to support semantic textual inputs, which can be done using NLP. Improving the accuracy of the statically-produced STG, e.g., by tracking the properties of fragments, is another possible extension of this work.

Bibliography

[1] D.
Amalfitano, A. R. Fasolino, P. Tramontana, S. De Carmine, and A. M. Memon. Using GUI Ripping for automated testing of Android applications. In Proc. of ASE’2012, pages 258–261, 2012.

[2] Anonymous. Supplementary materials, anonymized for the review, 2019. (Last accessed: Jan 2019).

[3] S. Arlt, A. Podelski, C. Bertolini, M. Schäf, I. Banerjee, and A. M. Memon. Lightweight static analysis for GUI testing. In Proc. of ISSRE’2012, pages 301–310, 2012.

[4] S. Arzt, S. Rasthofer, C. Fritz, E. Bodden, A. Bartel, J. Klein, Y. Le Traon, D. Octeau, and P. McDaniel. FlowDroid: Precise context, flow, field, object-sensitive and lifecycle-aware taint analysis for Android apps. In Proc. of PLDI’2014, pages 259–269, 2014.

[5] T. Azim and I. Neamtiu. Targeted and depth-first exploration for systematic testing of Android apps. In Proc. of OOPSLA’2013, pages 641–660, 2013.

[6] R. Baldoni, E. Coppa, D. C. D’elia, C. Demetrescu, and I. Finocchi. A survey of symbolic execution techniques. ACM Computing Surveys, pages 50:1–50:39, 2018.

[7] K. Benjamin, G. Von Bochmann, M. E. Dincturk, G.-V. Jourdan, and I. V. Onut. A strategy for efficient crawling of rich internet applications. In Proc. of ICWE’2011, pages 74–89, 2011.

[8] T. Bray. Fragments for all, 2011. (Last accessed: Jan 2019).

[9] S. Chen, L. Fan, C. Chen, T. Su, W. Li, Y. Liu, and L. Xu. StoryDroid: Automated generation of storyboard for Android apps. In Proc. of ICSE’2019, pages 596–607, 2019.

[10] W. Choi, G. Necula, and K. Sen. Guided GUI testing of Android apps with minimal restart and approximate learning. In Proc. of OOPSLA’2013, pages 623–640, 2013.

[11] S. R. Choudhary, A. Gorla, and A. Orso. Automated test input generation for Android: Are we there yet? In Proc. of ASE’2015, pages 429–440, 2015.

[12] A. Dogtiev. App stores list 2018, 2018. (Last accessed: Jan 2019).

[13] C. Duda, G. Frey, D. Kossmann, R. Matter, and C. Zhou. AJAX crawl: Making AJAX applications searchable. In Proc. of ICDE’2009, pages 78–89, 2009.

[14] F-Droid Limited.
F-Droid, 2019. (Last accessed: Jan 2019).

[15] Facebook. Facebook login API, 2019. (Last accessed: Jan 2019).

[16] Facebook. Facebook SDK for Android, 2019. (Last accessed: Jan 2019).

[17] A. M. Fard and A. Mesbah. Feedback-directed exploration of web applications to derive test models. In Proc. of ISSRE’2013, pages 278–287, 2013.

[18] D. Ferreira, V. Kostakos, A. R. Beresford, J. Lindqvist, and A. K. Dey. Securacy: An empirical investigation of Android applications’ network usage, privacy and security. In Proc. of WiSec’2015, pages 11:1–11:11, 2015.

[19] X. Gao, S. H. Tan, Z. Dong, and A. Roychoudhury. Android testing via synthetic symbolic execution. In Proc. of ASE’2018, pages 419–429, 2018.

[20] Google. Activity-lifecycle concepts, 2019. (Last accessed: Jan 2019).

[21] Google. Android debug bridge (ADB), 2019. (Last accessed: Apr 2019).

[22] Google. Android SDK tools release note, 2019. (Last accessed: Jan 2019).

[23] Google. App components, 2019. (Last accessed: Jan 2019).

[24] Google. Espresso, 2019. (Last accessed: Jan 2019).

[25] Google. Fragment, 2019. (Last accessed: Jan 2019).

[26] Google. Menus, 2019. (Last accessed: Jan 2019).

[27] Google. Monkey, 2019. (Last accessed: Jan 2019).

[28] Google. UI Automator, 2019. (Last accessed: Jan 2019).

[29] Google. Webview, 2019. (Last accessed: May 2019).

[30] O. W. Group. The OAuth 2.0 authorization framework: Bearer token usage, 2019. (Last accessed: Apr 2019).

[31] T. Gu, C. Sun, X. Ma, C. Cao, C. Xu, Y. Yao, Q. Zhang, J. Lu, and Z. Su. Practical GUI testing of Android applications via model abstraction and refinement. In Proc. of ICSE’2019, pages 269–280, 2019.

[32] S. Hao, B. Liu, S. Nath, W. G. Halfond, and R. Govindan. PUMA: Programmable UI-automation for large-scale dynamic analysis of mobile apps. In Proc. of MobiSys’2014, pages 204–217, 2014.

[33] M. Henze, J. Pennekamp, D. Hellmanns, E. Mühmer, J. H. Ziegeldorf, A. Drichel, and K. Wehrle. CloudAnalyzer: Uncovering the cloud usage of mobile apps. In Proc.
of the MobiQuitous’2017, pages 262–271, 2017.[34] O. Hou. A look at Google bouncer., 2012.(Last accessed: Jan 2019).[35] G. Hu, X. Yuan, Y. Tang, and J. Yang. Efficiently, effectively detectingmobile app bugs with AppDoctor. In Proc. of EuroSys’2014, pages18:1–18:15, 2014.[36] J. Huang, Z. Li, X. Xiao, Z. Wu, K. Lu, X. Zhang, and G. Jiang. SUPOR:Precise and scalable sensitive user input detection for Android apps. In Proc.of USENIX Security 2015, pages 977–992, 2015.[37] IBM. Identify and remediate application security vulnerabilities with IBMapplication security., 2019.(Last accessed: Jan 2019).[38] C. S. Jensen, M. R. Prasad, and A. Møller. Automated testing with targetedevent sequence generation. In Proc. of ISSTA’2013, pages 67–77, 2013.[39] R. Johnson and A. Stavrou. Forced-path execution for Android applicationson x86 platforms. In Proc. of SERE-C’2013, pages 188–197, 2013.[40] M. Junaid, D. Liu, and D. Kung. Dexteroid: Detecting malicious behaviorsin Android apps using reverse-engineered lifecycle models. Computers &Security, 59:92 – 117, 2016.[41] Y. Kang, Y. Zhou, H. Xu, and M. R. Lyu. DiagDroid: Android performancediagnosis via anatomizing asynchronous executions. In Proc. of FSE’2016,pages 410–421, 2016.55[42] P. Kong, L. Li, J. Gao, K. Liu, T. F. Bissyandé, and J. Klein. Automatedtesting of Android apps: A systematic literature review. IEEE Transactionson Reliability, 68(1):45–66, 2019.[43] L. Li, A. Bartel, T. F. Bissyandé, J. Klein, Y. Le Traon, S. Arzt, S. Rasthofer,E. Bodden, D. Octeau, and P. McDaniel. IccTA: Detecting inter-componentprivacy leaks in Android apps. In Proc. of ICSE’15, pages 280–291, 2015.[44] P. Liu, X. Zhang, M. Pistoia, Y. Zheng, M. Marques, and L. Zeng. Automatictext input generation for mobile testing. In Proc. of ICSE’17, pages 643–653,2017.[45] Y. Liu, H. H. Song, I. Bermudez, A. Mislove, M. Baldi, and A. Tongaonkar.Identifying personal information in Internet traffic. In Proc. of COSN’2015,pages 59–70, 2015.[46] A. 
Machiry, R. Tahiliani, and M. Naik. Dynodroid: An input generationsystem for Android apps. In Proc. of ESEC/FSE’2013, pages 224–234, 2013.[47] R. Mahmood, N. Mirzaei, and S. Malek. EvoDroid: Segmented evolutionarytesting of Android apps. In Proc. of FSE’2014, pages 599–609, 2014.[48] K. Mao, M. Harman, and Y. Jia. Sapienz: Multi-objective automated testingfor Android applications. In Proc. of ISSTA’2016, pages 94–105, 2016.[49] A. Memon, I. Banerjee, B. N. Nguyen, and B. Robbins. The first decade ofGUI ripping: Extensions, applications, and broader impacts. In Proc. ofWCRE’2013, pages 11–20, 2013.[50] F. Menczer, G. Pant, and P. Srinivasan. Topical web crawlers: Evaluatingadaptive algorithms. ACM Trans. Internet Technol., pages 378–419, 2004.[51] A. Mesbah, A. van Deursen, and S. Lenselink. Crawling AJAX-based webapplications through dynamic analysis of user interface state changes. ACMTrans. Web, pages 3:1–3:30, 2012.[52] N. Mirzaei, H. Bagheri, R. Mahmood, and S. Malek. SIG-Droid: Automatedsystem input generation for Android applications. In Proc. of ISSRE’2015,pages 461–471, 2015.56[53] N. Mirzaei, S. Malek, C. S. Pa˘sa˘reanu, N. Esfahani, and R. Mahmood.Testing Android apps through symbolic execution. SIGSOFT Softw. Eng.Notes, 37(6):1–5, 2012.[54] K. Moran, L.-V. Mario, C. Bernal-Cárdenas, C. Vendome, andD. Poshyvanyk. Automatically discovering, reporting and reproducingAndroid application crashes. In Proc. of ICST’2016, pages 33–44, 2016.[55] R. Naga. AXML Printer.,2018. (Last accessed: Jan 2019).[56] F. S. Ocariza Jr., K. Pattabiraman, and A. Mesbah. Autoflox: An automaticfault localizer for client-side JavaScript. In Proc. of ICST’2012, pages 31–40,2012.[57] D. Octeau, D. Luchaup, M. Dering, S. Jha, and P. McDaniel. Compositeconstant propagation: Application to Android inter-componentcommunication analysis. In Proc. of ICSE’2015, pages 77–88, 2015.[58] A. Parecki. 
An open protocol to allow secure authorization in a simple andstandard method from web, mobile and desktop applications., 2019. (Last accessed: Apr 2019).[59] Privacy International. How apps on Android share data with facebook., 2018. (Last accessed:Jan 2019).[60] J. Ren, A. Rao, M. Lindorfer, A. Legout, and D. Choffnes. ReCon:Revealing and controlling PII leaks in mobile network traffic. In Proc. ofMobiSys’2016, pages 361–374, 2016.[61] RobotiumTech. Robotium., 2016. (Lastaccessed: Jan 2019).[62] J. Schütte, R. Fedler, and D. Titze. ConDroid: Targeted dynamic analysis ofAndroid applications. In Proc. of AINA’2015, pages 571–578, 2015.[63] S. Staiger. Reverse engineering of graphical user interfaces using staticanalyses. In Proc. of WCRE ’2007, pages 189–198, 2007.57[64] Statista. Number of apps available in leading app stores., 2018. (Lastaccessed: Jan 2019).[65] T. Su, G. Meng, Y. Chen, K. Wu, W. Yang, Y. Yao, G. Pu, Y. Liu, and Z. Su.Guided, stochastic model-based GUI testing of android apps. In Proc. ofFSE’2017, pages 245–256, 2017.[66] Tencent. Tencent Kingkong app scan., 2019. (Lastaccessed: Jan 2019).[67] S. Thummalapenta, K. V. Lakshmi, S. Sinha, N. Sinha, and S. Chandra.Guided test generation for web applications. In Proc. of ICSE’2013, pages162–171, 2013.[68] W. Wang, D. Li, W. Yang, Y. Cao, Z. Zhang, Y. Deng, and T. Xie. Anempirical study of Android test generation tools in industrial cases. In Proc.of ASE’2018, pages 738–748, 2018.[69] Y. Wang and A. Rountev. Profiling the responsiveness of Androidapplications via automated resource amplification. In Proc. ofMOBILESoft’2016, pages 48–58, 2016.[70] M. Wong and D. Lie. IntelliDroid: A targeted input generator for thedynamic analysis of Android malware. In Proc. of NDSS’2016, pages 21–24,2016.[71] H. Wu, S. Yang, and A. Rountev. Static detection of energy defect patterns inAndroid applications. In Proc. of CC’2016, pages 185–195, 2016.[72] M. Xia, L. Gong, Y. Lyu, Z. Qi, and X. Liu. 
Effective real-time Androidapplication auditing. In Proc. of SP’2015, pages 899–914, 2015.[73] S. Yang, H. Zhang, H. Wu, Y. Wang, D. Yan, and A. Rountev. Static windowtransition graphs for Android. In Proc. of ASE’2015, pages 658–668, 2015.[74] Zedge. ZEDGE.,2019. (Last accessed: Jan 2019).58[75] A. Zeller. Mining apps for anomalies. In Proc. of SoftwareMining’2016,ASE’2016 Workshop, pages 1–1, 2016.[76] H. Zhang, H. Wu, and A. Rountev. Automated test generation for detectionof leaks in Android applications. In Proc. of AST’2016, pages 64–70, 2016.[77] L. L. Zhang, C.-J. M. Liang, Z. L. Li, Y. Liu, F. Zhao, and E. Chen.Characterizing privacy risks of mobile apps with sensitivity analysis. IEEETransactions on Mobile Computing, 17(2):279–292, 2017.[78] S. Zhang, D. Saff, Y. Bu, and M. D. Ernst. Combined static and dynamicautomated test generation. In Proc. of ISSTA’2011, pages 353–363, 2011.[79] Y. Zhang, Y. Sui, and J. Xue. Launch-mode-aware context-sensitive activitytransition analysis. In Proc. of ICSE’2018, pages 598–608, 2018.59