Open Collections

UBC Theses and Dissertations
On the detection, localization and repair of client-side JavaScript faults Ocariza, Frolin S. 2016


Full Text

On the Detection, Localization and Repair of Client-Side JavaScript Faults

by

Frolin S. Ocariza, Jr.

B.A.Sc. (Hons.), The University of Toronto, Canada, 2010
M.A.Sc., The University of British Columbia, Canada, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

Doctor of Philosophy

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES
(Electrical and Computer Engineering)

The University of British Columbia
(Vancouver)

October 2016

© Frolin S. Ocariza, Jr., 2016

Abstract

With web application usage becoming ubiquitous, there is greater demand for making such applications more reliable. This is especially true as more users rely on web applications to conduct day-to-day tasks, and more companies rely on these applications to drive their business. Since the advent of Web 2.0, developers often implement much of the web application's functionality at the client-side, using client-side JavaScript. Unfortunately, despite repeated complaints from developers about confusing aspects of the JavaScript language, little work has been done analyzing the language's reliability characteristics. With this problem in mind, we conducted an empirical study of real-world JavaScript bugs, with the goal of understanding their root cause and impact. We found that most of these bugs are DOM-related, which means they occur as a result of the JavaScript code's interaction with the Document Object Model (DOM).

Having gained a thorough understanding of JavaScript bugs, we designed techniques for automatically detecting, localizing and repairing these bugs. Our localization and repair techniques are implemented as the AUTOFLOX and VEJOVIS tools, respectively, and they target bugs that are DOM-related. In addition, our detection techniques – AUREBESH and HOLOCRON – attempt to find inconsistencies that occur in web applications written using JavaScript Model-View-Controller (MVC) frameworks.
Based on our experimental evaluations, we found that these tools are highly accurate, and are capable of finding and fixing bugs in real-world web applications.

Preface

This dissertation is comprised of work conducted by myself, in collaboration with my advisors (Karthik Pattabiraman and Ali Mesbah) and two other colleagues. In particular, Chapters 2 to 5 are based on work published in various conferences and journals, and Chapter 6 is based on a paper to be submitted at a software engineering conference. I was the main author in all of these papers, and I was responsible for writing the manuscript, designing the approach (where applicable), and running the experiments. My collaborators were responsible for editing and writing portions of the manuscripts, manually analyzing bug reports, and extending my fault localization tool.

The publications associated with each chapter are listed below.

• Chapter 2:
  – "An Empirical Study of Client-Side JavaScript Bugs" [130]. F. Ocariza, K. Bajaj, K. Pattabiraman and A. Mesbah. In Proceedings of the International Symposium on Empirical Software Engineering and Measurement (ESEM 2013). ACM/IEEE. 55–64.
  – "A Study of Causes and Consequences of Client-Side JavaScript Bugs" [133]. F. Ocariza, K. Bajaj, K. Pattabiraman, A. Mesbah. Transactions on Software Engineering (TSE). IEEE Computer Society.
• Chapter 3:
  – "AutoFLox: An Automatic Fault-Localizer for Client-Side JavaScript" [129]. F. Ocariza, K. Pattabiraman and A. Mesbah. In Proceedings of the International Conference on Software Testing, Verification and Validation (ICST 2012). IEEE Computer Society. 31–40.
  – "Automatic Fault Localization for Client-Side JavaScript" [134]. F. Ocariza, G. Li, K. Pattabiraman, A. Mesbah. Journal of Software Testing, Verification and Reliability (STVR). Vol 26, Issue 1. 2016. Wiley. 69–88.
• Chapter 4:
  – "Vejovis: Suggesting Fixes for JavaScript Faults" [131]. F. Ocariza, K. Pattabiraman and A. Mesbah.
In Proceedings of the International Conference on Software Engineering (ICSE 2014). ACM. 837–847.
• Chapter 5:
  – "Detecting Inconsistencies in JavaScript MVC Applications" [132]. F. Ocariza, K. Pattabiraman and A. Mesbah. In Proceedings of the International Conference on Software Engineering (ICSE 2015). IEEE Computer Society. 177–186.
• Chapter 6:
  – The author wrote a paper for this work which is in preparation for submission to a software engineering conference.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1 Introduction
  1.1 Research Questions
  1.2 Publications
2 Characteristics of JavaScript Faults
  2.1 Introduction
  2.2 Background and Motivation
    2.2.1 Web Applications
    2.2.2 JavaScript Bugs
    2.2.3 Goal and Motivation
  2.3 Experimental Methodology
    2.3.1 Research Questions
    2.3.2 Experimental Objects
    2.3.3 Collecting the Bug Reports
    2.3.4 Analyzing the Collected Bug Reports
  2.4 Results
    2.4.1 Fault Categories
    2.4.2 Consequences of JavaScript Faults
    2.4.3 Causes of JavaScript Faults
    2.4.4 Browser Specificity
    2.4.5 Triage and Fix Time for JavaScript Faults
    2.4.6 Prevalence of Type Faults
    2.4.7 Threats to Validity
  2.5 Discussion
  2.6 Related Work
  2.7 Conclusions
3 Automatic JavaScript Fault Localization
  3.1 Introduction
  3.2 Challenges and Motivation
    3.2.1 Running Example
    3.2.2 JavaScript Fault Localization
    3.2.3 Challenges in Analyzing JavaScript Code
  3.3 Scope of this Chapter
  3.4 Approach
    3.4.1 Usage Model
    3.4.2 Trace Collection
    3.4.3 Trace Analysis
    3.4.4 Support for Challenging Cases
    3.4.5 Assumptions
  3.5 Tool Implementation
  3.6 Empirical Evaluation
    3.6.1 Goals and Research Questions
    3.6.2 Methodology
    3.6.3 Accuracy of AUTOFLOX
    3.6.4 Real Bugs
    3.6.5 Performance
  3.7 Discussion
    3.7.1 Limitations
    3.7.2 Threats to Validity
  3.8 Related Work
    3.8.1 Web Application Reliability
    3.8.2 Fault Localization
  3.9 Conclusions and Future Work
4 Automatic JavaScript Fault Repair
  4.1 Introduction
  4.2 Background and Challenges
  4.3 Common Developer Fixes
    4.3.1 Methodology
    4.3.2 Results
  4.4 Fault Model
  4.5 Approach
    4.5.1 Data Collector
    4.5.2 Analyzing Symptoms
    4.5.3 Suggesting Treatments
    4.5.4 Implementation: Vejovis
  4.6 Evaluation
    4.6.1 Subject Systems
    4.6.2 Methodology
    4.6.3 Results
  4.7 Discussion
  4.8 Related Work
  4.9 Conclusion
5 Detecting Inconsistencies in JavaScript MVC Applications
  5.1 Introduction
  5.2 Running Example
  5.3 Consistency Issues
  5.4 Formal Model of MVC Applications
  5.5 Approach
    5.5.1 Overview
    5.5.2 Finding the Models, Views and Controllers
    5.5.3 Inferring Identifiers
    5.5.4 Discovering MVC Groupings
    5.5.5 Detecting Inconsistencies
  5.6 Implementation
  5.7 Evaluation
    5.7.1 Subject Systems
    5.7.2 Methodology
    5.7.3 Results
  5.8 Discussion
  5.9 Related Work
  5.10 Conclusion and Future Work
6 Cross-Language Inconsistency Detection
  6.1 Introduction
  6.2 Background and Motivation
    6.2.1 Definitions
    6.2.2 Motivating Examples
    6.2.3 Challenges
  6.3 Approach
    6.3.1 Transforming Code into Trees
    6.3.2 Finding Common Patterns
    6.3.3 Establishing Rules from Patterns
    6.3.4 Detecting Violations
  6.4 Implementation
  6.5 Evaluation
    6.5.1 Subject Systems
    6.5.2 Experimental Methodology
    6.5.3 Prevalence of Inconsistencies (RQ1)
    6.5.4 Real Bugs (RQ2)
    6.5.5 Performance (RQ3)
    6.5.6 Threats to Validity
  6.6 Discussion
  6.7 Related Work
  6.8 Conclusions
7 Conclusions and Future Work
  7.1 Expected Impact
  7.2 Future Work
Bibliography

List of Tables

Table 2.1 Experimental subjects from which bug reports were collected.
Table 2.2 Impact Categories.
Table 2.3 Fault categories of the bug reports analyzed. Library data are shown in italics.
Table 2.4 Number of code-terminating failures compared to output-related failures. Library data are shown in italics. Data for DOM-related faults only are shown in parentheses.
Table 2.5 Impact categories of the bug reports analyzed. Library data are shown in italics. Impact categories data for DOM-related faults only are shown in parentheses.
Table 2.6 Error locations of the bug reports analyzed. Library data are shown in italics.
Table 2.7 Browser specificity of the bug reports analyzed. Library data are shown in italics.
Table 2.8 Average triage times (T) and fix times (F) for each experimental object, rounded to the nearest whole number. Library data are shown in italics.
Table 3.1 Results of the experiment on open-source web applications, assessing the accuracy of AUTOFLOX.
Table 3.2 Results of the experiment on production websites, assessing the robustness of AUTOFLOX (in particular, how well it works in production settings).
Table 3.3 Web applications and libraries in which the real bugs to which AUTOFLOX is subjected appear.
Table 3.4 Performance results.
Table 4.1 List of commonly used CSS selector components.
Table 4.2 List of applications used in our study of common fixes.
Table 4.3 List of output messages.
Table 4.4 Accuracy results, with edit distance bound set to infinity, i.e., no bound assigned. BR1 refers to the first bug report, and BR2, the second bug report (from each application). Data in parentheses are the results for when the edit distance bound is set to 5.
Table 4.5 Rank of the correct fix when suggestions are sorted by edit distance. The denominator refers to the total number of suggestions. Top ranked suggestions are in bold.
Table 4.6 Performance results.
Table 5.1 Real bugs found. The "Fault Type" column refers to the fault type number, as per Table 5.2.
Table 5.2 Types of faults injected. MV refers to "model variable", and CF refers to "controller function".
Table 5.3 Fault injection results. The size pertains to the combined lines of HTML and JavaScript code, not including libraries.
Table 5.4 Fault injection results per property.
Table 6.1 Number of real bugs found per application. The size is shown in KB, with lines of code (LOC) in parentheses. The size pertains to both the HTML and JavaScript code in each application, not including libraries.

List of Figures

Figure 2.1 Example that describes the error, fault, and failure of a JavaScript bug reported in Moodle.
Figure 2.2 Bug reports per calendar year. Note that there was one additional bug report from 2003, and two additional bug reports from 2015.
Figure 2.3 Pie chart of the distribution of fault categories.
Figure 2.4 Temporal graphs showing (a) the percent of DOM-related faults per year; (b) the percent of code-terminating failures per year. The red regression line represents DOM-related faults, while the blue regression line represents non-DOM-related faults; (c) the percent of bug reports whose error is located in the JavaScript code, per year; (d) the percent of browser-specific bugs per year; and (e) the percent of type faults per year.
Figure 2.5 Box plot for (a) triage time, and (b) fix time.
Figure 2.6 Bar graphs showing (a) the percentage of bug reports that are type faults versus the percentage of bug reports that are not type faults and (b) the distribution of type fault categories.
Figure 3.1 Example JavaScript code fragment based on …
Figure 3.2 Block diagram illustrating the proposed fault localization approach.
Figure 3.3 Example trace record for Line 5 of the running example from Figure 3.1.
Figure 3.4 Abridged execution trace for the running example showing the two sequences and the relevant sequence. Each trace record is appended with either a marker or the line number relative to the function. Numbers in parentheses refer to the line numbers relative to the entire JavaScript file. root refers to code outside a function. The line marked with a (**) is the direct DOM access, and the goal of this design is to correctly identify this line as the direct DOM access.
Figure 3.5 Example illustrating the approach for supporting eval.
Figure 3.6 Example illustrating the mapping from the beautified version to the minified version, in the approach for supporting minified code.
Figure 4.1 JavaScript code of the running example.
Figure 4.2 The DOM state during execution of the JavaScript code in the running example. For simplicity, only the elements under body are shown.
Figure 4.3 High-level block diagram of our design.
Figure 5.1 HTML code of the "Search" view (search.html).
Figure 5.2 HTML code of the "Results" view (results.html).
Figure 5.3 JavaScript code of the models and controllers.
Figure 5.4 JavaScript code of the routes.
Figure 5.5 Block diagram of the def-use and grouping model for MVC framework identifiers. Solid arrows indicate a "defines" relation, while dashed arrows indicate a "uses" relation.
Models, views, and controllers connected with the same line types form a grouping.
Figure 5.6 Block diagram of our approach.
Figure 5.7 Portion of findIdentifiers that updates ωM for every model. The other "identifier inclusion functions" are updated in a similar way.
Figure 5.8 Tree representing the model variables in the Results model of MovieSearch, including all the nested objects. The identifiers are shown at the top of each node, while the types are shown at the bottom.
Figure 6.1 Overview of our fault detection approach.
Figure 6.2 Example of an intra-pattern consistency rule violation.
Figure 6.3 Example of an unconditional link rule violation. The subtrees are slightly altered for simplicity.
Figure 6.4 Percentage of bug reports classified as an inconsistency for each MVC framework.
Figure 6.5 Number of inconsistency categories with a particular frequency. Most inconsistency categories have just 1-2 inconsistencies in them.
Figure 6.6 Categories of code smells found by HOLOCRON. HC stands for Hardcoded Constants, UVU for Unsafe Value Usage, and MPI stands for Multi-Purpose Identifiers.

Acknowledgements

First of all, I would like to extend my heartfelt gratitude to my doctoral advisors, Karthik Pattabiraman and Ali Mesbah; their guidance throughout the many years I spent in grad school has undoubtedly been one of the most crucial aspects of not only my eventual attainment of my degree, but also, my professional and personal growth.
Karthik, you have been there since the beginning, having also been my MASc advisor, and I want to thank you for giving me the opportunity to work in your lab and constantly pushing me to strive harder and be better, whether it be in presenting, writing papers, and, in general, conducting research. Ali, you have been there pretty much since you joined UBC, and I would like to thank you for supporting me throughout those years, and for providing me with very valuable input in my work, especially in terms of learning more about web applications and submitting to reputable software engineering conferences/journals.

I would also like to thank my thesis committee members and examiners, including Sathish Gopalakrishnan and Matei Ripeanu, as well as my internship mentors and my CSRG colleagues for all the feedback that they have provided over the years, and for sitting through the many practice presentations I have given. I would like to say a special thanks to all my lab mates – past and present – from both the dependable systems lab and SALT lab. Even though I am somewhat notorious for not physically being in the lab as often as everyone else, the days that I did get to spend there have been thoroughly enjoyable; I appreciate all the times that you have helped me with my research, and it was a pleasure being able to provide my own input regarding your research. I am sure that the friendships we have formed will extend beyond the lab.

Of course, this section would not be complete if I neglected to acknowledge my family and my closest friends. Thank you first and foremost to my parents, Frolin and Jeannette, who, needless to say, have provided me with immense emotional support, and laid out the foundations that helped me become who I am today. Thank you to my sister Linnette and my brother Marco, who were always there whenever I needed some time off from doing research.
Thank you to all my relatives – most of whom are in the Philippines, some of whom are in Canada, and some of whom are in other parts of the world – for being a constant source of inspiration. I would also like to thank the CMPC members at Canadian Martyrs parish, especially the youth, for all their prayers and encouragement; special mention goes out to Kylo, Elijah, Mateo, Andrew, Erwin, and Paul, as well as fellow leaders Rvie and Jennifer; I thank God not only for this thesis, but also for the many abundant graces – including all these friendships and all the trials – that came along with it.

Dedication

To my friends and family.

Chapter 1

Introduction

The popularity of web applications in today's world is undeniable, with almost 50% of the entire world population visiting such applications, according to the most recent Internet World Stats survey [76]. In addition, web applications have become very important in conducting day-to-day tasks, such as search and social media, as well as large-scale corporate endeavours. As a result of this popularity, there is greater demand in ensuring that these web applications are reliable; in other words, the web application must contain very few functional bugs, which lead to either unexpected code termination or incorrect output. Indeed, web applications riddled with functional bugs can have potentially huge impact, especially since these applications have become the main product of some of the largest companies in the world (e.g., Google, Facebook). Further, conducting various security-critical tasks (e.g., credit card transactions, online banking) using web applications has become commonplace, which underscores the need for reliability.

Modern web applications are composed of two main components: the server-side and the client-side.
In this setup, the server-side is responsible for backend operations such as responding to webpage requests and updating the database, while the client-side is responsible for providing an interface with which the user (i.e., the client) can interact with the application. In the past, the server-side provided most of the web application's functionality, with the client-side relegated to simply displaying the webpages sent by the server in response to a page request. As a result, researchers exclusively focused on analyzing and improving the web application's reliability at the server-side [61, 85, 163]. However, with the rise in popularity of Asynchronous JavaScript and XML (AJAX) development, much of the web application's functionality is now being offloaded to the client-side [159].

Therefore, in order to ensure the reliability of the web application as a whole, the web developer also needs to ensure the reliability of the JavaScript¹ code at the client-side. This is because the JavaScript code is used to implement the functionality of the web application at the client-side, dictating how its various components work and interact with each other, and how the application responds to actions performed by the user. While other scripting languages exist for the web, JavaScript has become the de facto scripting language in this context, with over 93% of websites in the Internet using it as of May 2016 [169], and with the language topping the "Most Popular Technologies" category of the two most recent StackOverflow Developer Surveys [157]. Its popularity among web developers stems from the many advantages it presents, particularly from the standpoint of interactivity. First and foremost, the use of JavaScript is precisely what allows the developer to offload key functionalities to the client-side, which reduces the number of costly requests made to the server each time the user interacts with the webpage.
Second, JavaScript's loose semantics make it very flexible, which gives developers freedom not only in terms of the types of programs they can create, but also in the way that they organize the code for these programs. Finally, JavaScript is very easy to deploy, as it is an interpreted language whose effects can immediately be seen on the browser, without undergoing complicated compilation or installation.

Having established the importance of JavaScript, we now ask the question: is the JavaScript code in most of these web applications already reliable? Frequent questions and complaints from developers about confusing aspects of the language seem to suggest otherwise. For example, there are a large number of JavaScript-related questions asked by frustrated programmers in the Q&A website StackOverflow [20]; in fact, "JavaScript" recently overtook "Java" as the most popular tag in StackOverflow, according to the latest RedMonk Programming Language Rankings [135]. Also, several books have been written specifically to help programmers delineate between the "good" and "bad" parts of JavaScript [40, 182]. Claims pertaining to reliability issues in JavaScript, however, have primarily been anecdotal in nature.
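The loose semantics behind much of this confusion stem from implicit type coercion: mixed-type operations that would be rejected in stricter languages instead produce silently converted values. A generic sketch (not drawn from this study's bug reports):

```javascript
// JavaScript silently coerces mixed-type operands instead of raising
// type errors; this is flexible, but a common source of confusion.
const concat = 1 + "2";        // "+" favours string concatenation: "12"
const coerced = "5" - 1;       // "-" coerces the string to a number: 4
const silent = undefined + 1;  // NaN, which propagates without any error
const truthy = [] == false;    // loose equality coerces both sides: true
```

Because none of these operations throws, a mistake of this kind can propagate far from its origin before any visible failure occurs.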
In order to identify the reliability issues in JavaScript as a real problem, we need to demonstrate that JavaScript faults are both prevalent and impactful.

¹ For simplicity, we use the term "JavaScript" to pertain to "client-side JavaScript", which is the superset of the core JavaScript language used for web application development.

Unfortunately, recent research has placed studies regarding JavaScript reliability on the sidelines, mostly in favour of studies pertaining to JavaScript's security [64, 145, 175, 180], privacy [77, 124], and performance [53, 142, 144]. Further, as mentioned earlier, researchers in the past have also extensively studied the causes of web application faults at the server-side [29, 85, 136], including those that analyzed session-based workloads [61], server logs [163], and website outage incidents [138]. While useful, these server-side studies do not provide a clear view of client-side reliability, as programming practices for the latter differ significantly from those of the former. The use of the Node.js environment at the server-side bridges the gap between the server and the client to a limited extent, in that JavaScript is now also being used to program the server; however, these prior server-side studies either predate or do not consider Node.js, and client-side JavaScript code contains frequently-used APIs and features that are not available in server-side JavaScript code.

In our prior work, to answer the question posed above regarding the reliability of JavaScript, we conducted an empirical study of unhandled JavaScript exceptions logged as console messages [128]. In this study, we found that the most popular websites throw four JavaScript console messages on average – where these console messages are displayed as a result of an unhandled exception during the execution of the JavaScript code – even if the user is simply interacting with these websites normally (i.e., without attempting to break the website).
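A typical way such an unhandled exception arises is the classic DOM-related fault pattern. The sketch below is hypothetical: a stub with made-up page contents stands in for the browser's `document` so that the snippet is self-contained, but the error-to-failure chain is the same one a browser would report as a console message.

```javascript
// Hypothetical sketch of a DOM-related fault; a stub stands in for the
// browser's `document` so the snippet runs outside a browser.
const document = {
  elements: { "search-box": { value: "jaws" } },
  getElementById(id) { return this.elements[id] || null; },
};

function readQuery() {
  // Error: the programmer typed "searchbox" instead of "search-box".
  const box = document.getElementById("searchbox");
  // Fault: box is null. Failure: the property access below throws an
  // unhandled TypeError, which a browser would log to the console.
  return box.value;
}

let failure = null;
try {
  readQuery();
} catch (e) {
  failure = e;
}
```

Note that whether the fault manifests depends entirely on the DOM state at the time of the call: the same lookup would succeed on a page that happened to contain a matching element, which is one reason static analysis of the JavaScript code alone often does not suffice.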
However, this preliminarywork only establishes the prevalence of JavaScript bugs. Ultimately, we want toknow how these bugs appear, so that we can take steps towards preventing them,or eliminating them altogether. For this reason, the first goal of the dissertationwork is to find a way to understand both the root causes and the impact of theseJavaScript bugs.Once we have sufficient understanding of JavaScript bugs, we can then lever-age this understanding to find the best way to deal with these bugs. There aregenerally two approaches taken by researchers: (1) error prevention and (2) faultdetection.2 Error prevention involves finding ways to minimize the number of mis-2In this dissertation, we use the term error to pertain to the mistake committed by the programmerwhen writing web application code; fault, to pertain to the propagation of the error into a JavaScript3takes made by a developer in the process of writing JavaScript code. Much ofthe work on error prevention for web applications has focused on code completionmethods [2, 100, 149]. For example, Bajaj et al. introduced a tool called Domple-tion [19], which helps programmers set up interactions of the JavaScript code withthe Document Object Model (DOM) by autocompleting CSS selectors. In addition,most JavaScript IDEs provide some form of autocomplete functionality, includingEclipse [44], Aptana [12], and Brackets [3]. This error prevention approach isuseful, but at the same time, it is limited, because errors are an inevitable part ofprogramming. Further, not all errors are committed in the main web applicationcode written by the developers; for instance, some errors are found in third partylibraries. Hence, while error prevention helps the developer avoid certain errors,they do not give developers full assurance of the reliability of the code.The second approach – fault detection – considers the possibility that the Ja-vaScript code is faulty, and tries to find these faults post facto. 
There has been considerable work done on detecting syntax errors in JavaScript code, including JSLint [43] and Closure Compiler [57]; however, these tools do not detect semantic errors. There has also been considerable work on detecting faults through software testing [15, 109, 113–115, 137], as well as several record-replay tools developed for creating test cases [10, 30, 111, 151, 178]. Unfortunately, testing is limited, because it is a purely dynamic approach, which makes it liable to miss faults due to the large number of DOM states present in web applications. More importantly, testing alone does not suffice in improving the reliability of the code, because the developer would still have to debug any problems detected by the test cases, which is a very tedious task for developers [84, 165]. Prior research has explored ways to automate this debugging process, but most of these techniques look only for HTML validation issues, and primarily focus on the server-side [14, 32, 148]. With these issues in mind, our second goal is to find an efficient way to automatically detect, localize, and repair JavaScript faults on the client-side. Accomplishing this goal is challenging for various reasons:

• JavaScript is a dynamic language whose runtime behaviour is often contingent upon the Document Object Model (DOM) state; the DOM state pertains to the hierarchy and contents of the DOM data structure at a certain point during execution of the JavaScript code. Therefore, the DOM also needs to be analyzed in addition to the JavaScript code, which makes the analysis more challenging due to the large number of DOM states present in many web applications.
As a result, static analysis often does not suffice, and hence, some dynamic analysis is required that keeps track of the JavaScript code's interaction with the DOM during execution;

• JavaScript executes asynchronously, and is triggered by the occurrence of events, including user-triggered events (e.g., clicks, hovers), page-triggered events (e.g., loads), and asynchronous function calls. These events may occur in different orders; although JavaScript follows a sequential execution model, it does not provide deterministic ordering, thereby complicating analysis;

• Many JavaScript frameworks have been created to facilitate the process of writing JavaScript code. These frameworks are widely used; for example, over 70% of all websites use jQuery as of 2016 [168]. Unfortunately, they also tend to complicate analysis of JavaScript code [100]. This is because different frameworks have been developed with different – and sometimes mutually exclusive – goals in mind [9]. For example, current frameworks adhere to varying programming patterns, such as Object-Oriented Programming (e.g., jQuery), Aspect-Oriented Programming (e.g., AspectJS), and Model-View-Controller (e.g., AngularJS), among others.

1.1 Research Questions

In light of what has been discussed, the overarching goal of this dissertation is to understand the nature of JavaScript faults, and to use this understanding to develop automated techniques that improve the reliability of JavaScript-based web applications at the client-side. To achieve this goal, we answer the research questions listed below.
More specifically, the first research question concerns the understanding of JavaScript faults, while the second and third research questions concern the localization, repair, and detection of these faults in an automated manner.

RQ1A: What are the characteristics of JavaScript faults in web applications?

In addition to establishing the prevalence of JavaScript faults, we also need to understand what caused them (i.e., the root cause) and what consequences they have (i.e., failure and impact). We addressed this research question by conducting a large-scale empirical study of 502 JavaScript bug reports, coming from 19 open-source web applications. One of the main findings of our study is that the majority of JavaScript faults relate to the interaction between the JavaScript code and the DOM; we call these DOM-related faults. We formally define DOM-related faults and describe our empirical study in further detail in Chapter 2.

RQ1B: How can we accurately and efficiently localize and repair JavaScript faults that appear in web applications?

Fault localization and repair are often the most time-consuming tasks in debugging. Using information gained from our bug report study, we developed two approaches that perform these tasks automatically, specifically targeting DOM-related faults. The first approach, which we describe in Chapter 3, performs automatic fault localization using a backward-slicing approach, and has been implemented in a tool called AUTOFLOX. The second approach, which we describe in Chapter 4, performs dynamic analysis of the JavaScript code to provide repair suggestions, and has been implemented in a tool called VEJOVIS.

RQ1C: Can we create a general technique that detects JavaScript faults in the presence of JavaScript frameworks? If so, how?

Before JavaScript faults can be localized and repaired, they must first be detected.
In our initial attempt to address this question, we identified four types of inconsistencies that occur in JavaScript code that uses the AngularJS framework [58], and we developed a static analysis approach for automatically detecting these inconsistencies. We implemented this technique in a tool called AUREBESH, the details of which we describe in Chapter 5.

AUREBESH, however, has significant limitations because it is specialized for (1) the framework that the web application uses (i.e., AngularJS), and (2) the types of inconsistencies that we identified above. Recognizing these limitations, we designed a static analysis technique that works not just for AngularJS applications, but, more generally, for web applications written using JavaScript Model-View-Controller (MVC) frameworks. This technique performs subtree pattern matching in the AST to infer the consistency rules automatically rather than require us to specify them, and is able to find inconsistencies that occur between two different programming languages. We implemented this technique in a tool called HOLOCRON, which is described in Chapter 6.

1.2 Publications

In response to the research questions enumerated in Chapter 1.1, we have published the following conference and journal papers:

• F. Ocariza, K. Pattabiraman, and A. Mesbah, "AutoFLox: an automatic fault localizer for client-side JavaScript," IEEE International Conference on Software Testing, Verification and Validation (ICST), 2012, 31-40. Best Paper Award Candidate. (Acceptance Rate: 27%) [129]

• F. Ocariza, K. Bajaj, K. Pattabiraman, and A. Mesbah, "An empirical study of client-side JavaScript bugs," IEEE/ACM International Symposium on Empirical Software Engineering and Measurement (ESEM), 2013, 55-64. (Acceptance Rate: 28%) [130]

• F. Ocariza, K. Pattabiraman, and A. Mesbah, "Vejovis: suggesting fixes for JavaScript faults," IEEE/ACM International Conference on Software Engineering (ICSE), 2014, 837-847. (Acceptance Rate: 20%) [131]

• F. Ocariza, K.
Pattabiraman, and A. Mesbah, "Detecting inconsistencies in JavaScript MVC applications," IEEE/ACM International Conference on Software Engineering (ICSE), 2015, 11 pages. (Acceptance Rate: 18.5%) [132]

• F. Ocariza, G. Li, K. Pattabiraman, and A. Mesbah, "Automatic fault localization for client-side JavaScript," Journal of Software Testing, Verification & Reliability (STVR), 2016, vol. 26, issue 1, 69-88. [134]

• F. Ocariza, K. Bajaj, K. Pattabiraman, and A. Mesbah, "A study of causes and consequences of client-side JavaScript bugs," IEEE Transactions on Software Engineering (in press). 17 pages. [133]

Chapter 2

Characteristics of JavaScript Faults

This chapter describes the empirical study that we conducted on JavaScript bug reports. (The main study in this chapter will appear in the IEEE Transactions on Software Engineering (TSE) [133]; the initial conference version appeared at the International Symposium on Empirical Software Engineering and Measurement (ESEM 2013) [130].) The goal of the work described by this chapter is to answer RQ1A from Chapter 1.1: What are the characteristics of JavaScript faults in web applications? Therefore, this empirical study allows us to understand the root causes, propagation characteristics, and impact of JavaScript bugs, which will enable us to mitigate these bugs.

We first introduce the problem and define some important terms in the next two sections, after which we will describe our experimental methodology and results.

2.1 Introduction

JavaScript contains several features that set it apart from traditional languages. First of all, JavaScript code executes under an asynchronous model. This allows event handlers to execute on demand, as the user interacts with the web application components. Secondly, much of JavaScript is designed to interact with the DOM, which, as described in Chapter 1, is a dynamic tree-like structure that includes the components in the web application and how they are organized.
Using DOM API calls, JavaScript can be used to access or manipulate the components stored in the DOM, thereby allowing the web page to change without requiring a page reload.

While the above features allow web applications to be highly interactive, they also introduce additional avenues for faults in the JavaScript code. In a previous study [128], we collected JavaScript console messages (i.e., unhandled JavaScript exceptions) from fifty popular web applications to understand how prone web applications are to JavaScript faults and what kinds of JavaScript faults appear in these applications. While the study pointed to the prevalence of JavaScript faults, it did not explore their impact or root cause, nor did it analyze the kinds of failures they caused. Understanding the root cause and impact of the faults is vital for developers, testers, as well as tool builders to increase the reliability of web applications.

In this chapter, our goal is to discover the causes of JavaScript faults (the error) in web applications, and analyze their consequences (the failure and impact). Towards this goal, we conduct an empirical study of over 500 publicly available JavaScript bug reports. We choose bug reports as they typically have detailed information about a JavaScript fault and also reveal how a web application is expected to behave; this is information that would be difficult to extract from JavaScript console messages or static analysis. Further, we confine our search to bug reports that are marked "fixed", which eliminates spurious or superfluous bug reports.

A major challenge with studying bug reports, however, is that few web applications make their bug repositories publicly available.
Even those that do often classify the reports in ad-hoc ways, which makes it challenging to extract the relevant details from the report [27]. Therefore, we systematically gather bug reports and standardize their format in order to study them.

Our work makes the following main contributions:

• We collect and systematically categorize a total of 502 bug reports, from 15 web applications and four JavaScript libraries, and put the reports in a standard format;

• We classify the JavaScript faults into multiple categories. We find that one category dominates the others, namely DOM-related JavaScript faults (more details below);

• We analyze how many of the bugs can be classified as type faults, which helps us assess the usefulness of programming languages that add strict typing systems to JavaScript, such as TypeScript [112] and Dart [22];

• We quantitatively analyze the nature (i.e., cause and consequences) and the impact of JavaScript faults;

• Where appropriate, we perform a temporal analysis of each bug report characteristic that we analyze. The results of this analysis indicate how technological changes over the years have set the trend for these characteristics, enabling us to see if we are moving in the right direction in improving the reliability of client-side JavaScript;

• We analyze the implications of the results on developers, testers, and tool developers for JavaScript code.

Our results show that around 68% of JavaScript faults are DOM-related faults, which occur as a result of a faulty interaction between the JavaScript code and the DOM. A simple example is the retrieval of a DOM element using an incorrect ID, which can lead to a null exception. Further, we find that DOM-related faults account for about 80% of the highest impact faults in the web application.
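The simple example just mentioned – retrieving a DOM element with an incorrect ID – can be sketched as follows. Since the browser DOM is not available outside a browser, the snippet uses a minimal hand-rolled stand-in for the document object (an assumption made purely for illustration); the fault itself mirrors the behaviour of the real getElementById() API, which returns null when no element matches the given ID.

```javascript
// Minimal stand-in for the browser's document object (illustration only);
// like the real DOM API, getElementById() returns null when the ID is absent.
const document = {
  elements: { "news-ticker": { innerHTML: "" } },
  getElementById(id) {
    return Object.prototype.hasOwnProperty.call(this.elements, id)
      ? this.elements[id]
      : null;
  }
};

// DOM-related fault: the error (a misspelled ID) propagates into the
// parameter of a DOM access method, so the lookup returns null...
const ticker = document.getElementById("newsticker");

let failure = null;
try {
  ticker.innerHTML = "Breaking news..."; // ...and the null value is dereferenced here
} catch (e) {
  failure = e; // a null exception: a code-terminating failure
}
console.log(failure instanceof TypeError); // → true
```

In a real page, the exception would terminate the executing event handler and surface as a console error message, which is exactly the kind of failure counted in our earlier console-message study [128].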
Finally, we find that the majority of faults arise due to the JavaScript code rather than server-side code/HTML, and that there are a few recurring programming patterns that lead to these bugs.

In addition to the above results, we also find that a small – but non-negligible – percentage (33%) of the bug reports are type faults, which we describe in Section 2.2. Furthermore, in our temporal analysis, we observed both downward trends in certain metrics (e.g., browser specificity) and upward trends in others (e.g., number of errors committed in the JavaScript code).

2.2 Background and Motivation

This section provides background information on the structure of modern web applications, and how JavaScript is used in such applications. We also define terms used throughout this dissertation such as JavaScript error, fault, failure, and impact. Finally, we describe the goal and motivation of our study.

2.2.1 Web Applications

Modern web applications contain three client-side components: (1) HTML code, which defines the webpage's initial elements and its structure; (2) CSS code, which defines these elements' initial styles; and (3) JavaScript code, which defines client-side functionality in the web application. These client-side components can either be written manually by the programmer, or generated automatically by the server-side (e.g., PHP) code.

The Document Object Model (DOM) is a dynamic tree data structure that defines the elements in the web application, their properties including their styling information, and how the elements are structured. Initially, the DOM contains the elements defined in the HTML code, and these elements are assigned the styling information defined in the CSS code. However, JavaScript can be used to manipulate this initial state of the DOM through the use of DOM API calls. For example, an element in the DOM can be accessed through its ID by calling the getElementById() method.
The attributes of this retrieved DOM element can then be modified using the setAttribute() method. In addition, elements can be added to or removed from the DOM by the JavaScript code.

In general, a JavaScript method or property that retrieves elements or attributes from the DOM is called a DOM access method/property. Examples of these methods/properties include getElementById(), getElementsByTagName(), and parentNode. Similarly, a JavaScript method or property that is used to update values in the DOM (e.g., its structure, its elements' properties, etc.) is called a DOM update method/property. Examples include setAttribute(), innerHTML, and replaceChild(). Together, the access and update methods/properties constitute the DOM API.

2.2.2 JavaScript Bugs

JavaScript is particularly prone to faults, as it is a weakly typed language, which makes the language flexible but also opens the possibility for untyped variables to be (mis)used in important operations. In addition, JavaScript code can be dynamically created during execution (e.g., by using eval), which can lead to faults that are only detected at runtime. Further, JavaScript code interacts extensively with the DOM, which makes it challenging to test/debug, and this leads to many faults, as we find in our study.

JavaScript Bug Sequence. In this dissertation, we use the term bug as a catch-all term that pertains to an undesired behaviour of the web application's functionality. The following sequence describes the progression of a JavaScript bug, and the terms we use to describe this sequence:

1. The programmer makes a mistake at some point in the code being written or generated. These errors can range from simple mistakes such as typographical errors or syntax errors, to more complicated mistakes such as errors in logic or semantics. The error can be committed in the JavaScript code, or in other locations such as the HTML code or server-side code (e.g., PHP).

2.
The error can propagate, for instance, into a JavaScript variable, the parameter or assignment value of a JavaScript method or property, or the return value of a JavaScript method during JavaScript code execution. Hence, by this point, the error has propagated into a fault.

3. The fault either directly causes a JavaScript exception (code-terminating failure) or a corruption in the output (output-related failure). This is called the failure.

Figure 2.1 shows a real-world example of the error, fault, and failure associated with a JavaScript bug report from the Moodle web application. Note that for output-related failures, the pertinent output can be one or a combination of many things, including the DOM, server data, or important JavaScript variables. We will be using the above error-fault-failure model to classify JavaScript bugs, as described in Section 2.3.

DOM-Related Faults. We define a DOM-related fault as follows:

Definition 1 (DOM-Related Fault) A JavaScript bug B is considered to have propagated into a DOM-related fault if the corresponding error causes a DOM API method DA_m to be called (or causes an assignment to a DOM API property DA_p to be made), such that a parameter P passed to DA_m (or a value A assigned to DA_p) is incorrect.

Error: The programmer forgets to initialize the value of the cmi.evaluation.comments variable.

Fault: The cmi.evaluation.comments variable – which is uninitialized and hence has the value null – is used to access a property X (i.e., cmi.evaluation.comments.X) during JavaScript execution.

Failure: Since cmi.evaluation.comments is null, the code attempting to access a property through it leads to a null exception, which terminates JavaScript execution.

Figure 2.1: Example that describes the error, fault, and failure of a JavaScript bug reported in Moodle.

In other words, if a JavaScript error propagates into the parameter of a DOM access/update method or to the assignment value for a DOM access/update property – thereby causing an incorrect retrieval
or an incorrect update of a DOM element – then the error is said to have propagated into a DOM-related fault. For example, if an error eventually causes the parameter of the DOM access method getElementById() to represent a nonexistent ID, and this method is called during execution with the erroneous parameter, then the error has propagated into a DOM-related fault. However, if the error does not propagate into a DOM access/update method/property, the error is considered to have propagated into a non-DOM-related fault. Note that based on this definition, the presence of a large number of DOM interactions in the JavaScript code does not necessarily imply the presence of a large percentage of DOM-related faults, as not all errors would necessarily propagate to any DOM API method calls in the code.

Type Faults. We are also interested in determining the prevalence of type faults, which we define as follows:

Definition 2 (Type Fault) A JavaScript bug B is considered to have propagated into a type fault if there exists a statement L in the JavaScript code such that, during the execution of the code that reproduces B, the statement L references an expression or variable E that it inherently assumes to be of type t1, but E's actual type at runtime is t2 (with t2 ≠ t1).

In other words, a type fault occurs if the JavaScript code erroneously assumes during execution that a certain value is of a certain type (e.g., using the string API on a variable that is a number at runtime). Note that our comparison of types bears some similarities to Pradel et al.'s definition of consistent types [141]. In particular, we both make a distinction between different "type categories", such as primitive types and custom types; we describe this in further detail in Section 2.3.4.

Severity.
While the appearance of a failure is clear-cut and mostly objective (i.e., either an exception is thrown or not; either an output contains a correct value or not), the severity of the failure is subjective, and depends on the context in which the web application is being used. For example, an exception may be classified as non-severe if it happens in a "news ticker" web application widget; but if the news ticker is used for something important – say, stocks data – the same exception may now be classified as severe. In this dissertation, we will refer to the severity as the impact of the failure. We determine impact based on a qualitative analysis of the web application's content and expected functionality.

2.2.3 Goal and Motivation

Our overall goal in this chapter is to understand the sources and the impact of client-side JavaScript faults in web applications. To this end, we conduct an empirical study of JavaScript bug reports in deployed web applications. There are several factors that motivated us to pursue this goal. First, understanding the root cause of JavaScript faults could help make developers aware of programming pitfalls to be avoided, and the results could pave the way for better JavaScript debugging techniques. Second, analyzing the impact could steer developers' and testers' attention towards the highest impact faults, thereby allowing these faults to be detected early. Finally, we have reason to believe that JavaScript faults' root causes and impacts differ from those of traditional languages because of JavaScript's permissive nature and its many distinctive features (e.g., event-driven model; interaction with the DOM; dynamic code creation; etc.).

Other work has studied JavaScript faults through console messages or through static analysis [64, 65, 79, 186]. However, bug reports contain detailed information about the root cause of the faults and the intended behaviour of the application, which is missing in these techniques.
Further, they typically contain the fix associated with the fault, which is useful in further understanding it, for example, to determine fix times.

2.3 Experimental Methodology

We describe our methodology for the empirical study on JavaScript faults. First, we enumerate the research questions that we want to answer. Then we describe the web applications we studied and how we collected their bug reports. All our collected empirical data is available for download.⁴

2.3.1 Research Questions

To achieve our goal, we address the following research questions through our bug report study:

RQ1: What categories of faults exist among reported JavaScript faults, and how prevalent are these fault categories?

RQ2: What is the nature of failures stemming from JavaScript faults? What is the impact of the failures on the web applications?

RQ3: What is the root cause of JavaScript faults? Are there specific programming practices that lead to JavaScript faults?

RQ4: Do JavaScript faults exhibit browser-specific behaviour?

RQ5: How long does it take to triage a JavaScript fault reported in a bug report and assign it to a developer?
How long does it take programmers to fix these JavaScript faults?

RQ6: How prevalent are type faults among the reported bugs?

RQ7: How have the characteristics of JavaScript faults – particularly those analyzed in the above research questions – varied over time?

⁴ ∼frolino/projects/js-bugs-study/

Table 2.1: Experimental subjects from which bug reports were collected. For each subject, the table lists its application ID and name, version range, type, description, size of its JavaScript code (KB), the bug report search filter used (with the number of search results), and the number of bug reports collected.

1. Moodle, 1.9-2.3.3, Web Application, Learning Management, 352 KB. Filter: (Text contains javascript OR js OR jquery) AND (Issue type is bug) AND (Status is closed); 1209 results; 30 reports collected.
2. Joomla, 3.x, Web Application, Content Management, 434 KB. Filter: (Category is JavaScript) AND (Status is Fixed); 62 results; 11 reports collected.
3. WordPress, 2.0.6-3.6, Web Application, Blogging, 197 KB. Filter: ((Description contains javascript OR js) OR (Keywords contain javascript OR js)) AND (Status is closed); 875 results; 30 reports collected.
4. Drupal, 6.x-7.x, Web Application, Content Management, 213 KB. Filter: (Text contains javascript OR js OR jQuery) AND (Category is bug report) AND (Status is closed(fixed)); 608 results; 30 reports collected.
5. Roundcube, 0.1-0.9, Web Application, Webmail, 729 KB. Filter: ((Description contains javascript OR js) OR (Keywords contain javascript OR js)) AND (Status is closed); 234 results; 30 reports collected.
6. WikiMedia, 1.16-1.20, Web Application, Wiki Software, 160 KB. Filter: (Summary contains javascript) AND (Status is resolved) AND (Resolution is fixed); 49 results; 30 reports collected.
7. TYPO3, 1.0-6.0, Web Application, Content Management, 2252 KB. Filter: (Status is resolved) AND (Tracker is bug) AND (Subject contains javascript) (only one keyword allowed); 81 results; 30 reports collected.
8. TaskFreak, 0.6.x, Web Application, Task Organizer, 74 KB. Filter: (Search keywords contain javascript OR js) AND (User is any user); 57 results; 6 reports collected.
9. Horde, 1.1.2-2.0.3, Web Application, Webmail, 238 KB. Filter: (Type is bug) AND (State is resolved (bug)) AND ((Summary contains javascript) OR (Comments contain javascript)); 300 results; 30 reports collected.
10. FluxBB, 1.4.3-1.4.6, Web Application, Forum System, 8 KB. Filter: (Type is bug) AND (Status is fixed) AND (Search keywords contain javascript); 8 results; 5 reports collected.
11. LimeSurvey, 1.9.x-2.0.x, Web Application, Survey Maker, 442 KB. Filter: (Status is closed) AND (Resolution is fixed) AND (Text contains javascript); 252 results; 30 reports collected.
12. DokuWiki, 2009-2014, Web Application, Wiki Software, 446 KB. Filter: (Status is all closed tasks) AND (Task type is bug report) AND (Text contains javascript OR js); 159 results; 30 reports collected.
13. phpBB, 3.0.x, Web Application, Forum System, 176 KB. Filter: (Status is closed) AND (Resolution is fixed) AND (Text contains javascript OR js); 112 results; 30 reports collected.
14. MODx, 1.0.3-2.3.0, Web Application, Content Management, 1229 KB. Filter: (Type is issue) AND (Status is closed) AND (Text contains javascript OR js); 219 results; 30 reports collected.
15. EZ Systems, 3.5-5.3, Web Application, Content Management, 180 KB. Filter: (Issue type is bug) AND (Status is closed) AND (Resolution is fixed) AND (Text contains javascript OR js); 229 results; 30 reports collected.
16. jQuery, 1.0-1.9, Library, 94 KB. Filter: (Type is bug) AND (Resolution is fixed); 2421 results; 30 reports collected.
17. Prototype.js, 1.6.0-1.7.0, Library, 164 KB. Filter: (State is resolved); 142 results; 30 reports collected.
18. MooTools, 1.1-1.4, Library, 101 KB. Filter: (Label is bug) AND (State is closed); 52 results; 30 reports collected.
19. Ember.js, 1.0-1.1, Library, 745 KB. Filter: (Label is bug) AND (State is closed); 347 results; 30 reports collected.

2.3.2 Experimental Objects

To ensure representativeness, we collect and categorize bug reports from a wide variety of web applications and libraries. Each object is classified as either a web application or a JavaScript library. In preliminary work [130], we initially made this distinction to see if there are any differences between JavaScript faults in web applications and those in libraries.
We did not, however, find any substantial differences after performing our analysis; therefore, our selection of additional experimental objects compared to this preliminary work was not influenced by this distinction, and we do not report them separately. In total, we collected and analyzed 502 bug reports from 15 web applications and four libraries.

Table 2.1 lists the range of the software versions considered for each experimental object. The web applications and libraries were chosen based on several factors, including their popularity, their prominent use of client-side JavaScript, and the descriptiveness of their bug reports (i.e., the more information its bug reports convey, the better). Another contributing factor is the availability of a bug repository for the web application or library, as such repositories were not always made public. In fact, finding web applications and libraries that satisfied these criteria was a major challenge in this study.

2.3.3 Collecting the Bug Reports

For each web application bug repository, we collect a total of min{30, NumJSReports} JavaScript bug reports, where NumJSReports is the total number of JavaScript bug reports in the repository. We chose 30 as the maximum threshold for each repository to balance analysis time with representativeness. To collect the bug reports for each repository, we perform the following steps:

Step 1: Use the filter/search tool available in the bug repository to narrow down the list of bug reports. The filters and search keywords used in each bug repository are listed in Table 2.1. In general, where appropriate, we used "javascript" and "js" as keywords to narrow down the list of bug reports (in some bug repositories, the keyword "jQuery" was also used to narrow down the list even further). Further, to reduce spurious or superfluous reports, we only considered bug reports with resolution "fixed", and type "bug" or "defect" (i.e., bug reports marked as "enhancements" were neglected).
Table 2.1 also lists the number of search results after applying the filters in each bug repository. The bug report repositories were examined between January 30, 2013 and March 13, 2013 (applications #1-8, 16-19), and between February 30, 2015 and March 25, 2015 (applications #9-15).

Step 2: Once we have the narrowed-down list of bug reports from Step 1, we manually examine each report in the order in which it was retrieved. Since the filter/search features of some bug tracking systems were not as descriptive (e.g., the TYPO3 bug repository only allowed the user to search for bug reports marked "resolved", but not "fixed"), we also had to manually check whether the bug report satisfied the conditions described in Step 1. If the conditions are satisfied, we analyzed the bug report; otherwise, we discarded it. We also discarded a bug report if its fault is found to not be JavaScript-related – that is, the error does not propagate into any JavaScript code in the web application. This step is repeated until min{30, NumJSReports} reports have been collected from the repository. The number of bug reports we collected for each bug repository is shown in Table 2.1. Note that three applications had fewer than 30 reports that satisfied the above criteria, namely Joomla, TaskFreak, and FluxBB. For all remaining applications, we collected 30 bug reports each.

Step 3: For each report, we created an XML file that describes and classifies the error, fault, failure, and impact of the JavaScript bug reported. The XML file also describes the fix applied for the bug. Typically this data is presented in raw form in the original bug report, based on the bug descriptions, developer discussions, patches, and supplementary data; hence, we needed to read through, understand, and interpret each bug report in order to extract all the information included in the corresponding XML file. We also include data regarding the date and time of each bug being assigned and fixed in the XML file.
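To make the structure of these records concrete, the following is a hypothetical sketch of what one such XML file might contain, using the Moodle bug of Figure 2.1 as content; the element and attribute names are illustrative assumptions, not the actual schema used in the study.

```xml
<!-- Hypothetical sketch of one bug report record (illustrative field names). -->
<bugReport application="Moodle">
  <error location="JS">The cmi.evaluation.comments variable is never initialized</error>
  <fault category="Undefined/Null Variable Usage">
    The null variable is used to access a property during JavaScript execution
  </fault>
  <failure type="code-terminating" impact="...">
    A null exception terminates JavaScript execution
  </failure>
  <fix>Initialize the variable before it is used</fix>
  <dates reported="..." assigned="..." fixed="..."/>
</bugReport>
```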
We have made these bug report XML files publicly available for reproducibility.⁴

2.3.4 Analyzing the Collected Bug Reports

The collected bug report data, captured in the XML files, enable us to qualitatively and quantitatively analyze the nature of JavaScript bugs.

Fault Categories. To address RQ1, we classify the bug reports according to the following fault categories, which were identified through an initial pilot study:

• Undefined/Null Variable Usage: A JavaScript variable that has a null or undefined value – either because the variable has not been defined or has not been assigned a value – is used to access an object property or method. Example: The variable x, which has not been defined in the JavaScript code, is used to access the property bar via x.bar.

• Undefined Method: A call is made in the JavaScript code to a method that has not been defined. Example: The undefined function foo() is called in the JavaScript code.

• Incorrect Method Parameter: An unexpected or invalid value is passed to a native JavaScript method, or assigned to a native JavaScript property. Example: A string value is passed to the JavaScript Date object's setDate() method, which expects an integer. Another example is passing an ID string to the DOM method getElementById() that does not correspond to any ID in the DOM. Note that this latter example is a type of DOM-related fault, which is a subcategory of Incorrect Method Parameter faults where the method/property is a DOM API method/property (as defined in Section 2.2.2).

• Incorrect Return Value: A user-defined method is returning an incorrect return value even though the parameter(s) is/are valid. Example: The user-defined method factorial(3) returns 2 instead of 6.

• Syntax-Based Fault: There is a syntax error in the JavaScript code. Example: There is an unescaped apostrophe character in a string literal that is defined using single quotes.

• Other: Errors that do not fall into the above categories.
Example: There is a naming conflict between methods or variables in the JavaScript code.

Table 2.2: Impact Categories.

| Type | Description | Examples |
|---|---|---|
| 1 | Cosmetic | Table is not centred; header is too small |
| 2 | Minor functionality loss | Cannot create e-mail addresses containing apostrophe characters, which are often only used by spammers |
| 3 | Some functionality loss | Cannot use delete button to delete e-mails, but delete key works fine |
| 4 | Major functionality loss | Cannot delete e-mails at all; cannot create new posts |
| 5 | Data loss, crash, or security issue | Browser crashes/hangs; entire application unusable; save button does not work and prevents user from saving considerable amount of data; information leakage |

Note that we do not find instances where a bug report belongs to multiple fault categories, and hence, we disregard the "Multiple" category in our description of the results.

Failure Categories. The failure category refers to the observable consequence of the fault. For each bug report, we marked the failure category as either Code-terminating or Output-related, as defined in Section 2.2.2. This categorization helps us answer RQ2.

Impact Categories. We manually classify the impact of a JavaScript bug according to the classification scheme used by Bugzilla.5 This scheme is applicable to any software application, and has also been used in other studies [28, 114]. Table 2.2 shows the categories. This categorization helps us answer RQ2.

Error Locations. The error location refers to the code unit or file where the error was made (either by the programmer or the server-side program generating the JavaScript code). For each bug report, we marked the error location as one of the following: (1) JavaScript code (JS); (2) HTML code (HTML); (3) Server-side code (SSC); (4) Server configuration file (SCF); (5) Other (OTH); and (6) Multiple error locations (MEL). In cases where the error location is marked as either OTH or MEL, the location(s) is/are specified in the error description.
The error locations were determined based on information provided in the bug report description and comments. This categorization helps us answer RQ3.

Browser Specificity. In addition, we also noted whether a certain bug report is browser-specific – that is, the fault described in the report only occurs in one or two web browsers, but not in others – to help us answer RQ4.

Time for Fixing. To answer RQ5, we define the triage time as the time it took a bug to get assigned to a developer, from the time it was reported (or, if there is no "assigned" marking, the time until the first comment is posted in the report). We also define fix time as the time it took the corresponding JavaScript fault to get marked as "fixed", from the time it was triaged. We recorded the time taken for each JavaScript bug report to be triaged, and for the report to be fixed. Other studies have classified bugs on a similar basis [11, 104]. Further, we calculate times based on the calendar date; hence, if a bug report was triaged on the same date as it was reported, the triage time is recorded as 0.

Type Faults. Programming languages such as TypeScript [112] and Dart [22] aim to minimize JavaScript faults through a stricter typing system for JavaScript; hence, the main fault model targeted by these languages is type faults. In our work, we assess the usefulness of strong typing in such languages by examining the prevalence of type faults among the bug reports, which addresses RQ6.

In our analysis, we consider four different "type categories", listed below.

• Primitive types [P]: Values of type string, boolean, or number
• Null/undefined types [Nu]: null or undefined
• Native "class" types [Nc]: Objects native to client-side JavaScript (e.g., Function, Element, etc.)
• Custom "class" types [C]: User-defined objects

We first categorize a bug report as either a type fault or not a type fault, based on the definition provided in Section 2.2.2.
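To make the definition concrete, consider the following hypothetical sketch (the function and callback names are invented for illustration): the call site expects callbacks.done to be a Function – a native "class" type – but its actual runtime value is undefined, making this a type fault.

```javascript
// Hypothetical sketch of a type fault: the call site expects a native
// "class" type (Function), but the actual value at runtime is undefined.
function onSave(callbacks) {
  return callbacks.done(); // type fault when no 'done' callback is supplied
}

let caught = null;
try {
  onSave({}); // the caller forgot to supply the 'done' callback
} catch (e) {
  caught = e; // TypeError: callbacks.done is not a function
}
```

Running the snippet raises a TypeError, which is how such type inconsistencies typically surface at runtime.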
For each bug report categorized as a type fault, we further categorize it as belonging to one of 16 subcategories, each of the form, "<type category> expected, but <type category> actual," which, for simplicity, we abbreviate as <type category>E<type category>A. For example, a type fault belongs to the PEPA category if a value is expected to be of a certain primitive type in the JavaScript code – say boolean – but its actual type at runtime is a different primitive type – say string. Similarly, an NcENuA type fault occurs if a value is expected to be of a native "class" type, but its actual type at runtime is null or undefined.

Figure 2.2: Bug reports per calendar year. Note that there was one additional bug report from 2003, and two additional bug reports from 2015.

Note that type comparisons are made in a way similar to Pradel et al.'s method for detecting inconsistent types [141]. Finally, note that we classify a bug report as "not a type fault" if we do not find a statement L that satisfies the definition given in Section 2.2.2.

Temporal Analysis.
Where appropriate, for every data point we collect to answer the first six research questions, we analyze how these data have changed over time. The purpose of this analysis – which answers RQ7 – is to help us understand and speculate about how historical factors, such as browser improvements, the appearance of new JavaScript frameworks, and the rise in popularity of "Q&A" websites (e.g., StackOverflow), among others, have improved the reliability of client-side JavaScript, or degraded it.

Figure 2.3: Pie chart of the distribution of fault categories.

To perform this temporal analysis, we mark each bug report with the year in which it was reported. In our specific case, the bug reports we collected were reported over a period of 13 calendar years, from 2003 to 2015. However, we neglect the years 2003 and 2015 in our analysis, since fewer than 3 bug reports were marked with each of these years; hence, we only consider 11 calendar years in our analysis (i.e., 2004 to 2014), each of which corresponds to at least 17 bug reports. The number of bug reports in each of these calendar years is shown in Figure 2.2.

2.4 Results

In this section, we present the results of our empirical study on JavaScript bug reports. The subsections are organized according to the research questions in Section 2.3.

Table 2.3: Fault categories of the bug reports analyzed.
Library data are shown in italics.

| Application | Undefined/Null Variable Usage | Undefined Method | Incorrect Return Value | Syntax-Based Fault | Other | Incorrect Method Parameter: DOM-related | Not DOM-related | Total | Percent DOM-related |
|---|---|---|---|---|---|---|---|---|---|
| Moodle | 3 | 3 | 0 | 7 | 0 | 15 | 2 | 17 | 50% |
| Joomla | 1 | 0 | 0 | 3 | 0 | 6 | 1 | 7 | 55% |
| WordPress | 1 | 2 | 0 | 3 | 1 | 21 | 2 | 23 | 70% |
| Drupal | 0 | 1 | 0 | 5 | 0 | 23 | 1 | 24 | 77% |
| Roundcube | 3 | 0 | 0 | 4 | 0 | 22 | 1 | 23 | 73% |
| WikiMedia | 2 | 4 | 0 | 5 | 0 | 15 | 4 | 19 | 50% |
| TYPO3 | 2 | 2 | 0 | 7 | 1 | 18 | 0 | 18 | 60% |
| TaskFreak | 1 | 0 | 0 | 0 | 0 | 4 | 1 | 5 | 67% |
| Horde | 1 | 0 | 0 | 4 | 0 | 24 | 1 | 25 | 80% |
| FluxBB | 0 | 0 | 0 | 0 | 0 | 5 | 0 | 5 | 100% |
| LimeSurvey | 0 | 0 | 0 | 4 | 0 | 24 | 2 | 26 | 80% |
| DokuWiki | 2 | 0 | 0 | 3 | 2 | 20 | 3 | 23 | 67% |
| phpBB | 3 | 1 | 0 | 3 | 0 | 23 | 0 | 23 | 77% |
| MODx | 1 | 1 | 0 | 3 | 1 | 21 | 3 | 24 | 70% |
| EZ Systems | 2 | 2 | 0 | 7 | 1 | 17 | 1 | 18 | 57% |
| jQuery | 0 | 0 | 1 | 0 | 0 | 26 | 3 | 29 | 87% |
| Prototype.js | 0 | 1 | 2 | 0 | 0 | 22 | 5 | 27 | 73% |
| MooTools | 3 | 1 | 3 | 0 | 1 | 19 | 3 | 22 | 63% |
| Ember.js | 2 | 1 | 4 | 0 | 2 | 16 | 5 | 21 | 53% |
| Overall | 27 | 19 | 10 | 58 | 9 | 341 | 38 | 379 | 68% |

2.4.1 Fault Categories

Table 2.3 shows the breakdown of the fault categories in our experimental objects. The pie chart in Figure 2.3 shows the overall percentages. As seen from the table and the figure, approximately 75% of JavaScript faults belong to the "Incorrect Method Parameter" category. This suggests that most JavaScript faults result from errors in setting up the parameters of native JavaScript methods, or the values assigned to native JavaScript properties.

Finding 1: "Incorrect Method Parameter" faults account for around 75% of JavaScript faults.

In our earlier study of unhandled JavaScript exceptions [128] and a preliminary version of our study on the fault localization of JavaScript bugs [129], we also noticed many "Incorrect Method Parameter" faults, but their prevalence was not quantified. Interestingly, we also observed in these earlier studies that many of the methods and properties affected by these faults are DOM methods/properties – in other words, DOM-related faults, as defined in Section 2.2.
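As a concrete illustration of a DOM-related "Incorrect Method Parameter" fault, consider the hypothetical sketch below; the element ID is invented, and a minimal stand-in for document is included only so that the snippet is self-contained outside a browser:

```javascript
// Minimal stand-in for the browser's document object, so the sketch is
// self-contained; in a real page the browser provides the DOM API.
const elementsById = { "delete-button": { disabled: false } };
const document = {
  // Like the real getElementById(), return null when no element matches.
  getElementById: (id) => (id in elementsById ? elementsById[id] : null),
};

// DOM-related "Incorrect Method Parameter" fault: the ID string passed to
// the DOM method contains a typo, so the lookup silently yields null.
const button = document.getElementById("delete-buton"); // should be "delete-button"
// A later access such as button.disabled = true would then throw a TypeError.
```

Note that the incorrect parameter itself – a syntactically valid but non-existent ID – produces no error at the call site, which is what makes such faults hard to notice.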
Based on these prior observations, we became curious as to how many of these "Incorrect Method Parameter" faults are DOM-related.

We further classified the "Incorrect Method Parameter" faults based on the methods/properties into which the incorrect values propagated, and found that 91% of these faults are DOM-related. This indicates that among all JavaScript faults, approximately 68% are DOM-related (see the right-most pie chart in Figure 2.3). We find that DOM-related faults range from 50% to 100% of the total JavaScript faults across applications, as seen in the last column of Table 2.3.

Lastly, in order to assess how many of the DOM-related faults result from developers' erroneous understanding of the DOM, we make a distinction between strong DOM-related faults and weak DOM-related faults. A DOM-related fault is classified as strong if the incorrect parameter passed to the DOM method/property represents an inconsistency with the DOM; this includes, for example, cases where an incorrect or non-existent selector/ID is passed to a DOM method. Otherwise, the DOM-related fault is classified as weak; this includes, for example, cases where the wrong text value is assigned to innerHTML, or the wrong attribute value is assigned to an attribute. Overall, we find that strong DOM-related faults make up 88% of all DOM-related faults. This result therefore strongly indicates that most DOM-related faults occur as a result of an inconsistency between what developers believe the DOM contains and what it actually contains.

Finding 2: DOM-related faults account for 91% of "Incorrect Method Parameter" faults. Hence, the majority – around 68% – of JavaScript faults are DOM-related, and the majority – around 88% – of these DOM-related faults are of the "strong" variety.

Temporal Analysis. Figure 2.4a shows a scatter plot of the percentage of bug reports marked as DOM-related per calendar year.
The linear regression line has a downward slope, indicating an overall decrease in the percentage of DOM-related faults over the years. However, this decrease is very small, i.e., a decrease of 7 percentage points, which corresponds to a 10% decrease. Hence, the percentage of DOM-related faults reported in the repositories we analyzed has remained rather consistent over the years.

Finding 3: The percentage of DOM-related faults among all JavaScript faults has experienced only a very small decrease (10%) over the past ten years.

Figure 2.4: Temporal graphs showing (a) the percent of DOM-related faults per year; (b) the percent of code-terminating failures per year (the red regression line represents DOM-related faults, while the blue regression line represents non-DOM-related faults); (c) the percent of bug reports whose error is located in the JavaScript code, per year; (d) the percent of browser-specific bugs per year; and (e) the percent of type faults per year.

2.4.2 Consequences of JavaScript Faults

We now show the failure categories of the bug reports we collected, as well as the impact of the JavaScript faults that correspond to the reports.

Failure Categories. Table 2.4 shows the distribution of failure categories amongst the collected reports; all faults are classified as either leading to a code-terminating failure or an output-related failure (these terms are defined in Section 2.3.4).
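The two categories can be sketched with a hypothetical example (the cart state and label are invented): the first fault terminates the executing code with an exception, while the second throws nothing and shows up only as wrong output:

```javascript
const state = { cart: undefined }; // hypothetical application state

// Code-terminating failure: the fault throws an exception, so a
// corresponding error message appears in the browser's console.
let errorMessage = null;
try {
  state.cart.push("book"); // TypeError: cart is undefined
} catch (e) {
  errorMessage = e.message;
}

// Output-related failure: no exception is thrown; the application
// silently renders the wrong label, and only an observant user notices.
const count = Array.isArray(state.cart) ? state.cart.length : 0;
const label = "Cart (" + count + ")"; // silently displays "Cart (0)"
```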
Note that the goal of this classification is not to assess the severity of the bugs, but rather, to determine the nature of the failure (i.e., whether there is a corresponding error message or not). Making this distinction will be helpful for developers of fault localization tools, for example, as these error messages can naturally act as a starting point for analyzing the bug.

Table 2.4: Number of code-terminating failures compared to output-related failures. Library data are shown in italics. Data for DOM-related faults only are shown in parentheses.

| Application | Code-terminating | Output-related |
|---|---|---|
| Moodle | 21 (8) | 9 (7) |
| Joomla | 8 (3) | 3 (3) |
| WordPress | 11 (3) | 19 (18) |
| Drupal | 12 (5) | 18 (18) |
| Roundcube | 18 (11) | 12 (11) |
| WikiMedia | 19 (4) | 11 (11) |
| TYPO3 | 21 (9) | 9 (9) |
| TaskFreak | 3 (2) | 3 (2) |
| Horde | 17 (11) | 13 (13) |
| FluxBB | 3 (3) | 2 (2) |
| LimeSurvey | 11 (7) | 19 (17) |
| DokuWiki | 9 (4) | 21 (16) |
| phpBB | 19 (12) | 11 (11) |
| MODx | 21 (14) | 9 (7) |
| EZ Systems | 27 (15) | 3 (2) |
| jQuery | 17 (13) | 13 (13) |
| Prototype.js | 10 (7) | 20 (15) |
| MooTools | 21 (10) | 9 (9) |
| Ember.js | 16 (5) | 14 (11) |
| Overall | 284 (146) | 218 (195) |

As the table shows, around 57% of JavaScript faults are code-terminating, which means that in these cases, an exception is thrown. Faults that lead to code termination are generally easier to detect, since the exceptions have one or more corresponding JavaScript error message(s) (provided the error can be reproduced during testing). On the other hand, output-related failures do not have such messages; they are typically only detected if the user observes an abnormality in the behaviour of the application.

Since the majority of JavaScript faults are DOM-related, we explored how these failure categories apply to DOM-related faults. Interestingly, we found that for DOM-related faults, most failures are output-related (at 57%), while for non-DOM-related faults, most failures are code-terminating (at 86%). This result suggests that DOM-related faults may be more difficult to detect than non-DOM-related faults, as most of them do not lead to exceptions or error messages.

Finding 4: While most non-DOM-related JavaScript faults lead to exceptions (around 86%), only a relatively small percentage (43%) of DOM-related faults lead to such exceptions.

Temporal Analysis. A scatter plot of the percentage of code-terminating failures per year is shown in Figure 2.4b. This figure shows the percentage of code-terminating failures, over time, for each fault category (i.e., DOM-related vs. non-DOM-related). In both cases, there is a net decrease in the number of code-terminating failures, with a slightly larger decrease for non-DOM-related faults. The overall decrease in code-terminating failures may be explained in part by the improvements in error consoles, as well as the introduction of tools such as Firebug,6 both of which facilitate the process of debugging code-terminating failures within the web browser.

Finding 5: The percentage of code-terminating failures experienced a net decrease for both DOM-related faults (17% decrease) and non-DOM-related faults (21% decrease) over the past ten years.

Impact Categories. The impact indicates the severity of the failure. Hence, we also classify bug reports based on the impact categories defined in Section 2.3.4 (i.e., Type 1 has the lowest severity, and Type 5 the highest). The impact category distribution for each web application and library is shown in Table 2.5.

Table 2.5: Impact categories of the bug reports analyzed. Library data are shown in italics.
Impact categories data for DOM-related faults only are shown in parentheses.

| Application | Type 1 | Type 2 | Type 3 | Type 4 | Type 5 |
|---|---|---|---|---|---|
| Moodle | 10 (5) | 12 (5) | 0 (0) | 6 (3) | 2 (2) |
| Joomla | 2 (2) | 2 (0) | 4 (2) | 2 (1) | 1 (1) |
| WordPress | 4 (4) | 7 (3) | 12 (9) | 3 (2) | 4 (3) |
| Drupal | 3 (3) | 2 (1) | 17 (12) | 1 (1) | 7 (6) |
| Roundcube | 2 (2) | 5 (4) | 14 (9) | 5 (3) | 4 (4) |
| WikiMedia | 2 (1) | 8 (6) | 15 (6) | 1 (0) | 4 (2) |
| TYPO3 | 0 (0) | 4 (2) | 20 (13) | 5 (2) | 1 (1) |
| TaskFreak | 2 (1) | 1 (1) | 1 (0) | 2 (2) | 0 (0) |
| Horde | 6 (3) | 7 (6) | 13 (11) | 2 (2) | 2 (2) |
| FluxBB | 1 (1) | 1 (1) | 2 (2) | 1 (1) | 0 |
| LimeSurvey | 5 (4) | 4 (2) | 19 (16) | 1 (1) | 1 (1) |
| DokuWiki | 5 (3) | 7 (3) | 16 (13) | 2 (1) | 0 (0) |
| phpBB | 3 (0) | 5 (4) | 16 (14) | 6 (5) | 0 (0) |
| MODx | 2 (1) | 5 (3) | 18 (14) | 3 (2) | 2 (1) |
| EZ Systems | 1 (1) | 2 (0) | 25 (15) | 2 (1) | 0 (0) |
| jQuery | 3 (3) | 13 (13) | 1 (1) | 11 (8) | 2 (1) |
| Prototype.js | 0 (0) | 7 (6) | 19 (12) | 2 (2) | 2 (2) |
| MooTools | 0 (0) | 16 (8) | 10 (8) | 3 (3) | 1 (0) |
| Ember.js | 2 (0) | 15 (10) | 10 (4) | 2 (1) | 1 (1) |
| Overall | 53 (34) | 123 (78) | 232 (161) | 60 (41) | 34 (27) |

Most of the bug reports were classified as having Type 3 impact (i.e., some functionality loss). Type 1 and Type 5 impact faults are the fewest, with 53 and 34 bug reports, respectively. Finally, Type 2 and Type 4 impact faults are represented by 123 and 60 bug reports, respectively. The average impact of the collected JavaScript bug reports is close to the middle, at 2.80, which is in line with other studies [34].

Table 2.5 also shows the impact distribution for DOM-related faults in parentheses. As seen in the table, each impact category is comprised primarily of DOM-related faults. Further, almost 80% (27 out of 34) of the highest severity faults (i.e., Type 5 faults) are DOM-related. Additionally, 13 of the 19 experimental subjects contain at least one DOM-related fault with Type 5 impact. This result suggests that high severity failures often result from DOM-related faults. We find that these high-impact faults broadly fall into three categories.

1. Application/library becomes unusable.
This occurs because an erroneous feature prevents the user from using the rest of the application, particularly in DOM-related faults, which make up 11 of the 15 faults in this category. For example, one of the faults in Drupal prevented users from logging in (due to incorrect attribute values assigned to the username and password elements), so the application could not even be accessed.

2. Data loss. Once again, this is particularly true for DOM-related faults, which account for 13 out of the 14 data-loss-causing faults that we encountered. One example comes from Roundcube; in one of the bug reports, the fault causes an empty e-mail to be sent, which causes the e-mail written by the user to be lost. As another example, a fault in WordPress causes server data (containing posts) to be deleted automatically without confirmation.

3. Browser hangs and information leakage. Hangs often occur as a result of a bug in the browser; the Type 5 faults leading to browser hangs that we encountered are all browser-specific. Information leakage occurred twice – in TYPO3 and MODx – as a result of JavaScript faults that caused potentially security-sensitive code from the server to be displayed on the page; one of these bugs leading to information leakage is DOM-related.

Finding 6: About 80% of the highest severity JavaScript faults are DOM-related.

2.4.3 Causes of JavaScript Faults

Locations. Before we can determine the causes, we first need to know where the programmers committed the programming errors. To this end, we marked the error locations of each bug report; the error location categories are listed in Section 2.3.4. The results are shown in Table 2.6. As the results show, the vast majority (83%) of the JavaScript faults occur as a result of programming errors in the JavaScript code itself.
When only DOM-related faults were considered, a similar distribution of fault locations was observed; in fact, the majority is even larger for DOM-related faults that originated from the JavaScript code, at 89%. Although JavaScript code could be automatically written by external tools, we observed that the fix for these bugs involved the manual modification of the JavaScript file(s) where the error is located. This observation provides a good indication that JavaScript faults typically occur because the programmer herself writes erroneous code, as opposed to server-side code automatically generating erroneous JavaScript code, or HTML.

Table 2.6: Error locations of the bug reports analyzed. Library data are shown in italics.
Legend: JS = JavaScript code, HTML = HTML code, SSC = Server-side code, SCF = Server configuration file, OTH = Other, MEL = Multiple error locations

| Application | JS | HTML | SSC | SCF | OTH | MEL |
|---|---|---|---|---|---|---|
| Moodle | 22 | 2 | 6 | 0 | 0 | 0 |
| Joomla | 9 | 0 | 1 | 0 | 0 | 1 |
| WordPress | 24 | 0 | 6 | 0 | 0 | 0 |
| Drupal | 29 | 0 | 1 | 0 | 0 | 0 |
| Roundcube | 26 | 0 | 4 | 0 | 0 | 0 |
| WikiMedia | 25 | 0 | 5 | 0 | 0 | 0 |
| TYPO3 | 18 | 1 | 9 | 2 | 0 | 0 |
| TaskFreak | 6 | 0 | 0 | 0 | 0 | 0 |
| Horde | 22 | 1 | 4 | 2 | 1 | 0 |
| FluxBB | 5 | 0 | 0 | 0 | 0 | 0 |
| LimeSurvey | 25 | 1 | 3 | 0 | 1 | 0 |
| DokuWiki | 26 | 1 | 2 | 0 | 1 | 0 |
| phpBB | 23 | 5 | 0 | 1 | 1 | 0 |
| MODx | 23 | 1 | 6 | 0 | 0 | 0 |
| EZ Systems | 19 | 5 | 6 | 0 | 0 | 0 |
| jQuery | 30 | – | – | – | 0 | 0 |
| Prototype.js | 25 | – | – | – | 4 | 1 |
| MooTools | 30 | – | – | – | 0 | 0 |
| Ember.js | 30 | – | – | – | 0 | 0 |
| Overall | 417 | 17 | 53 | 5 | 8 | 2 |

Finding 7: Most JavaScript faults (83%) originate from manually-written JavaScript code as opposed to code automatically generated by the server.

Patterns. To understand the programmer mistakes associated with JavaScript errors, we manually examined the bug reports for errors committed in JavaScript code (which were the dominant category). We found that 55% of the errors fell into the following common patterns (the remaining 45% of the errors followed miscellaneous patterns):

1. Erroneous input validation.
Around 16% of the bugs occurred because inputs passed to the JavaScript code (i.e., user input from the DOM or inputs to JavaScript functions) are not being validated or sanitized. The most common mistake made by programmers in this case is neglecting valid input cases. For example, in the jQuery library, the replaceWith() method is allowed to take an empty string as input; however, the implementation of this method does not take this possibility into account, thereby causing the call to be ignored.

2. Error in writing a string literal. Approximately 13% of the bugs were caused by a mistake in writing a string literal in the JavaScript code. These include forgetting prefixes and/or suffixes, typographical errors, and including wrong character encodings. About half of these errors relate to writing a syntactically valid but incorrect CSS selector (which is used to retrieve DOM elements) or regular expression.

3. Forgetting null/undefined check. Around 10% of the bugs resulted from missing null/undefined checks for a particular variable that is allowed to have a value of null or undefined.

4. Neglecting differences in browser behaviour. Around 9% of the bugs were caused by differences in how browsers treat certain methods, properties or operators in JavaScript. Of these, around 60% pertain to differences in how browsers implement native JavaScript methods. For example, a fault occurred in WikiMedia in Internet Explorer 7 and 8 because of the different way those browsers expect the history.go() method to be used.

5. Error in syntax. Interestingly, around 7% of the bugs resulted from syntax errors in the JavaScript code that were made by the programmer. Note, also, that we found instances where server-side code generated syntactically incorrect JavaScript code, though this is not accounted for here.

Finding 8: There are several recurring error patterns – causing JavaScript faults – that arise from JavaScript code.

Temporal Analysis.
When plotted per year, the percentage of bug reports whose corresponding error is located in the JavaScript code results in a regression line that has a positive slope, as seen in Figure 2.4c. In other words, the percentage of bug reports whose errors are located in the JavaScript code generally increased from 2004 to 2014; in this case, based on the endpoints of the regression line, there was an increase of around 25%. We believe this trend is a product of client-side scripting gaining more prominence in web application development as the years went by. In particular, during this time period, new and richer web standards for ECMAScript [45] and XMLHttpRequest [159] were being introduced, giving way to the rise in popularity of AJAX, which developers could use to offload server-side functionality to the client-side. As a result, since JavaScript is being used more frequently, more errors are being committed in JavaScript code. In addition, the overall increase in the trend may also be attributable to JavaScript code becoming more complex as JavaScript gradually rose in popularity over time.

Finding 9: Among JavaScript bugs, the percentage of errors committed in the JavaScript code has experienced a 25% increase over the past ten years.

2.4.4 Browser Specificity

We analyzed the browser specificity of the bug reports we collected. A bug is browser-specific if it occurs in at most two web browsers. As Table 2.7 shows, most JavaScript faults (77%) are not browser-specific (the same percentage is obtained when only DOM-related faults are considered). However, among the browser-specific faults, about 64% are specific to Internet Explorer (IE).

After analyzing the IE-specific faults, we found that most of them (56%) were due to the use of methods and properties that were not supported in that browser (particularly in earlier versions, pre-Internet Explorer 8).
This is likely because the use of browser-specific method and property names (which may not be standards-compliant) is more prevalent in IE than in other browsers. In addition, IE has low tolerance of small errors in the JavaScript code. For example, 21% of the IE-specific faults occurred because IE could not handle trailing commas in object-creation code; while these trailing commas are syntax errors as per the ECMAScript standard, other browsers can detect their presence and remove them.

Table 2.7: Browser specificity of the bug reports analyzed. Library data are shown in italics.
Legend: IE = Internet Explorer, FF = Firefox, CHR = Chrome, SAF = Safari, OPE = Opera, OTH = Other, NBS = Not browser-specific, MUL = Multiple

| Application | IE | FF | CHR | SAF | OPE | OTH | NBS | MUL |
|---|---|---|---|---|---|---|---|---|
| Moodle | 4 | 0 | 0 | 0 | 0 | 0 | 25 | 1 |
| Joomla | 1 | 0 | 0 | 0 | 0 | 0 | 10 | 0 |
| WordPress | 1 | 0 | 0 | 0 | 0 | 1 | 28 | 0 |
| Drupal | 2 | 0 | 1 | 1 | 0 | 0 | 26 | 0 |
| Roundcube | 5 | 0 | 1 | 0 | 1 | 1 | 22 | 0 |
| WikiMedia | 6 | 0 | 0 | 0 | 0 | 0 | 24 | 0 |
| TYPO3 | 7 | 1 | 0 | 0 | 1 | 0 | 20 | 1 |
| TaskFreak | 1 | 0 | 0 | 1 | 0 | 0 | 4 | 0 |
| Horde | 8 | 1 | 0 | 3 | 0 | 0 | 18 | 0 |
| FluxBB | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 0 |
| LimeSurvey | 5 | 1 | 0 | 0 | 0 | 0 | 24 | 0 |
| DokuWiki | 2 | 1 | 2 | 0 | 1 | 0 | 23 | 1 |
| phpBB | 2 | 2 | 0 | 1 | 2 | 1 | 22 | 0 |
| MODx | 3 | 0 | 0 | 1 | 0 | 0 | 26 | 0 |
| EZ Systems | 1 | 0 | 1 | 0 | 0 | 0 | 27 | 1 |
| jQuery | 7 | 0 | 0 | 0 | 0 | 0 | 22 | 1 |
| Prototype.js | 8 | 1 | 1 | 2 | 1 | 0 | 14 | 3 |
| MooTools | 10 | 2 | 0 | 0 | 1 | 0 | 17 | 0 |
| Ember.js | 2 | 0 | 0 | 0 | 0 | 0 | 28 | 0 |
| Overall | 75 | 9 | 6 | 9 | 7 | 3 | 385 | 8 |

Finding 10: Most JavaScript faults (77%) are not browser-specific.

Temporal Analysis. Figure 2.4d shows the scatter plot for the percentage of browser-specific bug reports per year. Here, the regression line has a negative slope; more specifically, the regression line shows browser specificity decreasing by around 50% (i.e., from 32% in 2004 to 17% in 2014). This decrease in browser specificity is consistent with the results of a recent study [20], which shows an overall decrease in cross-browser-related questions on StackOverflow from 2009 to 2012; this work posits that the decrease may have been caused by the maturation of browser support for HTML5, which was the focus of the study.
Other factors that may have contributed to this decline include the introduction of JavaScript libraries designed to eliminate cross-browser incompatibilities, some of which are used in the web applications we studied, as well as extensive research done recently on ways to mitigate cross-browser compatibility issues [36, 37, 108, 150].

Figure 2.5: Box plots for (a) triage time, and (b) fix time.

Finding 11: The percentage of browser-specific faults among all JavaScript faults has experienced a 50% decrease over the past ten years.

2.4.5 Triage and Fix Time for JavaScript Faults

We calculated the triage time and fix time for each bug report. The results are shown in Figure 2.5a (triage time) and Figure 2.5b (fix time) as box plots (the outliers are not shown). Most of the bug reports were triaged on the same day they were reported, which explains why the median of the triage time for all bug reports, as seen in Figure 2.5a, is 0. Also, as seen in Figure 2.5b, the median of the fix time for all bug reports is 5 days. Finally, in addition to the box plots, we calculated the mean triage and fix times for each bug report. We found that, on average, the triage time for JavaScript faults is approximately 29 days, while the average fix time is approximately 65 days (see Table 2.8).

As before, we made the same calculations for DOM-related faults and non-DOM-related faults. The comparisons are shown in Figures 2.5a and 2.5b, as well as in Table 2.8.
From the box plots, we find that both DOM-related faults and non-DOM-related faults have a median triage time of 0 days, which indicates that the majority of either of these faults gets triaged on the same day as they are reported. For the fix times, DOM-related faults have a median fix time of 6 days, compared to 2 days for non-DOM-related faults. With respect to the means (Table 2.8), we found that DOM-related faults have an average triage time of 26 days, compared to 37 days for non-DOM-related faults. On the other hand, DOM-related faults have an average fix time of 71 days, compared to 52 days for non-DOM-related faults.

Table 2.8: Average triage times (T) and fix times (F) for each experimental object, rounded to the nearest whole number. Library data are shown in italics.

| Application | All Faults (T) | All Faults (F) | DOM-Related Only (T) | DOM-Related Only (F) | Non-DOM-Related Only (T) | Non-DOM-Related Only (F) |
|---|---|---|---|---|---|---|
| Moodle | 248 | 10 | 205 | 12 | 292 | 8 |
| Joomla | 4 | 57 | 1 | 67 | 8 | 46 |
| WordPress | 1 | 138 | 2 | 150 | 0 | 108 |
| Drupal | 7 | 66 | 3 | 47 | 22 | 130 |
| Roundcube | 18 | 118 | 25 | 160 | 0 | 9 |
| WikiMedia | 18 | 26 | 36 | 44 | 1 | 8 |
| TYPO3 | 7 | 55 | 7 | 67 | 6 | 36 |
| TaskFreak | 23 | 17 | 32 | 23 | 6 | 5 |
| Horde | 4 | 7 | 5 | 8 | 0 | 1 |
| FluxBB | 1 | 5 | 1 | 5 | – | – |
| LimeSurvey | 10 | 47 | 5 | 58 | 28 | 5 |
| DokuWiki | 11 | 13 | 10 | 15 | 13 | 9 |
| phpBB | 3 | 60 | 3 | 64 | 4 | 46 |
| MODx | 86 | 46 | 116 | 64 | 16 | 4 |
| EZ Systems | 35 | 41 | 22 | 41 | 52 | 42 |
| jQuery | 1 | 33 | 1 | 36 | 1 | 10 |
| Prototype.js | 28 | 343 | 33 | 294 | 14 | 478 |
| MooTools | 10 | 48 | 10 | 49 | 9 | 47 |
| Ember.js | 0 | 11 | 1 | 14 | 0 | 8 |
| Overall | 29 | 65 | 26 | 71 | 37 | 52 |

Taking the above results into account, it appears that DOM-related faults generally have lower triage times than non-DOM-related faults, while DOM-related faults have higher fix times than non-DOM-related faults. This suggests that developers find DOM-related faults important enough to be triaged more promptly than non-DOM-related faults. However, DOM-related faults take longer to fix, perhaps because of their inherent complexity.

Finding 12: On average, DOM-related faults get triaged more promptly than non-DOM-related faults (26 days vs. 37 days); however, DOM-related faults take longer to fix than non-DOM-related faults (71 days vs. 52 days).

Figure 2.6: Bar graphs showing (a) the percentage of bug reports that are type faults versus the percentage of bug reports that are not type faults, and (b) the distribution of type fault categories.

2.4.6 Prevalence of Type Faults

We now discuss our findings regarding the prevalence of type faults in the bug reports that we analyzed. As discussed in Section 2.3.4, each type fault category is identified as "_E_A", where the first blank is represented by the abbreviation of the expected type category, and the second blank is represented by the abbreviation of the actual type category.

The results are shown in Figure 2.6. Overall, as seen in Figure 2.6a, only about 33% of the bug reports in our study were classified as type faults. Further, of all the type faults, 72% belong to the NcENuA category, in which a native "class" type is expected, but the actual type at runtime is null or undefined (see Figure 2.6b). These results show that in the subject systems, the vast majority of the bugs are not type faults; thus, programming languages introducing stronger typing systems to JavaScript, such as TypeScript and Dart, as well as type checkers, may not eliminate most JavaScript faults. It is worth noting, however, that these languages have other advantages apart from strong typing, including improved program comprehension and support for class-based object-oriented programming.

We also studied the DOM-related faults, to determine how many of them are type faults. The results for DOM-related faults are also shown in Figure 2.6. Overall, we found that 38% of DOM-related faults are also type faults. Most of these type faults are also of the NcENuA variety; in this case, the expected native "class" type is, for the most part, Element.
This finding suggests that stronger typing systems and type checkers may also not eliminate most DOM-related faults.

We also analyzed the severity of type faults, based on the impact types assigned to each bug report. We found that out of all the Type 4 and Type 5 impact bug reports, which are the highest severity bugs, about 30% are type faults; considering Type 5 impact bug reports alone, about 18% are type faults. Therefore, based on our results, these languages have limited ability in eliminating the majority of the highest impact JavaScript bugs in web applications.

Finding 13: The majority (67%) of JavaScript faults are not type faults, and the majority (62%) of DOM-related faults are also not type faults. In addition, the majority of Type 4 and Type 5 impact bugs are not type faults, at 70%.

Temporal Analysis. As seen in Figure 2.4e, the regression line for the percentage of type faults is relatively flat (with a slight 3% increase from 2004 to 2014). Therefore, despite all the technology that has been developed to eliminate these type faults, the percentage of type faults has remained more or less constant over the years. This may be caused by the fact that most of the type faults we found in our study belong to the NcENuA category, where a native "class" type is expected, but the actual type at runtime is null or undefined.
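To make the NcENuA pattern concrete, the following sketch reproduces it in miniature. The `document` stub and the `activate` function are our own illustration, not code from the study; in a browser, the real DOM API behaves the same way for a missing ID.

```javascript
// NcENuA in miniature: the code expects a native "class" type (an Element),
// but the actual runtime value is null. The `document` stub below is an
// illustrative stand-in for the browser DOM.
const document = {
  elements: { banner_1: { className: "" } },
  getElementById(id) {
    // Like the real DOM API, return null when no element has this ID.
    return this.elements[id] || null;
  },
};

function activate(id) {
  const el = document.getElementById(id); // expected type: Element
  el.className = "active";                // throws TypeError if el is null
}

activate("banner_1"); // fine: the element exists in the stub DOM

let caughtTypeError = false;
try {
  activate("banner_undefined"); // no such ID -> null -> NcENuA fault
} catch (e) {
  caughtTypeError = e instanceof TypeError;
}
```

A checker that types the lookup's result as simply "Element", without consulting the DOM to learn which IDs exist, would accept `activate` without complaint, which is why such faults slip through.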
In particular, current type checkers normally cannot predict if the return value of a native JavaScript method is null or undefined; for example, they cannot predict if the return value of getElementById() is null without looking at the DOM – which almost none of them do – so that particular type fault will be missed.

Finding 14: The percentage of type faults among all JavaScript faults has remained constant (with only a 3% increase) over the past ten years.

2.4.7 Threats to Validity

An internal validity threat is that the bug classifications were made by two individuals, which may introduce inconsistencies and bias, particularly in the classification of the impacts. In order to mitigate any possibilities of bias, we conducted a review process in which each person reviews the classifications assigned by the other person. Any disagreements were discussed until a consensus on the classification was reached.

Another internal threat is in our analysis of type faults, in which we assumed that a bug report does not correspond to a type fault if the existence of a statement L from the definition given in Section 2.2.2 could not be established, based on our qualitative reading of the bug report. Hence, the percentage of type faults we presented in Section 2.4.6 is technically a lower bound on the actual number of type faults. Nonetheless, the lack of any indication in a bug report that a statement L exists strongly suggests that inconsistencies in types are not an issue with the bug.

In terms of external threats, our results are based on bug reports from a limited number of experimental subjects, from a limited time duration of ten years, which calls into question the representativeness; unfortunately, public bug repositories for web applications are not abundant, as previously mentioned. We mitigated this by choosing web applications that are used for different purposes, including content management, webmail, and wiki.
Further, prior to 2004, few websites used JavaScript, since AJAX programming – which is often credited for popularizing JavaScript – did not become widespread until around 2005 when Google Maps was introduced [159].

In addition, our choice of analyzing a maximum of 30 bug reports per application is an external threat, as they may not represent all types of bugs present in each of our experimental subjects. To mitigate this, we reported some application-specific data, such as the percentage of DOM-related faults per application (Table 2.3); the similarity in these percentages suggests that the bugs that occur in each of our experimental subjects do in fact have common traits (e.g., the prevalence of DOM-related faults is true not only in aggregate, but also per application).

A construct validity threat is that the bug reports may not be fully representative of the JavaScript faults that occur in web applications. This is because certain kinds of faults – such as non-deterministic faults and faults with low visual impact – may go unreported. In addition, we focus exclusively on bug reports that were fixed. This decision was made since the root cause would be difficult to determine from open reports, which have no corresponding fix. Further, open reports may not be representative of real bugs, as they are not deemed important enough to fix.

For the triage and fix times, we did not account for possible delays in marking a bug report as "assigned" or "fixed", which may skew the results. In addition, the triage time is computed as the time until the first developer comment, when there is no "assigned" marking; although we find this approximation reasonable, the developer may not have started fixing until some days after the first comment was posted. Finally, the triage and fix times may be influenced by external factors apart from the complexity of the bugs (e.g., bug replication, vacations, etc.). These are likewise construct validity threats.
Nonetheless, note that other studies have similarly used time to estimate the difficulty of a fix, including Weiss et al. [176] and Kim and Whitehead [89]. In the former, the authors point out that the time it takes to fix a bug is indicative of the effort required to fix it; the difference between our estimation and theirs is that they use the fix time reported by developers assigned to the bug, which is unavailable in the bug repositories we studied.

2.5 Discussion

In this section, we discuss the implications of our findings on web application developers, testers, developers of web analysis tools, and designers of web application development frameworks.

Findings 1 and 2 reveal the difficulties that web application developers have in setting up values passed or assigned to native JavaScript methods and properties – particularly DOM methods and properties. Finding 2, in particular, also shows that most of the DOM-related faults that occur in web applications are strong DOM-related faults, indicating a mismatch between the programmer's expectation of the DOM and the actual DOM. Many of these difficulties arise because the asynchronous, event-driven JavaScript code must deal with the highly dynamic nature of the DOM. This requirement forces the programmer to have to think about how the DOM is structured and what properties its elements possess at certain DOM interaction points in the JavaScript code; doing so can be difficult because (1) the DOM frequently changes at runtime and can have many states, and (2) there are many different ways a user can interact with the web application, which means there are many different orders in which JavaScript event handlers can execute. This suggests the need to equip these programmers with appropriate tools that would help them reason about the DOM, thereby simplifying these DOM-JavaScript interactions.

These first two findings also impact web application testers, as they reveal certain categories of JavaScript faults that users consider important enough to report, and hence, that testers should focus on. Currently, one of the most popular ways to test JavaScript code is through unit testing, in which modules are tested individually; when creating these unit tests, a mock DOM object is usually needed, in order to allow the DOM API method calls present in the module to function properly. While useful, this approach often does not take into account the changing states of the DOM when users are interacting with the web application in real settings, because testers often create these mock DOM objects simply to prevent the calls from failing. Finding 2, in contrast, suggests that testers need to be more "DOM-aware", in that they need to take extra care in ensuring that these mock objects emulate the actual DOM as closely as possible.

In addition to unit testing, web application testers also perform end-to-end (E2E) testing; here, user actions (e.g., clicks, hovers, etc.) are automatically applied to individual webpages to assert that certain conditions about the DOM are satisfied after carrying out these user actions. E2E testing can help detect DOM-related faults; however, the problem is that these tests often require the tester to know certain properties about the DOM in order to set up the user actions in the tests. As mentioned above, keeping track of these properties of the DOM is difficult to do, judging by the large percentage of strong DOM-related faults we observed in our study.
This makes E2E tests themselves susceptible to DOM-related faults if written manually, which motivates the potential usefulness of automated tools for writing these tests, including record-replay and webpage crawling techniques.

With regards to Findings 4, 6, and 12, these results suggest that web application testers should also prioritize emulating DOM-related faults, as most high-impact faults belong to this category. One possible way to do this is to prioritize the creation of tests that cover DOM interaction points in the JavaScript code. By doing so, testers can immediately find most of the high-impact faults. This early detection is useful because, as Finding 4 suggests, DOM-related faults often have no accompanying error messages and can be more difficult to detect. Further, as Finding 12 suggests, DOM-related faults take longer to fix on average compared to non-DOM-related faults.

As mentioned previously, the presence of error messages in JavaScript bugs can be useful, as these messages can provide a natural starting point for analysis of these bugs. Indeed, our fault localization tool AUTOFLOX [134] – which we describe in detail in Chapter 3 – uses error messages to automatically infer the line of code where the failure takes place – which, in this case, is the same as the exception point – as well as to determine the backward slice of the null or undefined values that led to the exception. However, as Finding 4 suggests, the majority of DOM-related faults do not lead to exceptions and hence, do not have accompanying error messages. This points to the need to devise alternative ways to automatically determine the line of code where the failure takes place when performing fault localization.
One possibility is to give developers the ability to select, on the webpage itself, any DOM element that is observed to be incorrect. A static analyzer can then try to guess which lines of JavaScript code were the latest ones to update the element; these lines will therefore be the starting point for localization.

As for Findings 7 and 8, these results can be useful for developers of static analysis tools for JavaScript. Many of the current static analysis tools only address syntactic issues with the JavaScript code (e.g., JSLint (http://www.jslint.com), Closure Compiler, JSure), which is useful since a few JavaScript faults occur as a result of syntax errors, as described in Section 2.4.3. However, the majority of JavaScript faults occur because of errors in semantics or logic. Some developers have already started looking into building static semantics checkers for JavaScript, including TAJS [78], which is a JavaScript type analyzer. However, the programming mistakes we encountered in the bug reports (e.g., erroneous input validations, erroneous CSS selectors, etc.) call for more powerful tools to improve JavaScript reliability.

The recurring patterns to which Finding 8 refers can be a starting point for devising a taxonomy for common JavaScript errors. This taxonomy can be helpful in two ways. First, it can facilitate the code review process, as the taxonomy helps developers identify certain "hot spots" in the code that have historically been susceptible to error. In addition, tools such as FindBugs use a taxonomy of common coding patterns and smells to automatically detect errors in Java code using static analysis; in the same vein, a taxonomy for JavaScript errors can help achieve the same automatic error detection task for JavaScript code.

While Finding 10 suggests that most JavaScript faults are non-browser specific, we did find a few (mostly IE-specific) faults that are browser-specific.
Hence, it is useful to design JavaScript development tools that recognize cross-browser differences and alert the programmer whenever she forgets to account for these. Some Integrated Development Environments (IDEs) for JavaScript have already implemented this feature, including NetBeans and Aptana.

Finding 13 shows that there is a significant number, though a minority, of type faults encountered in the subject systems, some of which have high impact. This provides motivation for the development of the strongly-typed languages mentioned earlier, as well as type checkers [51, 66, 78]. However, such tools and languages are far from being a panacea, as the vast majority of JavaScript faults are not type faults, according to our study. Therefore, tool developers should not focus exclusively on type checking when looking for ways to improve the reliability of JavaScript code because type checking, while useful, does not suffice in detecting or eliminating most JavaScript faults.

According to Findings 5 and 11, there was a significant decrease in the number of code-terminating failures and browser-specific faults; this suggests that developers of browsers and browser-based tools are heading in the right direction in terms of facilitating the debugging process for JavaScript faults and ensuring cross-browser compliance of JavaScript code. However, from Findings 3 and 14, we observed that the number of DOM-related faults and type faults has remained relatively constant from 2004 to 2014; hence, developers, testers, and tool designers must pay more careful attention towards these two kinds of faults, especially DOM-related faults, as they constitute almost 70% of all JavaScript faults.

Finding 9 shows that the percentage of errors located in the JavaScript code increased from 2004 to 2014.
This suggests that tools for improving client-side reliability should consider performing an analysis of the client-side code itself – which is where the majority of JavaScript bugs arise as per Findings 7 and 8 – instead of simply looking at how server-side code generates malformed client-side code, as other tools have done [33, 148].

Finally, recall that this study focuses on client-side JavaScript; hence, the results may not directly be applicable to JavaScript developers at the server-side who use Node.js. For example, there is no DOM present at the server-side, which indicates that DOM-related faults will not be present there. Nonetheless, many of the error and fault patterns we found are not client-side-specific. A recent paper by Hanam et al. [68], for instance, sheds light on some of the pervasive JavaScript bug patterns that appear at the server-side. One of the bug patterns they found is "Dereferenced Non-Values", which corresponds to both the "Undefined/Null Variable Usage" fault category and the "Forgetting null/undefined check" error pattern. Hence, JavaScript developers at the server-side can still gather important takeaways from our results, even if our study was not specifically targeted towards server-side JavaScript.

2.6 Related Work

There has been a large number of empirical studies conducted on faults that occur in various types of software applications [34, 38, 50, 97, 179, 189]. Here, we focus on only those studies that pertain to web applications.

Server-Side Studies. In the past, researchers have studied the causes of web application faults at the server-side using session-based workloads [61], server logs [163], and website outage incidents [138]. Further, there have been studies on the control-flow integrity [29] and end-to-end availability [85, 136] of web applications. Finally, studies have been conducted which propose web application fault models and taxonomies [46, 102, 156].
Our current study differs from these papers in that we focus on web application faults that occur at the client-side, particularly ones that propagate into the JavaScript code.

Client-Side Studies. Several empirical studies on the characteristics of client-side JavaScript have been made. For instance, Ratanaworabhan et al. [142] used their JSMeter tool to analyze the dynamic behaviour of JavaScript in web applications. Similar work was conducted by Richards et al. [144] and Martinsen et al. [105]. A study of parallelism in JavaScript code was also undertaken by Fortuna et al. [53]. Finally, there have been empirical studies on the security of JavaScript. These include empirical studies on cross-site scripting (XSS) sanitization [175], privacy-violating information flows [77], and remote JavaScript inclusions [124, 180]. Unlike our work which studies functional JavaScript faults, these related papers address non-functional properties such as security and performance.

In recent work, Bajaj et al. [20] mined web application-related questions in StackOverflow to determine common difficulties that developers face when writing client-side code. This work is similar to the current one in that it attempts to infer reliability issues with JavaScript, using developers' questions as an indicator. However, unlike our current work, this study does not make any attempt to determine the characteristics of JavaScript faults.

Our earlier work [128] looked at the characteristics of failures caused by JavaScript faults, based on console logs. However, we did not study the causes or impact of JavaScript faults, nor did we examine bug reports as we do in this study. To the best of our knowledge, we are the first to perform an empirical study on the characteristics of these real-world JavaScript faults, particularly their causes and impacts.

Finally, in very recent work which followed the original paper on which this current work is based, Pradel et al. [141] and Bae et al.
[18] proposed tools for detecting type inconsistencies and web API misuses in JavaScript code, respectively. These studies provide examples of common type faults and common web API misuse patterns. However, they do not establish the prevalence of their respective fault categories, nor do they identify DOM-related faults as an important subclass of JavaScript faults.

2.7 Conclusions

Client-side JavaScript contains many features that are attractive to web application developers and is the basis for modern web applications. However, it is prone to errors that can impact functionality and user experience. In this chapter, we perform an empirical study of over 500 bug reports from various web applications and JavaScript libraries to help us understand the nature of the errors that cause these faults, and the failures to which these faults lead. Our results show that (1) around 68% of JavaScript faults are DOM-related; (2) most (around 80%) high severity faults are DOM-related; (3) the vast majority (around 83%) of JavaScript faults are caused by errors manually introduced by JavaScript code programmers; (4) error patterns exist in JavaScript bug reports; (5) DOM-related faults take longer to fix than non-DOM-related faults; (6) only a small but non-negligible percentage of JavaScript faults are type faults; and (7) although the percentage of code-terminating failures and browser-specific faults has decreased over the past ten years, the percentage of DOM-related faults and type faults has remained relatively constant.

Chapter 3

Automatic JavaScript Fault Localization

This chapter describes the technique that we have designed for automatically localizing DOM-related JavaScript faults.13 Therefore, our goal in this chapter is to answer RQ1B from Chapter 1.1 (How can we accurately and efficiently localize and repair JavaScript faults that appear in web applications?), focusing in particular on localization.
We implemented our technique in a tool called AUTOFLOX,14 the details of which we now describe.

13 The main study in this chapter appeared in the Journal of Software Testing, Verification and Reliability (STVR) [134]. The initial conference version appeared at the International Conference on Software Testing, Verification and Validation (ICST 2012) [129].
14 AUTOFLOX stands for "Automatic Fault Localization for Ajax".

3.1 Introduction

JavaScript-based applications at the client-side suffer from multiple dependability problems due to their distributed, dynamic nature, as well as the loosely typed semantics of JavaScript. A common way of gaining confidence in software dependability is through testing. Although testing of modern web applications has received increasing attention in the recent past [15, 101, 109, 137], there has been little work on what happens after a test reveals an error. Debugging of web applications is still an expensive and mostly manual task. Of all debugging activities, locating the faults, or fault localization, is known to be the most expensive [84, 165].

The fault localization process usually begins when the developers observe a failure in a web program, either spotted manually or through automated testing techniques. The developers then try to understand the root cause of the failure by looking at the JavaScript code, examining the DOM tree, modifying the code (e.g., with alerts or tracing statements), running the application again, and manually going through the initial series of navigational actions that led to the faulty state or running the corresponding test case.

Manually isolating a JavaScript fault's root cause requires considerable time and effort on the part of the developer. This is partly due to the fact that the language is not type-safe, and has loose fault-detection semantics. Thus, a fault may propagate undetected in the application for a long time before finally triggering an exception.
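The distance between a fault's origin and its eventual exception can be seen in a small sketch (illustrative code of our own, not taken from the study): the bad value appears on the first line, yet nothing is thrown until three steps later.

```javascript
// JavaScript's loose fault-detection semantics let a bad value travel
// silently: steps 1-3 all succeed, and only step 4 raises an exception.
const settings = {};                  // 1. fault: "theme" is never defined
const theme = settings.theme;         // 2. silently reads undefined
const cssClass = "theme-" + theme;    // 3. silently builds "theme-undefined"

let failureStep = null;
try {
  theme.toUpperCase();                // 4. first (and only) exception
} catch (e) {
  failureStep = 4;
}
```

By the time the exception surfaces at step 4, the developer must trace backward through steps 3 and 2 to discover that the fault was introduced at step 1.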
Additionally, faults may arise in third-party code (e.g., libraries, widgets, advertisements) [128], and may be outside the expertise of the web application's developer.

Further, faults may arise due to subtle asynchronous and dynamic interactions at runtime between the JavaScript code and the DOM tree, which make it challenging to understand their root causes. Indeed, from our large-scale study of over 500 JavaScript bug reports, as described in Chapter 2, we found that faulty interactions between the JavaScript code and the DOM – which are called DOM-related faults – comprise over 68% of all JavaScript bugs [130]. From this same study, we also found that these DOM-related faults take longer to fix, on average, compared to all other fault types; hence, these DOM-JavaScript interactions are of particularly great concern when localizing faults. For these reasons, this chapter focuses on the localization of DOM-related faults.

Although fault localization in general has been an active research topic [1, 4, 39, 84], automatically localizing web faults has received very limited attention from the research community. To the best of our knowledge, automated fault localization for JavaScript-based web applications has not been addressed in the literature yet.

To alleviate the difficulties with manual web fault localization, we propose an automated technique based on dynamic backward slicing of the web application to localize DOM-related JavaScript faults. The proposed fault localization approach is implemented in a tool called AUTOFLOX. In addition, AUTOFLOX has been empirically evaluated on six open-source web applications and three production web applications, along with seven web applications containing real bugs.
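The intuition behind dynamic backward slicing – walk backward from the failing statement through the variables it uses until the first DOM access is reached – can be shown with a toy sketch. The trace records and the function below are simplified illustrations of our own, not AUTOFLOX's actual implementation.

```javascript
// Toy dynamic backward slice: each record is a simplified def-use entry
// from one execution. Starting at the failing line, follow the used
// variables backward; the first DOM-accessing definition reached is the
// line where the bad value entered the JavaScript code.
const trace = [
  { line: 6, defines: "currElem", uses: ["prefix", "currentID"], domAccess: true },
  { line: 7, defines: "toChange", uses: ["prefix", "bannerID"], domAccess: true },
  { line: 9, defines: null, uses: ["toChange"], domAccess: false }, // exception here
];

function firstDomAccessOnSlice(trace, failureLine) {
  const failing = trace.find((r) => r.line === failureLine);
  const wanted = new Set(failing.uses); // variables on the backward slice
  for (let i = trace.length - 1; i >= 0; i--) {
    const r = trace[i];
    if (r.line < failureLine && wanted.has(r.defines)) {
      if (r.domAccess) return r.line;        // first DOM access found
      r.uses.forEach((u) => wanted.add(u));  // keep slicing backward
    }
  }
  return -1; // no DOM interaction on the slice
}

const found = firstDomAccessOnSlice(trace, 9);
```

Here, slicing backward from line 9 through the variable it uses stops at line 7, the DOM access that produced the bad value.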
The main contributions in this chapter include:

• A discussion of the challenges surrounding JavaScript fault localization, highlighting the real-world relevance of the problem and identifying DOM-related JavaScript faults as an important sub-class of problems in this space;

• A fully automated technique for localizing DOM-related JavaScript faults, based on dynamic analysis and backward slicing of JavaScript code. Our technique can localize faults in the presence of the eval function, anonymous functions, and minified JavaScript code. In addition, our technique is capable of localizing multiple faults;

• An open-source tool, called AUTOFLOX, implementing the fault localization technique. AUTOFLOX has been implemented both as a stand-alone program that runs with the CRAWLJAX tool, as well as an Eclipse plugin;

• An empirical study to validate the proposed technique, demonstrating its efficacy and real-world relevance. The results of this study show that the proposed approach is capable of successfully localizing DOM-related faults with a high degree of accuracy (over 96%) and no false positives. In addition, AUTOFLOX is able to localize JavaScript faults in production websites, as well as 20 actual, reported bugs from seven real-world web applications.

3.2 Challenges and Motivation

This section describes how JavaScript differs from other traditional programming languages and discusses the challenges involved in localizing faults in JavaScript code. First, a JavaScript code fragment that is used as a running example throughout this chapter is presented.

3.2.1 Running Example

Figure 3.1 presents an example JavaScript code fragment to illustrate some of the challenges in JavaScript fault localization.
This code fragment is based on a fault in a real-world web application.15

15 https://www.tumblr.com

     1  function changeBanner(bannerID) {
     2    clearTimeout(changeTimer);
     3    changeTimer = setTimeout(changeBanner, 5000);

     5    prefix = "banner_";
     6    currBannerElem = document.getElementById(prefix + currentBannerID);
     7    bannerToChange = document.getElementById(prefix + bannerID);
     8    currBannerElem.removeClassName("active");
     9    bannerToChange.addClassName("active");
    10    currentBannerID = bannerID;
    11  }

    12  currentBannerID = 1;
    13  changeTimer = setTimeout(changeBanner, 5000);

Figure 3.1: Example JavaScript code fragment based on a real-world web application

The web application pertaining to the code fragment in Figure 3.1 consists of a banner at the top of the page. The image shown on the banner cycles through four images periodically (every 5000 milliseconds). The four images are each wrapped in div elements with DOM IDs banner_1 through banner_4. The div element wrapping the image being shown is identified as "active" via its class attribute.

In the above code, the changeBanner function (Lines 1 to 10) updates the banner image to the next one in the sequence by updating the DOM. Lines 12 and 13, which are outside the function, are executed at load time. Line 12 sets the value of variable currentBannerID to 1, indicating that the current image being shown is banner_1. Line 13 sets a timer that will asynchronously call the changeBanner function after 5 seconds (i.e., 5000 milliseconds). After each execution of the changeBanner function, the timeout function is cleared and reset so that the image is changed again after 5 seconds.

The JavaScript code in Figure 3.1 will throw a null exception in Line 9 when executed. Specifically, in the setTimeout calls, changeBanner is invoked without being passed a parameter, even though the function is expecting an argument, referenced by bannerID. Omitting the argument will not lead to an interpretation-time exception; rather, the bannerID will be set to undefined when changeBanner executes.
As a result, the second getElementById call will look for the ID "banner_undefined" in the DOM; since this ID does not exist, a null will be returned. Hence, accessing the addClassName method via bannerToChange in Line 9 will lead to a null exception.

Note that this error arises due to the loose typing and permissive error semantics of JavaScript. Further, to understand the root cause of the error, one needs to analyze the execution of both the JavaScript code and the DOM. However, once the fault has been identified, the fix is relatively straightforward, viz. modify the setTimeout call in Line 13 to pass a valid value to the changeBanner function.

3.2.2 JavaScript Fault Localization

Although JavaScript is syntactically similar to languages such as Java and C++, it differs from them in two important ways, which makes fault localization challenging.

Asynchronous Execution: JavaScript code is executed asynchronously, and is triggered by the occurrence of user-triggered events (e.g., click, mouseover), load events, or events resulting from asynchronous function calls. These events may occur in different orders; although JavaScript follows a sequential execution model, it does not provide deterministic ordering. In Figure 3.1, the execution of the lines outside the changeBanner function is triggered by the load event, while the execution of changeBanner itself is triggered asynchronously by a timeout event via the setTimeout call. Thus, each of these events triggered the execution of two different sequences of JavaScript code. In particular, the execution sequence corresponding to the load event is Line 12 → Line 13, while the execution sequence corresponding to the asynchronous event is Line 2 → Line 3 → Line 5 → Line 6 → Line 7 → Line 8 → Line 9.

In traditional programming languages, the goal of fault localization is to find the erroneous lines of code. For JavaScript, its asynchronous characteristic presents an additional challenge.
The programmer will not only need to find the erroneous lines, but she will also have to map each executed sequence to the event that triggered their execution in order to understand the root cause of the fault. In addition, event handlers may overlap, as a particular piece of JavaScript code may be used by multiple event handlers. Thus, manual fault localization in client-side JavaScript is a tedious process, especially when many events are triggered.

DOM Interactions: In a web application, JavaScript code frequently interacts with the DOM, which characterizes the dynamic HTML structure and elements present in the web page. As a result, the origin of a JavaScript fault is not limited to the JavaScript code; the JavaScript fault may also result from an error in the DOM. With regards to fault localization, the notion of an "erroneous line" of code may not apply to JavaScript because it is possible that the error is in the DOM rather than the code. This is particularly true for DOM-related faults, which lead to either exceptions or incorrect DOM element outputs as a result of a DOM access or update. As a result, for such faults, one needs to formulate the goal of fault localization to isolate the first line of JavaScript code containing a call to a DOM access function (e.g., getAttribute(), getElementById()) or a DOM update function/property (e.g., setAttribute(), innerHTML) that directly causes JavaScript code to throw an exception, or to update a DOM element incorrectly. This line is referred to as the direct DOM interaction.

For the example in Figure 3.1, the JavaScript exception occurs in Line 9, when the addClassName function is called on bannerToChange, which is null. The null value originated from Line 7, when the DOM access function getElementById returned null; thus, the direct DOM interaction is actually at Line 7.
Note that even though this direct DOM interaction does not represent the actual "erroneous" lines which contain the missing parameter to the changeBanner function (Lines 3 and 13), knowing that getElementById in Line 7 returned null provides a hint that the value of either "prefix" or "bannerID" (or both) is incorrect. Using this knowledge, the programmer can isolate the erroneous line of code as she has to track the values of only these two variables. While in this simple example, the direct DOM interaction line is relatively easy to find, in more complex code the null value could propagate to many more locations and the number of DOM interactions to consider could be much higher, making it challenging to identify the direct DOM interaction. This is the challenge addressed in this chapter.

3.2.3 Challenges in Analyzing JavaScript Code

In addition to the challenges described in the previous subsection, JavaScript also contains several features that complicate the process of analyzing JavaScript code for fault localization. These are described below.

Eval: JavaScript allows programmers to dynamically create code through the use of the eval method. This method takes a string value as a parameter, where the string evaluates into JavaScript code. Although alternatives to certain uses of eval have been introduced in the language (e.g., the JSON API), studies show that eval use remains pervasive among web developers [145].

The presence of eval poses a challenge to both manual and automatic analysis of JavaScript code. The reason is twofold. First, the string parameter to eval is typically not just a simple string literal, but rather, a concatenation of multiple string values whose value cannot be determined based on a simple source-code level inspection; hence, it is difficult to infer the JavaScript code generated by eval at runtime.
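For instance, in the hedged sketch below (the function name and field path are made up for illustration), the statement that eval actually executes exists only as a runtime string, and the eval'd code reads the surrounding variables through the calling scope:

```javascript
// The statement eval executes here ("record.address.city") is assembled
// from string pieces at runtime, so no source-level inspection of this
// function reveals it. Note also that the eval'd code resolves `record`
// through the scope in which eval is called.
function readField(record, fieldPath) {
  return eval("record." + fieldPath); // fieldPath is illustrative input
}

const record = { address: { city: "Vancouver" } };
const city = readField(record, "address.city");
```

A static analyzer looking at `readField` alone sees only the concatenation, not the field access it produces for any particular `fieldPath`.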
Second, the scope of variables introduced in eval code is directly linked to the scope in which the eval call is made; hence, eval code cannot be analyzed in isolation, but must be analyzed in relation to where the eval call is made. These, in turn, make fault localization more difficult, since the developer cannot easily keep track of the values that are created or modified through eval.

Anonymous Functions: Since JavaScript treats functions as first-class citizens in the form of Function literals, programmers can define functions without providing them with a name; these unnamed functions are known as anonymous functions. Hence, when tracking the propagation of a JavaScript fault, it does not suffice to identify the lines of code involved in the propagation solely based on the function name, particularly if a JavaScript fault originates from or propagates through an anonymous function.

Minified Code: Before deploying a web application, it is common practice for web developers to minify their JavaScript code, which compresses the code into one line. While this minification process reduces the size of JavaScript files, it also makes JavaScript code more difficult to read and analyze. This makes it very difficult to localize faults in minified code, as the developer will have a hard time keeping track of the relevant lines of code.

3.3 Scope of this Chapter

In the bug report study described in Chapter 2, we found that over 68% of JavaScript faults experienced by web applications are DOM-related faults. Recall that a fault is considered DOM-related if the corresponding error propagates into the parameter value of a DOM API method, such as getElementById and querySelector. In addition, these DOM-related faults comprise 80% of the highest impact JavaScript faults, according to the same study.
Due to their prominence and severity, we focus on the study of DOM-related faults in this chapter.

Recall from Chapter 2 that DOM-related faults can be further divided into two classes, based on their failure characteristics, listed below:

1. Code-terminating DOM-related JavaScript faults: A DOM-related fault that leads to a code-terminating failure. In other words, a DOM access function returns a null, undefined, or incorrect value, which then propagates into several variables and eventually causes an exception.

2. Output DOM-related JavaScript faults: A DOM-related fault that leads to an output-related failure. In other words, a DOM update function sets the value of a DOM element property to an incorrect value without causing the code to halt.

The fault localization approach described in this chapter can localize code-terminating DOM-related JavaScript faults automatically, requiring only the URL of the web application and the DOM elements needed to reproduce the failure as input from the user. Hence, in the sections that follow, it is assumed that the fault being localized leads to a code-terminating failure.
However, note that the proposed approach can also support output DOM-related JavaScript faults, but the approach would only be semi-automatic, as the user must also provide the location of the failing line of code to initiate the localization process.

For code-terminating DOM-related JavaScript faults, the direct DOM interaction is the DOM access function that returned the null, undefined, or incorrect value, and is referred to as the direct DOM access.

Figure 3.2: Block diagram illustrating the proposed fault localization approach. (The diagram shows two phases. Trace collection comprises (1) intercept/instrument JavaScript code, (2) run web application, and (3) generate traces, and produces a JavaScript execution trace. Trace analysis comprises (4) partition trace into sequences, (5) extract relevant sequences, and (6) analyze backward slice, and outputs the direct DOM access.)

3.4 Approach

Our proposed fault localization approach consists of two phases: (1) trace collection, and (2) trace analysis. The trace collection phase involves crawling the web application and gathering traces of executed JavaScript statements until the occurrence of the failure that halts the execution. After the traces are collected, they are parsed in the trace analysis phase to find the direct DOM access. The two phases are described in detail in this section. A block diagram of the approach is shown in Figure 3.2. We first describe the usage model of the proposed approach.

3.4.1 Usage Model

Because the focus is on fault localization, we assume that the failure whose corresponding fault needs to be localized has been detected before the deployment of the proposed technique. Further, we also assume that the user is able to replicate the failure during the localization process, either through a test case, or by knowing the sequence of user events that would trigger the failure.

The approach is designed to automate the fault localization process.
The only manual intervention required from the user is at the very beginning, where the user would have to specify which elements in the web application to click (during the trace collection phase) in order for the failure to occur.

The output of the approach is the direct DOM access corresponding to the fault being localized and specifies: (1) the function containing the direct DOM access, (2) the line number of the direct DOM access relative to this function, and (3) the JavaScript file containing the direct DOM access.

3.4.2 Trace Collection

In the trace collection phase, the web application is crawled (by systematically emulating the user actions and page loads) to collect the trace of executed JavaScript statements that eventually lead to the failure. This trace is generated through on-the-fly instrumentation of each line of client-side JavaScript code before it is passed on to and loaded by the browser (box 1, Figure 3.2). Thus, for every line l of JavaScript code executed, the following information is written to the trace: (1) the function containing the line, (2) the line number relative to the function to which it belongs, (3) the names and scopes (global or local) of all the variables within the scope of the function, and (4) the values of these variables prior to the execution of the line. In the example in Figure 3.1, the order of the first execution is as follows: Line 12 → Line 13 → Line 2 → Line 3 → Line 5 → Line 6 → Line 7 → Line 8 → Line 9. Thus, each of these executed lines will have an entry in the trace corresponding to it. The trace record for Line 5 is shown in Figure 3.3. Note that in this figure, the trace record prefix contains the name of the function and the line number relative to this function; the variable names, scopes, and values are also shown, and other variables which have not been assigned values up to the current line are marked with "none".
In the figure, bannerID's value is recorded as "none" because this parameter is unspecified in the setTimeout call.

In addition to the trace entries corresponding to the executed lines of JavaScript code, three special markers, called ERROR, ASYNCCALL and ASYNC, are added to the trace. The ERROR marker is used in the trace analysis phase to determine at which line of JavaScript code the exception was thrown; if a line l is marked with the ERROR marker, then the value l.failure is set to true. The ASYNCCALL and ASYNC markers address the asynchronous nature of JavaScript execution as described in Section 3.2. In particular, these two markers are used to determine the points in the program where asynchronous function calls have been made, thereby simplifying the process of mapping each execution trace to its corresponding event. If a line l is marked with the ASYNC or ASYNCCALL marker, then the values l.async or l.asynccall, respectively, are set to true.

    Trace Record Prefix:
    changeBanner:::4
    Variables:
    currentBannerID (global): 1
    changeTimer (global): 2
    bannerID (local): none
    prefix (local): none
    currBannerElem (local): none
    bannerToChange (local): none

Figure 3.3: Example trace record for Line 5 of the running example from Figure 3.1.

The ERROR marker is added when a failure is detected (the mechanism to detect failures is discussed in Section 3.5). It contains information about the exception thrown and its characteristics. In the example in Figure 3.1, the ERROR marker is placed in the trace after the entry corresponding to Line 9, as the null exception is thrown at this line.

The second marker, ASYNCCALL, is placed after an asynchronous call to a function (e.g., via the setTimeout function). Each ASYNCCALL marker contains information about the caller function and a unique identifier that distinguishes it from other asynchronous calls.
Every ASYNCCALL marker also has a corresponding ASYNC marker, which is placed at the beginning of the asynchronous function's execution, and contains the name of the function as well as the identifier of the asynchronous call. In the example in Figure 3.1, an ASYNCCALL marker is placed in the trace after the execution of Line 13, which has an asynchronous call to changeBanner. The corresponding ASYNC marker is placed before the execution of Line 2, at the beginning of the asynchronously called function changeBanner.

To insert the ASYNCCALL and ASYNC markers, the known asynchronous functions in JavaScript are overridden by a trampoline function that sets up and writes the ASYNCCALL marker to the trace. The trampoline function then calls the original function with an additional parameter indicating the identifier of the asynchronous call. This parameter is written to the trace within the called function along with the ASYNC marker to uniquely identify the asynchronous call.

3.4.3 Trace Analysis

Once the trace of executed statements has been collected, the trace analysis phase begins. The goal of this phase is to analyze the trace entries and find the direct DOM access responsible for the JavaScript failure. First, the approach partitions the trace into sequences, where a sequence (l1, l2, ..., ln) represents the series of JavaScript statements l1, l2, ..., ln that were triggered by the same event (e.g., a page load). Each sequence corresponds to exactly one event. This step corresponds to box 4 in Figure 3.2. As mentioned in the previous section, the executed JavaScript program in the example in Figure 3.1 consists of two sequences: one corresponding to the load event, and the other corresponding to the timeout event.

After partitioning the trace into sequences, the algorithm looks for the sequence that contains the direct DOM access (box 5 in Figure 3.2). This is called the relevant sequence.
The relevant sequence ρ is initially chosen to be the sequence that contains the ERROR marker,16 that is, at the beginning of the algorithm, ρ is initialized as follows:

ρ ← (l1, l2, ..., ln) ⇐⇒ ∃ li ∈ {l1, l2, ..., ln}, li.failure = true    (3.1)

This marker will always be the last element of the relevant sequence, since the execution of the sequence must have halted once the failure occurred; hence, it suffices to check if ln.failure = true in Expression (3.1). The direct DOM access will be found within the initial relevant sequence provided the sequence was not triggered by an asynchronous function call but rather by the page load or user-triggered event. However, if the relevant sequence was triggered asynchronously, i.e., it begins with an ASYNC marker, then the sequence containing the corresponding asynchronous call (i.e., with the ASYNCCALL marker) is prepended to the relevant sequence to create the new relevant sequence. This process is continued recursively until the top of the trace is reached or the sequence does not begin with an ASYNC marker.

16 For output-related DOM-related JavaScript faults, the ERROR marker is replaced by an analogous marker that represents the failure line identified by the user.

    Sequence 1:
    root:::12 (Line 12)
    root:::13 (Line 13)
    root:::ASYNC_CALL - ID = 1
    Sequence 2:
    changeBanner:::ASYNC - ID = 1
    changeBanner:::1 (Line 2)
    changeBanner:::2 (Line 3)
    changeBanner:::4 (Line 5)
    changeBanner:::5 (Line 6)
    changeBanner:::6 (Line 7)
    changeBanner:::7 (Line 8)
    changeBanner:::8 (Line 9) - FAILURE
    Relevant Sequence:
    root:::12 (Line 12)
    root:::13 (Line 13)
    changeBanner:::1 (Line 2)
    changeBanner:::2 (Line 3)
    changeBanner:::4 (Line 5)
    changeBanner:::5 (Line 6)
    changeBanner:::6 (Line 7) **
    changeBanner:::7 (Line 8)
    changeBanner:::8 (Line 9) - FAILURE

Figure 3.4: Abridged execution trace for the running example showing the two sequences and the relevant sequence.
Each trace record is appended with either a marker or the line number relative to the function. Numbers in parentheses refer to the line numbers relative to the entire JavaScript file. root refers to code outside a function. The line marked with a (**) is the direct DOM access, and the goal of this design is to correctly identify this line as the direct DOM access.

In the running example, the relevant sequence is initially set to the one corresponding to the timeout event and consists of (Line 2, Line 3, Line 5, Line 6, Line 7, Line 8, Line 9) (see Sequence 2 in Figure 3.4). Because the relevant sequence begins with an ASYNC marker, the sequence containing the asynchronous call (see Sequence 1 in Figure 3.4) is prepended to it to create the new, final relevant sequence. However, there are no more sequences left in the trace and the process terminates. Although in this example, the relevant sequence consists of all executed statements, this will not always be the case, especially in complex web applications where many events are triggered.

Once the relevant sequence has been found, the algorithm starts locating the direct DOM access within that sequence (box 6 in Figure 3.2). To do so, it analyzes the backward slice of the variable in the line marked with the ERROR marker, i.e., the line l such that l.failure = true. If the line l itself contains the direct DOM access, the process is halted and the line is identified as the direct DOM access. If not, a variable called null_var is introduced to keep track of the most recent variable to have held the null value.

The initial value of null_var is inferred from the error message contained in the ERROR marker. The message is typically of the form x is null, where x is the identifier of a variable; in this case, the initial value of null_var is set to the identifier x. The relevant sequence is traversed backward and null_var is updated based on the statement encountered:
1. If the statement is an assignment of the form null_var = new_var, null_var is set to the identifier of new_var.

2. If it is a return statement of the form return ret_var;, where the return value is assigned to the current null_var in the calling function, null_var is set to the identifier of ret_var.

3. If it is a function call of the form foo(..., arg_var, ...) where foo() is a function with arg_var as one of the values passed, and the current null_var is the parameter to which arg_var corresponds in the declaration of foo(), null_var is set to the identifier of arg_var.

If the line does not fall into any of the above three forms, it is ignored and the algorithm moves to the previous line. Note that although syntactically valid, an assignment of the form null_var = new_var1 op new_var2 op ..., where op is a binary operator, makes little semantic sense as these operations are not usually performed on DOM element nodes (for instance, it makes no sense to add two DOM element nodes together). Hence, it is assumed that such assignments will not appear in the JavaScript code. Therefore, at every statement in the code, null_var takes a unique value. In addition, this implies that there can only be one possible direct DOM access along the null propagation path.

The algorithm ends when new_var, ret_var, or arg_var is a call to a DOM access function. The line containing this DOM access is then identified as the direct DOM access.

In the example in Figure 3.1, the null_var is initialized to bannerToChange. The trace analyzer begins at Line 9 where the ERROR marker is placed; this is also the last line in the relevant sequence, as seen in Figure 3.4. Because this line does not contain any DOM access functions, the algorithm moves to the previous line in the relevant sequence, which is Line 8. It then determines that Line 8 does not take on any of the above three forms and moves to Line 7. The algorithm then determines that Line 7 is of the first form listed above.
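The three rules above can be sketched as a backward pass over pre-classified statement records. The field names here (kind, lhs, rhsIsDomAccess, and so on) are assumptions for illustration; the real analysis works on the collected trace together with the parsed source code.

```javascript
// Backward traversal of the relevant sequence, following the null value
// from the failing line back toward the DOM access that produced it.
function findDirectDomAccess(relevantSequence, errorVar) {
  let nullVar = errorVar; // from the "x is null" text in the ERROR marker
  for (let i = relevantSequence.length - 1; i >= 0; i--) {
    const s = relevantSequence[i];
    if (s.kind === "assign" && s.lhs === nullVar) {
      // Rule 1: null_var = new_var; terminate if new_var is a DOM access call
      if (s.rhsIsDomAccess) return s.line;
      nullVar = s.rhs;
    } else if (s.kind === "return" && s.assignsTo === nullVar) {
      // Rule 2: return ret_var; whose value lands in null_var at the call site
      if (s.retIsDomAccess) return s.line;
      nullVar = s.retVar;
    } else if (s.kind === "call" && s.param === nullVar) {
      // Rule 3: foo(..., arg_var, ...) where null_var is the matching parameter
      nullVar = s.argVar;
    }
    // Any other statement is ignored; move to the previous line.
  }
  return null; // no direct DOM access found in this sequence
}

// Abridged relevant sequence for the running example (Lines 7-9 only):
const seq = [
  { line: 7, kind: "assign", lhs: "bannerToChange", rhsIsDomAccess: true },
  { line: 8, kind: "other" },
  { line: 9, kind: "other" }, // ERROR marker: "bannerToChange is null"
];
findDirectDomAccess(seq, "bannerToChange"); // → 7, the direct DOM access
```

Running it on the abridged sequence returns line 7, matching the walkthrough of the running example.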
It checks the new_var expression and finds that it is a DOM access function. Therefore, the algorithm terminates and identifies Line 7 as the direct DOM access.

3.4.4 Support for Challenging Cases

As explained in Section 3.2.3, programmers typically use features of the JavaScript language that complicate the process of analyzing JavaScript code. This subsection describes how the approach was extended to handle these features.

Eval

As described in Section 3.4.2, the approach instruments each line of JavaScript code to retrieve three pieces of information, namely the containing function, the line number, and an array of the names and values of all in-scope variables. The function that is responsible for adding this information to the execution trace is as follows, where the parameters correspond to the retrieved information.

send(functionName, lineNo, variableArray)    (3.2)

The send() function is included prior to every line of JavaScript code, which is useful for retrieving trace records for statically loaded code; however, the send() function does not collect trace records for JavaScript code generated through eval.

A naïve approach for extending the approach to handle eval would be to simply add a call to send() prior to every line in the string passed to the eval call. The problem with this approach is that the eval parameter is not necessarily a string literal; hence, its value may not be known until the eval call is made at runtime. To make the approach more general, every call to eval in the JavaScript code is replaced with a call to a wrapper function called processEval(). This function first evaluates the string value of the expression passed to eval.
Thereafter, the function parses this string value and adds a call to the send() function prior to each expression statement in the parsed string; this generates a new string, which comprises the original parameter to eval, but with a call to send() prior to each statement. Finally, this new string is passed to eval in order for the corresponding code to execute. The approach described is illustrated in Figure 3.5. Note that the string value passed to eval can always be resolved, since the string is processed dynamically at runtime.

    str = "var a; a = 1138;"
    eval(str);
        |  Modify eval call
        v
    processEval(str);
        |  Evaluate string
        v
    "var a; a = 1138;"
        |  Add send() calls
        v
    send(...); var a;
    send(...); a = 1138;

Figure 3.5: Example illustrating the approach for supporting eval.

New Variables. Note that it is possible for new variables to be declared in eval code. Hence, the approach needs a way to update the array of variables that is passed to the send() function. To do this, an array object is created at the beginning of every function in the JavaScript code. The array is initialized with the names and scopes of all the variables declared within the function, and is updated with new variables whenever a call to processEval() is made.

Since the string value passed to eval is parsed separately from the rest of the JavaScript code (i.e., the code outside eval), processEval() may inaccurately label the scope of a variable defined in the eval code as "global". In order to make sure the variables declared in eval are marked with the correct scope (i.e., local or global), a scope marker is passed to processEval() as a parameter. This scope marker contains the value root if the eval call is made outside a function; otherwise, the scope marker is assigned the name of the function. Thus, if the scope marker has value root, the "global" markings are retained for the new variables; otherwise, the "global" markings are changed to "local".
This ensures that variable scopes are accurately recorded.

Anonymous Functions

The initial approach relied on the names of the functions containing the line numbers to determine the dynamic backward slice. More specifically, during the trace collection phase, the technique records the name of the function, which is then included in the corresponding trace record. During the trace analysis phase, the technique then fetches the name of the function from the trace record so that it knows where to find the line of JavaScript code that needs to be analyzed.

Unfortunately, this approach does not work for anonymous functions, since they are not given a name. To account for this limitation, the trace collection scheme was modified so that it assigns a unique name for every anonymous function encountered during the crawling. In particular, each anonymous function is assigned a name of the form anonymous-file-script-line, where file is the file name; script is the index of the script tag containing the function (i.e., the order that the script tag appears in the file); and line is the line number of the function relative to the script tag in which it is defined. Note that the script tag is only applicable to JavaScript code embedded in .html files. If the code is included in a .js file, script is simply assigned 0.

In the trace analysis phase, if a trace whose corresponding function name is of the form anonymous-file-script-line is encountered, then the trace must correspond to a line of code located in an anonymous function. The location of the line of code is determined by taking the file, script, and line portions of the function name, and the line of code is fetched once found.

Minified Code

In order to handle minified code, the approach first "beautifies" (i.e., unminifies) this code. The trace collection and trace analysis phases will then both proceed as before, but this time, operating on the beautified version of the code.
The problem is that by operating on the beautified version, the approach will output the line number of the direct DOM access in the beautified version; since this beautified version is transparent to the developer, this line number will not be very meaningful.

Therefore, the challenge, in this case, is in mapping every line number in the beautified version with the column number in the minified version. In most cases, this mapping is achieved by performing a string match with the minified version, and identifying the starting column of the matching string. However, this approach will not always work because of the possibility of identical lines; for instance, if the running example were originally minified, and Figure 3.1 is the beautified version, then lines 3 and 13 – which are identical – will lead to multiple matches with the minified version. To account for this possibility, a regular expression is used to identify all the identical lines in the beautified version. The identical lines in the beautified version are then sequentially assigned an index, and a regular expression is used to find all the matches in the minified version. In this case, the line with the nth index is mapped to the column where the nth match is located. This is illustrated in Figure 3.6.

    Beautified:                 Minified:
    x += 5;      (Index 0)
    if (y > 0)         ---->    ...x += 5;if (y > 0) x += 5;...
       x += 5;   (Index 1)

Figure 3.6: Example illustrating the mapping from the beautified version to the minified version, in the approach for supporting minified code.

3.4.5 Assumptions

The described approach makes a few simplifying assumptions, listed below. In the evaluation described in Section 3.6, the correctness of the approach will be assessed on various open-source web applications, thus evaluating the reasonableness of these assumptions in the real world.

1. The JavaScript error is manifested in a null exception, where the null value originated from a call to a DOM access function.

2. There are no calls to recursive functions in the relevant sequence.
More specifically, the approach relies on the (name, file, script) tuple – where name is either the function name or, in the case of anonymous functions, some uniquely assigned name – to distinguish functions from each other. Since traces that point to a recursive function map to the same tuple, the approach cannot distinguish between calls to the same line from different recursion levels.

3. There are no object property accesses in the null propagation path. In other words, the approach assumes that null_var will only be a single identifier, and not a series of identifiers connected by the dot operator (e.g., this.x, etc.)

3.5 Tool Implementation

The approach described in Section 3.4 has been implemented in an automated tool called AUTOFLOX17 using the Java programming language. In addition, a number of existing tools are used to assist in the trace collection phase, including RHINO [119] for parsing and instrumenting the JavaScript code, and jsbeautifier [99] for beautifying minified code.

AUTOFLOX has been implemented in two different interfaces.

CRAWLJAX Interface. In the first interface, AUTOFLOX prompts the user for the URL of the web application containing the fault, and crawls this application to perform trace collection. Here, the CRAWLJAX [110] tool is used to systematically crawl the web application and trigger the execution of JavaScript code corresponding to user events. Other tools such as WaRR [10], Mugshot [111], and Selenium [75] can aid in the reproduction phase. However, those tools require manually or programmatically interacting with the web application at hand. Thus, CRAWLJAX was used because of the level of automation and flexibility it provides.

Prior to crawling the web application, the AUTOFLOX user can specify which elements in the web application the crawler should examine during the crawling process (otherwise the default settings are used).
These elements should be chosen so that the JavaScript error is highly likely to be reproduced.18 In this mode, only one fault can be localized at a time.

Eclipse Interface. In the second interface, AUTOFLOX runs as an Eclipse IDE [44] plugin. Here, the programmer can develop her web application project on Eclipse; with the project files open, she can subsequently click the "Run AUTOFLOX" button to run the tool. Doing so will open the Firefox web browser, which allows the user to replicate the fault by either interacting with the application, or running a test case (e.g., a Selenium test case). Where applicable, AUTOFLOX will then output the direct DOM access each time a null exception is thrown. Note that in this interface, AUTOFLOX is able to localize multiple faults by assigning a unique ID to each exception encountered.

17 While non-deterministic errors can be localized with AUTOFLOX, they may require multiple runs to reproduce the error (i.e., until the error appears).

The JavaScript code instrumentation and tracing technique used in the proposed approach is based on an extension of the INVARSCOPE [63] plugin to CRAWLJAX. The following modifications were made to INVARSCOPE in order to facilitate the trace collection process:

1. While the original INVARSCOPE tool only collects traces at the function entry and exit points, the modified version collects traces at every line of JavaScript code to ensure that the complete execution history can be analyzed in the trace analysis phase.

2. The original INVARSCOPE does not place information on the scope of each variable in the trace; thus, it has been modified to retrieve this information and include it in the trace.

3. The modifications allow asynchronous function calls to be overridden, and to place extra instrumentation at the beginning of each function to keep track of asynchronous calls (i.e., to write the ASYNCCALL and ASYNC markers in the trace).
4. Finally, Try-Catch handlers are placed around each function call in the JavaScript code in order to catch exceptions and write ERROR markers to the trace in the event of an exception.

Note that the tool allows the user to exclude specific JavaScript files from being instrumented. This can speed up the trace collection process, especially if the user is certain that the code in those files does not contain the direct DOM access.

Finally, the trace analysis phase has also been added as a part of the AUTOFLOX tool implementation, and requires no other external tools.

3.6 Empirical Evaluation

3.6.1 Goals and Research Questions

We conducted an empirical study to evaluate the accuracy and real-world relevance of the proposed fault localization approach. The research questions that are answered in the evaluation are as follows:

RQ1: What is the fault localization accuracy of AUTOFLOX? Are the implementation assumptions reasonable?

RQ2: Is AUTOFLOX capable of localizing bugs from real-world web applications?

RQ3: What is the performance overhead of AUTOFLOX on real-world web applications?

3.6.2 Methodology

The subsections that follow address each of the above questions. An overview of the evaluation methodology used to answer each research question is shown below.

To answer RQ1, AUTOFLOX is run on six open-source web applications and three production websites. DOM-related JavaScript faults are injected into the applications and AUTOFLOX is run to localize the direct DOM accesses corresponding to the faults.

To address RQ2, AUTOFLOX is subjected to 20 bugs (which satisfy the fault model) that have previously been observed and reported for seven open-source web applications. Most of these bugs come from the bug report study in Chapter 2.

The performance (RQ3) is measured by calculating the overhead incurred by the instrumentation and the time it takes for the tool to find the direct DOM access.

Note that the experiments were performed on an Ubuntu 12.04 platform using the Firefox v.
33.0.2 web browser. The machine used was a 2.66 GHz Intel Core 2 Duo, with 4 GB of RAM.

3.6.3 Accuracy of AUTOFLOX

To answer RQ1, a fault injection experiment was performed on six open-source web applications, shown in Table 3.1. As seen in this table, the applications consist of thousands of lines of JavaScript code each. Fault injection was used to establish the ground truth for measurement of the accuracy of AUTOFLOX. However, the fault injection process was not automated. Rather, a search was first made for calls to DOM access functions – either from the DOM API or from popular JavaScript libraries – that return null, such as getElementById(), getAttribute() and $(). The faults were then manually injected by mutating the parameter of the DOM access function into some garbage value; this parameter mutation will ensure that the call to the function will return a null value, thereby leading to a null exception in a later usage, and emulating DOM-related faults. In order to demonstrate the effect of adding support for anonymous functions, eval, and minified code, the fault injection experiment was run twice – once with these new modules enabled, and once with the modules disabled.

Only one mutation is performed in each run of the application to ensure controllability. For each injection, the direct DOM access is the mutated line of JavaScript code. Thus, the goal is for AUTOFLOX to successfully identify this mutated line as the direct DOM access, based on the message printed due to the exception.

Furthermore, localization was performed on injected faults rather than actual faults because no known code-terminating DOM-related faults existed in these web applications at the time of the experiment. However, AUTOFLOX is also used to localize real faults that appear in seven other web applications, which is described in further detail in Section 3.6.4.

Table 3.1: Results of the experiment on open-source web applications, assessing the accuracy of AUTOFLOX.

    JavaScript Web Application | Lines of JS code | # of mutations | # of direct DOM accesses identified | eval Support Increase | Anon. Support Increase | Percentage identified
    TASKFREAK        | 3044  | 39 | 39 (38) | +1 | –   | 100% (97.4%)
    TUDU             | 11653 | 9  | 9 (0)   | –  | +9  | 100% (0%)
    WORDPRESS        | 8366  | 17 | 14 (9)  | –  | +5  | 82.4% (52.9%)
    CHATJAVASCRIPT   | 1372  | 10 | 10 (10) | –  | –   | 100% (100%)
    JSSCRAMBLE       | 131   | 6  | 6 (6)   | –  | –   | 100% (100%)
    JS TODO          | 241   | 2  | 2 (1)   | –  | +1  | 100% (50%)
    OVERALL          |       | 83 | 80 (64) | +1 | +15 | 96.4% (77.1%)

Table 3.1 shows the results of the experiments; the results for the case where the new modules are disabled are shown in parentheses. As shown in the table, with the new modules enabled, AUTOFLOX was able to identify the direct DOM access for all mutations performed in five of the six applications, garnering 100% accuracy in these applications; when all six applications are considered, the overall accuracy was 96.4%. In contrast, when the new modules are disabled, only two of the applications (CHATJAVASCRIPT and JSSCRAMBLE) had perfect accuracies, and the overall accuracy was significantly lower, at 77.1%.

Taking a closer look at the unsuccessful cases when the new modules are disabled, it was found that AUTOFLOX was not able to accurately pinpoint the direct DOM access because in these cases, the dynamic backward slice included lines from anonymous function code and eval code. In particular, 15 of the unsuccessful cases resulted from the presence of anonymous function code, while 4 of the unsuccessful cases resulted from the presence of eval code. This result demonstrates that these features are commonly used in JavaScript code, and it is therefore important to add support for them, as has been done.

For the case where the new modules are enabled, the only application for which AUTOFLOX had imperfect accuracy was WORDPRESS, where it failed to detect three direct DOM access lines; in all three cases, AUTOFLOX generated an error message stating that the direct DOM access could not be found.
Further analysis of the JavaScript code in WORDPRESS revealed that in these three unsuccessful cases, the dynamic backward slice included calls to the setTimeout() method, where the first parameter passed to the method is a function literal; this is currently not supported by AUTOFLOX.19 Note, however, that this is an implementation issue which does not fundamentally limit the design; one possible way to overcome this problem is by integrating the parameter of setTimeout() as part of the eval-handling module.

Overall, this experiment demonstrates that AUTOFLOX has an accuracy of 96.4% across the six open-source web applications. Note that AUTOFLOX had no false positives, i.e., there were no cases where the tool incorrectly localized a fault or said that a fault had been localized when it had not.

19 Note that in the running example, the setTimeout() call is passed a function identifier, which is different from a function literal.

Table 3.2: Results of the experiment on production websites, assessing the robustness of AUTOFLOX (in particular, how well it works in production settings).

Production    Total Number   Number of Direct DOM   Anonymous Support   Minified Support
Website       of Faults      Accesses Identified    Increase            Increase
              1              1 (0)                  +1                  –
Hacker News   2              2 (2)                  –                   –
W3C School    1              1 (0)                  –                   +1
OVERALL       4              4 (2)                  +1                  +1

Production websites: AUTOFLOX was also used to localize faults on three production websites, listed in Table 3.2. For these websites, the faults are injected in the homepage, in a way similar to what was done in the fault injection experiment described earlier.20 As before, the experiment was run twice, enabling the new modules in one case, and disabling them in the other.

As Table 3.2 shows, AUTOFLOX was able to identify all four of the direct DOM accesses with the new modules enabled. In contrast, with the new modules disabled (whose results are shown in parentheses in Table 3.2), the tool only identified two of the four direct DOM accesses.
One of the unsuccessful cases resulted from the presence of an anonymous function, while the other unsuccessful case (W3C School) resulted from the presence of minified code (which is common practice in many production websites).

The overall implication of this study is that the assumptions made by AUTOFLOX are reasonable, as they were followed both in the open-source applications and in the production websites. Further, the new features added to AUTOFLOX significantly boosted its accuracy. Later, in Section 3.7, the broader implications of the assumptions made by AUTOFLOX are discussed.

20 Since we did not have write access to the JavaScript source code for these websites, the mutation was performed by manually modifying the code intercepted by the proxy.

3.6.4 Real Bugs

In order to assess how well AUTOFLOX works on actual bugs that have appeared in real-world web applications, we collected 19 bug reports from the applications
Note that finding the direct DOM access is non-trivial for some ofthese bugs. For example, one of the bugs in Moodle (MD2) had a dynamic back-ward slice that spanned multiple lines within the same function, and in fact thevariable to which the DOM element is being assigned is constantly being reusedin that function to refer to other elements. In addition, the dynamic backward slicefor a bug in WordPress (WP1) spanned over 40 lines, spread out across multiplefunctions; manually tracing through these lines and functions can evidently be atime-consuming process. This demonstrates the usefulness of AUTOFLOX, as wellas its robustness, as it is capable of localizing real bugs in real web applications.74Table 3.3: Web applications and libraries in which the real bugs to which AUTOFLOX is subjected appear.Application/Library Number of Bugs Bug Identified by Backward Slice Number of FunctionsIdentifier AUTOFLOX? Length (LOC) in Backward SliceJoomla 3 JM1 3 2 1JM2 3 2 1JM3 3 8 2Moodle 6 MD1 3 2 1MD2 3 5 1MD3 3 7 2MD4 3 8 2MD5 3 2 1MD6 3 1 1MooTools 2 MT1 3 1 1MT2 3 1 1Prototype 1 PT1 3 1 1Tumblr 1 TB1 3 1 1WikiMedia 4 WM1 3 2 1WM2 3 1 1WM3 3 2 1WM4 3 1 1WordPress 3 WP1 3 44 4WP2 3 5 1WP3 3 10 275Table 3.4: Performance resultsProduction Trace Collection TotalWebsite Overhead Time (seconds) 63.3% 50.3Hacker News 30.9% 28.8W3C School 119.3% 56.8Tumblr 35.0% PerformanceThe performance overhead of AUTOFLOX is reported in this section. The mea-surements are performed on the production websites because production code ismore complex than development code (such as the ones in the open-source webapplications tested above), and hence incurs higher performance overheads. Thefollowing metrics are measured: (1) performance overhead due to instrumentationin the trace collection phase, and (2) time taken by the trace analyzer to find thedirect DOM access. 
To measure (1), the production websites are crawled using CRAWLJAX both with instrumentation and without instrumentation; the baseline is the case where the web application is run only with CRAWLJAX. For measuring (2), AUTOFLOX was run on the collected trace. Note that the Eclipse interface experiences similar overheads.

Table 3.4 shows the performance measurements. As the table shows, the overhead incurred by the trace collection phase (average of three runs) ranges from 30.9% in Hacker News to 119.3% in W3C School. Also, on average, the trace analysis phase ran for 0.1 s in all four websites. Note that AUTOFLOX's trace collection module is only intended to be turned on when a fault needs to be localized – when interacting with a website as normal, the module will be off; hence, the high overheads in some websites (e.g., W3C School) are not expected to be problematic. Indeed, AUTOFLOX does not run for more than a minute in any of the websites, from trace collection to fault localization.

3.7 Discussion

Some issues relating to the limitations of AUTOFLOX and some threats to the validity of the evaluation are now discussed.

3.7.1 Limitations

Currently, AUTOFLOX requires the user to specify the elements that will be clicked during the web application run to replicate the failure. This process can be tedious for the programmer if she is not aware of all the DOM elements (and their corresponding IDs) present in the web application, and will often require the programmer to search for these elements in the source code of the web application.
The Eclipse plugin version of AUTOFLOX mitigates this problem to a certain extent, by asking the user to replicate the failure by manually interacting with the web application; however, doing this for a large set of bugs may be tedious, and ways to automate this process without sacrificing accuracy are currently being explored.

One way to simplify the above task and effectively automate the process of identifying all the DOM IDs is to do a preliminary run of the web application that detects all the DOM elements — where all elements are considered clickable — and present this list of DOM elements to the user. However, this approach would have the disadvantage of having to run the web application multiple times, which would slow down the fault localization process. In addition, this approach may not be able to detect DOM elements created dynamically by the JavaScript code if only a subset of the web application is crawled.

As seen from the accuracy results in the evaluation, although AUTOFLOX is capable of handling calls to eval (or eval-like functions such as setTimeout) where a string (or a concatenation of strings) is passed as the parameter, it currently does not support the case where a function literal is passed as a parameter to this function. Based on the applications that were evaluated, passing function literals to eval-like functions does not seem to be common practice among developers. Most parameters to setTimeout, for instance, are in the form of function identifiers, similar to the running example.

3.7.2 Threats to Validity

An external threat to the validity of the evaluation is that only a limited number of web applications are considered to assess the correctness of AUTOFLOX. However, these applications have been chosen as they contain many lines of JavaScript code, thereby allowing multiple fault injections to be performed per application.

In terms of internal validity, a fault injection approach was used to emulate the DOM-related faults in the evaluation.
The threat here is that the faults injected may not be completely representative of the types of faults that happen in the real world. Nonetheless, the bug report study described in Chapter 2 provides supportive evidence that the bugs that were injected are prominent and must therefore be considered. Further, AUTOFLOX was also tested on real bugs in one of the experiments, demonstrating its applicability in more realistic settings.

Finally, while AUTOFLOX is openly available, and the fault injection experiment on the six open-source web applications is replicable, the experiment on production websites is not guaranteed to be replicable, as the source code of these websites may change over time, and we do not have access to prior versions of the websites.

3.8 Related Work

Here, related work is classified into two broad categories: web application reliability and fault localization.

3.8.1 Web Application Reliability

Web applications have been an active area of research for the past decade. The work described in this chapter focuses on reliability techniques that pertain to JavaScript-based web applications, which are a more recent phenomenon.

Static analysis. There have been numerous studies to find errors and vulnerabilities in web applications through static analysis [18, 64, 65, 186]. Because JavaScript is a difficult language to analyze statically, these techniques typically restrict themselves to a safe subset of the language. In particular, they do not model the DOM, or they oversimplify the DOM, which can lead to both false positives and false negatives. Jensen et al. [79] model the DOM as a set of abstract JavaScript objects. However, they acknowledge that there are substantial gaps in their static analysis, which can result in false positives. In contrast, the proposed technique is based on dynamic execution, and as a result, does not suffer from false positives.

Testing and replay.
Automated testing of JavaScript-based web applications is an active area of research [15, 109, 115, 137]. ATUSA [109] is an automated technique for enumerating the state space of a JavaScript-based web application and finding errors or invariant violations specified by the programmer. JSart [113] and DODOM [137] dynamically derive invariants for the JavaScript code and the DOM, respectively. Finally, MUTANDIS [114] determines the adequacy of JavaScript test cases using mutation testing. However, none of these techniques focus on fault localization. Alimadadi et al. recently introduced a program comprehension tool called CLEMATIS [8], which maps user events to JavaScript code; although this tool can help the developer narrow down the list of JavaScript lines to consider, it does not pinpoint a precise fault location, as AUTOFLOX does.

WaRR [10], Mugshot [111], and Jalangi [151] – among others [30, 178] – replay a web application's execution after a failure in order to reproduce the events that led to the failure. However, they do not provide any support for localizing the fault, and leave it to the programmer to do so. As shown in Section 3.2, this is often a challenging task.

Finally, tools such as Firefox's Firebug [70] plug-in exist to help JavaScript programmers debug their code. However, such tools are useful only for the bug identification phase of the debugging process, and not the fault localization phase.

3.8.2 Fault Localization

Fault localization techniques isolate the root cause of a fault based on the dynamic execution of the application. They can be classified into spectrum-based and slicing-based techniques.

Spectrum-based fault localization techniques [24, 154, 188] include Pinpoint [35], Tarantula [84], Whither [143], and MLNDebugger [188].
Additionally, MUSE [117] and FIFL [183] also perform spectrum-based fault localization based on injected mutants in traditional programs, focusing primarily on regression bugs. These techniques execute the application with multiple inputs and gather the dynamic execution profile of the application for each input. They assume that the executions are classified as success or failure, and look for differences in the profile between successful and failing runs. Based on the differences, they isolate the parts of the application that are likely responsible for the failure. However, spectrum-based techniques are difficult to adapt to web applications, as web applications are rarely deterministic, and hence these techniques may incur false positives. Also, it is not straightforward to classify a web application's execution as success or failure, as the results depend on its usage [42].

Slicing-based fault localization techniques have been proposed by Agrawal et al. [4] and Zhang et al. [185]. These techniques isolate the fault based on the dynamic backward slice of the faulty statement in the code. AUTOFLOX is similar to this body of work in that it also extracts the dynamic backward slice of the JavaScript statement that throws an exception. However, it differs in two ways. First, it focuses on errors in the DOM-JavaScript interaction. The DOM is unique to web applications, and hence the other fault localization techniques do not consider it. Second, JavaScript code is often executed asynchronously in response to events such as mouse clicks and timeouts, and does not follow a deterministic control flow (see Section 3.2.2 for more details).

Web fault localization. As a complementary tool to AUTOFLOX, we developed VEJOVIS [131], which is a JavaScript fault repair suggestion tool, and is described in more detail in Chapter 4. This tool is similar to AUTOFLOX in that it also targets DOM-related faults and uses a backward slicing approach.
However, unlike AUTOFLOX, which starts with the line of code that leads to the failure and tries to find the direct DOM access by examining the dynamic backward slice, VEJOVIS starts with the direct DOM access, and examines its parameters to see how they can be fixed to match the DOM.

To the best of our knowledge, the only papers apart from the current work that have explored fault localization in the context of web applications are those by Artzi et al. [14] and Samimi et al. [148]. Like AUTOFLOX, the goal of the tools proposed in these papers is to automatically localize web application faults, achieving high accuracies. However, their work differs from the current one in various aspects: (1) they focus on the server-side code, i.e., PHP, while the current work focuses on the client-side; and (2) they localize HTML validation errors, while the current work's proposed approach localizes JavaScript faults. In addition, Artzi et al. have opted for a spectrum-based approach based on Tarantula, while AUTOFLOX is a dynamic slicing-based approach. To the best of our knowledge, automated fault localization for JavaScript-based web applications has not been addressed in the literature.

3.9 Conclusions and Future Work

In this chapter, we introduced a fault localization approach for JavaScript-based web applications. The approach is based on dynamic slicing, and addresses the two main problems that inhibit JavaScript fault localization, namely asynchronous execution and DOM interactions. Here, the focus is on DOM-related JavaScript faults, which are the most prominent class of JavaScript faults. The proposed approach has been implemented as an automated tool, called AUTOFLOX, which is evaluated using six open-source web applications and four production websites. The results indicate that AUTOFLOX can successfully localize over 96% of the faults, with no false positives.

There are several ways in which the work outlined in this chapter will be extended.
First, the current work focuses on code-terminating JavaScript faults, i.e., faults that lead to an exception thrown by the web application. However, not all DOM-related faults belong to this category. The design will therefore be extended to include a more automated technique for localizing non-code-terminating JavaScript faults. In addition, the empirical evaluation will be extended to perform user studies of the AUTOFLOX tool, in order to measure its ease of use and efficacy in localizing faults. This is also an avenue for future work.

Chapter 4

Automatic JavaScript Fault Repair

This chapter describes our technique for automatically suggesting repairs for DOM-related faults.21 Our goal in this chapter is once again to answer RQ1B from Chapter 1.1, but this time, the focus is on its latter half (i.e., fault repair). We implemented our repair technique in a tool called VEJOVIS,22 which we describe below.

4.1 Introduction

JavaScript is used extensively in modern web applications to manipulate the contents of the webpage displayed on the browser and to retrieve information from the server by using HTTP requests. To alter the contents of the webpage, JavaScript code manipulates the DOM, as discussed in the previous chapters. In particular, through the use of DOM API methods, the JavaScript code is capable of retrieving elements from the DOM, as well as changing the DOM by modifying element properties, adding elements, and deleting elements.

Due to the dynamic nature of the JavaScript language and the interaction with the DOM, JavaScript-based web applications are prone to errors, and indeed, the results of our bug report study in Chapter 2 point to the prevalence, impact, and complexity of DOM-related JavaScript faults in web applications.
Therefore, in this chapter, our goal is to facilitate the process of fixing DOM-related JavaScript faults, by providing suggestions to the programmer during web application testing and debugging tasks. To this end, we first perform a study of real-world JavaScript faults to understand the common patterns in how programmers fix such faults. Then, based on these common fix patterns, we propose an automatic approach for suggesting repairs. In this chapter, we use the term repair to encompass both fixes and workarounds for the fault, similar to other related work [121, 148].

21 The main study in this chapter appeared at the International Conference on Software Engineering (ICSE 2014) [131].
22 VEJOVIS is named after the Roman god of healing.

Our approach starts from the wrong DOM API method/property, and uses a combination of static and dynamic analysis to identify the lines of code in the backward slice of the parameters or assignment values of DOM methods/properties. Once these lines are localized, it uses a string solver to find candidate replacement DOM elements, and propagates the candidate values along the backward slice to find the fix.

We implement our approach in an open-source tool called VEJOVIS. VEJOVIS is deployed on a web application after the occurrence and subsequent localization of a JavaScript fault [129]. It requires neither specifications/annotations from the programmer nor any changes to the JavaScript/DOM interaction model, and can hence be deployed on unmodified web applications.

Prior work on suggesting repairs for web application faults has focused on server-side code (e.g., PHP) [121, 148], including workarounds for web API calls [33]. Other work [80] automatically transforms unsafe eval calls in JavaScript code to safe alternatives. None of these techniques, however, deal with DOM-related JavaScript faults.
To the best of our knowledge, VEJOVIS is the first technique to automatically suggest repairs for DOM-related JavaScript faults in web applications.

We make the following contributions in this chapter:

• We categorize common fixes applied by web developers to DOM-related JavaScript faults, based on an analysis of 190 online bug reports. We find that fixes involving modifications of DOM method/property parameters or assignment values into valid replacement values (i.e., values consistent with the DOM) are the most common, followed by those involving validation of DOM elements before their use;

• Based on the above study, we present an algorithm for finding valid replacement parameters passed to DOM API methods, which can potentially be used to replace the original (and possibly erroneous) parameter used to retrieve elements from the DOM. The replacements are found based on the CSS selector grammar, and using information on the DOM state at the time the JavaScript code executed. The aim is to suggest replacements that are valid in the current DOM, and to suggest as few replacements as possible;

• We present an algorithm for suggesting repairs in the form of actionable messages to the programmer based on code context. The actionable messages contain detailed directions prompting the programmer to make modifications to the code so as to eliminate the "symptoms" observed during the program's failure run;

• We describe the implementation, called VEJOVIS, which integrates the previous two contributions. We evaluate our technique on 22 real-world JavaScript bugs from 11 web applications. In our case study, VEJOVIS was able to provide correct repair suggestions for 20 of the 22 bugs. Further, the correct fix was ranked first in the list of repairs for 13 of the 20 bugs.
We also found that limiting the suggestions to those that are within an edit distance of 5 relative to the original selector can decrease the number of suggestions in the other 7 bugs to 3 per bug, while reducing the number of correct fixes to 16 (from 20).

4.2 Background and Challenges

Client-side JavaScript is used primarily to interact with – i.e., to access, traverse, or manipulate – the DOM. In most modern web applications, these interactions are used to incrementally update the browser display with client-side state changes without initiating a page load. Note that this is different from what happens during URL-based page transitions, where the entire DOM is repopulated with a new HTML page from the server.

As we saw in Chapter 3, JavaScript provides DOM API methods and properties that allow direct and easy retrieval of DOM elements. For instance, elements can be accessed based on their tag name, ID, and class names, and the DOM can be traversed using the parentNode and childNodes properties. In addition, modern browsers provide APIs such as querySelector for retrieving DOM elements using patterns called CSS selectors. CSS selectors follow a well-defined grammar [167], and serve as a unified way of retrieving DOM elements; for example, retrieving a DIV DOM element with ID "news" translates to "DIV#news" in CSS selector syntax. Table 4.1 shows some of the commonly used components that make up a CSS selector.

Table 4.1: List of commonly used CSS selector components.

Component        Description
Tag Name         The name of the tag associated with an element. Examples: div, span, table, etc.
ID               The id associated with an element. This is prefixed with the # symbol. Example: If a div element has an id of "myID", a CSS selector that can retrieve this element is div#myID.
Class Name       The name of a class to which an element belongs. This is prefixed with a period. Example: If a span element belongs to the "myClass" class, a CSS selector that can retrieve this element is span.myClass.
Is Descendant    A space character indicating that the element described by the right selector is a descendant of the element described by the left selector. Example: To find all table elements belonging to class "myClass" that are descendants of a div element, we use the selector div table.myClass.
Is Child         The ">" character, which indicates that the element described by the right selector is a child of the element described by the left selector. Example: To find all tr elements that are children of the table element with id "myID", we use the selector table#myID > tr.
Is Next Sibling  The "+" character, which indicates that the element described by the right selector is the next sibling of the element described by the left selector. Example: To find all tr elements that follow another tr element, we use the selector tr + tr.

Once an element is retrieved using the CSS selector, JavaScript code can use the reference to that element to access its properties, add new or remove/modify existing properties, or add/remove elements to/from the DOM tree.

Running Example. Here, we describe the running example that we will be using throughout this chapter to simplify the description of our design. The running example is based on a bug in Drupal involving jQuery's Autopager [92] extension, which automatically appends new page content to a programmer-specified DOM element. A snippet of the simplified JavaScript code is shown in Figure 4.1.
1  function pagerSetup() {
2    var display = "catalog_view";
3    var content = "p.pages span";
4    appendTo(display, content);
5  }

7  function appendTo(display, content) {
8    var view_selector = "div#view-display-id-" + display;
9    var content_selector = view_selector + " > " + content;
10   var pageToAdd = "<div>New Content</div>";
11   var pages = $(content_selector);
12   var oldContent = pages[0].innerHTML;
13   pages[0].innerHTML = oldContent + pageToAdd;
14 }

Figure 4.1: JavaScript code of the running example.

In the pagerSetup() function, the programmer has set the display ID suffix to “catalog_view” (line 2), and the DOM element where the page is added as “p.pages span” (line 3). These inputs are passed to the appendTo() function, which sets up the full CSS selector describing where to add the new page through a series of string concatenations (lines 8-9). In this case, the full CSS selector ends up being “div#view-display-id-catalog_view > p.pages span”.

The above JavaScript code runs when the DOM state is as shown in Figure 4.2. Note that in this case, the CSS selector will not match any element in the DOM state. As a result, $() returns an empty set in line 11; hence, when retrieving the old content of the first matching element via innerHTML (line 12), an “undefined” exception is thrown. The undefined exception prevents the Autopager from successfully appending the contents of the new page (line 10) to the specified element in line 13.

For this particular bug, the fix applied by the programmer was to change the string literal “div#view-display-id-” to “div#view-id-” in line 8. This, in turn, changes the full CSS selector to “div#view-id-catalog_view > p.pages span”, which is valid in the DOM in Figure 4.2.

Challenges. The interaction between two separate languages (i.e., the DOM and the JavaScript code) makes web applications highly error-prone – something established in both our study of unhandled JavaScript exceptions [128] and our bug report study in Chapter 2.
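This error-proneness can be seen concretely in the running example by tracing the selector construction in isolation. buildSelector below is a hypothetical helper that mirrors the string concatenations in lines 8-9 of Figure 4.1; it is a sketch for illustration, not part of the application.

```javascript
// Mirrors the concatenations in appendTo() (Figure 4.1, hypothetical helper).
function buildSelector(prefix, display, content) {
  var view_selector = prefix + display;                    // line 8
  var content_selector = view_selector + " > " + content;  // line 9
  return content_selector;
}

// The buggy prefix produces a selector that matches nothing in the DOM of
// Figure 4.2, so $(content_selector) returns an empty set in line 11.
var buggy = buildSelector("div#view-display-id-", "catalog_view", "p.pages span");
console.log(buggy); // "div#view-display-id-catalog_view > p.pages span"
```

Nothing in the concatenation itself is invalid JavaScript; the fault only manifests when the resulting string is checked against the DOM, which is what makes such errors easy to introduce and hard to spot.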
Since most JavaScript faults are DOM-related as per the results of the latter study, the majority of JavaScript faults originate from errors that eventually propagate into the parameter or assignment value of a DOM method/property. The running example provides an example of a DOM-related fault, as the error present in the example eventually propagates into the DOM method $(), which retrieves DOM elements through CSS selectors.

[Figure 4.2: The DOM state during execution of the JavaScript code in the running example. For simplicity, only the elements under body are shown. The body contains a div with ID "view-id-catalog_view" and a div with ID "view-display-id-catalog_page" (but no div with ID "view-display-id-catalog_view"), whose descendants include p elements of class "pages", span elements, h1 elements, and div elements of class "container".]

DOM-related faults, by definition, involve the propagation of the error to a DOM method's parameter, or a DOM property's assignment value. Therefore, the fix likely involves altering the code responsible for setting up this parameter or assignment value. The challenge, of course, is in answering the following question: how should the code be modified to repair the fault? Answering this question requires knowledge of (1) the location in the code that needs to be altered; and (2) the specific modification that needs to be applied to that location. For the first task, the origins (i.e., the backward slice) of the parameter or assignment value must be traced. For the second task, the specific replacement parameter or replacement assignment value must be inferred, and the way in which this replacement should be incorporated into the code must be determined.

While these challenges are difficult and sometimes impossible to tackle with arbitrary method parameters or assignment values (because they require programmer intent to be inferred), parameters and assignment values to DOM methods/properties – as well as the DOM itself – are more structured.
Hence, for these kinds of parameters or assignment values, the problem of inferring programmer intent reduces to finding replacements that satisfy this structure. Therefore, we study common patterns in how programmers fix DOM-related faults, to prioritize the fix suggestions we infer. This study is presented in the next section.

4.3 Common Developer Fixes

To better understand how programmers implement fixes to DOM-related JavaScript faults, we examine 190 fixed bug reports representing DOM-related faults and analyze the developer fixes applied to them. Our goal is to answer the following research questions:

RQ1 (Fix Categories): What are the common fix types applied by programmers to fix DOM-related JavaScript faults?

RQ2 (Application of Fixes): What modifications do programmers make to JavaScript code to implement a fix and eliminate DOM-related faults?

The above questions will help us determine how programmers typically deal with DOM-related faults. This understanding will guide us in designing our automated repair algorithm.

4.3.1 Methodology

We perform our analysis on 190 fixed JavaScript bug reports from eight web applications and four JavaScript libraries (see Table 4.2). Note that these bug reports are a subset of the bug reports we explored in Chapter 2.
However, the analysis conducted here is new and is not part of that bug report study.

In the initial version of our bug report study, we analyzed a total of 317 fixed JavaScript bug reports; note that the conference paper on which this current chapter is based was written before we extended our bug report study from 317 to 502 bug reports, which is why we were only able to look at a subset of bugs (317 out of 502) from our study. Of these 317 bug reports, about 65%, or 206 of the bugs, were DOM-related JavaScript faults. Further, we found that in 92% of these DOM-related JavaScript faults (or 190 bugs), the fix involved a modification of the client-side JavaScript code. We consider only these 190 bugs in this study, as our goal is to find repairs for DOM-related JavaScript faults that involve the JavaScript code.

Table 4.2: List of applications used in our study of common fixes.

Application    Description
Moodle         Learning Management System
Joomla         Content Management System
WordPress      Blogging
Drupal         Content Management System
Roundcube      Webmail
WikiMedia      Wiki Software
TYPO3          Content Management System
TaskFreak      Task Organizer
jQuery         JavaScript Library
Prototype.js   JavaScript Library
MooTools       JavaScript Library
Ember.js       JavaScript Library

To answer the research questions, we perform a qualitative analysis of the fixes applied by the programmers to each bug report. To do so, we manually read the portions of each bug report documenting the fix applied (e.g., developer comments, discussions, initial report descriptions, fix descriptions, patches, etc.). Based on this analysis, we devise a classification scheme for the bug report fixes so that we can group the fixes into different, well-defined categories, to answer RQ1. Our analysis of the code patches and/or fix descriptions helps us answer RQ2.

4.3.2 Results

Fix Categories.
We found that the fixes that programmers apply to DOM-related JavaScript faults fall into the following categories.

Parameter Modification, where a value that is eventually used in the concatenation of a DOM method/property parameter or assignment value is modified. This is done either by directly modifying the value in the code, or by adding calls to modifier methods (e.g., adding a call to replace() so that the string value of a variable gets modified). This category makes up 27.2% of the fixes.

DOM Element Validation, where a check is added so that the value of a DOM element or its property is compared with an expected value before being used. This category makes up 25.7% of the fixes.

Method/Property Modification, where a call to a DOM API method (or property) is either added, removed, or modified in the JavaScript code. Here, modification refers to changing the method (or property) originally called, not the parameter (e.g., instead of calling getElementsByClassName, the method getElementsByTagName is called instead). This category makes up 24.6% of the fixes.

Major Refactoring, where significantly large portions of the JavaScript code are modified and restructured to implement the fix. This category makes up 10.5% of the fixes.

Other/Uncategorized, which make up 12% of the fixes.

As seen in the above fix categories, the most prominent categories are Parameter Modification and DOM Element Validation, which together make up over half (52.9%) of the fixes. Therefore, we focus on these categories in our work. Although we do not consider Method/Property Modifications in our repair approach, our algorithm can be adapted to include this class of errors, at the cost of increasing its complexity (see Section 4.7).

Application of Fixes. We next describe how programmers modify the JavaScript code to apply the fixes.
We discuss our findings for the three most prominent fix categories: Parameter Modification, DOM Element Validation, and Method/Property Modification.

Parameter Modification: We found that 67.3% of fixes belonging to the Parameter Modification fix category involve the modification of string values. The vast majority (around 70%) of these string value modifications were direct modifications of string literals in the JavaScript code. However, we also found cases where the string value modification was applied by adding a call to string modification methods such as replace().

We also analyzed the DOM methods/properties whose parameters are affected by the modified values. For string value modifications, the methods/properties involved in multiple bug report fixes are getElementById(), $() and jQuery(); together, fixes involving these methods comprise 51.4% of all string value modifications. For non-string value modifications, fixes involved modification of the numerical values assigned to elements' style properties, particularly their alignment and scroll position.

DOM Element Validation: 75.5% of fixes belonging to this category are applied by simply wrapping the code using the pertinent DOM element within an if statement that performs the necessary validation (so that the code only executes if the check passes).
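This if-wrapping pattern can be sketched as follows; the function, the element ID, and the class name below are hypothetical illustrations, not drawn from any specific bug report in our study.

```javascript
// Hedged sketch of the most common DOM Element Validation fix:
// wrap the code that uses the retrieved element in an `if` statement,
// so it only executes when the element actually exists.
// The ID "status" and class "highlighted" are hypothetical.
function highlightStatus(doc) {
  var el = doc.getElementById("status"); // may return null
  if (el !== null && el !== undefined) {
    el.className = "highlighted"; // runs only if the check passes
    return true;
  }
  return false; // check failed: the DOM access is skipped entirely
}
```

Passing a stub object in place of the real document makes the failing path easy to exercise outside a browser.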
Other modifications include (1) adding a check before the DOM element is used so that the method returns if the check fails; (2) adding a check before the DOM element is used such that the value of the DOM element or its property is updated if the check fails; (3) encapsulating the code using the DOM element in an if-else statement so that a backup value can be used in case the check fails; and finally (4) encapsulating the code in a try-catch statement.

The most prevalent checks are null/undefined checks, i.e., the code has been modified to check if the DOM element is null or undefined before it is used; such checks constitute 38.8% of the fixes in the DOM Element Validation category.

Method/Property Modification: 53.2% of these fixes involve changing the DOM method or property being called/assigned; the rest involve either the removal of the method call or the property assignment (e.g., removing a setAttribute call that changes the class to which an element belongs), or the inclusion of such a call or assignment (e.g., adding a call to blur() to unfocus a particular DOM element). Of the fixes where the DOM method/property was changed, around 44% involve changing the event handler to which a function is being assigned (e.g., instead of assigning a particular method to onsubmit, it is assigned to onclick).

Summary of Findings. Our study shows that the most prominent fix categories are Parameter Modification and DOM Element Validation. Our analysis also shows the prevalence of string value modifications and null/undefined checks when applying fixes. In addition, most parameter modifications are for values eventually used in DOM methods that retrieve elements from the DOM, particularly the $(), jQuery() and getElementById() methods.
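To make the two Parameter Modification styles described earlier concrete, here is a hedged sketch; the selector strings and the function name are hypothetical, invented for illustration.

```javascript
// Style 1: direct modification of a string literal (the ~70% case).
// Before the fix (buggy, hypothetical ID):
//   var el = document.getElementById("vew-header");
// After the fix, the literal itself is corrected:
//   var el = document.getElementById("view-header");

// Style 2: adding a call to a modifier method such as replace(),
// so the string value is corrected before it reaches the DOM method.
function buildSelector(rawId) {
  // Replace whitespace, which is not valid inside an ID selector.
  var cleaned = rawId.replace(/\s+/g, "-");
  return "#" + cleaned;
}
```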
These results motivate our fault model choice in Section 4.4 as well as our choice of possible sickness classes in Section 4.5.2.

4.4 Fault Model

In this work, we focus on DOM API methods that retrieve an element from the DOM using CSS selectors, IDs, tag names, or class names, as we found that these were the common sources of mistakes made by programmers (Section 4.3). These DOM API methods include getElementById(), getElementsByTagName(), getElementsByClassName(), querySelector(), and querySelectorAll(). We also support DOM API wrapper methods made available by commonly used JavaScript libraries, including those in jQuery (e.g., $() and jQuery()); Prototype (e.g., $$() and $()); and tinyMCE (e.g., get()), among others. For simplicity, we will refer to all of these DOM API methods as the direct DOM access, which is in line with the terminology we used in Chapter 3.

We further focus on DOM-related faults that lead to code-terminating failures, which means the DOM API method returns null, undefined, or an empty set of elements, eventually leading to a null or an undefined exception (thereby terminating JavaScript execution). However, our design can also be extended to apply to DOM-related faults that lead to output-related failures, i.e., those that lead to incorrect output manifested on the DOM. Such faults would require the programmer to manually specify the direct DOM access. In contrast, with code-terminating DOM-related faults, the direct DOM access can be determined automatically using the AUTOFLOX tool proposed in Chapter 3. Thus, we focus on this category of faults in this work.

The running example introduced in Section 4.2 is an example of a fault that is encompassed by the fault model described above. Here, the direct DOM access is the call to the $() method in line 11, which returns an empty set of elements.
It is code-terminating because the fault leads to an undefined exception in line 12.

4.5 Approach

In this section, we describe our approach for assisting web developers in repairing DOM-related faults satisfying the fault model described in the previous section. Figure 4.3 shows a block diagram of our design, which consists of three main components: (1) the data collector; (2) the symptom analyzer; and (3) the treatment suggester. These components are described in Sections 4.5.1–4.5.3.

Figure 4.3: High-level block diagram of our design.

Our approach assumes the parameter (or the array index) of the direct DOM access is incorrect. This is inspired by the results presented in Section 4.3, which demonstrated the prevalence of Parameter Modification fixes. As such, our approach attempts to find valid replacements for the original parameter or array index, where a valid replacement is a parameter that matches at least one element in the DOM. Once the valid replacements are found, our approach analyzes the code context to determine what actionable message to suggest as a potential repair to the programmer.

4.5.1 Data Collector

The main purpose of the data collector module is to gather dynamic data that may reveal the symptoms present in the web application. In general, symptoms are defined as any indications of abnormalities in the intended behaviour of the program with regard to DOM accesses. Based on our fault model, we consider the following as symptoms in our design:

• Symptom 1: The direct DOM access is returning null, undefined, or an empty set of elements. This eventually leads to a “null” or “undefined” exception.

• Symptom 2: The index used to access an element in the list of elements returned by the direct DOM access is out of bounds.
This is only applicable to DOM methods that retrieve a list of elements (e.g., getElementsByTagName(), $(), etc.). This eventually leads to an “undefined” exception.

The data collector collects the direct DOM access' line number, and the name of the function containing it. This data is provided by the user (manually) or gathered automatically using a tool such as AUTOFLOX [134]. The data collector module also collects the following supplementary information, which can help infer the context under which a particular symptom is appearing:

• The dynamic execution trace of the JavaScript program, with each trace item containing the line number of the executed line, the name of the function containing the line, and the names and values of all in-scope variables at that line. It also includes the lines in the body of a loop, and a list of for loop iterator variables (if any). The data describing which lines are part of a loop are used by the treatment suggester to infer code context, to determine what actionable repair message to provide to the programmer; more details are in Section 4.5.3.

• The state of the DOM when the direct DOM access line is executed. For instance, in the running example, the DOM state in Figure 4.2 is retrieved. The DOM state will be used by the symptom analyzer to determine possible replacements for the direct DOM access parameter (if any); in particular, if the direct DOM access is returning null or undefined (i.e., Symptom 1), this means that the parameter to the direct DOM access does not correspond to any element in the current DOM state, so our technique can look at the current DOM state to see if there are any reasonable replacements that do match an element (or a set of elements) in the DOM.

4.5.2 Analyzing Symptoms

The symptom analyzer (Figure 4.3, box b) uses the data gathered by the data collector to come up with a list of possible sicknesses that the web application may have.
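As a concrete illustration before turning to the sickness classes, the two symptoms from Section 4.5.1 can be recognized mechanically from the value returned by the direct DOM access; the following simplified sketch is our own illustration, not code from VEJOVIS itself.

```javascript
// Classify the outcome of a direct DOM access into the two symptoms.
// `result` is the value the DOM method returned; `index` is the array
// index applied to the returned list, if any.
function classifySymptom(result, index) {
  if (result === null || result === undefined || result.length === 0) {
    return "symptom-1"; // null/undefined/empty set: exception follows
  }
  if (typeof index === "number" && typeof result.length === "number" &&
      (index < 0 || index >= result.length)) {
    return "symptom-2"; // out-of-bounds index into the returned list
  }
  return "none";
}
```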
Each possible sickness belongs to one of the following classes:

• String: A [variable | expression | string literal] has a string value of X, but it should probably have string value Y. This sickness triggers Symptom 1.

• Index: An array index has a numerical value of X, but it should fall within the allowed range [Y–Z]. This sickness triggers Symptom 2.

• Null/Undefined: A line of code X accessing a property/method of the DOM element returned by the direct DOM access should not execute if the DOM element is [null | undefined]. This sickness can trigger both Symptoms 1 and 2.

These classes are based on the results of our study of common bug report fixes. In particular, the “String” and “Null/Undefined” classes account for Parameter Modification and DOM Element Validation fixes, respectively. The “Index” class is included because in some cases, an undefined exception occurs not because of retrieving the incorrect element, but because of using an out-of-bounds index on the returned array of DOM elements.

The symptom analyzer takes different actions depending on the symptom in Section 4.5.1, as follows:

1. String Replacement: Assume that the program suffers from Symptom 1. This implies that the string parameter being passed to the direct DOM access does not match any element in the DOM – i.e., the program may be suffering from the “String” sickness class, as described above. Our design will look for potential replacements for these parameters, where the replacements are determined based on the current DOM state. Each potential replacement represents a possible sickness belonging to the “String” class.

2. Index Replacement: Assume that the program suffers from Symptom 2. This implies that the program may be suffering from the “Index” sickness class, as described above. This step is only taken if the direct DOM access corresponds to a method that returns a set of DOM elements.
Our approach will determine the allowed range of indices, representing a possible sickness belonging to the “Index” class.

3. Null/Undefined Checks: By default, our design additionally assumes a possible sickness belonging to the “Null/Undefined” class.

Each of the above cases will be described in detail. Because CSS selectors provide a unified way of retrieving elements from the DOM, we will only describe how the possible sicknesses are determined for the case where the parameter to the direct DOM access is a CSS selector (as in the case of the running example).

Case 1: String Replacement. The main assumption here is that the string parameter passed to the direct DOM access is incorrect; we call this parameter the erroneous selector. Hence, the goal is to (1) look for potential replacement parameters that match an element (or a set of elements) in the current DOM state (i.e., are valid replacements), and (2) suggest only the most viable replacements so as to not overwhelm the programmer; our approach therefore assumes that the replacement will be relatively close to the original, erroneous selector (i.e., only one component of the original selector is assumed incorrect by any given replacement). Algorithm 1 shows the pseudocode for this step. The sub-steps are described below in more detail.

Dividing Components: The first step is to divide the erroneous selector into its constituent components, represented by C (line 1). In essence, C is an ordered set, where each element ci corresponds to a selector component (ci.comp); its matching component type (ci.type; see Table 4.1); and its level in the selector, where each level is separated by a white space or a > character (ci.level). The erroneous selector itself is retrieved from the direct DOM access (dda), which is input to the algorithm. For example, consider the erroneous selector in the running example: “div#view-display-id-catalog view > p.pages span”.
This selector contains the following components: (1) the tag name “div”; (2) the “has ID” identifier “#”; (3) the ID name “view-display-id-catalog view”; (4) a “>” character indicating that the next component is a child of the previous; (5) the tag name “p”; (6) the “has class” identifier “.” (i.e., a dot character); (7) the class name “pages”; (8) whitespace indicating that the next component is a descendant of the previous one; and (9) the tag name “span”.

Algorithm 1: Parameter Replacement
Input: trace: The dynamic execution trace
Input: dda: The direct DOM access
Input: dom: The current DOM state
Output: listOfPossibleSicknesses: A list of possible sicknesses

 1  C ← {c1, c2, ..., cN}
 2  GSS ← {(s1, l1), (s2, l2), ..., (sK, lK)}
 3  foreach ci ∈ C do
 4      LSSi ← match(ci, GSS)
 5  end
 6  VS ← ∅
 7  foreach ci ∈ C do
 8      PVE ← {dom.root}
 9      for j ← 0 to ci.level do
10          nextElems ← ∅
11          foreach e ∈ PVE do
12              all ← e.getElementsByTagName(“*”)
13              foreach f ∈ all do
14                  if f matches level j of erroneous selector then
15                      nextElems.add(f)
16                  end
17              end
18          end
19          PVE ← nextElems
20      end
21      foreach e ∈ PVE do
22          newElems ← ∅
23          if level after ci.level is the “descendant” then
24              newElems ← getAllDescendants(e)
25          end
26          else if level after ci.level is the “child” then
27              newElems ← getAllChildren(e)
28          end
29          else if level after ci.level is the “next sibling” then
30              newElems ← getNextSibling(e)
31          end
32          foreach f ∈ newElems do
33              if f has ci.type then
34                  newSelector ← dda.erroneousSelector.replace(ci.comp, ci.type of f)
35                  VS.add(newSelector)
36              end
37          end
38      end
39      foreach selector ∈ VS do
40          if e ← selector(dom) ∉ dom then
41              VS.remove(selector)
42          end
43      end
44  end
45  PR ← replacementsFinder(VS, LSS1, LSS2, ..., LSSN)
46  foreach rep ∈ PR do
47      possibleSickness ← craftPossibleSickness(rep)
48      listOfPossibleSicknesses.add(possibleSickness)
49  end

Finding the Global String Set: The next step is to determine the string set corresponding to each component (lines 2–5).
The string set refers to the list of locations, in the JavaScript code, of the origins of all the parts that make up a particular string value. For instance, consider the erroneous selector in the running example, whose final string value is “div#view-display-id-catalog view > p.pages span”. This entire string is made up of a concatenation of the following strings: (1) “div#view-display-id-” in Figure 4.1, line 8; (2) “catalog view” in line 2; (3) “ > ” in line 9; and (4) “p.pages span” in line 3.

The algorithm first determines the global string set, which refers to the string set of the entire erroneous selector; in Algorithm 1, this is represented by GSS (line 2). The global string set is found by recursively extracting the dynamic backward slice of each concatenated string value that makes up the erroneous selector (using the dynamic execution trace) until all the string literals that make up the erroneous selector have been included in the string set. Note that the slice extraction process is a dynamic one, and is hence precise. However, it may be unable to resolve the origin of every variable in the code, e.g., because a variable gets its value from an external XML file. Unresolved portions of the erroneous selector are left as “gaps” in the string set.

The GSS consists of an ordered set of tuples of the form (si, li), where si is a string value and li is the location in the JavaScript code where that value originated (i.e., line number and enclosing function). Each tuple represents an element in the string set.
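As an illustration, such a string set can be modelled as an ordered list of (value, location) records whose in-order concatenation yields the final selector value; the following sketch uses our own simplified representation, with the fragments and line numbers taken from the running example.

```javascript
// Simplified model of a global string set: each entry records a
// fragment of the final selector and the line where it originated.
// An entry with `line: undefined` would represent a "gap".
function concatFromStringSet(gss) {
  return gss.map(function (entry) { return entry.value; }).join("");
}

// The string set of the running example's erroneous selector:
var gss = [
  { value: "div#view-display-id-", line: 8 },
  { value: "catalog view", line: 2 },
  { value: " > ", line: 9 },
  { value: "p.pages span", line: 3 }
];
// concatFromStringSet(gss) yields
// "div#view-display-id-catalog view > p.pages span"
```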
In the running example, given the string set of the erroneous selector just described above, the ordered set of tuples will be as follows: {(“div#view-display-id-”, 8), (“catalog view”, 2), (“>”, 9), (“p.pages span”, 3)}.23 Note that a gap in the string set is likewise represented as a tuple; the string value si is retained, but the location li is left undefined, and a special variable is used to store the earliest expression from which the unresolved string value originated.

23 For simplicity, we omit the enclosing functions.

Finding the Local String Sets: Once the global string set is found, the local string set of each component – represented by LSSi – is inferred (lines 3–5). In essence, this procedure matches each erroneous selector component ci with the corresponding elements in the global string set (line 4). For example, consider the ID name component “view-display-id-catalog view” in the running example. If startIndex and endIndex refer to the index range of the characters from the global string set element that belong to the local string set, then the string set of this component is {((“div#view-display-id-”, 8), startIndex: 4, endIndex: 19), ((“catalog view”, 2), startIndex: 0, endIndex: 11)}.

Finding Valid Selectors: Lines 6–44 of Algorithm 1 look for valid selectors (VS) in the current DOM state. This portion of the algorithm iterates through each component ci of the erroneous selector and assumes that ci is incorrect; it then traverses the current DOM state's tree to see if it can find new CSS selectors (i.e., those in which the component assumed to be erroneous is replaced by a different value) that match an element in the current DOM state. This procedure is carried out for each component of the erroneous selector; hence, by the end of this procedure, each component will have a corresponding set of CSS selectors (which may be empty).

More precisely, to find the valid selectors, the algorithm first looks for possibly valid elements, represented by PVE (lines 8–20).
These are the elements that match the original selector up to and including the selector level ci.level, neglecting the component being assumed erroneous. For instance, suppose that in the running example, the tag component “p” of the erroneous selector is assumed to be incorrect by our design. This component is found in level 2 of the erroneous selector. Hence, our design traverses the DOM to look for elements that match the selector up to level 2, neglecting “p” – i.e., elements that match “div#view-display-id-catalog view > .pages”.

Once PVE is found, the algorithm (lines 21–38) checks if the element does indeed contain a corresponding replacement for the component that was assumed to be incorrect (e.g., if an ID is being replaced, the element must have an ID) (lines 32–37). In our example, “p” – which is a tag component – was assumed incorrect, so the verification will pass for all elements in PVE because every element has a tag name. The algorithm also checks if the element contains any descendants, children, or siblings, depending on the structure of the erroneous selector (lines 22–31). Again, in the running example, the next level (level 3) of the erroneous selector must be the “descendant” of the first two levels, because of the whitespace between the level 2 components and the level 3 components; hence, the check will pass for an element if it contains any descendants. If both checks pass, the corresponding component is used to create a new selector; each new selector is stored in VS. Finally, for each new selector, a final verification step is carried out to ensure that the new selector is indeed valid in dom (lines 39–43).

In summary, for the running example, our design looks for matching selectors of the form “div#view-display-id-catalog view > <NEW-TAG>.pages span”. Similarly, if the ID component “view-display-id-catalog view” were assumed incorrect, the algorithm looks for matching selectors of the form “div#<NEW-ID> > p.pages span”.
In the latter case, two matching valid selectors are found: “div#view-id-catalog view > p.pages span” and “div#view-display-id-catalog page > p.pages span”.

Inferring Possible Replacements: To determine the possible sickness, our design determines if any element of the local string set of each component (LSS1, LSS2, ..., LSSN) can be replaced to match one of the valid selectors in VS. This is accomplished by the replacementsFinder() function (line 45). The basic idea is as follows: for each component string set element, assume that this element is incorrect, then determine if any of the valid selectors can provide a replacement string value for that element. We accomplish this matching with the valid selectors through the use of a string constraint solver (see Section 4.5.4).

Let us go back to the running example. Suppose the design is currently considering the “view-display-id-catalog view” component, whose local string set was found earlier. Also, as mentioned, two valid replacement selectors were found for this component. Our design goes through each element in the local string set to look for possible replacements. First, it assumes that the first string set element – namely ((“div#view-display-id-”, 8), startIndex: 4, endIndex: 19) – is incorrect; hence, it checks if any of the valid selectors is of the form “div#<NEW-STRING>catalog view > p.pages span” – i.e., the erroneous selector with the string “view-display-id-” replaced. In this case, the constraint solver will find one matching selector: “div#view-id-catalog view > p.pages span”. Next, our design will move on to the second local string set element and perform the same procedure to find the following matching selector: “div#view-display-id-catalog page > p.pages span”.

Case 2: Index Replacement. In this step, our design assumes that the index used to access the list of elements returned by the direct DOM access is incorrect.
To check whether this assumption holds, our approach records the size of the array returned by the direct DOM access; this is determined based on the value of an instrumented variable added to the JavaScript code to keep track of the size. The erroneous array index used, if any, is also recorded.

The erroneous array index is compared with the size to see if it falls within the allowed range of indices (i.e., [0–size-1]). If not, our approach will package the following as a possible sickness (belonging to the “Index” sickness class), to be added to the list of possible sicknesses: “An array index has a numerical value of X that does not fall within the range [0–Z]”; here, X is the erroneous array index, and Z is size-1.

Case 3: Null/Undefined Checks. By default, our design packages a possible sickness belonging to the “Null/Undefined” class to account for cases where the repair is a DOM Element Validation. In essence, this means the line of code must be wrapped in an if statement that checks whether the DOM element being used is null or undefined. If code termination was caused by a null exception (or undefined exception), the following is packaged and added to the list of possible sicknesses: “A line of code X accessing a property/method of the DOM element returned by the direct DOM access should (probably) not execute if the DOM element is null (or undefined)”.

4.5.3 Suggesting Treatments

Once the symptom analyzer has found a list of possible sicknesses, each of these possible sicknesses is examined by the treatment suggester (Figure 4.3, box c). The goal of the treatment suggester is as follows: given a possible sickness, create an actionable repair message that would prompt the programmer to modify the JavaScript code in such a way that the symptom represented by the possible sickness would disappear. In order to accomplish this, the code context, as inferred from the supplementary information retrieved by the data collector, is analyzed. This module handles each possible sickness class separately, as described below.

String class. Possible sicknesses belonging to the “String” class require the string value X at some line in the JavaScript code to be replaced by another string value Y. If applied correctly, this would cause the parameter at the direct DOM access to be valid, so the direct DOM access would no longer output null, undefined, nor an empty set of elements (i.e., Symptom 1 disappears). As we discovered in Section 4.3.2, most Parameter Modification fixes are direct string literal replacements; hence, at first, it may seem straightforward to simply output a message prompting the programmer to directly perform the replacement. However, there are several cases that may make this suggestion invalid, for example:

1. The string value is not represented by a string literal, but rather, by a variable or an expression. Recall that when calculating the string set, gaps may exist in this string set due to string values originating from sources external to the JavaScript code, or due to values not originating from string literals. Hence, a simple “replace” message would be inappropriate to give out as a suggested repair;

2. The string value may be in a line that is part of a loop. Here, a “replace” message may also be inappropriate, since the replacement would affect other (possibly unrelated) iterations of the loop, thereby possibly causing unwanted side effects.

To account for these cases, before outputting a repair message, our approach first examines (a) the string set element type (i.e., is it a variable, expression, or string literal?), and (b) the location (i.e., is it inside a loop?). Through this analysis, the treatment suggester can provide a suggested repair message.
The algorithm essentially performs a “background check” on the code suffering from the bug to determine what message to output. For example, if our design finds that a string set element is in a line inside a loop, and this line executed multiple times, a message such as “replace at iteration” or “off by one” will be given. The complete list of messages is presented in Table 4.3.

Table 4.3: List of output messages.

Type                      Description
REPLACE                   Replace the string literal X with Y in line L
REPLACE AT ITERATION      Wrap line L in an if statement so that the string literal X instead has value Y at iteration I
OFF BY ONE AT BEGINNING   Change the lower bound of the for loop containing line L so that the original first iteration does not execute
OFF BY ONE AT END         Change the upper bound of the for loop containing line L so that the original last iteration does not execute
MODIFY UPPER BOUND        Change the upper bound of the for loop containing line L so that the loop only iterates up to iteration I (inclusive)
EXCLUDE ITERATION         Skip iteration I of the for loop containing line L by adding a ‘continue’
ENSURE                    Ensure that the string value at line L has value Y instead of X. This is a fallback message, given if a precise modification to the code cannot be inferred by the suggester. Thus, our suggester is conservative in that it only provides a particular suggestion if it is certain that the suggestion will lead to a correct replacement.

When the running example is subjected to the treatment suggester algorithm, the possible sicknesses found by the symptom analyzer will lead to two REPLACE messages being suggested, one of which is the fix described in Section 4.2: Replace the string literal “div#view-display-id-” with “div#view-id-” in line 8. The other message is a spurious suggestion: Replace the string literal “catalog view” with “catalog page” in line 2.

Index and Null/Undefined classes.
For the “Index” class, the suggestion is always as follows: Modify the array index in line L to some number within the range [0–Z]. For the “Null/Undefined” class, the suggestion depends on whether the exception was a null exception or an undefined exception. If the exception is a null exception, the following message is given: Wrap line L in an if statement that checks if expression E is null. Here, the expression E is inferred directly from the error message, which specifies which expression caused the null exception. An analogous message is given if the exception is “undefined”.

4.5.4 Implementation: Vejovis

We implemented our approach in a tool called VEJOVIS, which is freely available for download [126].

The data collector is implemented by instrumenting the JavaScript code using RHINO and running the instrumented application using CRAWLJAX [110]. For the symptom analyzer, we use the string constraint solver HAMPI [88] for replacementsFinder() (see Algorithm 1, line 45), which looks for viable replacements among the valid parameters found. The symptom analyzer treats the valid parameters found as defining the context-free grammar (CFG).

In keeping with our goal of providing as few suggestions as possible, VEJOVIS allows users to modify a parameter called the edit distance bound. The edit distance bound is a cutoff value that limits the suggested replacement strings to only those whose edit distance with respect to the original string is within the specified value. We use Berico's [26] implementation of the Damerau-Levenshtein algorithm to calculate the edit distance.

4.6 Evaluation

To evaluate the efficacy of VEJOVIS in suggesting repairs for DOM-related faults, we answer the following research questions:

RQ3 (Accuracy): What is the accuracy of VEJOVIS in suggesting a correct repair?

RQ4 (Performance): How quickly can VEJOVIS determine possible replacements? What is its performance overhead?

We perform a case study in which we run VEJOVIS on real-world bugs from eleven web applications.
To determine accuracy (RQ3), we measure both the precision and recall of our tool. To calculate the performance (RQ4), we compare the runtimes of VEJOVIS with and without instrumentation.

4.6.1 Subject Systems

The bugs to which we subject VEJOVIS come from eleven open-source web applications, also studied in Section 4.3; hence, these bugs represent real-world DOM-related faults that occurred in the subject systems. We choose two bug reports randomly from the set of bugs that satisfy our fault model, for each of the eleven web applications, for a total of 22 bugs. Note that TaskFreak is not included among the applications studied, as we only found 6 JavaScript bugs from that application, none of which fit the fault model described in Section 4.4. Descriptions of the bugs and their corresponding fixes (henceforth called the actual fixes) can be found online [126]. It took programmers an average of 47 days to repair these bugs after being triaged, indicating that they are not trivial to fix. We had to restrict the number of bugs to two per application, as the process of deploying the applications and replicating the bugs was time- and effort-intensive. In particular, most of the bugs were present in older versions of the web applications. This presented difficulties in installation and deployment, as some of these earlier versions are no longer supported.

4.6.2 Methodology

Accuracy. We measure accuracy based on both recall and precision. In the context of this experiment, recall refers to whether our tool was able to suggest the “correct fix” – that is, whether one of the suggestions provided by VEJOVIS matches the actual developer fix described in the corresponding bug report. Hence, in this case, recall is a binary metric (i.e., either 0% or 100%), because the actual fix either appears in the list of suggestions, or it does not. Note that in some cases, the suggested fix is not an exact match of the applied fix, but is semantically equivalent to it, and is considered a match.
Precision refers to the number of suggestions that match the actual fix divided by the total number of suggestions provided by VEJOVIS. Again, since there is only one matching fix, precision will be either 0 (if the correct fix is not suggested) or 1/#Suggestions. This metric is a measure of how well VEJOVIS prunes out irrelevant/incorrect fixes.

To measure the above metrics, we first replicated the bug, and ran VEJOVIS with the URL of the buggy application and the direct DOM access information (i.e., line number and enclosing function) as input; for the libraries, the bugs are replicated by using the test applications described in the bug reports. VEJOVIS outputs a list of suggestions for the bug, which we compare with the actual developer fix to see if there is a match. Based on this comparison, we calculated the recall and precision for that particular attempt. In our experimental setup, the suggestions are sorted according to the edit distance of the replacement string with respect to the original string, where replacements with smaller edit distances are ranked higher. Suggestions for “null” or “undefined” checks are placed between suggestions with edit distance 5 and those with edit distance 6. In the event of a tie, we assume the worst case, i.e., the correct fix is ranked lowest among the suggestions with the same edit distance.

To test our assumption that the replacement parameter closely resembles the original parameter, we control the edit distance bound (defined in Section 4.5.4) for VEJOVIS. We first run our experiments with an edit distance bound of infinity, which means the suggestions given do not have to be within any particular edit distance relative to the original string being replaced (i.e., no edit distance bound is assigned). Then, to observe how this bound affects VEJOVIS’ accuracy, we re-run our experiments with a smaller edit distance bound of 5. We choose the value 5 based on a pilot study.

Performance.
For each bug, we measure the performance overhead introduced by VEJOVIS’ instrumentation by comparing the corresponding web application with and without instrumentation. This evaluates the performance of the data collection phase of VEJOVIS. We also measure the time it takes for VEJOVIS to generate the repair suggestions. This evaluates the performance of the symptom analysis and treatment suggestion phases of VEJOVIS.

4.6.3 Results

Accuracy. Table 4.4 shows the results of our experiments when the edit distance bound is set to infinity, i.e., no bound is assigned (numbers outside parentheses). The “Accurate” column of Table 4.4 indicates, for each bug, whether the actual fix appears among the list of repairs suggested by VEJOVIS (i.e., the recall was 100%). As the results show, assigning no bound causes VEJOVIS to accurately suggest repairs for 20 out of the 22 bugs, for an overall recall of 91%. The only unsuccessful cases are the second bugs in Roundcube, where the correct replacement selector is “:focus:not(body)”, and jQuery, where the correct replacement selector is “[id=“nid”]”; VEJOVIS does not currently support these CSS selector syntaxes.

Table 4.4: Accuracy results, with edit distance bound set to infinity, i.e., no bound assigned. BR1 refers to the first bug report, and BR2, the second bug report (from each application). Data in parentheses are the results for when the edit distance bound is set to 5.

Application    Accurate?
               BR1      BR2      Precision BR1   Precision BR2
Drupal         ✓ (✗)    ✓ (✗)    3% (0%)         25% (0%)
Ember.js       ✓ (✓)    ✓ (✓)    50% (50%)       33% (100%)
Joomla         ✓ (✓)    ✓ (✓)    1% (25%)        1% (100%)
jQuery         ✓ (✓)    ✗ (✗)    1% (25%)        0% (0%)
Moodle         ✓ (✓)    ✓ (✓)    3% (33%)        3% (100%)
MooTools       ✓ (✗)    ✓ (✓)    50% (0%)        50% (50%)
Prototype      ✓ (✓)    ✓ (✓)    17% (50%)       50% (50%)
Roundcube      ✓ (✓)    ✗ (✗)    1% (25%)        0% (0%)
TYPO3          ✓ (✓)    ✓ (✓)    1% (100%)       100% (100%)
WikiMedia      ✓ (✗)    ✓ (✓)    4% (0%)         1% (100%)
WordPress      ✓ (✓)    ✓ (✓)    3% (7%)         1% (50%)

Note that in three of the successful cases, the repair suggestion does not exactly match the actual fix, but rather is equivalent to (or close to) the actual fix.

First, in the second TYPO3 bug, the actual fix documented in the bug report is to add a check to ensure that the NodeList valueObj, which is populated by a direct DOM access call to getElementsByName(), has a length greater than 0, thereby preventing the use of valueObj[0].value from throwing an undefined exception. VEJOVIS, in contrast, suggested an alternate but equivalent fix with no side effects, namely adding a check to see if the expression valueObj[0] is undefined before trying to access one of its properties.

Second, in both the first Moodle bug and the second Prototype bug, VEJOVIS provides the fallback “ENSURE” suggestion. In the Moodle bug, VEJOVIS suggests the following: Ensure the value of variable itemid is “id itemname” instead of “itemname”; this is because the string literal “itemname” originated from an anonymous function, which our implementation currently does not support, leaving a gap in the string set. Nonetheless, this simplifies the debugging task for the programmer, as it points her directly to the problem – i.e., the string “itemname”, located somewhere in the JavaScript code, needs to be changed to “id itemname”. Similarly, in the Prototype bug, VEJOVIS suggests the following: Ensure the expression id.replace(/[\.:]/g, ‘‘\\$0’’) has value “outer.div” instead of “outer\\$0div”.
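The behavior of the replace() call in this suggestion can be reproduced directly. In ECMAScript’s String.prototype.replace(), the substitution pattern “$&” stands for the matched text, while “$0” has no special meaning and is copied verbatim; the snippet below demonstrates this (the second call, using “$&”, is our own illustration of the contrast, not the fix VEJOVIS suggests):

```javascript
const id = 'outer.div';

// '\\$0' is the two-character sequence backslash + "$0". Since "$0" is
// not a recognized substitution pattern, each matched dot or colon is
// replaced by the literal text \$0:
console.log(id.replace(/[\.:]/g, '\\$0')); // outer\$0div

// '$&' is the substitution for the matched text, so this version instead
// prefixes each matched dot or colon with a backslash:
console.log(id.replace(/[\.:]/g, '\\$&')); // outer\.div
```

This is why VEJOVIS’ ENSURE message, which reports the observed value “outer\$0div”, directs the programmer’s attention to the replacement string passed to replace().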
Again, while VEJOVIS is not able to provide the exact fix, it points the programmer to the relevance of the replace() method in the fix. These results show that even in cases when VEJOVIS is unable to fully resolve the origins of the erroneous selector’s string values, it still provides meaningful suggestions that facilitate debugging and are hence useful to the programmer.

Among the successful cases, the average precision (“Precision” column of Table 4.4) is approximately 2%; on average, this translates to VEJOVIS providing 49 suggestions for each bug, with a maximum of 187 total suggestions for the first TYPO3 bug. The high number of suggestions motivated us to implement the simple ranking scheme based on edit distance, as described in Section 4.6.2.

Table 4.5: Rank of the correct fix when suggestions are sorted by edit distance. The denominator refers to the total number of suggestions. Top ranked suggestions are in bold.

Application    Rank (BR1)    Rank (BR2)
Drupal         31 / 40       01 / 04
Ember.js       01 / 02       01 / 03
Joomla         01 / 88       01 / 88
jQuery         02 / 108      –
Moodle         02 / 37       01 / 37
MooTools       02 / 02       01 / 02
Prototype      01 / 06       01 / 02
Roundcube      04 / 79       –
TYPO3          01 / 187      01 / 01
WikiMedia      06 / 24       01 / 71
WordPress      13 / 30       01 / 170

Table 4.5 shows, for each bug, the rank of the actual fix among the list of suggestions provided by VEJOVIS; only the cases where the actual fix appears among the list of suggestions are considered. As shown in the table, the correct fix appears as the first suggestion in 13 out of the 20 cases, and as the second suggestion in three more cases. In fact, for the WordPress bug, the correct fix is tied for first place among 13 suggestions; we listed its rank as 13 because we consider the worst case.
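The ranking scheme evaluated in Table 4.5 can be sketched as follows. This is a minimal JavaScript illustration with our own field and function names (not VEJOVIS internals): suggestions are ordered by edit distance, null/undefined-check suggestions are slotted between distances 5 and 6, and the rank of the correct fix is computed pessimistically so that ties count against it:

```javascript
// A suggestion is either a string replacement with a precomputed edit
// distance, or a null/undefined check. Checks are ranked as if they had
// edit distance 5.5, i.e., between the distance-5 and distance-6 groups.
function sortKey(s) {
  return s.kind === 'check' ? 5.5 : s.distance;
}

function rankSuggestions(suggestions) {
  return [...suggestions].sort((a, b) => sortKey(a) - sortKey(b));
}

// Worst-case rank: every suggestion whose key is less than or equal to
// the correct fix's key is counted, so ties count against the fix.
function worstCaseRank(suggestions, isCorrect) {
  const correct = suggestions.find(isCorrect);
  return suggestions.filter(s => sortKey(s) <= sortKey(correct)).length;
}

const suggestions = [
  { kind: 'replace', text: '.menu', distance: 2 },   // the correct fix
  { kind: 'check', text: 'wrap access in null check' },
  { kind: 'replace', text: '.menu-bar', distance: 6 },
  { kind: 'replace', text: '.menus', distance: 2 },  // tied with '.menu'
];

console.log(rankSuggestions(suggestions).map(s => s.text));
// → [ '.menu', '.menus', 'wrap access in null check', '.menu-bar' ]
console.log(worstCaseRank(suggestions, s => s.text === '.menu')); // → 2
```

With pessimistic tie-breaking, a correct fix tied with n−1 other suggestions at the same edit distance is reported at rank n, which is how the WordPress worst-case rank of 13 arises.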
Hence, despite providing a large number of suggestions on average when the edit distance bound is set to infinity, our simple ranking scheme based on edit distance ranked most of the actual fixes near the top of the list of suggestions.

As mentioned, the above results were obtained with an edit distance bound of infinity. To quantify the effects of using a finite bound, we re-ran the above accuracy experiment with an edit distance bound of 5. The results are shown in parentheses in Table 4.4. As the results show, assigning a bound of 5 decreases the number of successful cases from 20 to 16, where four additional cases became unsuccessful because the actual fix required replacing the original parameter with another parameter that is more than an edit distance of 5 away. However, the precision jumps dramatically to 36% with this bound, which translates to around 3 suggestions given for each bug on average. Hence, assigning a finite edit distance bound can significantly decrease the number of suggestions, which makes the list of suggestions more manageable; however, this comes at the cost of a lower recall of 73% (as compared to 91%).

Performance. There are two sources of performance overhead in VEJOVIS: (1) instrumentation overhead, and (2) symptom analysis and treatment suggestion overhead. Table 4.6 shows the results. The times taken with and without instrumentation during the data collection phase of VEJOVIS are shown in the second and third columns of the table. The time varies per application, ranging from 16.3 to 85.0 seconds, for an average of 39.3 seconds.

Table 4.6: Performance results.

Application    Crawl Time w/o        Crawl Time with       Average Treatment
               Instrumentation (s)   Instrumentation (s)   Time (s)
Drupal         28.5                  49.6                  1.8
Ember.js       10.8                  16.3                  0.8
Joomla         57.0                  85.0                  6.1
jQuery         12.0                  19.0                  4.1
Moodle         46.9                  59.6                  7.4
MooTools       13.3                  19.1                  0.9
Prototype      12.0                  17.4                  8.6
Roundcube      25.1                  34.7                  3.4
TYPO3          39.9                  72.8                  6.5
WikiMedia      15.9                  24.8                  5.4
WordPress      20.2                  33.8                  7.0
Average        –                     39.3                  4.7
The time in the symptom analysis and treatment suggestion phases is shown in the last column. The average time for these phases is 4.7 seconds, ranging from 0.8 to 8.6 seconds. Thus, on average, VEJOVIS takes less than one minute (44 seconds) to find the correct fix, with a worst-case time of 91.1 seconds for Joomla.

4.7 Discussion

Extensions. First, VEJOVIS suggests treatments belonging to the Parameter Modification and DOM Element Validation categories, as mentioned in our empirical study of common fixes in Section 4.3. While these together constitute more than half of the fix types we found in the study, another common fix category is Method/Property Modification, in which a DOM API method or property is added, removed, or replaced with another method/property. We do not incorporate this fix category in our design; however, VEJOVIS can be extended to account for this category. For instance, it is possible, in some cases, to reduce the problem of replacing DOM methods to replacing CSS selectors. As an example, replacing getElementById(“str”) with getElementsByClassName(“str”) can be thought of as replacing the CSS selector “#str” with “.str”.

Second, the results of our evaluation show that while VEJOVIS accurately predicts the actual fix in almost all of the bug reports analyzed, the number of suggestions provided can be large, thereby lowering its precision. In our evaluation, we showed that ranking the fixes based on edit distance makes the actual fix rank high in many cases. We are currently exploring more intelligent ways to perform this ranking; for example, based on the textual patterns of the strings.

Threats to Validity. An external validity threat is that the bugs we analyzed come from only 11 web applications. However, the systems considered were developed for different purposes and hence represent a reasonable variety.
Further, the corresponding bug reports have been fixed by the developers, and are therefore representative of the issues encountered in practice.

An internal threat to validity is that we have assumed the fixes described in the bug reports are correct, as many experienced developers are typically involved with providing patches for these bugs. Nonetheless, to mitigate this threat, we carefully analyzed the patches provided in the bug reports and have tested the fixes on our own platforms to see if they are sound.

Additionally, the bugs we considered in our evaluation were taken from the bug report study in Section 4.3, which may be a potential source of bias. This threat can be mitigated by considering other applications, which we plan to do in the future. As for repeatability, VEJOVIS is available [126], and the experimental subjects are open source, making our case study fully repeatable.

4.8 Related Work

Program Repair. Program repair refers to the act of fixing bugs through automated techniques. Perhaps the best-known application of program repair is to data structures. Demsky et al. [41] use formal specifications to suggest fixes for data structures. Elkareblieh et al. [47] use programmer-specified assertions for data structure repair. However, these techniques are limited to repairing data structures, and do not fix the underlying defect that produced the erroneous structure. While the DOM can be considered a data structure, VEJOVIS goes beyond the DOM and can actually suggest ways to modify the JavaScript code based on the defective DOM access.

Generating fixes at the source code level has gained attention recently [13, 172, 174]. Weimer et al. [174] propose the use of genetic algorithms for repairing C programs. The main idea is to copy other parts of the program to the faulty portion of the program and check if the modified program passes the existing test cases.
However, it is not clear how this technique could be applied to web applications, where the code base includes different languages such as JavaScript and HTML/DOM.

In recent work, Zhang et al. [184] propose FlowFixer, a technique to repair broken workflows in Java-based GUI applications. Similar to VEJOVIS, FlowFixer attempts to find repairs for errors that arise due to a mismatch between the code and the GUI state. However, there are two main differences between VEJOVIS and FlowFixer. First, FlowFixer is concerned with correcting the sequence of user actions applied to the GUI; in contrast, VEJOVIS is concerned with correcting the code that drives the functionality of the application. Second, FlowFixer uses random testing to find replacements; VEJOVIS is different in that it performs a systematic traversal of the DOM to find valid replacement selectors.

Web Application Repair. There has been limited work on exploring fault repair for web applications. Carzaniga et al. [33] propose automatic workarounds for web applications that experience errors in using APIs by replacing the buggy API call sequence with a functionally equivalent, but correct, sequence. Samimi et al. [148] have proposed a technique for PHP code to fix errors that result in the generation of webpages with malformed HTML; similar work has been done by Zheng et al. [187]. Neither of these techniques considers JavaScript code, nor do they apply to DOM-related JavaScript faults. In recent work, Jensen et al. [80] and Meawad et al. [107] introduce techniques to transform unsafe eval calls in JavaScript code to functionally equivalent, but safe, constructs. This is more of a prevention technique than a repair technique; it does not consider JavaScript errors, and in particular errors that lead to DOM-related faults.

4.9 Conclusion

JavaScript interacts extensively with the DOM to create responsive applications; yet, such interactions are prone to faults.
In this chapter, we attempt to understand common fixes applied by programmers to DOM-related faults. Based on these findings, we propose an automated technique for providing repair suggestions for DOM-related JavaScript faults. Our technique, implemented in a tool called VEJOVIS, is evaluated through a case study of 22 bugs based on real-life bug reports. We find that VEJOVIS can accurately predict the repair in 20 out of the 22 bugs, and that the correct fix appears first in the list of fix suggestions for 13 of the 20 bugs.

Chapter 5

Detecting Inconsistencies in JavaScript MVC Applications

This chapter discusses our approach for automatically detecting identifier and type inconsistencies in applications written using JavaScript Model-View-Controller frameworks.24 In essence, this is our first attempt to answer RQ1C from Chapter 1.1 (Can we create a general technique that detects JavaScript faults in the presence of JavaScript frameworks? If so, how?). Our approach has been implemented in a tool called AUREBESH25, which we describe in the sections that follow.

5.1 Introduction

DOM-related faults – which constitute the majority of JavaScript faults, as we have established in Chapter 2 – often result from a developer’s incomplete or erroneous understanding of the relationship between the JavaScript code and the DOM. Partly in response to this issue, JavaScript libraries known as MVC frameworks have recently been developed. MVC frameworks such as AngularJS [58], BackboneJS [16], and Ember.js [87] use the well-known Model-View-Controller (MVC) pattern to simplify JavaScript development in a way that abstracts out DOM method calls.
This is accomplished by giving programmers the ability to define model objects, which are then directly embedded in the HTML code (typically via a double curly brace notation) such that any changes in these objects’ values will automatically be reflected in the DOM, and vice versa – a process known as “two-way data binding”. The frameworks thus eliminate the need for web programmers to explicitly set up DOM interactions in JavaScript.

Unfortunately, despite the apparent advantages, MVC frameworks are still susceptible to consistency issues akin to DOM-JavaScript interactions [90]. In particular, these frameworks rely on the use of identifiers to represent model objects and controller methods; definitions and uses of these identifiers are expected to be consistent across associated models, views, and controllers. Moreover, due to JavaScript’s loose typing, which is retained in these MVC frameworks, the programmer must ensure that the values assigned to model objects and returned by controller methods are consistent with their expected types, depending on how they are used. Since model objects and controller methods are primarily used to represent major functionalities of the web application, any inconsistencies between these identifiers and types can potentially lead to a significant loss in functionality; hence, avoiding these inconsistencies is crucial. In addition, these inconsistencies are often difficult to detect, because (1) multiple model-view-controller groupings exist in the application, and (2) no exceptions are thrown or warnings provided in the event of an inconsistency.

To tackle this problem, we introduce an approach to automatically detect inconsistencies between identifiers in web applications developed using JavaScript MVC frameworks.

24 The main study in this chapter appeared at the International Conference on Software Engineering (ICSE 2015) [132].
25 AUREBESH is named after a writing system used in the Star Wars film series.
Our design conducts static analysis to separate the three main components (model, view, and controller) in these applications; find the identifiers defined or used in these components; infer the types associated with these identifiers; and compare the collected identifiers and type information to determine any inconsistencies. We implement our approach in a tool called AUREBESH, which finds inconsistencies in AngularJS [58] applications, the most popular [153] JavaScript MVC framework used in practice.

Since MVC frameworks for JavaScript are fairly new, few papers have explored their characteristics. For the most part, prior work in this area does not include observations on the properties of existing MVC frameworks, but rather proposes new MVC frameworks fitted towards a specific goal [54, 72, 162]. Other papers analyze existing JavaScript MVC frameworks, with particular focus on their maintainability [23, 86]. To the best of our knowledge, our work in this chapter is the first to identify the consistency issues in JavaScript applications using MVC frameworks,26 and the first to propose a design for automatically detecting inconsistencies in such applications.

We list the following as our main contributions:

• We identify consistency issues pertinent to identifiers and types that are present in JavaScript MVC applications. These consistency issues point to potential problems within the application;

• We devise a general formal model for MVC applications. This model helps us reason about the way variables and functions are used and defined throughout the application, which, in turn, allows us to more clearly define what constitutes an inconsistency among them in the application;

• We introduce an automatic approach to detect identifier and type inconsistencies in MVC applications.
This approach uses static analysis, and only requires the application’s client-side source code;

• We implement our design in an open-source tool called AUREBESH, which works for AngularJS applications; and

• We perform a systematic fault injection experiment on AUREBESH to test its accuracy, and we subject AUREBESH to real-world applications to assess its ability to find real bugs. We find that AUREBESH is accurate (96.1% recall and 100% precision), and can find bugs in real MVC applications (15 bugs in 22 applications, five of which were acknowledged by the developers).

5.2 Running Example

The traditional application of MVC in web applications is to provide a clear separation between the application data and the HTML output that represents them on the server side. Recent JavaScript MVC frameworks represent the next logical step, i.e., applying the MVC model to the client side to separate JavaScript (i.e., data and controls) from the DOM (i.e., the output).

1 <input type="text" ng-model="userName" placeholder="Type Username" />
2 <button ng-click="searchUser()">
3   List User's Favourite Movies
4 </button>

Figure 5.1: HTML code of the “Search” view (search.html)

Some popular MVC frameworks include AngularJS, BackboneJS, and Ember.js. Of these, AngularJS is the most widely used [153], with four times as many third-party modules and GitHub contributors, and over 20 times as many Chrome extension users, compared to the closest competitor, BackboneJS. Interest in AngularJS has also increased significantly since 2012, with around 50,000 questions in StackOverflow and 75,000 related YouTube videos. This is more than the corresponding items for the other two frameworks combined. For these reasons, we focus on AngularJS in this work.

We introduce the running example that we will be using throughout the chapter.

26 For simplicity, we will refer to such applications as MVC applications or JavaScript MVC applications.
This example is inspired by real-world bugs encountered by developers of AngularJS applications [83, 146]. The application – which we will refer to as MovieSearch – initially takes the user to the “Search” page (Figure 5.1), where the user can input the name of a user, via the input element. Clicking on the “List User’s Favourite Movies” button leads to the “Results” page (Figure 5.2), which displays the list of movies that corresponds to the user name that has been input, as well as the number of movies in the list. In addition, clicking on the “Which User?” button in the “Results” page would display the current user name in an alert box; for example, if the user name is “CountSolo”, the alert would display the message, “The user is CountSolo”.

The code for this application contains two views – one corresponding to the “Search” page (Figure 5.1) and the other corresponding to the “Results” page (Figure 5.2) – implemented in HTML. It also contains two models and two controllers implemented in JavaScript, shown in Figure 5.3.

1  <h3 ng-if="userData.display">
2    {{userData.intro}}
3  </h3>
4  <ul>
5    <li ng-repeat="movie in userData">
6      {{}}
7    </li>
8  </ul>
9  <div id="movieCount">
10   <ng-pluralize count="userData.count" when="movieForms"></ng-pluralize>
11 </div>
12 <br />
13 <button ng-click="alertUserName()">
14   Which User?
15 </button>

Figure 5.2: HTML code of the “Results” view (results.html)

An MVC application consists of model variables, controller functions, and groupings. Figure 5.5 shows how model variables and controller functions are defined and used, in relation to the models, views, and controllers. It also shows how these models, views, and controllers form groupings.

Model Variables. Model variables refer to the objects where the model data is stored, and are represented by identifiers defined within the scope of a particular model. These model variables are defined in models, and are used (either polled or updated) by associated views and controllers.
For instance, the Search model in the running example defines one model variable, namely userName (Figure 5.3, line 4); further, the associated controller (SearchCtrl) and view (search.html) use this same variable in Figure 5.3, line 8, and Figure 5.1, line 1, respectively. Similarly, the Results model (Figure 5.3, lines 17-26) contains two model variables: userData and movieForms; these are used by the associated view (results.html) in various lines in Figure 5.2.

Controller Functions. Controller functions, as the name suggests, are functions defined in the controller. These controller functions are used in the view by attaching the function as an event handler to a view element. As an example, the SearchCtrl controller in Figure 5.3, lines 7-12 defines one controller function – searchUser() – which is subsequently used in the corresponding view (search.html) by setting it as the event handler of a button (Figure 5.1, line 2). Also, the ResultsCtrl controller in Figure 5.3, lines 29-31 defines the controller function alertUserName(), which is used in the corresponding view

1  var searchApp = angular.module('searchApp', 'ngRoute');
2  searchApp.controller('SearchCtrl', function($scope, $location) {
3    //MODEL - Search
4    $scope.userName = "";

6    //CONTROLLER - SearchCtrl
7    $scope.searchUser = function() {
8      var id = getUserId($scope.userName);
9      if (id >= 0) {
10       $location.path('/results/' + id);
11     }
12   }
13 });

15 searchApp.controller('ResultsCtrl', function($scope, $routeParams) {
16   //MODEL - Results
17   $scope.userData = {
18     movieList: getList($routeParams.userId),
19     intro: "Welcome User #" + $routeParams.userId,
20     display: true,
21     count: "two"
22   };
23   $scope.movieForms = {
24     one: '{} movie',
25     other: '{} movies'
26   };

28   //CONTROLLER - ResultsCtrl
29   $scope.alertUserName = function() {
30     alert("The user is " + $scope.userName);
31   };
32 });

Figure 5.3: JavaScript code of the models and controllers

1  searchApp.config(function($routeProvider) {
2    $routeProvider
3      .when('/', {
4        controller:
         'SearchCtrl',
5        templateUrl: 'search.html'
6      })
7      .when('/results/:userId', {
8        controller: 'ResultsCtrl',
9        templateUrl: 'results.html'
10     })
11     .otherwise({
12       redirectTo: '/'
13     });
14 });

Figure 5.4: JavaScript code of the routes

[Figure omitted: diagram relating model variables and controller functions to models (m1, m2), views (v1, v2, v3), and controllers (c1, c2).]

Figure 5.5: Block diagram of the def-use and grouping model for MVC framework identifiers. Solid arrows indicate a “defines” relation, while dashed arrows indicate a “uses” relation. Models, views, and controllers connected with the same line types form a grouping.

(results.html) in Figure 5.2, line 13.

Groupings. Due to the dynamic property of web applications, an MVC application can consist of multiple models, views, and controllers; hence, the programmer must specify which of these models, views, and controllers are associated with each other. Current MVC frameworks allow the programmer to specify these (model, view, controller) groupings by embedding the name of the model and controller in the view. These groupings can also be specified using routers, as in the case of the running example (Figure 5.4), which links the Search model and SearchCtrl controller with the search.html view (lines 3-6), and the Results model and ResultsCtrl controller with the results.html view (lines 7-10).

5.3 Consistency Issues

We now describe two types of consistency issues observed in MVC applications, namely identifier consistency and type consistency. We focus on these issues in this chapter.

Identifier Consistency. Model variables and controller functions are represented by identifiers in MVC applications. These identifiers are written both in the JavaScript code, when they are defined or used in the model or controller, and in the HTML code, when they are used in the view. To ensure correct operation, (1) model variable identifiers used in the controller or view must be defined in the model, and (2) controller function identifiers used in the view must be defined in the controller.
While this seems straightforward to enforce at first sight, the following factors complicate the process of maintaining this consistency.

• An identifier is repeatedly used in both the HTML code and the JavaScript code. Even though DOM interactions are abstracted out by MVC frameworks, this repeated usage of identifiers across separate languages makes the application susceptible to identifier inconsistencies. Further, the common practice of implementing models, views, and controllers in separate files – sometimes maintained by separate programmers in collaborative projects – increases the chances of such inconsistencies.

• An application typically contains multiple models, views and controllers grouped together. Hence, the programmer must ensure the consistency not just of one (model, view, controller) grouping, but of several groupings. Also, these groupings must be set up correctly, e.g., via the routers, or else an inconsistency may occur.

For instance, the MovieSearch application contains two identifier inconsistencies. First, the ResultsCtrl controller uses the model variable identifier userName in Figure 5.3, line 30, but this identifier is not defined in the Results model (it is only defined in the Search model, which is not grouped with ResultsCtrl); this causes the alert box to display “The user is undefined” after clicking on the “Which User?” button. Second, since the li element in the results.html view loops over the userData model variable (Figure 5.2, lines 5-7) instead of userData.movieList, the reference to in Figure 5.2, line 6 will be undefined with respect to the Results model; this causes blank bullet points to be displayed.

Type Consistency.
In many cases, the programmer will also need to ensure that the value that is assigned to a model variable – or the value returned by a controller function – has a type consistent with that variable or function’s use in the view. For example, in AngularJS, the ng-if attribute in the view must be assigned a Boolean value; a type inconsistency occurs if a model variable that contains a non-Boolean value or a controller function that returns a non-Boolean value is attached to the attribute. Ensuring this consistency is complicated by the fact that JavaScript is a loosely typed language.

MovieSearch contains one such type inconsistency. In Figure 5.2, line 10, the userData.count model variable is attached to the count attribute, which expects to be assigned a value of type Number; however, userData.count is assigned a String in the corresponding Results model (Figure 5.3, line 21). This leads to the disappearance of the message that shows the number of movies, inside the div element with ID movieCount in Figure 5.2.

5.4 Formal Model of MVC Applications

We propose a more formal, abstract model for MVC-based web applications to clearly delineate all the consistency properties of such applications. This model also helps us describe our approach for automatically detecting inconsistencies.

Definition 3 (MVC Application) An MVC application is a tuple <M, V, C, Ω, Γ, ωM, ωV, ωC, γC, γV, φ>, where M is the set of models; V is the set of views; C is the set of controllers; Ω is the set of model variables; and Γ is the set of controller functions.
Additionally, we define the following functions.

• ωM : M → 2^Ω indicates what model variables are defined in a model;
• ωV : V → 2^Ω indicates what model variables are used in a view;
• ωC : C → 2^Ω indicates what model variables are used in a controller;
• γC : C → 2^Γ indicates what controller functions are defined in a controller;
• γV : V → 2^Γ indicates what controller functions are used in a view;
• φ : M × V × C → {true, false} indicates the model-view-controller groupings.

Further, a model variable in Ω and a controller function in Γ are represented by a tuple <id, ty>, where id refers to the identifier, and ty refers to the type (for controller functions, this pertains to the return type). The function I() projects the id portion of these model variables and controller functions onto a set.

An MVC web application is consistent if and only if for every element (m, v, c) ∈ M × V × C such that φ(m, v, c) = true, the following properties hold:

Property 1. The view and controller only use model variables that are defined in the model:

(∀µ)(µ.id ∈ I(ωC(c)) ∪ I(ωV(v)) =⇒ µ.id ∈ I(ωM(m)))

Property 2. The view only uses controller functions that are defined in the controller:

(∀κ)(κ.id ∈ I(γV(v)) =⇒ κ.id ∈ I(γC(c)))

Property 3. The expected types of corresponding model variables in the view match the assigned types in the model or controller:

(∀µ, ρ)(µ.id ∈ I(ωV(v)) ∧ ρ.id ∈ I(ωM(m)) ∪ I(ωC(c)) ∧ µ.id = ρ.id =⇒ µ.ty = ρ.ty)

Property 4. The expected and returned types of corresponding controller functions match in the view and controller:

(∀κ, τ)(κ.id ∈ I(γV(v)) ∧ τ.id ∈ I(γC(c)) ∧ κ.id = τ.id =⇒ κ.ty = τ.ty)

5.5 Approach

To alleviate the consistency issues described, we propose a static analysis approach for automatically detecting identifier and type inconsistencies in MVC applications. We opt for a static instead of a dynamic approach for several reasons.
First, static analysis is more lightweight than dynamic analysis, in that the application does not need to execute in order to detect the inconsistencies; this is especially useful during the development phase, where quick relay of information about the code, such as error messages, is preferred.

Second, dynamic analysis requires user input – i.e., a sequence of user events – and it is not always clear how to choose these inputs. A dynamic approach may be suitable for tools that target specific bugs – such as AUTOFLOX (Chapter 3) and VEJOVIS (Chapter 4) – since the steps to reproduce the bug are known; in contrast, our detector is not targeting a specific bug known to exist in the program, but rather, looking for these bugs, without prior knowledge of how to reproduce them. This is the same reason an inconsistency detector is preferred over a mechanism that simply displays an error message when an inconsistency is encountered during execution.

There are also several challenges in designing the above detector, namely:

• C1: Model variables are often defined as nested objects (e.g., see Figure 5.3, lines 17-22), and the variables defined inside these objects, along with their types, also need to be recorded, thereby complicating the static analysis;
• C2: Sometimes, aliases are used in the HTML code to represent model variables defined in the JavaScript code (e.g., the movie variable in Figure 5.2, line 5 is an alias for userData.movieList, userData.intro, etc.). The design needs to be capable of handling these aliases;
• C3: Since MVC applications can contain multiple models, views, and controllers, the design needs to infer all the possible groupings to be checked; a simple comparison of all identifiers and types collected does not suffice.

Finally, our approach assumes that the code does not contain any instances of eval.
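To illustrate why this assumption matters, consider the following contrived sketch (the scope object and the regex-based scan are our own, purely illustrative stand-ins for a real static analysis): a model variable created through eval is invisible to any inspection of the source text, even though it exists at runtime.

```javascript
// A controller body as source text. The second model variable is
// created through eval, so no static scan of the text can see it.
const controllerSource =
  'scope.title = "Movies";' +
  'eval("scope." + "count" + " = 3;");';

// A naive static scan for definitions of the form "scope.<name> ="
// finds only the directly assigned variable.
const staticallyVisible = [...controllerSource.matchAll(/scope\.(\w+)\s*=/g)]
  .map(m => m[1]);

// At runtime, however, both variables end up on the scope object.
const scope = {};
new Function('scope', controllerSource)(scope);

console.log(staticallyVisible);   // only 'title' is found statically
console.log(Object.keys(scope));  // 'title' and 'count' both exist
```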
This assumption is reasonable, as JavaScript MVC frameworks encourage programmers to write in a more declarative style; thus, features used in "vanilla" JavaScript such as eval are rarely seen in MVC applications.

5.5.1 Overview

The goal of our automatic inconsistency detector is to find all instances that violate Properties 1–4 in Section 5.4. The block diagram in Figure 5.6 shows an overview of our approach.

Figure 5.6: Block diagram of our approach.

As the figure depicts, the approach expects two inputs, namely the HTML (template) and the JavaScript code. The DOMExtractor converts the HTML template into its DOM representation, which is used to simplify analysis of the HTML elements and attributes. Similarly, the ASTExtractor converts the JavaScript code into its AST representation.

The modules FindModels, FindViews, and FindControllers statically analyze the DOM and the AST to populate the sets M, V, and C, respectively. In our approach, we chose to represent a model m ∈ M as a tuple of the form <name, ast>, where name is a unique identifier for the model and ast is the subtree of the complete AST extracted earlier, containing only the nodes and edges pertinent to the model; for example, the value of ast for the Results model in Figure 5.3 would be the AST representing lines 17-26. Similarly, a view v ∈ V and a controller c ∈ C are represented by <name, dom> and <name, ast>, respectively. Section 5.5.2 describes in more detail how the above sets are populated.

Once M, V, and C are all populated, these sets, along with the complete DOM and AST, are input into the FindInconsistencies module (see Algorithm 2). The output of this algorithm is a list of inconsistencies Q.
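Concretely, the initialization phase of Algorithm 2 (lines 1-7) might be set up as follows in JavaScript – a sketch with hypothetical names and simplified tuples (the ast and dom parts are elided), not AUREBESH's actual code:

```javascript
// Simplified stand-ins for the extracted sets; only the name part of
// the <name, ast> and <name, dom> tuples is kept here.
const models      = [{ name: 'Results' }];
const views       = [{ name: 'results.html' }];
const controllers = [{ name: 'ResultsCtrl' }];

// phi: every (model, view, controller) triple initially maps to false.
const phi = new Map();
for (const m of models)
  for (const v of views)
    for (const c of controllers)
      phi.set(`${m.name}|${v.name}|${c.name}`, false);

// The "identifier inclusion functions": every model, view, and controller
// initially maps to an empty set, to be filled in by findIdentifiers().
const emptyMapping = entries => new Map(entries.map(e => [e.name, new Set()]));
const omegaM = emptyMapping(models);
const omegaV = emptyMapping(views);
const omegaC = emptyMapping(controllers);
const gammaC = emptyMapping(controllers);
const gammaV = emptyMapping(views);
```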
It starts by initializing φ, as well as the "identifier inclusion functions" ωM, ωV, ωC, γC, and γV, with the contents of M, V, and C (lines 1-7); here, all the models, views, and controllers initially map to the empty set, since the model variables and controller functions are still not known. These mappings are updated as identifiers are discovered by the findIdentifiers() function, described in Section 5.5.3.

Algorithm 2: FindInconsistencies
Input: M: The set of models
Input: V: The set of views
Input: C: The set of controllers
Input: DOM: The complete DOM
Input: AST: The complete AST
Output: Q: List of inconsistencies
 1  Q ← ∅;
 2  φ ← {((m, v, c), false) | m ∈ M ∧ v ∈ V ∧ c ∈ C};
 3  ωM ← {(m, ∅) | m ∈ M};
 4  ωV ← {(v, ∅) | v ∈ V};
 5  ωC ← {(c, ∅) | c ∈ C};
 6  γC ← {(c, ∅) | c ∈ C};
 7  γV ← {(v, ∅) | v ∈ V};
 8  findIdentifiers(M, V, C, ωM, ωV, ωC, γC, γV);
 9  φ ← findMVCGroupings(M, V, C, DOM, AST);
10  foreach (m, v, c) ∈ {(m, v, c) | φ(m, v, c) = true} do
11      foreach mv ∈ ωV(v) ∪ ωC(c) do
12          if mv.id ∉ I(ωM(m)) then
13              Q ← Q ∪ {idMismatch(mv)};
14          end
15          else if !matchingType(mv, ωM(m)) then
16              Q ← Q ∪ {typeMismatch(mv)};
17          end
18      end
19      foreach cf ∈ γV(v) do
20          if cf.id ∉ I(γC(c)) then
21              Q ← Q ∪ {idMismatch(cf)};
22          end
23          else if !matchingType(cf, γC(c)) then
24              Q ← Q ∪ {typeMismatch(cf)};
25          end
26      end
27  end

Likewise, the mappings in φ are updated, in line 9, by the findMVCGroupings() function (Section 5.5.4). Lines 10-27 are responsible for detecting the identifier and type mismatches, and are described in detail in Section 5.5.5.

5.5.2 Finding the Models, Views and Controllers

The FindModels, FindViews, and FindControllers modules in Figure 5.6 populate M, V, and C, respectively, by locating the corresponding structures or blocks in the HTML and JavaScript code. For example, in AngularJS, models and controllers are added as the body of the function passed to the .controller() method as a parameter (see Figure 5.3).
Hence, in this case, locating the models and controllers involves finding the subtrees in the AST that are rooted at a CallExpression for the method .controller(), and parsing the body of the function parameter. Similarly, views are normally saved as separate HTML files, so in most cases, finding them is tantamount to identifying these separate files.

5.5.3 Inferring Identifiers

The goal of the findIdentifiers module, which is invoked in Algorithm 2, line 8, is to find the model variables and controller functions that are defined or used in every model, view, and controller that were found earlier (see Section 5.5.2), thereby updating the mappings in the "identifier inclusion functions". Figure 5.7 illustrates how findIdentifiers looks for model variables defined in every model. A similar algorithm is used to find the model variables and controller functions used or defined in other entities.

Figure 5.7: Portion of findIdentifiers that updates ωM for every model. The other "identifier inclusion functions" are updated in a similar way.

The functions findNextModelVariable and findNextControllerFunction analyze the DOM and the AST according to the syntactic styles imposed by the MVC framework being used. In AngularJS, model variables are defined as a property of the $scope variable in an assignment expression (see Figure 5.3, lines 4, 17, and 23); controller functions are defined similarly, albeit the right side of the expression is a Function object (e.g., Figure 5.3, lines 7 and 29). Finding identifiers used in views, however, is trickier – although identifiers appear as attribute values of DOM elements in many cases, they also typically appear in double curly brace notation as part of a text element (e.g., Figure 5.2, lines 2 and 6); hence, text elements in the view's DOM are also parsed since these may contain references to identifiers.

Type Inference.
To find the type assigned to a model variable in the model or controller, our approach looks at the right-hand side of the assignment expression and infers the assigned type based on the AST node (e.g., if the right-hand side is a StringLiteral node, the inferred type is String). If the expression is too complicated and the assigned type cannot be inferred, the type is recorded as unknown for that identifier, so our type inference algorithm is conservative. The simplification we made for type inference requires the assigned expression to be a literal.27 Although this may seem to be a significant limitation, note that simple assignments are commonplace in MVC applications, perhaps because MVC frameworks are designed such that applications can be programmed in a "declarative" way [59]; hence, we believe our simplification is justified (we further validate this claim empirically in Section 5.7). Note that type inference works similarly for controller functions, except that the return value expressions are parsed instead of assignments.

To infer the expected type of a model variable or controller function used in a view, our approach examines the attribute to which the identifier is assigned and determines if this attribute expects values belonging to a specific type. For instance, the count attribute in AngularJS expects a Number, so this is recorded as the expected type for userData.count in Figure 5.2, line 10. If the identifier has no expected types, its type is simply recorded as ⊥, which matches all types.

There are two special cases that our algorithm must handle, namely, nested objects and aliases.

Nested Objects. To address challenge C1 (see beginning of Section 5.5), we also model nested objects such as the userData and movieForms variables in Figure 5.3. Our approach represents nested objects as a tree. Each node in the tree represents an identifier, with an assigned type.
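Such a tree might be represented with plain JavaScript objects as follows – a sketch of our own, showing only the userData.count fragment of the running example together with the traversal used later for consistency checking:

```javascript
// Each node carries an identifier, an inferred type, and its child nodes.
const node = (id, type, children = []) => ({ id, type, children });

// A fragment of the Results model's tree: userData is a nested object
// whose count property was (erroneously) assigned a String in the model.
const tree =
  node('root', 'Object', [
    node('userData', 'Object', [
      node('count', 'String'),
    ]),
  ]);

// Walk a dotted sequence such as root -> userData -> count and return
// the terminating node, or null if the sequence does not exist.
function traverse(root, sequence) {
  let current = root;
  for (const id of sequence.slice(1)) { // skip the leading 'root'
    current = current.children.find(child => child.id === id) || null;
    if (current === null) return null;
  }
  return current;
}

console.log(traverse(tree, ['root', 'userData', 'count']).type); // 'String'
```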
The trees for the userData and movieForms variables are joined together in one root as they both belong to the same model. This is shown in Figure 5.8.

27 Our design also performs some "smart parsing", e.g., it can detect concatenations of strings.

Figure 5.8: Tree representing the model variables in the Results model of MovieSearch, including all the nested objects. The identifiers are shown at the top of each node, while the types are shown at the bottom.

Analogously, if a model variable in the view uses the dot notation, then it is represented as a sequence of identifiers. For example, userData.count in Figure 5.2, line 10 is represented as root → userData → count, with expected type Number.

Aliases. An example of the use of aliases (challenge C2) is the ng-repeat directive in AngularJS, which replicates the associated HTML element in the DOM for each element of some specified collection. This directive is assigned a string value of the form "<alias> in <collection>", where <collection> is an array or an object, and <alias> is an identifier that represents each member of the array (or each property of the object) in each replication of the HTML element in the DOM. Figure 5.2, line 5 shows an example, in which the collection is the userData object and the alias is movie. Therefore, the alias movie refers to every property of userData, namely userData.movieList, userData.intro, userData.display, and userData.count. Subsequently, the reference in Figure 5.2, line 6, translates to each of these four identifiers, followed by ".name".
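This alias translation can be sketched as a simple expansion step (the helper function and data shapes are our own illustration; the property names follow the running example):

```javascript
// Properties recorded for the userData object in the Results model.
const collectionProps = ['movieList', 'intro', 'display', 'count'];

// Expand a dotted reference that starts with an ng-repeat alias into
// one dotted identifier sequence per property of the aliased collection.
function expandAlias(reference, alias, collection, props) {
  const [head, ...rest] = reference.split('.');
  if (head !== alias) return [reference.split('.')];
  return props.map(p => [collection, p, ...rest]);
}

const expanded = expandAlias('movie.name', 'movie', 'userData', collectionProps);
// expanded now holds userData.movieList.name, userData.intro.name,
// userData.display.name, and userData.count.name as identifier sequences.
```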
These four sequences are therefore included as model variables in the results.html view – i.e., they are all added in the list that maps to the results.html view when updating ωV.

5.5.4 Discovering MVC Groupings

At this point, the model variables and controller functions have been discovered, and have been mapped to their associated models, views, and controllers. Our approach must now find all model-view-controller combinations that can potentially appear in the application, to address challenge C3. More formally, our approach must find all (m, v, c) ∈ M × V × C such that φ(m, v, c) = true, thereby updating φ in the process.

This procedure is carried out by the findMVCGroupings() function; as seen in Algorithm 2, line 9, this function takes M, V, and C as inputs, along with the complete AST and DOM. The reason the full AST and DOM are needed is that findMVCGroupings() will look for information in the DOM that explicitly maps a specific model or controller to a view via an HTML attribute, as well as routing information in the AST that does the same. This information, coupled with the name part of each model, view, and controller, allows our approach to determine all the valid groupings.

Take, for example, the router for MovieSearch in Figure 5.4. The first route (lines 3-7) groups the Results model, the ResultsCtrl controller, and the results.html view. Thus, φ is updated so that the model, view, and controller objects with these respective identifiers together map to true. In other words, if m.name = Results, v.name = results.html, and c.name = ResultsCtrl, then the design sets φ(m, v, c) = true. This process is repeated for all other groupings discovered.

5.5.5 Detecting Inconsistencies

The final step in our approach is to compare the model variables and controller functions within the same grouping and detect any potential inconsistencies. The pseudocode for this procedure is shown in Algorithm 2, lines 10-27.

The algorithm begins by looking for inconsistencies related to model variables (lines 11-18).
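Rendered in JavaScript, this check might look as follows – a sketch in which matchingType and the flat data shapes are our simplifications (AUREBESH itself compares tree-structured model variables, as explained below):

```javascript
// Model variables defined in the model, keyed by dotted id, with the
// types inferred for them during identifier inference.
const definedInModel = new Map([['userData.count', 'String']]);

// Model variables used in the view or controller, with expected types;
// 'any' stands in for the bottom type that matches every type.
const usedInViewOrController = [
  { id: 'userData.count', type: 'Number' }, // defined, but wrong type
  { id: 'userName',       type: 'any'    }, // never defined in the model
];

const matchingType = (expected, assigned) =>
  expected === 'any' || assigned === 'unknown' || expected === assigned;

// Mirror of Algorithm 2, lines 11-18: an undefined id is an identifier
// inconsistency; a defined id with an incompatible type is a type
// inconsistency.
const inconsistencies = [];
for (const mv of usedInViewOrController) {
  if (!definedInModel.has(mv.id)) {
    inconsistencies.push({ kind: 'identifier', id: mv.id });
  } else if (!matchingType(mv.type, definedInModel.get(mv.id))) {
    inconsistencies.push({ kind: 'type', id: mv.id });
  }
}
```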
Line 11 loops through every model variable used in either the view or the controller. For all such model variables mv, the id is checked to see if it also exists among the model variables defined in the corresponding model (line 12). If not, this means that Property 1 is violated and there is an identifier inconsistency, so this inconsistency is included in Q. However, if the id does exist in the model, the matching model variable in the model is compared with mv to see if they have the same type. If the types do not match, then Property 3 is violated and there is a type inconsistency; this inconsistency is then included in Q. The algorithm for finding inconsistencies in controller functions (lines 19-26) is similar. Note that model variables with unknown types are assumed to match all types. The remaining question, then, is how the identifier and type comparisons are made. For controller functions, the answer is straightforward – identifiers are compared based on a string comparison, and types are compared based on the assigned and returned types that were previously inferred in Section 5.5.3.

Model variables are, however, more challenging because of the possibility of nested objects. In this case, the sequence representation of mv is used to traverse the tree representing the model variables defined in the corresponding model. Take, for instance, the sequence root → userName, which is used in the ResultsCtrl controller, as per Figure 5.3, line 30. If this sequence is used to traverse the tree representing the model variables defined in Results (see Figure 5.8) starting from the root, our design will discover that the given sequence does not exist in the tree, and therefore, there is an identifier inconsistency. In addition, the sequence root → userData → count is used in the results.html view, as per Figure 5.2, line 10, and has an expected type of Number since it is assigned to the count attribute in ng-pluralize.
If this sequence is used to traverse the same tree, the traversal will be successful, since the sequence exists in the tree. However, note that the type of the terminating node in the traversal (count) is String, which does not match the expected type of Number. Thus, a type inconsistency is recorded for this sequence. Finally, if root → userData → intro → name – which is one of the sequences the alias translated to as described in Section 5.5.3 – is used to traverse the tree, the traversal will fail, since the sequence does not exist in the tree. As a result, another identifier inconsistency will be recorded. In summary, our design is able to detect all three inconsistencies in the MovieSearch running example.

5.6 Implementation

We implemented our approach in a tool called AUREBESH. It is built on top of the Ace Editor, which is the editor used for the Cloud9 IDE [7]. We have embedded Ace Editor as part of a web application that can be accessed in our website [127]. AUREBESH is implemented entirely using JavaScript, and currently supports MVC applications written with AngularJS.

To invoke the detector, we added a "Find Inconsistencies" button to the IDE, which the user must click. For every inconsistency found by the detector, an error message is highlighted on the line of code containing the inconsistency in the IDE. The user can then click on these error messages to get more details about the inconsistencies.

5.7 Evaluation

To assess the efficacy and real-world relevance of our approach, we address the following research questions:

RQ1 (Real Bugs): Can AUREBESH help developers to find bugs in real-world MVC applications?
RQ2 (Accuracy): What is the accuracy of AUREBESH in detecting identifier and type inconsistencies in MVC applications?
RQ3 (Performance): How quickly can AUREBESH perform the inconsistency detection analysis?

5.7.1 Subject Systems

In total, we consider 20 open-source AngularJS applications for our experiments, listed in Table 5.3.
These applications were chosen from a list of MVC applications from AngularJS' GitHub page [60]; in particular, only the applications whose source code is available and unobfuscated are considered, since AUREBESH, in its current state, is incapable of working with obfuscated code. This is not a fundamental limitation, though, as AUREBESH is meant for developers to use before obfuscating their code. As shown in Table 5.3, the applications cover a variety of sizes and application types.

Table 5.1: Real bugs found. The "Fault Type" column refers to the fault type number, as per Table 5.2.

Application     | Fault Type | Error Message                                    | Severity
Cafe Townsend   | 2 | Undefined model variable employee.id              | 3
Cafe Townsend   | 2 | Undefined model variable employee.id              | 3
Cryptography    | 3 | Undefined model variable lastWordCount            | 1
Dematerializer  | 3 | Undefined model variable editing                  | 3
eTuneBook       | 5 | Undefined controller function doneTuneSetEditing  | 3
eTuneBook       | 7 | Inconsistent type for currentFilter               | 3
Flat Todo       | 4 | Undefined model variable showTaskPanel            | 5
Flat Todo       | 4 | Undefined model variable showStatusFilter         | 5
GQB             | 7 | Inconsistent type for download.aggregated         | 3
Hackynote       | 3 | Undefined model variable theme.current.css        | 4
Hackynote       | 3 | Undefined model variable transition.current.css   | 4
Reddit Reader   | 3 | Undefined model variable post                     | 2
Story Navigator | 1 | Undefined model variable ui.columns.status        | 3
beLocal         | 3 | Undefined model variable likeDisabled             | 3
Linksupp        | 3 | Undefined model variable startEating              | 2

5.7.2 Methodology

Real Bugs. To answer RQ1, we run AUREBESH on the 20 subject systems. For this experiment, we also ran our tool on two additional AngularJS applications developed by students for a software engineering course at the University of Victoria [181], namely beLocal and Linksupp. We analyze every error message reported by AUREBESH for these applications to see if it corresponds to a real bug. We report any true positives (i.e., error messages that correspond to real bugs) and false positives (i.e., spurious error messages) that we find.

Accuracy.
To measure the accuracy (RQ2), we conduct a fault injection study on the subject systems. An injection is performed on an application by introducing a mutation to a line of code from one of its source files (either the HTML or JavaScript code), running AUREBESH on this mutated version of the application, and then recording if AUREBESH detects the inconsistency introduced by the mutation. If the inconsistency is detected, the result of the injection is marked as "successful"; otherwise, the injection is marked as "failed".

In this experiment, we consider ten types of mutations, as seen in Table 5.2. The "expected behaviour" for a mutation describes the correct error message that AUREBESH is expected to display when running AUREBESH on an application with this mutation applied. Each of these mutation types corresponds to a violation of one of the four properties listed in Section 5.4; hence, the results for a mutation type give an indication of how well AUREBESH detects violations of the corresponding property.

For each application, we perform 20 injections per mutation type, which amounts to a total of 200 injections per application. However, note that a mutation type may not be applicable for certain applications (e.g., not all controllers use model variables, in which case mutation type #2 will not be applicable); this explains why several applications have fewer than 200 injections (see Table 5.3). The specific location mutated in the code is chosen uniformly at random from among the lines of code applicable to the current mutation type. For each injection, we record the number of successful detections, the number of failed detections, and the number of spurious error messages introduced by the mutation; this allows us to report both the recall (the number of successful detections over the total number of injections) and precision (the number of successful detections over the total number of error messages displayed) of AUREBESH.

Performance.
We measure performance by running AUREBESH on each subject system, recording the analysis completion time and averaging over multiple trials.

Table 5.2: Types of faults injected. MV refers to "model variable", and CF refers to "controller function".

Type # | Description | Expected Behaviour | Property Tested
1  | Modify the name of a MV used in line N of a view | Detect undefined MV in line N | 1
2  | Modify the name of a MV used in line N of a controller | Detect undefined MV in line N | 1
3  | For a particular MV used in line N of a view, delete the definition of that MV in a corresponding model | Detect undefined MV in line N | 1
4  | For a particular MV used in line N of a controller, delete the definition of that MV in a corresponding model | Detect undefined MV in line N | 1
5  | Modify the name of a CF used in line N of a view | Detect undefined CF in line N | 2
6  | For a particular CF used in line N of a view, delete the definition of that CF in a corresponding controller | Detect undefined CF in line N | 2
7  | For a particular MV used in the view that expects a certain type T1, modify the definition of that MV in line N of a corresponding model so that the type is changed to T2 | Detect type mismatch in line N (T1 expected but type is T2) | 3
8  | For a particular MV used in the view that expects a certain type T1 and defined in line N of a corresponding model, modify the expected type to T2 by mutating the ng attribute name | Detect type mismatch in line N (T2 expected but type is T1) | 3
9  | For a particular CF used in the view that expects a certain type T1, modify the return value of that CF in line N of the controller to a value of type T2 | Detect type mismatch in line N (T1 expected but type is T2) | 4
10 | For a particular CF used in the view that expects a certain type T1 and returns a value in line N of a corresponding controller, modify the expected type to T2 by mutating the ng attribute name | Detect type mismatch in line N (T2 expected but type is T1) | 4

Table 5.3: Fault injection results.
The size pertains to the combined lines of HTML and JavaScript code, not including libraries.

Application | Application Category | Size (LOC) | Successful Detections | Failed Detections | Total Injections | Recall (%) | Precision (%)
Angular Tunes | Music Player | 185 | 40 | 0 | 40 | 100.00 | 100.00
Balance Projector | Finance Tracker | 511 | 140 | 20 | 160 | 87.50 | 100.00
Cafe Townsend | Employee Tracker | 452 | 160 | 0 | 160 | 100.00 | 100.00
CodeLab | RSS Reader | 602 | 79 | 1 | 80 | 98.75 | 100.00
Cryptography | Encoder | 523 | 120 | 0 | 120 | 100.00 | 100.00
Dematerializer | Blogging | 379 | 186 | 14 | 200 | 93.00 | 100.00
Dustr | Template Compiler | 493 | 80 | 0 | 80 | 100.00 | 100.00
eTuneBook | Music Manager | 5042 | 177 | 23 | 200 | 88.50 | 100.00
Flat Todo | Todo Organizer | 255 | 107 | 13 | 120 | 89.17 | 100.00
GitHub Contributors | Search | 459 | 142 | 18 | 160 | 88.75 | 100.00
GQB | Graph Traversal | 1170 | 194 | 6 | 200 | 97.00 | 100.00
Hackynote | Slide Maker | 236 | 120 | 0 | 120 | 100.00 | 100.00
Kodigon | Encoder | 948 | 120 | 0 | 120 | 100.00 | 100.00
Memory Game | Puzzle | 181 | 40 | 0 | 40 | 100.00 | 100.00
Pubnub | Chat | 134 | 120 | 0 | 120 | 100.00 | 100.00
Reddit Reader | Reader | 255 | 120 | 0 | 120 | 100.00 | 100.00
Shortkeys | Shortcut Maker | 407 | 120 | 0 | 120 | 100.00 | 100.00
Sliding Puzzle | Puzzle | 608 | 40 | 0 | 40 | 100.00 | 100.00
Story Navigator | Test Case Tracker | 415 | 117 | 3 | 120 | 97.50 | 100.00
TwitterSearch | Search | 357 | 199 | 1 | 200 | 99.50 | 100.00
OVERALL | | | 2421 | 99 | 2520 | 96.07 | 100.00

5.7.3 Results

Real Bugs. After running AUREBESH on the original, unaltered versions of the subject systems, AUREBESH displayed a total of 15 error messages in 11 applications. We reported these error messages to the developers, and five of them (the error messages from Story Navigator, beLocal, Linksupp, and two from Hackynote) were acknowledged as real issues and fixed. The other applications, unfortunately, are no longer maintained by the developers, so our bug reports for those applications remain unacknowledged.
Nonetheless, we analyzed the 15 error messages and found that they are all true positives, i.e., they all correspond to real-world bugs.

Of the 15 bugs, we found 13 identifier inconsistencies and 2 type inconsistencies; as shown in Table 5.1, each of the bugs our tool found maps to one of the fault types in Table 5.2. Note that the two error messages in Cafe Townsend, while identical, correspond to two different bugs. With regards to why these faults were committed, we identified the following patterns:

• Identifier defined elsewhere (7 cases): There are several cases where assignments representing the model variable definitions are placed not in the model itself, but inside controller functions. This applies, for example, to the model variable lastWordCount in Cryptography;
• Incorrect identifier (5 cases): In some cases, the inconsistencies arise because the programmer has typed incorrect identifiers. For instance, in Hackynote, the identifier given for a property in the nested object theme.current is src, but the identifier expected by the view is css;
• Boolean assigned a string (2 cases): The two type inconsistencies involved the programmer assigning a string to a model variable that expects a boolean value. For instance, in GQB, the download.aggregated variable was erroneously assigned the string value "true" instead of the boolean value true;
• Identifier name not updated (1 case): This occurs in eTuneBook. Upon inspection, it turned out that the undefined controller function doneTuneSetEditing in eTuneBook was defined in previous versions of the application, but was replaced with another function with a different name; the reference to the old name remained in the view. This is an example of a regression bug.

Table 5.1 also shows the severity of the bugs, based on our qualitative assessment of these bugs; here, we use Bugzilla's ranking scheme, where 1 represents the lowest and 5 represents the highest severity.
Although some of the bugs are cosmetic (e.g., the bug in Cryptography simply causes one of the labels to display as "One of possible –word permutations...", with the number next to "–word" missing), many of the bugs have considerable impact on the application. For example, the first bug in Flat Todo renders the "plus" button – which adds todos in the list – useless. A similar effect takes place in eTuneBook, where the missing controller function makes one of the buttons inoperable, thereby preventing the user from exiting edit mode. Also, the two bugs in Hackynote prevented the user from removing the theme and transition present in the slides.

Lastly, AUREBESH displayed only one false positive, in the Linksupp application. The reason is that the application uses the $rootScope variable to define model variables to be within the scope of all models. Our tool assumes that every model variable used in a view is defined only via the $scope variable, leading to the false positive. Nonetheless, the low number of false positives indicates that the error messages displayed by our tool are trustworthy, minimizing the effort required to filter out any spurious messages.

Accuracy. Table 5.3 shows the per-application results for the fault injection experiment. As the table shows, AUREBESH is very accurate, yielding an overall recall of 96.1%, and attains a perfect recall for eleven of the twenty applications. In addition, AUREBESH did not output any spurious messages during any of the injections; hence, AUREBESH was able to attain an overall precision of 100% in this experiment.

To understand what is causing the failed detections, we divide the results in terms of the properties (Section 5.4) being violated by the mutation types. As seen in Table 5.4, Properties 1, 3 and 4 have imperfect recalls.
We analyzed the 27 failed detections for Property 1, which represents the consistency of model variable identifiers, and found that they all result from the usage of "filters" in conjunction with model variables in views. These filters are used in AngularJS to customize the appearance of model variables' values when displayed by the view; AUREBESH currently does not recognize these filters and ignores them when parsing, leading to the failed detection. Note that this limitation is implementation specific, and can be overcome by extending the parser.

Table 5.4: Fault injection results per property.

Property | Successful Detections | Failed Detections | Total Injections | Recall (%)
1 | 1293 | 27 | 1320 | 98.0
2 | 560 | 0 | 560 | 100.0
3 | 268 | 52 | 320 | 83.8
4 | 180 | 20 | 200 | 90.0

We also analyzed the 72 failed detections for Properties 3 and 4, both of which represent the consistency of types. We found that these are caused by our assumption that the values assigned to model variables or returned by controller functions are either literals or simple expressions, and thus have types that are easy to infer. More specifically, in these 72 cases, the values assigned or returned are either complex expressions or retrieved from an external database. This prevented AUREBESH from inferring the types; since AUREBESH is conservative, it does not report the type inconsistency. Overall, these cases constitute 14% of the cases where type inference was needed. Note that in the remaining 448 cases (86% of the cases), the values were literals or simple expressions, which indicates that our assumption is valid in the majority of cases.

Performance. For each subject system, AUREBESH was able to perform the analysis in an average time of 121 milliseconds, with a worst-case of 849 milliseconds for the largest application, eTuneBook. This indicates that performance is not an issue with our tool.

5.8 Discussion

Limitations

The implementation of our approach for AngularJS has a few limitations.
First, as explained in Section 5.7.3, AUREBESH currently disregards the presence of filters in views in AngularJS. Also, as mentioned in Section 5.7.3, our tool currently disregards the use of the $rootScope variable, which can lead to false positives.

With respect to the approach itself, a limitation is in our type inference algorithm, which assumes simple assignments and return values. Our results suggest that this assumption is reasonable; however, we also found that a considerable number (around 14%) of pertinent assignments and return values involve complex expressions or external database accesses, so a more advanced type inference algorithm is needed. Lastly, AUREBESH does not consider inheritance in MVC applications, where models are made descendants of other models to allow model variables to be inherited. Again, we have not encountered this in practice, but it can occur.

Another limitation of AUREBESH is that it works only on applications written using AngularJS. While AngularJS is the most popular client-side MVC framework, our problem formulation (Section 5.3), formal model (Section 5.4) and algorithm (Section 5.5) are all fairly generic and can be applied to other MVC frameworks with minimal modifications.

Threats to Validity

One internal validity threat regards the mutation types used in our fault injection experiment, and how representative they are of both our inconsistency model and real-world bugs. To address this issue, we selected the mutation types such that they all map to the consistency properties presented in Section 5.4. In addition, each of the 15 real-world bugs that we found in one of our experiments maps to a mutation type, as described in Section 5.7.3, giving an indication of the mutation types' representativeness.

As with any experiment that considers a limited number of subject systems, the generalizability of our results may be called into question, which is an external threat to validity.
Unfortunately, since AngularJS is a fairly new framework, applications using this framework are quite scarce. Fortunately, the AngularJS GitHub page provides a list of web applications using that framework; to mitigate the external threat, we chose applications of different types and sizes from this list.

Finally, the source code of all the subject systems we considered in our experiments is available online; further, we kept our own records of the source code of these systems that AUREBESH analyzed. Our tool, AUREBESH, is also publicly available. Hence, our experiments are reproducible.

5.9 Related Work

MVC has been applied to various domains, and one of its earliest uses can be traced back to Xerox PARC's Smalltalk [91]. The pattern has also been applied to the server-side of web applications [94], where the model and controller are implemented on the server and the view is represented by the HTML output on the client. Since the application of MVC to client-side web application programming is a fairly recent development, there are only a few papers addressing this topic. Much of the research in this area has focused on the application of the MVC model to JavaScript development, tailored towards specific application types [54, 72, 162]. Studies on JavaScript MVC frameworks' properties have been limited to an analysis of their maintainability [23, 86]. Unlike our present work, these studies do not consider the presence of consistency issues in JavaScript MVC applications, nor do they propose an approach for analyzing MVC application code.

Several papers have analyzed the characteristics of common JavaScript frameworks, such as jQuery [55, 62, 98, 147]. Richards et al. [144] and Ratanaworabhan et al. [142] analyze the effect that different frameworks have on the dynamic behaviour of JavaScript in web applications. Feldthaus and Møller [52] look at TypeScript interfaces, and propose a tool that checks for the correctness of these interfaces.
In prior work, we also briefly explored the relationship between JavaScript frameworks and JavaScript faults that occur in production websites [128]. Our current work differs from these studies in that they consider non-MVC frameworks, which have different usage patterns compared to MVC frameworks.

Finally, considerable work has been done on the application of MVC on the server side [67, 118, 123, 155], where frameworks such as Spring MVC and JSF are used. Wojciechowski et al. [177] compared different MVC-based design patterns on the server side, and analyzed the frameworks' characteristics, such as their susceptibility to file upload issues. In contrast, our work is concerned with the client-side of web applications.

5.10 Conclusion and Future Work

In this chapter, we presented an automated technique and tool called AUREBESH, which statically analyzes applications written using AngularJS – a popular JavaScript MVC framework – to detect inconsistencies in the code. Our evaluation of AUREBESH indicates that it is accurate, with an overall recall of 96.1% and a precision of 100%. We also find that it is useful in finding bugs in MVC applications – in total, we found 15 real-world bugs in 22 AngularJS web applications.

Chapter 6

Cross-Language Inconsistency Detection

This chapter describes another technique for automatically detecting faults in MVC applications.28 While effective, our first fault detector (AUREBESH) only works for one JavaScript framework (AngularJS), and can only detect four types of inconsistencies. These limitations led us to ask the following question: Could we design a more general fault detection technique? Answering this question will allow us to design a technique that works for a larger set of JavaScript frameworks, and can detect a larger set of inconsistencies.
Having these considerations in mind, we designed a technique called HOLOCRON,29 which provides a more holistic response to RQ1C in Chapter 1.1.

In the sections that follow, we describe the general problem of detecting faults in JavaScript-based web applications, and we motivate our decision to apply automatic fault detection specifically to MVC applications. Thereafter, we describe our fault detection approach.

28 The main study in this chapter is in preparation for submission to a software engineering conference.
29 HOLOCRON is named after a continuity database used to ensure the consistency of Star Wars canon.

6.1 Introduction

With JavaScript's increase in popularity also comes increased expectations of JavaScript's reliability. Unfortunately, despite this greater demand for reliability, JavaScript is still notoriously error-prone [128], and, as our bug report study reveals, these errors often lead to high-impact consequences such as data loss and security flaws [130, 145]. To mitigate this problem, web developers rely heavily on testing, and many researchers have developed tools to enhance this testing process [15, 101, 109, 116, 137]. While testing is an integral part of the software development phase, the large number of states found in web applications often renders testing insufficient in detecting many bugs in such applications.

An alternative to testing is static code analysis, which allows programmers to find bugs by reasoning about the program, without having to execute it. Several techniques have been proposed to automatically detect JavaScript bugs through code analysis. For example, JSLint [43] detects syntactic errors in JavaScript programs. Other work by Jensen et al. [78, 79] analyzes JavaScript code to find type inconsistencies. Finally, in Chapter 5, we also proposed AUREBESH [132], which is a technique for automatically detecting inconsistencies in AngularJS web applications.
A common issue with the above techniques is that they detect bugs based on a predefined list of inconsistency rules or bug patterns. As a result, the bugs they detect will be limited to those encompassed by these hardcoded rules. This is especially problematic for web applications, which use a wide variety of frameworks and libraries, each with its own coding rules and conventions. Moreover, web frameworks typically evolve very fast, and hence hardcoded rules may become obsolete quickly.

An alternate approach for finding bugs is anomaly detection, proposed by Engler et al. [49] and commercialized as Coverity [161]. Instead of hardcoding rules as the above techniques do, this approach looks for deviant behaviours in the input application's code, with these deviations providing an indication of potential bugs in the program. This approach has the advantage that it can learn rules from common patterns of behaviour, and hence the rules do not need to be updated for each framework. Anomaly-based approaches, however, only support single-language bug detection, and hence will not be able to find bugs resulting from cross-language interactions. This makes them not particularly suitable for JavaScript-based web applications, because such applications frequently involve cross-language interactions; for example, HTML code often sets up event handlers by embedding JavaScript functions as attribute values, and JavaScript code often retrieves HTML elements by using DOM element selectors.

In response to the above issues, we introduce a new technique for automatically detecting web application inconsistencies that is both general-purpose and cross-language. In this context, an inconsistency pertains to two web application code components where one component makes an erroneous assumption about the other component, thereby leading to a bug. Our approach is general because it makes no assumptions about which two code components are inconsistent.
Further, it is cross-language because the two incompatible code components involved in the inconsistency can come from different programming languages (i.e., JavaScript and HTML). To the best of our knowledge, ours is the first anomaly-based inconsistency detection approach for web applications that is able to deal with cross-language interactions.

As in Chapter 5, we continue to focus on detecting inconsistencies in MVC applications, which, as defined in Chapter 5, are web applications implemented using JavaScript MVC frameworks such as AngularJS, BackboneJS, and Ember.js. We focus on MVC frameworks due to their rising popularity [160], and the fact that they do not interact directly with the DOM, which makes static analysis suitable for understanding such applications.

We make the following contributions in this work:

• We demonstrate that there are many inconsistency classes in MVC applications, and that there is no single inconsistency class that particularly dominates over the others. Further, many of these inconsistencies span multiple programming languages;

• We propose a general, cross-language technique for automatically detecting inconsistencies in MVC applications. Unlike prior work, our approach does not look for specific inconsistency classes, but instead uses subtree pattern matching to infer these classes. Further, it uses association rule mining to find the cross-language links between the HTML and the JavaScript code;

• We implement our technique in a tool called HOLOCRON, and we evaluate the tool on 12 JavaScript MVC applications. We find that HOLOCRON detects a total of 18 bugs in these 12 applications, many of which result from cross-language inconsistencies and hence cannot be found by other tools. Further, HOLOCRON also finds many code smells in these applications.
On average, one of every two inconsistencies reported by HOLOCRON is either a bug or a code smell.

6.2 Background and Motivation

In this work, we target a general class of bugs that we call inconsistencies, which we formally define shortly. In addition, we focus on detecting inconsistencies in MVC applications statically. Recall from Chapter 5 that an MVC application consists of a model, which defines the application data; a controller, which defines the functions that manipulate the values of the application data; and a view, which uses the data and functions defined in the model and controller to define a user interface.

Static analysis is often sufficient for MVC applications, as they rely primarily on JavaScript bindings instead of DOM interactions; hence, even though the DOM still changes, the JavaScript code interacts primarily with these static bindings instead of directly interacting with the DOM itself. In contrast, frameworks such as jQuery rely on direct DOM interactions with JavaScript, which would force us to perform dynamic analysis to keep track of the changing DOM states.

6.2.1 Definitions

We define a code component to be any contiguous piece of JavaScript or HTML code that could span a single line (e.g., function call, HTML text, etc.) or multiple lines (e.g., function definition, view definition, etc.). These code components can be represented by subtrees of the JavaScript code's Abstract Syntax Tree (AST) or the HTML code's DOM representation; we use these subtrees in our design, as described in Section 6.3. We define an inconsistency as follows.

Definition 4 (Inconsistency) Two code components compA and compB are inconsistent if compA makes an erroneous assumption about compB – that is, compA incorrectly assumes that compB possesses a particular property that it does not have – where the erroneous assumption can be implicitly inferred from the code (e.g., without having to rely on specifications).
An inconsistency refers to any pair (compA, compB) that are inconsistent.

Therefore, an inconsistency is a bug that can potentially be discovered without the help of external specifications, which means that detecting these inconsistencies lends itself to an automated analysis of the web application code. An inconsistency is considered cross-language if compA and compB belong to different programming languages.

In our evaluation of AUREBESH in Chapter 5, we found that four classes of these inconsistencies occur in MVC applications. For example, we found inconsistencies in which the view components in the HTML code use variables that are erroneously assumed to be defined in the model components in the JavaScript code. In Section 6.5.3, we demonstrate through another study of bug reports that these inconsistencies abound in MVC applications, and that many of them go much beyond the classes found in this prior study. Thus, AUREBESH will not work for these other classes, which is why a new approach is needed.

6.2.2 Motivating Examples

We introduce two examples of real inconsistencies found in open-source MVC applications. The first inconsistency comes from an AngularJS application [164], and the second inconsistency comes from a BackboneJS application [103].

AngularJS Example. In this application, the JavaScript code attempts to close a modal instance by calling the close() method, as follows:

    $modalInstance.close('close');

However, this leads to incorrect application behaviour (i.e., a dialog box becomes broken), as the $modalInstance service has been replaced by $uibModalInstance in the newer version of AngularJS being used by the application. In this case, the function call above incorrectly assumes that the service object being dereferenced is valid, thereby leading to the inconsistency.
This example demonstrates the potential usefulness of a learning-based approach for finding these inconsistencies, as the evolution of framework APIs often modifies coding rules or introduces new ones.

BackboneJS Example. In this example, the JavaScript code is attempting to bind an element from an HTML view template to a layout view object by assigning the el property with an element selector, as shown below:

    Marionette.LayoutView.extend({
      el: '.some-view',
      ...
    });

In this case, the selector '.some-view' does not correspond to any elements in the HTML template, which causes the binding to fail. In other words, the view incorrectly assumes that a particular element with the class "some-view" is defined in the HTML template.

6.2.3 Challenges

One of the main challenges of this work is that we need to infer programmer intent in order to label code components as inconsistent. For example, in the AngularJS example above, how do we know that $modalInstance is an incorrect service name? Usually, this inference is carried out using specifications, but these specifications are typically not available in web applications. One approach is to leverage repeated appearances of the same code pattern to infer intent. Any deviations from this pattern are likely to be inconsistencies. Further, the more examples there are of the same pattern, and the fewer counterexamples there are, the more likely it is to be an actual pattern. For example, in the AngularJS example, there are multiple instances of $uibModalInstance.close(...), which is a near-match, albeit with a different service name. This indicates that the service name $modalInstance is likely incorrect.

Another challenge is that we have to deal with cross-language inconsistencies, as this forces our design to infer "links" between code components coming from different programming languages.
For instance, in the BackboneJS example above, our design needs to infer that the value of the el property needs to be a selector for an element in the HTML template. We could simply hardcode this relationship in our detector, but this link is specific to the BackboneJS framework, and will therefore not work for applications written using other frameworks. Therefore, we need a general approach to discover the link.

Figure 6.1: Overview of our fault detection approach (the JavaScript and HTML code are transformed into CodeTrees; code patterns are found from subtrees; intra-pattern rules and link rules are inferred; and rule violations are reported as inconsistencies).

6.3 Approach

The block diagram in Figure 6.1 presents an overview of our approach. As the diagram shows, our approach takes the web application's JavaScript and HTML code as input, and transforms these pieces of code into their corresponding AST and DOM representations, respectively. As explained in Section 6.3.1, the AST and the DOM trees are transformed into another tree object called a CodeTree, which allows the approach to perform standardized operations on those trees. In addition to the trees generated from the input web application, our technique also retrieves the AST and DOM of other web applications that use the same framework; these web applications are retrieved from the web.

Once the CodeTrees are generated for the input and sample code, the approach analyzes the trees to find commonly repeated patterns in them (Section 6.3.2). To do so, it looks for subtree repeats, which by definition are subtrees that appear multiple times in the CodeTrees; these subtree repeats represent common code patterns present in the web application, and will be used to establish the consistency rules.

After finding the subtree repeats, the approach looks at each code pattern found in the previous module and formulates consistency rules based on them. Our approach looks at two levels of consistency rules.
On the first level, our approach infers intra-pattern consistency rules, which are consistency rules defined by the individual code patterns themselves. On the second level, our approach also infers inter-pattern consistency rules (i.e., link rules), which are consistency rules inferred based on pairs of code patterns. As described in Section 6.3.3, these link rules allow our approach to find consistency rules that have to do with the interaction between code written in different programming languages.

Finally, once the consistency rules have been inferred, our approach finds deviations from these rules, based on a comparison between the CodeTree objects (which represent the AST and the DOM) and the inferred consistency rules (Section 6.3.4).

6.3.1 Transforming Code into Trees

The first module of our approach transforms the JavaScript and the HTML code of the input web application into their corresponding AST and DOM representations. More specifically, an AST is constructed for each JavaScript file (or JavaScript code within the same script tag), and a DOM representation is created for each HTML file. These transformations are done to simplify analysis, as trees are a well-studied data structure for which many search and comparison algorithms have been proposed. It also makes our approach easier to extend to other languages, as code-level analysis would have required complicated parsing algorithms that rely on knowledge of the syntax of specific languages.

In order to standardize the way that our approach operates on the ASTs and the DOMs, we transform them both into a data structure called the CodeTree. A CodeTree is defined as a tree T(V, E), where V is the set of nodes in the tree, and E is the set of edges. For every node v ∈ V, we define the following properties.

• v.type: Set to "ast" if v is an AST node, or "dom" if v is a DOM node;

• v.label: Set to the node label.
If v is an AST node, the label is set to either the node type (e.g., ExpressionStatement, Identifier, etc.), or the corresponding identifier or literal value. If v is a DOM node, the label is set to a tag name (if v is an Element node), attribute name (if v is an Attribute node), or text (if v is a Text node, or an attribute value).

In addition to the above properties, for each CodeTree node, we also keep track of its parent and childNodes, as well as the lineNumber, columnNumber, and sourceFile.

6.3.2 Finding Common Patterns

The goal of our next module is to find patterns of repeating subtrees in the CodeTrees. These patterns will form the basis for the consistency rules. We first define the following terms.

Definition 5 (Subtree Repeats) Let T1, T2, ..., TN be CodeTrees, and let R(Vr, Er) and S(Vs, Es) be two different subtrees of any of these CodeTrees. Then, R and S are defined to be subtree repeats of each other if R and S are isomorphic, where two nodes are considered equal iff they have the same type and label. Hence, each node vr ∈ Vr has a corresponding node vs ∈ Vs such that vr.type = vs.type and vr.label = vs.label.

Definition 6 (Code Pattern) A code pattern C is defined as a set of subtrees, such that for every pair of subtrees R, S ∈ C, R and S are subtree repeats.

Hence, the goal of this module is to find all the code patterns in the CodeTrees generated in the previous module. Our technique for finding these code patterns is similar to some prior techniques [120], and in particular, to the approach used by Baxter et al. [25] to detect clones in the source code of a program using the AST. More specifically, our design looks for all full subtrees in each CodeTree, and assigns a hash value to each of these subtrees; note that a full subtree is a subtree that contains all the descendant nodes from the subtree's root. All subtrees that hash to the same value are placed in their own hash bin.
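As an illustrative sketch of this binning step (the node shape and names here are our own simplification, not HOLOCRON's actual data structures), subtrees can be serialized with identifier and literal labels blanked out, so that subtrees differing only in those labels hash to the same bin:

```javascript
// Simplified CodeTree nodes: `abstractable` marks identifier/literal labels
// that are temporarily treated as the empty string during hashing.
function makeNode(label, children = [], abstractable = false) {
  return { label, childNodes: children, abstractable };
}

// Serialize a full subtree into a hash key; parentheses keep the key
// unambiguous with respect to tree shape.
function hashKey(node) {
  const label = node.abstractable ? '' : node.label;
  return '(' + label + node.childNodes.map(hashKey).join('') + ')';
}

// Place every full subtree of every tree into a bin keyed by its hash.
function binSubtrees(roots) {
  const bins = new Map();
  const visit = (node) => {
    const key = hashKey(node);
    if (!bins.has(key)) bins.set(key, []);
    bins.get(key).push(node);
    node.childNodes.forEach(visit);
  };
  roots.forEach(visit);
  return bins; // bins holding more than one subtree are candidate code patterns
}

// Two (heavily simplified) call subtrees that differ only in the service
// identifier end up in the same bin:
const callA = makeNode('CallExpression', [
  makeNode('Identifier', [makeNode('$modalInstance', [], true)]),
  makeNode('close'),
]);
const callB = makeNode('CallExpression', [
  makeNode('Identifier', [makeNode('$uibModalInstance', [], true)]),
  makeNode('close'),
]);
console.log(binSubtrees([callA, callB]).get(hashKey(callA)).length); // → 2
```

In the actual approach, these bins are then checked for hash collisions and split accordingly.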
Once this process is complete, the subtrees in each hash bin are compared to detect any collisions; if there are collisions in a hash bin, the hash bin is split to resolve them. These hash bins represent the code patterns.

The difference from Baxter et al.'s technique is that, when comparing subtrees, our design abstracts out the labels of nodes pertaining to variable and function identifiers, as well as attribute values; in other words, our design temporarily treats the labels of these nodes as the empty string, so that their original labels will not be considered in the hashing. Our design also abstracts out any labels that identify the data type of a literal node (e.g., StringLiteral, NumberLiteral, etc.). Doing so enables our design to find intra-pattern consistency rules, as will be described in Section 6.3.3.

Using Code Examples from the Web. In addition to the target application, our design also looks for patterns that are found in example web applications taken from the web. The purpose of using these example applications is to allow patterns to appear more frequently, thereby giving our design greater confidence in the validity of the patterns found; this potentially decreases the rate of false negatives. Further, using these examples also allows "non-patterns" to appear less frequently, percentage-wise, thereby decreasing the rate of false positives.

The example web applications retrieved must use the same framework and framework version as the target web application. To determine this, our design analyzes the script tags of the target web application to infer the framework, and then it looks for example web applications that attach the same framework in their script tags.
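A minimal sketch of this framework-inference step might scan the src attributes of the page's script tags (the patterns and names below are our own illustration; the actual implementation may also need to compare framework versions extracted from the file name or path):

```javascript
// Hypothetical framework detection from the src values of script tags.
const FRAMEWORKS = [
  { name: 'AngularJS', pattern: /angular(-[\d.]+)?(\.min)?\.js$/i },
  { name: 'BackboneJS', pattern: /backbone(-[\d.]+)?(\.min)?\.js$/i },
  { name: 'Ember.js', pattern: /ember(-[\d.]+)?(\.min)?\.js$/i },
];

function inferFramework(scriptSrcs) {
  for (const { name, pattern } of FRAMEWORKS) {
    if (scriptSrcs.some((src) => pattern.test(src))) return name;
  }
  return null; // no known MVC framework attached
}

console.log(inferFramework(['js/lib/angular-1.4.0.min.js', 'js/app.js'])); // → 'AngularJS'
```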
To keep track of the web application, we augment each CodeTree node to also include the appId, which is a unique ID that identifies which web application the node – and hence, the tree – belongs to.

6.3.3 Establishing Rules from Patterns

Once the code patterns are found, our design proceeds to analyze these patterns to infer consistency rules. In this case, the design looks for both intra-pattern consistency rules and inter-pattern consistency rules (i.e., link rules).

Intra-Pattern Consistency Rules

As previously mentioned, intra-pattern consistency rules are defined by individual code patterns. Algorithm 3 shows the pseudocode for finding these rules, and for reporting violations of them. The main idea behind this algorithm is to concretize – one by one – the nodes that were abstracted out in the previous module, by revealing their original labels.

Algorithm 3: FindIntraPatternInconsistencies
    Input: Cset: The set of code patterns
    Input: t: The threshold for dominant subpatterns
    Output: PI: Set of intra-pattern inconsistencies
 1  PI ← ∅, remaining ← ∅;
 2  codePatternQueue ← {C | C ∈ Cset};
 3  while codePatternQueue is not empty do
 4      C ← codePatternQueue.dequeue();
 5      preorderNum ← getNextNodeToConcretize(C);
 6      if preorderNum < 1 then
 7          remaining ← remaining ∪ {C}; continue;
 8      end
 9      subPatterns ← ∅;
10      foreach subtree S ∈ C do
11          node ← getPreOrderNode(S, preorderNum);
12          markAsConcretized(node);
13          if subPatterns.hasKey(node.label) then
14              subPatterns[node.label].add(S);
15          end
16          else
17              subPatterns[node.label] = {S};
18          end
19      end
20      D ← getDominantPattern(subPatterns);
21      if 100·|D|/|C| >= t then
22          expected ← getPreOrderNode(D[0], preorderNum);
23          foreach code pattern CP ∈ subPatterns do
24              if CP ≠ D then
25                  foreach subtree S ∈ CP do
26                      inc ← getPreOrderNode(S, preorderNum);
27                      PI ← PI ∪ {(inc, expected)};
28                  end
29              end
30          end
31          codePatternQueue.enqueue(D);
32      end
33      else
34          codePatternQueue ← codePatternQueue ∪ subPatterns;
35      end
36  end
37  Cset ← mergeRemaining(remaining);

The algorithm starts by enqueuing each code pattern in
a queue (line 2). For each code pattern C in the queue, the design determines the earliest node – in depth-first, pre-order – that is still abstracted out among the subtrees in C. It achieves this by calling the getNextNodeToConcretize() function, which returns the pre-order number of the earliest node (line 5).

Figure 6.2: Example of an intra-pattern consistency rule violation (four CallExpression subtrees that call close(); one dereferences the $modalInstance identifier, while the other three dereference $uibModalInstance).

Once the pre-order number of the earliest node is determined, the actual nodes in the subtrees in C that correspond to this pre-order number are compared and marked as concretized (lines 11-12), and the subtrees are partitioned according to the label of the concretized node (lines 13-18). The partitions are included in an associative array called subPatterns (line 9).

Once the partitions are found, the algorithm looks for the dominant pattern, which represents the largest partition (line 20). If the number of subtrees in the dominant pattern constitutes greater than t% of all the subtrees in the original code pattern C, where t is a user-set threshold, all the subtrees belonging to the non-dominant patterns are considered intra-pattern inconsistencies (lines 22-32) and are discarded; here, an intra-pattern inconsistency is represented by a tuple of the inconsistent node – i.e., the node that was just concretized in the inconsistent subtree – and the expected node – i.e., the node that was just concretized in any subtree belonging to the dominant pattern (line 27).
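The partition-and-flag step at the core of Algorithm 3 can be sketched as follows (our own simplification: each subtree is flattened to an array of labels, and pos stands in for the pre-order number):

```javascript
// Partition subtrees by the label at position `pos`; if the largest
// partition holds at least t% of the subtrees, flag every member of the
// other partitions as an intra-pattern inconsistency.
function partitionAndFlag(subtrees, pos, t) {
  const partitions = new Map();
  for (const s of subtrees) {
    const label = s[pos];
    if (!partitions.has(label)) partitions.set(label, []);
    partitions.get(label).push(s);
  }
  let dominant = null;
  for (const p of partitions.values()) {
    if (dominant === null || p.length > dominant.length) dominant = p;
  }
  if ((100 * dominant.length) / subtrees.length < t) return [];
  const expected = dominant[0][pos];
  const inconsistencies = [];
  for (const [label, p] of partitions) {
    if (p === dominant) continue;
    for (const s of p) inconsistencies.push({ found: label, expected });
  }
  return inconsistencies;
}

// The AngularJS example: three $uibModalInstance calls dominate a single
// $modalInstance call, so the latter's service name is flagged.
const flagged = partitionAndFlag(
  [
    ['$modalInstance', 'close'],
    ['$uibModalInstance', 'close'],
    ['$uibModalInstance', 'close'],
    ['$uibModalInstance', 'close'],
  ],
  0,
  70
);
console.log(flagged); // → [{ found: '$modalInstance', expected: '$uibModalInstance' }]
```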
This process is repeated until there are no further nodes to concretize, after which all remaining partitions belonging to the same original code pattern at the start of the algorithm are merged (line 37).

As an example, consider the subtrees in Figure 6.2, which form a code pattern; this code pattern is found in the AngularJS example introduced in Section 6.2.2. Here, the current node being concretized is the left-most leaf node of each subtree, which, in this case, represents the name of the service being dereferenced. The subtrees are then partitioned according to the label of this concretized node. In this case, there are two partitions – one containing the left-most subtree, with the concretized node coloured red, and another containing the rest of the subtrees, with the concretized node coloured blue. The latter partition is deemed to be the dominant pattern, so the subtree in the other partition is labeled as inconsistent.

Link Rules

In addition to finding the intra-pattern consistency rules, our design also looks for consistency rules that describe the relationship between code patterns. This process not only allows our design to find relationships between pieces of code in the same programming language, but also across languages, i.e., cross-language relationships.

All link rules are of the following form: the ith pre-order node in subtree S1 is equal to the jth pre-order node in subtree S2.

Algorithm 4: FindLinkRules
    Input: (C_from, C_to): Pair of code patterns
    Output: L: Set of link rules
 1  L ← ∅;
 2  foreach (S_from, S_to) ∈ C_from × C_to do
 3      i ← 1;
 4      node_from ← getPreOrderNode(S_from, i);
 5      while node_from ≠ null do
 6          j ← 1;
 7          node_to ← getPreOrderNode(S_to, j);
 8          while node_to ≠ null do
 9              if node_from ≠ node_to and node_from.label = node_to.label then
10                  lr ← (i, S_from, j, S_to);
11                  L ← L ∪ {lr};
12              end
13              node_to ← getPreOrderNode(S_to, ++j);
14          end
15          node_from ← getPreOrderNode(S_from, ++i);
16      end
17  end
Our design finds the link rules for each pair of code patterns (C_from, C_to), as shown in Algorithm 4. In this case, the algorithm iterates through every pair of subtrees between the two code patterns (line 2). For each of these pairs of subtrees, the algorithm goes through every pair of nodes between the two subtrees (lines 3-16), and compares the two nodes to see if they have the same label. If they have the same label, a new link rule is added to the list, where each link rule is uniquely identified by the subtree pair S_from and S_to, and their respective pre-order indices i and j. The link rules found by running this algorithm will then be used to find link rule violations, as described in the next subsection.

6.3.4 Detecting Violations

Violations of the intra-pattern consistency rules are detected in conjunction with finding those rules, as described in Section 6.3.3. For the link rules, we make a distinction between unconditional link rule violations and conditional link rule violations, as we describe below.

Unconditional Link Rule Violations

A link rule violation is unconditional if the link rule is violated by a code component (represented by a subtree) regardless of where the component is located in the code. An example of an unconditional violation is the BackboneJS example in Section 6.2.2.

To determine whether a link rule lr is violated, our design goes through each pair of code patterns C_from and C_to, as before. It then determines which pairs of subtrees between C_from and C_to satisfy lr. There are two ways in which a subtree in C_from can be marked as an inconsistency in this module.
First, if a subtree S_from ∈ C_from does not satisfy the link rule lr when paired with any subtree S_to ∈ C_to, and a large percentage pv%30 of the other subtrees in C_from satisfy lr at least once, then S_from will be considered an inconsistency.

For instance, the left box in Figure 6.3 shows the code pattern to which the inconsistent code in the BackboneJS example (from Section 6.2.2) belongs. As indicated by the arrows in this figure, almost every subtree in this code pattern corresponds to a class attribute definition in the HTML code (right box in Figure 6.3); the only subtree that does not have a corresponding class attribute definition is the one with the node highlighted in red ("some-view"). This subtree is therefore labeled as an inconsistency, assuming that pv ≤ 75%.

In addition, if a subtree S_from ∈ C_from does not satisfy the link rule lr when paired with a specific subtree S_to ∈ C_to, and a large percentage of the other subtrees in C_from satisfy the link rule lr with S_to, then S_from will also be considered an inconsistency.

30 This is a parameter chosen by the user.

Figure 6.3: Example of an unconditional link rule violation; the subtrees are slightly altered for simplicity. (The JavaScript code pattern consists of el-property subtrees with the values "some-view", "some-region", "layout", and "main"; the HTML code pattern consists of class-attribute subtrees with the values "some-region", "layout", and "main".)

Conditional Link Rule Violations

A link rule violation is conditional if the link rule is violated given that the code component is located in a specific area in the code. For example, suppose a view V in the HTML code is associated with a model M in the JavaScript code. Further, suppose that the following link rule has been found: the identifier <x> in the subtree with pattern ng-model='<x>' is equal to the identifier <y> in the subtree with pattern $scope.<y>.
In this case, if there exists no subtree in the model M with pattern $scope.<y> that matches a certain subtree in the view V with pattern ng-model='<x>', then this latter subtree is considered a violation of the link rule (i.e., V is using an identifier that is never defined in the corresponding model M). Note how this link rule violation only occurs given that the subtrees being compared are located in M and V. Therefore, this is considered a conditional link rule violation.

To find the conditional link rule violations, we use a well-known data mining technique called association rule learning [6]. This technique takes a set of transactions as input, where each transaction contains a set of items that apply to that transaction. Based on an analysis of these transactions, the technique looks for rules of the form {a1, a2, ..., an} ⇒ {b1, b2, ..., bm}, where both the left and right side of the implication are subsets of all the items. In addition, the technique only reports rules that match or exceed a particular confidence value, i.e., the percentage of transactions that follow the rule.

Hence, when finding the conditional link rule violations between pairs of code patterns C_from and C_to, we create a transaction for each subtree pair (S_from, S_to). The items included in each transaction include all the link rules satisfied by the subtree pair, as well as the ancestor nodes of the root of each subtree; these ancestor nodes dictate which areas in the source code the subtrees are located in. We use the apriori algorithm [5] to infer association rules with a confidence value greater than a user-set parameter cv%; we are particularly interested in association rules of the following form:

    {an_from, an_to} ⇒ {lr}

where an_from and an_to are ancestor nodes of the subtrees S_from and S_to, respectively, and lr is a link rule.
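This mining step can be sketched as follows (a direct confidence computation specialized to rules of the form {an_from, an_to} ⇒ {lr}; HOLOCRON uses a full apriori implementation, and all names here are our own):

```javascript
// Each transaction records, for one subtree pair, the ancestor nodes of the
// two subtrees and the link rules the pair satisfies. A candidate rule
// {anFrom, anTo} => {lr} is kept if its confidence meets the cv% threshold.
function mineAncestorRules(transactions, cv) {
  const candidates = new Map(); // "anFrom|anTo" -> { total, withRule }
  for (const t of transactions) {
    const key = t.anFrom + '|' + t.anTo;
    if (!candidates.has(key)) candidates.set(key, { total: 0, withRule: new Map() });
    const c = candidates.get(key);
    c.total += 1;
    for (const lr of t.linkRules) {
      c.withRule.set(lr, (c.withRule.get(lr) || 0) + 1);
    }
  }
  const rules = [];
  for (const [key, c] of candidates) {
    for (const [lr, count] of c.withRule) {
      const confidence = (100 * count) / c.total;
      if (confidence >= cv) {
        const [anFrom, anTo] = key.split('|');
        rules.push({ anFrom, anTo, linkRule: lr, confidence });
      }
    }
  }
  return rules;
}

// Three of four (model M, view V) subtree pairs satisfy the hypothetical
// "scope-eq-ngmodel" link rule; the fourth pair does not, and would be
// reported as violating the mined rule.
const transactions = [
  { anFrom: 'model:M', anTo: 'view:V', linkRules: ['scope-eq-ngmodel'] },
  { anFrom: 'model:M', anTo: 'view:V', linkRules: ['scope-eq-ngmodel'] },
  { anFrom: 'model:M', anTo: 'view:V', linkRules: ['scope-eq-ngmodel'] },
  { anFrom: 'model:M', anTo: 'view:V', linkRules: [] },
];
console.log(mineAncestorRules(transactions, 70));
// → [{ anFrom: 'model:M', anTo: 'view:V', linkRule: 'scope-eq-ngmodel', confidence: 75 }]
```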
Once these association rules are found, they are compared against each subtree pair to determine which subtree pairs do not satisfy them; these non-satisfying subtree pairs are then reported as inconsistencies.

6.4 Implementation

We implement our technique in an open-source tool called HOLOCRON (available at ∼frolino/projects/holocron/). This tool is implemented in JavaScript as a plugin for Brackets, which is an IDE for web development developed by Adobe [3]. To use the tool, the user only needs to specify the top folder of the target web application. The output of the tool is a list of the inconsistencies found; each inconsistency is shown to the user as a message identifying the inconsistent line of code, and an example of the expected behaviour based on the consistency rule.

The JavaScript code is parsed into an AST using Esprima [71], and the HTML code is parsed into its DOM representation using XMLDOM [82]. For finding the association rules, we adopt an existing implementation of the apriori algorithm [152].

6.5 Evaluation

We now evaluate the relevance and effectiveness of HOLOCRON by answering the following research questions:

RQ1 (Prevalence of Inconsistencies): Do inconsistencies occur in MVC applications and, if so, what are the characteristics of these inconsistencies?

RQ2 (Real Bugs): Can HOLOCRON be used by developers to detect bugs in real-world MVC applications?

RQ3 (Performance): How quickly can HOLOCRON detect inconsistencies?

6.5.1 Subject Systems

For our experiments, we consider four open-source applications for each of the three main MVC frameworks (AngularJS, BackboneJS, and Ember.js), for a total of 12 applications. These three frameworks are the most widely used JavaScript MVC frameworks, experiencing a 538% growth in popularity from January 2013 to April 2016 [153]. The applications are listed in Table 6.1, with the sizes ranging from 6 to 43 KB (185-1659 LOC). These applications were taken from various lists of MVC applications available on GitHub [17, 48, 60].
In particular, the applications we chose were the first four applications from each framework found in these lists that also had a GitHub repository, with preference given to those that had a working demo, as this simplified the task of reproducing any bugs found by our tool.

6.5.2 Experimental Methodology

Prevalence of Inconsistencies (RQ1). To answer RQ1, we manually analyze bug reports that have been filed for MVC applications on GitHub. More specifically, we examine 30 bug reports for applications implemented in each of the three main MVC frameworks – AngularJS, BackboneJS, and Ember.js – for a total of 90 bug reports. We only consider fixed or closed bugs to prevent spurious reports. To find the bug reports, we use GitHub's advanced search feature, searching in particular for GitHub issues that are given the label "bug", and whose status is "closed". We perform the same search for each of the three MVC frameworks, using the keywords "angularjs", "backbone", and "emberjs", respectively. We discard any search results that correspond to applications not written in any of these three frameworks, as well as results that do not pertain to the web application's client-side code. We then take the first 30 bug reports that satisfy the conditions described from each of the three search results, and use those bug reports for our analysis. Note that we did not confine ourselves to the 12 subject systems for this analysis.

For each of the bug reports, we first determine whether the bug corresponds to an inconsistency, as defined in Section 6.2. If so, we determine the bug's inconsistency category, which is based on the inconsistent code components, as well as the erroneous assumption made by one of the components (see Section 6.2.1). Further, we also determine whether the bug is "cross-language" – that is, whether the bug results from an inconsistency between multiple programming languages.

Real Bugs (RQ2).
For RQ2, we run HOLOCRON on the subject systems described in Section 6.5.1 and record all the inconsistencies reported by the tool. We examine each of these reported inconsistencies to determine if it corresponds to a real bug (i.e., it is indicative of an error that leads the application to a failure state). Based on a pilot study, we set the intra-pattern violation threshold t to 90%, the unconditional link rule violation threshold pv to 95%, and the conditional link rule violation threshold cv to 85%. Finally, we use example code from five open-source web applications to train the analysis with more samples. These five applications include the other three subject systems using the same framework, as well as two additional applications found on GitHub [17, 48, 60]. We report the number of bugs found by HOLOCRON, as well as its precision (i.e., number of bugs per inconsistency reported). We also measure the number of code smells identified by HOLOCRON.

Performance (RQ3). We measure the amount of time it takes for HOLOCRON to run on each subject application. We report the aggregate times in each run, as well as the runtime of each module for the slowest run. We run our experiments on a Mac OS X 10.6.6 machine (2.66 GHz Intel Core 2 Duo, with 4 GB of RAM).

6.5.3 Prevalence of Inconsistencies (RQ1)

Of the 90 bug reports we studied, we found that 70% correspond to an inconsistency. These did not need the application's specifications to detect, pointing to the promise of a tool such as ours that finds inconsistencies. For example, one of the Ember.js applications passed a modal object to the buildUrl() method, even though this method, which is part of the Ember.js API, expects a string as its first parameter. This inconsistency could be inferred based on other usages of the method, which were correct.
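A minimal sketch of this bug class follows; the function body and call sites below are invented for illustration and are not the actual Ember.js buildUrl() implementation.

```javascript
// Hypothetical illustration: one call site passes an object where every
// other call site passes a string, so the deviant call can be inferred
// from the majority usage alone, without any specification.
function buildUrl(modelName, id) {
  return '/' + modelName + '/' + id;
}

buildUrl('post', 3);            // consistent with peers: '/post/3'
buildUrl('user', 7);            // consistent with peers: '/user/7'
buildUrl({ name: 'modal' }, 1); // deviant: '/[object Object]/1'
```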
The remaining 30% of the bugs, however, required prior knowledge of the application's specifications. For example, one of the bugs was caused by the fact that the programmer did not update the display style of an element to 'block'; in this case, prior specification was needed to establish that the programmer intended to modify the style to "block" rather than "none".

The per-framework results are summarized in Figure 6.4. As this figure shows, 73% of bug reports correspond to inconsistencies for the AngularJS and Ember.js applications, and 63% of bug reports correspond to inconsistencies for BackboneJS applications. These results suggest that inconsistencies are prevalent in web applications created using JavaScript MVC frameworks. We further found that 35% of the inconsistencies are cross-language. For example, one of the bugs resulted from the programmer erroneously using the data-src attribute instead of the src attribute in the HTML code, which led to incorrect bindings with the JavaScript code. Therefore, existing tools that detect bugs based on single languages alone will not be able to detect a significant percentage of these inconsistencies.

[Figure 6.4: Percentage of bug reports classified as an inconsistency for each MVC framework.]

Figure 6.5 shows the distribution of inconsistency categories we found; again, an inconsistency category is uniquely identified by the two components that are inconsistent, as well as the incorrect assumption made by one of the components. As this figure illustrates, most inconsistency categories appear only once in the bug reports we studied; for example, 30 categories had only a single inconsistency each. The category with the largest share is represented by 7 bug reports, which in this case corresponds to "Incorrect Parameter Type", i.e., a JavaScript function call erroneously assumes that one of its arguments is of a particular type (e.g., string, boolean, etc.), despite other calls to the same function passing an argument of the correct type. Further, we found a total of 41 different inconsistency categories in our experiment. The large number of categories suggests that there are many different rules that are implicitly used by programmers in writing JavaScript MVC based applications. This is why it makes sense to deploy an approach such as ours that discovers the rules automatically rather than hard-coding them.

[Figure 6.5: Number of inconsistency categories with a particular frequency. Most inconsistency categories have just 1-2 inconsistencies in them.]

Table 6.1: Number of real bugs found per application. The size is shown in KB, with lines of code (LOC) in parentheses. The size pertains to both the HTML and JavaScript code in each application, not including libraries.

Framework    Application        Size (LOC)   # of Bugs   Code Smells   Total
AngularJS    angular-puzzle     20 (608)     3           3             8
             projector          19 (569)     3           0             7
             cryptography       20 (582)     1           0             2
             twittersearch      10 (357)     1           0             1
BackboneJS   cocktail-search    10 (396)     2           6             13
             contact-manager    19 (701)     2           2             5
             webaudiosequencer  43 (1659)    1           7             15
             backbone-mobile    9 (240)      1           0             2
Ember.js     todomvc            8 (299)      0           2             3
             emberpress         21 (610)     2           11            22
             giddyup            12 (386)     1           2             9
             bloggr             6 (185)      1           0             4
OVERALL                                      18          33            91

6.5.4 Real Bugs (RQ2)

Table 6.1 shows the result of running HOLOCRON on the subject systems. In total, HOLOCRON was able to detect 18 previously unreported bugs from the 12 MVC applications. We have reported these bugs to the developers; confirmation from the developers is still pending, although we were able to manually reproduce the bugs and confirm them ourselves. As seen in this table, HOLOCRON was able to find a bug in all of the applications tested, except for one (todomvc).
Further, HOLOCRON found 12 unconditional link rule violations, 4 conditional link rule violations, and 2 intra-pattern consistency rule violations. This result demonstrates that HOLOCRON can be used by web developers to find bugs representing various types of consistency rule violations.

Further, out of the 18 real bugs found, five were cross-language inconsistencies. The bug in cryptography, for instance, results from an assignment in the JavaScript code incorrectly assuming that an element in the HTML code has a numerical value, even though it is a string. In addition, the bug in twittersearch resulted from a controller in the JavaScript code assuming that an input element in the HTML code has its type attribute defined, which is not the case. Detection of these cross-language inconsistencies is made possible by HOLOCRON looking for link rules in the applications – something which prior bug detection tools do not do.

In all, HOLOCRON was able to find bugs spanning fifteen inconsistency categories. For example, the bug in giddyup is caused by a property assignment erroneously assuming that it has a corresponding route assignment in the Ember.js router. In contrast, AUREBESH, which also targets MVC applications, will not be able to detect this bug because (1) it only considers four pre-determined inconsistency categories, to which this inconsistency found in giddyup does not belong, and (2) it only works for AngularJS applications, whereas giddyup is developed with Ember.js. Indeed, AUREBESH is only capable of detecting two of the inconsistencies that HOLOCRON identified, both of which come from angular-puzzle.

Finally, many of the bugs found have potentially severe consequences on the application. For example, the bugs in projector – which are caused by an incorrect assumption about the type of value being assigned to an object property – all cause the application to hang.
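Bugs like the one in cryptography stem from JavaScript's implicit coercion of DOM string values; a minimal sketch follows, with the variable name invented and the DOM read simulated by a string literal.

```javascript
// In the browser, this value would come from the DOM, e.g. an input
// element's value property – which is always a string, never a number.
const keySize = '128';

keySize * 2;         // 256    – multiplication coerces, masking the bug
keySize + 2;         // '1282' – addition concatenates instead of summing
Number(keySize) + 2; // 130    – explicit conversion gives the intended result
```

Because some operators happen to coerce correctly, the inconsistency can go unnoticed until a code path performs string concatenation instead of arithmetic.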
Further, the bug in webaudiosequencer causes an audio clip to no longer be playable after a sequence of input events. Thus, HOLOCRON finds bugs that can have a high impact on the application.

Precision. Like most static analysis tools, HOLOCRON incurs false positives (i.e., inconsistencies that are not bugs). False positives occurred in all but one of the applications (twittersearch). In total, HOLOCRON reported 91 inconsistencies, 18 of which are real bugs, meaning 73 were false positives.

To better understand the characteristics of these false positives, we manually analyzed the reasons for the inconsistencies. We found that 22 of these 73 false positives occur because of frequent usage of a certain identifier contrasted with infrequent usage of another identifier. For instance, in angular-puzzle, there are two main arrays, both of which are accessed through the this identifier – words and grid. While grid is used almost 20 times in the code, words is used only twice, leading to the false positive. Additionally, 18 of the false positives occur because of frequent usage of certain kinds of literals, contrasted with infrequent usage of another kind of literal. For example, in projector, most of the object method calls take string literals as parameters, but there are three such calls that take number literals as parameters, which are reported as inconsistencies.

We also found that 33 of the false positives correspond to code smells, the detection of which could help the developer improve the quality of the code. The pie chart in Figure 6.6 shows the three categories of code smells discovered by our tool in these applications. The first category is "Hardcoded Constants" (HC), where constant literal values are used in calculations and method calls. This detracts from application maintainability.

[Figure 6.6: Categories of code smells found by HOLOCRON: HC (15), UVU (14), MPI (4). HC stands for Hardcoded Constants, UVU for Unsafe Value Usage, and MPI for Multi-Purpose Identifiers.]
The second category is "Unsafe Value Usage" (UVU), which means a certain value is dereferenced without accounting for the possibility of it being null. For instance, in emberpress, one of the inconsistencies occurs as a result of the get() method being called directly through a model object repeatedly; this is potentially unsafe if the object is null, and it is good practice to call the method via the Ember object instead (i.e., Ember.get()). The third category is "Multi-Purpose Identifiers" (MPI), wherein the same identifier is being used for multiple unrelated objects (e.g., in angular-puzzle, the identifier "src" is used both as a class name for a div element in the HTML code and as a name for a puzzle object in the JavaScript code). This is a code smell as it detracts from the readability of the code, and can lead to confusion among programmers.

These results indicate that 51 out of the 91 inconsistencies reported by HOLOCRON correspond to either bugs or code smells. Hence, approximately 1 out of every 2 reports is potentially useful to the developer in improving the web application's quality.

6.5.5 Performance (RQ3)

On average, HOLOCRON ran for 1.14 minutes for each of the 12 applications. Since HOLOCRON will typically be run prior to deployment of the application, this is an acceptable overhead. The worst-case run occurred with webaudiosequencer – which is also the largest application – where HOLOCRON ran for almost 8 minutes. In this case, most of the time was spent on finding the link rule violations. This is likely because, in this module, all pairs of subtree classes are compared with each other, as well as all subtrees within these classes, to find the link rules.

6.5.6 Threats to Validity

An external threat to validity is that we used a limited number of applications in our evaluation, which calls into question the generalizability of our results.
Nonetheless, to ensure that our subject systems are representative of real bugs, we chose applications coming from various frameworks, sizes, and application types, as seen in Table 6.1.

In addition, for our study of bug reports in RQ1, we categorize the inconsistencies based on a qualitative analysis of the reports, some of which may not be very descriptive. To mitigate this, we also look at other aspects of the bug, including patches and commits associated with the bug.

Finally, the bugs that we studied for RQ1 are limited to fixed bugs with the label "bug" and with the status "closed"; however, many GitHub developers do not necessarily label bug reports, which means we may have missed certain bugs in our analysis. Nonetheless, we decided to choose bug reports this way because it facilitated the search for valid bugs to analyze. We mitigated this issue by simplifying the keywords used to search for the bug reports, so that the search results would include a larger representation of the bugs.

6.6 Discussion

The main assumption behind our approach is that sufficient examples of a consistency rule appear for it to both successfully learn the consistency rule and detect any violations of that rule. While this is the case for large applications, small applications have very few samples to learn from, and hence may incur large numbers of false positives and false negatives. To mitigate this problem, we augment our code patterns with subtrees found in code examples from other applications, as discussed in Section 6.3.2. Doing so allows our design to be more confident about the validity of the consistency rules, as well as to "debunk" any consistency rules that may lead to false positives. In fact, we found that without the example code, our false positive rates more than doubled. Using more and better examples can likewise bring down the false positive rate. This is a subject of future
This is a subject of futurework.Furthermore, our main focus in this chapter is in using HOLOCRON to detectinconsistencies in MVC applications. Nonetheless, our design can be run on webapplications using non-MVC JavaScript frameworks, such as jQuery. This maylead to a large number of inaccuracies, as the JavaScript code in these frameworksinteracts directly with the DOM, which undergoes many changes throughout theexecution of the web application. However, HOLOCRON may be able to detectinconsistencies within the JavaScript code, as well as inconsistencies between theJavaScript code and any component of the DOM that does not get modified. Todetect the remaining inconsistencies, we may need to augment our static analysisapproach with dynamic analysis. We plan to explore this direction in the future.As seen in the performance results, HOLOCRON spends most of its time look-ing for link rule violations. This can be mitigated by limiting the number of com-pared subtree pairs. However, this may result in loss of coverage. Another possibleway of improving the runtime is by employing a machine learning approach inwhich the tool would learn the link rules over multiple runs. By employing thisapproach, the tool would no longer have to find all the link rules in each run, as166some of these link rules have already been learned in prior runs. This is also asubject of future research.6.7 Related WorkFault Detection. Considerable work has been done on software fault detectionthrough code analysis [69, 74, 81, 96, 171, 173]. For example, PR-Miner [95] triesto detect violations of implicit programming patterns mined by the tool, similar toour work, although unlike HOLOCRON, it only considers rules derived from piecesof code that frequently appear together. Perhaps the closest analogue to our workin this chapter is Engler et al.’s work on finding bugs based on deviant behaviour,which makes use of the notion of “belief propagation” to infer correct programbehaviour [49]. 
The main difference with our work is that these prior techniques cannot detect cross-language inconsistencies, as they implicitly assume a single-language model.

Further, static analysis techniques such as FindBugs [73] and AUREBESH (Chapter 5) detect faults based on hardcoded rules or bug templates. Additionally, dynamic analysis techniques such as DLint [56] check consistency rules based on "bad coding practices". As shown in our evaluation (RQ1), this can lead to many missed bugs, especially for JavaScript MVC applications, as there are no specific inconsistency categories that dominate over the others. Further, the frameworks used in web applications evolve fast – thus, rules that apply today may be obsolete tomorrow, with new rules introduced.

Cross-Language Computing. An empirical study conducted by Vetro et al. [166] shows that cross-language interactions could lead to higher defect proneness, particularly for C programs. The significant number of cross-language inconsistencies we found during our bug report study strongly suggests that this also holds for JavaScript programs, and in particular, those implemented with MVC frameworks. Thus, it is important for bug-finding tools to consider cross-language inconsistencies.

Much of the work done on cross-language computing has focused on detecting the dependencies between multiple programming languages [139, 140]. Only a few techniques perform analysis in a cross-language-aware manner, including XLL [106] and X-Develop [158], both of which perform code refactoring. Further, Nguyen et al. [122] recently proposed a tool to perform cross-language program slicing for web applications, with particular focus on PHP code and its interaction with client-side code. Unlike HOLOCRON, these proposed techniques do not deal with potential inconsistencies that occur in cross-language interactions.

6.8 Conclusions

We presented an automatic fault detection technique that finds inconsistencies in JavaScript MVC applications.
Our technique analyzes the AST and DOM representations of the web application code, and it looks for both intra-pattern consistency rules and link rules; violations of these rules are then reported to the user. We implemented this approach in an open-source tool called HOLOCRON, and in our evaluation of open-source MVC applications, HOLOCRON was able to find 18 previously unreported bugs. In addition, while false positives do occur, many of these point to code smells that can help web developers improve code quality.

Chapter 7

Conclusions and Future Work

Our main goal in this dissertation was twofold. First, we wanted to understand JavaScript bugs, in order to determine if impactful client-side JavaScript bugs do indeed plague web applications, and to have a better grasp of the causes and propagation characteristics of these bugs. To achieve this first goal, we conducted a large-scale empirical study of client-side JavaScript bug reports found in the bug repositories of open-source web applications. We found, in our qualitative analysis of 502 bug reports, that these bugs fall under a small number of fault categories; in particular, the vast majority (68%) of these bugs can be classified as DOM-related faults. Further, we found that a large number of these bugs have severe impact on the web application, and most of these severe bugs are themselves DOM-related. Finally, we identified common error patterns among the bugs we studied, and discovered that most of these errors are committed in client-side code.

Having attained sufficient understanding of JavaScript bugs, our second goal was to develop automated techniques to mitigate them. As discussed in Chapter 1, we decided to pursue fault detection and repair as our mitigation strategy instead of error prevention.
While we acknowledge the importance of error prevention in potentially reducing the maintenance costs and the person-hours spent fixing bugs, we believe that fault detection is likewise important, due to the inevitability of bugs not just in JavaScript programming, but in programming in general.

To achieve this second goal, we developed various JavaScript fault detection and repair techniques. More specifically, we first developed AUTOFLOX, which performs automatic localization of DOM-related faults, as well as VEJOVIS, which automatically suggests repairs for the same fault model. Subsequently, we also developed automatic fault detection techniques for MVC applications, which do not interact directly with the DOM, but are nonetheless susceptible to inconsistencies between their various components. The first fault detection technique – AUREBESH – automatically detects identifier and type inconsistencies, and the second technique – HOLOCRON – is a generalized detector that finds both single-language and cross-language inconsistencies. All of these techniques have been implemented as open-source tools, which are available online [125].

The rest of this chapter discusses the expected impact of this dissertation, as well as future avenues of research that could be explored to extend this work.

7.1 Expected Impact

Perhaps the most important contribution of this dissertation is the fact that it places the problem of JavaScript reliability in the limelight of web application research. In the past, researchers and developers focused exclusively on the language's security and performance because, initially, JavaScript was used minimally; hence, they were only concerned with ensuring that the JavaScript code did not provide a backdoor for attacks, and that it did not negatively affect the speed of the web application.
However, with the rising popularity of AJAX, the situation has changed drastically, with JavaScript playing a much bigger role in the web application's functionality, thereby increasing demand for more reliable JavaScript code; and indeed, as we have demonstrated, JavaScript reliability is a real problem plaguing web applications. Further, we have demonstrated that solutions do exist for alleviating this problem. Our hope, then, is that researchers will continue to recognize this importance and strive to improve JavaScript reliability.

One of our main discoveries in this dissertation is the prevalence of DOM-related faults, many of which had high severity and were difficult to fix, as demonstrated in Chapter 2. This result from our bug report study became the driving force for the remainder of the dissertation; it inspired us to develop both automated techniques for localizing and repairing these DOM-related faults, as well as automated techniques for detecting inconsistencies in MVC applications, many of which resemble DOM-related faults inasmuch as they are cross-language. We believe the importance of this result will continue to have an impact on future research related to the analysis and mitigation of JavaScript faults.

In particular, the prevalence of DOM-related faults suggests two important things. First, it indicates that programmers have a difficult time understanding the DOM, or at least keeping track of its many states and nodes. As discussed in Chapter 2, this is essentially a call-to-action for web developers and testers to have a more "DOM-aware" attitude when programming and testing. Furthermore, it is also a call-to-action for researchers trying to apply error prevention techniques to web applications, such as code completion and program comprehension. More specifically, such techniques must also be "DOM-aware", in the sense that they must give the JavaScript programmer a better grasp of the DOM state and state transitions.
Indeed, others have begun to do work steering in this direction, including Dompletion [19], which performs code completion for DOM API methods; Clematis [8], which is a program comprehension tool for web applications; and LED [21], which performs automatic synthesis of DOM element selectors.

Second, we believe that the prevalence of DOM-related faults is welcome news for researchers trying to develop analysis techniques for improving web application reliability. This is because the DOM is a well-structured piece of data, and this structure can be exploited to more easily understand how the JavaScript code interacts with the DOM, or to debug a particular JavaScript fault. In fact, we used this observation as one of the driving principles behind VEJOVIS, which suggests fault repairs by modifying the erroneous DOM method parameter to match, not some specifications provided by the user, but rather the current DOM's structure. This observation can also be useful for other analysis tasks, including code smell detection and test case generation, among others.

There has been considerable work done on statically analyzing JavaScript code to check for errors and code smells based on syntax. Hence, many of the tools adopted in practice that perform JavaScript analysis – including JSLint [43] and Closure [57] – are only concerned with deviations from syntax, without looking at the logic behind the program. On the other hand, our detection, localization, and repair tools look precisely at the semantics of the program, and try to understand these semantics. We therefore hope that the techniques we propose in this dissertation will inspire others to improve the state of JavaScript tooling, by not relegating themselves to syntax analysis, but also paying attention to JavaScript semantics.

The impact of this dissertation also goes beyond JavaScript and web applications.
In Chapter 6 in particular, we explored cross-language interactions, and demonstrated how it is possible to analyze them and, in this case, to detect inconsistencies that take place as a result of these interactions. This is an important contribution because cross-language programming – also known as heterogeneous or polyglot programming – occurs in a wide range of applications relevant to the modern world, and is applied very frequently in industrial projects [170]. For example, native Android apps are typically implemented using Java (for the activities) and XML (for the manifest file, layout, and other UI components); apps for other mobile devices are implemented in a similar fashion. In addition, cross-language interactions also occur in applications written for the cloud, as well as Internet of Things (IoT) applications [31]. Therefore, the idea behind HOLOCRON can be repurposed to apply to these other systems as well.

Furthermore, while the identification of DOM-related faults as a common fault pattern is a result specific to web applications, at its core, it points to the difficulty that software programs have in interacting with their environment. For example, it is very common for programs to interact with the file system. Unlike the DOM, the file system is not written and designed specifically for the needs of the program being written; as a result, file accesses are often restricted to very specific folders whose internal structure is not as complex as that of the DOM. Nonetheless, the interaction of programs with the file system shares many characteristics with the interaction of the JavaScript code with the DOM, such as the retrieval of files using filenames to read and/or manipulate their contents, as well as the dynamic nature of the file system that results from these interactions.
Hence, our knowledge regarding DOM-related faults could be used as a starting point towards understanding bugs in other software applications that result from their interaction with their environment.

Lastly, the dissertation also sheds light on how to properly conduct research related to reliability issues in software applications. In particular, we have demonstrated the value of conducting an empirical study first, instead of jumping to conclusions about the importance of a particular class of bugs. Conducting such a study is useful for two reasons. First, it allows the researcher to demonstrate the importance of a particular class of bugs, which is done by establishing their prevalence and impact. Second, the study allows the researcher to gain valuable insights about the characteristics of these bugs – including their root causes, propagation characteristics, and failure characteristics – which puts the researcher in a better position to design techniques for mitigating them. We used this approach in this dissertation, and it allowed us not only to gain a better understanding of certain classes of bugs – in particular, DOM-related faults and cross-language inconsistencies – but also to create tools that make use of this newfound understanding.

7.2 Future Work

There are several ways to extend the work carried out in this dissertation, including the following.

Tool Extensions. Even though our tools were implemented primarily with client-side JavaScript in mind, many of the techniques we introduce can potentially be extended to work for other programming languages. Nevertheless, there are some challenges that must be addressed in the future in order to achieve full compatibility with other languages. For example, while the subtree pattern matching technique used in HOLOCRON can work for any abstract syntax tree (AST), a general-purpose version of HOLOCRON that supports arbitrary programming languages will need to perform syntax inference in order to generate the correct AST.
Additionally, while VEJOVIS in principle can be applied to other event-driven programming languages that interact with an external entity similar to the DOM (e.g., Java code used in Android mobile apps), the type of information collected to infer the symptoms will differ depending on the features of the language.

Our fault detection tools (AUREBESH and HOLOCRON) can also be extended to work for other types of JavaScript frameworks apart from MVC. However, doing so can be challenging. In particular, one of the benefits of focusing on MVC frameworks is that it allows us to forego dynamic analysis, because these MVC frameworks rely on static bindings between the JavaScript code and an HTML template, instead of interacting directly with the DOM. On the other hand, non-MVC frameworks typically rely on direct interactions with the DOM; as a result, whenever the state of the DOM changes, the consistency relationship between the JavaScript code and the DOM also changes, which means dynamic analysis is required to keep track of these changes. Determining how to conduct this dynamic analysis is an interesting avenue for future work.

Finally, implementing our techniques as IDE extensions may also be worthwhile from the standpoint of usability. We have already started heading in this direction, with AUTOFLOX being implemented as an Eclipse plugin (Section 3.5), and HOLOCRON being implemented as a Brackets plugin (Section 6.4). In the same way, VEJOVIS, which is already written in Java, can be extended as an Eclipse plugin, and AUREBESH, which is already written in JavaScript, can be refactored as a Node.js application to work in the Brackets IDE.

DOM Analysis. One of the recurring issues we had to deal with when designing our fault detection, localization, and repair tools was determining how to statically analyze code in the presence of a dynamic DOM.
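A small runnable sketch of the difficulty (the `document` object below is a stub standing in for the browser DOM, with a hypothetical `addElement` helper rather than the real DOM API): the element a lookup refers to may not exist anywhere in the static HTML, because the script itself created it.

```javascript
// Stub DOM: initially contains only the elements from the static HTML.
const document = {
  ids: new Map([['container', { id: 'container', children: [] }]]),
  getElementById(id) { return this.ids.get(id) || null; },
  addElement(id) { // hypothetical helper; real code would use createElement
    const e = { id, children: [] };
    this.ids.set(id, e);
    return e;
  },
};

// Runtime mutation: ids like 'row-0' exist only after this loop executes,
// so an analyzer that only inspects the static HTML would not know of them.
const container = document.getElementById('container');
for (let i = 0; i < 3; i += 1) {
  container.children.push(document.addElement('row-' + i));
}

const row = document.getElementById('row-0'); // valid at runtime only
```

A purely static check against the HTML would wrongly flag the final lookup, while a purely dynamic check depends on which execution paths are exercised.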
In many ways, this problem resembles traditional pointer analysis, in that it requires the code analysis tool to keep track of multiple references to the same object – in this case, a DOM element. However, analyzing the DOM can also be more challenging because, unlike traditional pointers, the initial state of the DOM (i.e., the HTML code) is developed separately from the JavaScript code. As a result, while analysis of C code, for example, suffices for understanding pointers, analysis of JavaScript code does not suffice for understanding the DOM, and an accurate model of this external DOM object is required. This calls on researchers to attain a stronger understanding of DOM analysis techniques, and to explore various ways of conducting such an analysis.

JavaScript Framework Analysis. In order to address issues with JavaScript programming, many developers either create new frameworks/libraries (e.g., jQuery, AngularJS) or add new features to JavaScript by creating a superset of the language (e.g., TypeScript, Dart). For example, jQuery was written, in part, to simplify the process of making JavaScript code cross-browser compatible. In addition, TypeScript was created to help developers keep track of and enforce variable types. Therefore, the choice of a JavaScript framework or a JavaScript-based technology fundamentally affects the way that a developer writes code, which, in turn, potentially affects the reliability of the program. In that regard, it would be helpful to conduct a thorough study of JavaScript frameworks from the standpoint of reliability. Such a study entails determining what features recurrently degrade the program's reliability, and what features potentially enhance it.
As a starting point, this study can be coupled with the results of our bug report study in Chapter 2, where the results can be used to create hypotheses (e.g., are frameworks that involve heavy cross-language interactions buggier than those that do not?).

On a related note, it would also be interesting to explore some of the ways that we can create new frameworks with the goal of improving reliability. This process will be simplified by conducting the empirical study on existing JavaScript frameworks suggested above, as the study would enable us to decide what properties our new framework would or would not have. Nonetheless, one property already stands out based on the results of this dissertation: it would be useful to design a framework that minimizes the number of cross-language interactions set up by the programmer, or at least makes such interactions more manageable for the programmer. To some extent, MVC frameworks have succeeded in addressing part of this issue by abstracting out the DOM; however, they are still susceptible to a large number of cross-language inconsistencies, as we demonstrated in Chapters 5 and 6.

Other Application Types. The programming processes for many new application types resemble client-side web application programming. For instance, as mentioned in Section 7.1, mobile, cloud, and IoT applications involve cross-language interactions. Even without deviating from the realm of web applications, server-side programming is also beginning to resemble client-side programming, especially with the increase in popularity of Node.js. Hence, it is reasonable to look for ways to apply some of our techniques in this dissertation to these other domains. For example, many IoT applications also follow an event-driven execution model [93], so our method for stitching together separate asynchronous execution traces can be used to localize faults that occur in them, similar to what AUTOFLOX does.
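A minimal sketch of the stitching idea, using a synchronous event bus in place of the DOM and a hypothetical trace-segment format (this is not AUTOFLOX's actual implementation): each handler runs in its own trace segment, and the segment records which segment registered the handler, so a fault inside a callback can be traced back across the asynchronous boundary.

```javascript
const segments = [];
let current = null;

// Run `fn` inside a fresh trace segment linked to `parent`.
function runSegment(label, parent, fn) {
  const seg = { label, parent, events: [] };
  segments.push(seg);
  const prev = current;
  current = seg;
  try { fn(); } finally { current = prev; }
  return seg;
}

// Tiny synchronous event bus standing in for DOM event registration.
const handlers = new Map();
function on(event, fn) {
  // Remember which segment registered the handler: the stitch point.
  handlers.set(event, { fn, registeredIn: current });
}
function emit(event) {
  const h = handlers.get(event);
  runSegment('handler:' + event, h.registeredIn, h.fn);
}

const main = runSegment('main', null, () => {
  on('click', () => { current.events.push('clicked'); });
});
emit('click'); // fires later, but its segment links back to `main`

// Stitch: walk parent links from the handler's segment back to the root.
const handlerSeg = segments[segments.length - 1];
const chain = [];
for (let s = handlerSeg; s; s = s.parent) chain.push(s.label);
```

The key design choice is recording the registration-time segment at the moment the callback is installed; without that link, the handler's trace would start from an empty stack and the fault's origin would be lost.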
In addition, as mentioned in Section 7.1, the file system APIs used in these applications are similar to the DOM API in client-side JavaScript. Therefore, VEJOVIS' technique of suggesting repairs by comparing erroneous DOM API parameters to the DOM can be applied to these other applications; the difference, of course, is that the repairs are suggested by comparing erroneous parameters to the file system instead of the DOM.

Despite the above similarities, there are also differences in the way these other applications are programmed. In server-side JavaScript applications, for example, there is no DOM involved, but there is greater emphasis on JavaScript file imports, data validation, and database accesses. Further, cloud and IoT applications can run in a multi-threaded environment, and focus on distributed computing. It would therefore be interesting to see what types of bugs occur in these other applications, and whether these bug categories differ significantly from those found in client-side JavaScript programs.

Bibliography

[1] R. Abreu, P. Zoeteweij, and A. Gemund. Spectrum-based multiple fault localization. In Proceedings of the International Conference on Automated Software Engineering (ASE), pages 88–99. IEEE Computer Society, 2009. → pages 50
[2] Adobe. JSEclipse, 2016. (Accessed: May 27, 2016). → pages 4
[3] Adobe Systems. Brackets, 2015. (Accessed: May 16, 2015). → pages 4, 157
[4] H. Agrawal, J. Horgan, S. London, and W. Wong. Fault localization using execution slices and dataflow tests. In Proc. Intl. Symposium on Software Reliability Engineering, pages 143–151. IEEE Computer Society, 1995. → pages 50, 80
[5] R. Agrawal and R. Srikant. Fast algorithms for mining association rules in large databases. In Proceedings of the International Conference on Very Large Databases (VLDB), pages 487–499. Morgan Kaufmann Publishers Inc., 1994. → pages 157
[6] R. Agrawal, T. Imieliński, and A. Swami. Mining association rules between sets of items in large databases.
In Proceedings of the International Conference on Management of Data (SIGMOD), pages 207–216. ACM, 1993. → pages 157
[7] and Mozilla. Ace, 2015. (Accessed: May 16, 2015). → pages 131
[8] S. Alimadadi, S. Sequeira, A. Mesbah, and K. Pattabiraman. Understanding JavaScript event-based interactions. In Proceedings of the International Conference on Software Engineering (ICSE), pages 367–377. ACM, 2014. → pages 79, 171
[9] E. Andreasen and A. Møller. Determinacy in static analysis for jQuery. In Proceedings of the International Conference on Object Oriented Programming Systems Languages & Applications (OOPSLA), pages 17–31. ACM, 2014. → pages 5
[10] S. Andrica and G. Candea. WaRR: High Fidelity Web Application Recording and Replaying. In Proc. Intl. Conference on Dependable Systems and Networks (DSN). IEEE Computer Society, 2011. → pages 4, 67, 79
[11] J. Anvik, L. Hiew, and G. C. Murphy. Who should fix this bug? In Proc. Intl. Conference on Software Engineering (ICSE), pages 361–370. ACM, 2006. → pages 22
[12] Aptana Inc. Aptana Studio, 2013. (Accessed: October 10, 2013). → pages 4
[13] A. Arcuri and X. Yao. A novel co-evolutionary approach to automatic software bug fixing. In IEEE World Congress on Evolutionary Computation (CEC), pages 162–168, 2008. → pages 111
[14] S. Artzi, J. Dolby, F. Tip, and M. Pistoia. Practical fault localization for dynamic web applications. In Proc. Intl. Conference on Software Engineering, pages 265–274. ACM, 2010. → pages 4, 80
[15] S. Artzi, J. Dolby, S. Jensen, A. Møller, and F. Tip. A framework for automated testing of JavaScript web applications. In Proceedings of the International Conference on Software Engineering (ICSE), pages 571–580. ACM, 2011. → pages 4, 49, 79, 143
[16] J. Ashkenas. BackboneJS, 2015. (Accessed: May 16, 2015). → pages 113
[17] J. Ashkenas. BackboneJS: Tutorials, blog posts and example sites, 2016. (Accessed: April 29, 2016). → pages 158, 159
[18] S. Bae, H. Cho, I. Lim, and S. Ryu.
SAFEWAPI: web API misuse detector for web applications. In Proceedings of the International Symposium on the Foundations of Software Engineering (FSE), pages 507–517. ACM, 2014. → pages 47, 78
[19] K. Bajaj, K. Pattabiraman, and A. Mesbah. Dompletion: DOM-aware JavaScript code completion. In Proceedings of the International Conference on Automated Software Engineering (ASE), pages 43–54. ACM, 2014. → pages 4, 171
[20] K. Bajaj, K. Pattabiraman, and A. Mesbah. Mining questions asked by web developers. In Proceedings of the Working Conference on Mining Software Repositories (MSR), pages 112–121. ACM, 2014. → pages 2, 36, 47
[21] K. Bajaj, K. Pattabiraman, and A. Mesbah. LED: Tool for synthesizing web element locators. In Proceedings of the International Conference on Automated Software Engineering (ASE), pages 848–851. IEEE Computer Society, 2015. → pages 171
[22] L. Bak and K. Lund. Dart, 2015. (Accessed: May 16, 2015). → pages 10, 22
[23] V. Balasubramanee, C. Wimalasena, R. Singh, and M. Pierce. Twitter Bootstrap and AngularJS: Frontend frameworks to expedite science gateway development. In Proceedings of the International Conference on Cluster Computing (CLUSTER), page 1. IEEE Computer Society, 2013. → pages 115, 140
[24] A. Bandyopadhyay and S. Ghosh. Tester feedback driven fault localization. In Proceedings of International Conference on Software Testing, Verification and Validation (ICST), pages 41–50. IEEE Computer Society, 2012. → pages 79
[25] I. D. Baxter, A. Yahin, L. Moura, M. S. Anna, and L. Bier. Clone detection using abstract syntax trees. In Proceedings of the International Conference on Software Maintenance (ICSM), pages 368–377. IEEE Computer Society, 1998. → pages 150
[26] Berico. Damerau-Levenshtein Java implementation, 2014. (Accessed: June 5, 2014). → pages 104
[27] N. Bettenburg, S. Just, A. Schröter, C. Weiss, R. Premraj, and T. Zimmermann. What makes a good bug report? In Proceedings of the International Symposium on the Foundations of Software Engineering (FSE), pages 308–318.
ACM, 2008. → pages 10
[28] P. Bhattacharya, M. Iliofotou, I. Neamtiu, and M. Faloutsos. Graph-based analysis and prediction for software evolution. In Proc. Intl. Conference on Software Eng. (ICSE), pages 419–429. IEEE Computer Society, 2012. → pages 21
[29] B. Braun, P. Gemein, H. P. Reiser, and J. Posegga. Control-flow integrity in web applications. In Proc. of Intl. Symp. on Engineering Secure Software and Systems (ESSOS), pages 1–16. Springer, 2013. → pages 3, 46
[30] B. Burg, R. Bailey, A. J. Ko, and M. D. Ernst. Interactive record/replay for web application debugging. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST), pages 473–484. ACM, 2013. → pages 4, 79
[31] R. Buyya and A. V. Dastjerdi. Internet of Things: Principles and Paradigms, chapter Polyglot Programming. Elsevier, 2016. → pages 172
[32] A. Carzaniga, A. Gorla, and M. Pezzè. Healing web applications through automatic workarounds. International Journal on Software Tools for Technology Transfer (STTT), 10(6):493–502, 2008. → pages 4
[33] A. Carzaniga, A. Gorla, N. Perino, and M. Pezzè. Automatic workarounds for web applications. In Proceedings of the International Symposium on the Foundations of Software Engineering (FSE), pages 237–246. ACM, 2010. → pages 46, 83, 111
[34] S. Chandra and P. M. Chen. Whither generic recovery from application faults? A fault study using open-source software. In Proc. Intl. Conference on Dependable Systems and Networks (DSN), pages 97–106. IEEE Computer Society, 2000. → pages 31, 46
[35] M. Y. Chen, E. Kiciman, E. Fratkin, A. Fox, and E. Brewer. Pinpoint: Problem determination in large, dynamic internet services. In Proc. Intl. Conference on Dependable Systems and Networks (DSN), pages 595–604. IEEE Computer Society, 2002. → pages 79
[36] S. R. Choudhary, M. R. Prasad, and A. Orso. CrossCheck: Combining crawling and differencing to better detect cross-browser incompatibilities in web applications.
In Proceedings of the International Conference on Software Testing, Verification and Validation (ICST), pages 171–180. IEEE Computer Society, 2012. → pages 36
[37] S. R. Choudhary, M. R. Prasad, and A. Orso. X-PERT: Accurate identification of cross-browser issues in web applications. In Proceedings of the International Conference on Software Engineering (ICSE), pages 702–711. IEEE Computer Society, 2013. → pages 36
[38] M. Cinque, D. Cotroneo, Z. Kalbarczyk, and R. Iyer. How do mobile phones fail? A failure data analysis of Symbian OS smart phones. In Proc. Intl. Conference on Dependable Systems and Networks (DSN), pages 585–594. IEEE Computer Society, 2007. → pages 46
[39] H. Cleve and A. Zeller. Locating causes of program failures. In Proc. Intl. Conference on Software Engineering (ICSE), pages 342–351. ACM, 2005. → pages 50
[40] D. Crockford. JavaScript: The Good Parts. O'Reilly Media, Inc., 2008. ISBN 0596517742. → pages 2
[41] B. Demsky and M. Rinard. Data structure repair using goal-directed reasoning. In Proc. Intl. Conference on Software Engineering (ICSE), pages 176–185. ACM, 2005. → pages 111
[42] K. Dobolyi and W. Weimer. Modeling consumer-perceived web application fault severities for testing. In Proc. Intl. Symposium on Software Testing and Analysis (ISSTA), pages 97–106. ACM, 2010. → pages 80
[43] Douglas Crockford. JSLint, 2012. (Accessed: April 18, 2012). → pages 4, 143, 171
[44] Eclipse Foundation. Eclipse, 2012. (Accessed: April 18, 2012). → pages 4, 67
[45] B. Eich. ECMAScript documentation, 2016. (Accessed: April 29, 2016). → pages 35
[46] S. Elbaum, G. Rothermel, S. Karre, M. Fisher, et al. Leveraging user-session data to support web application testing. Transactions on Software Engineering (TSE), 31(3):187–202, 2005. → pages 46
[47] B. Elkarablieh, I. Garcia, Y. L. Suen, and S. Khurshid. Assertion-based repair of complex data structures. In Proc. Intl. Conference on Automated Software Engineering (ASE), pages 64–73. ACM, 2007. → pages 111
[48] EmberSherpa.
Open source Ember apps, 2016. (Accessed: April 29, 2016). → pages 158, 159
[49] D. Engler, D. Y. Chen, S. Hallem, A. Chou, and B. Chelf. Bugs as deviant behavior: A general approach to inferring errors in systems code. In Proceedings of the ACM Symposium on Operating Systems Principles (SOSP), pages 57–72. ACM, 2001. → pages 143, 167
[50] M. Erfani Joorabchi, M. Mirzaaghaei, and A. Mesbah. Works for me! Characterizing non-reproducible bug reports. In Proceedings of the Working Conference on Mining Software Repositories (MSR), pages 62–71. ACM, 2014. → pages 46
[51] Facebook. Flow: a static type checker for JavaScript, 2016. (Accessed: April 29, 2016). → pages 45
[52] A. Feldthaus and A. Møller. Checking correctness of TypeScript interfaces for JavaScript libraries. In Proceedings of the International Conference on Object Oriented Programming, Systems, Languages and Applications (OOPSLA). ACM, 2014. → pages 140
[53] E. Fortuna, O. Anderson, L. Ceze, and S. Eggers. A limit study of JavaScript parallelism. In Proc. Intl. Symposium on Workload Characterization (IISWC), pages 1–10. IEEE Computer Society, 2010. → pages 3, 47
[54] J. Fujima. Building a meme media platform with a JavaScript MVC framework and HTML5. Webble Technology, pages 79–89, 2013. → pages 114, 140
[55] A. Gizas, S. Christodoulou, and T. Papatheodorou. Comparative evaluation of JavaScript frameworks. In Proceedings of the International Conference Companion on World Wide Web (WWW Companion), pages 513–514. ACM, 2012. → pages 140
[56] L. Gong, M. Pradel, M. Sridharan, and K. Sen. DLint: Dynamically checking bad coding practices in JavaScript. In Proceedings of the International Symposium on Software Testing and Analysis (ISSTA). ACM, 2015. → pages 167
[57] Google. Closure Compiler, 2013. (Accessed: October 10, 2013). → pages 4, 171
[58] Google. AngularJS, 2015. (Accessed: May 16, 2015). → pages 6, 113, 114
[59] Google. What is AngularJS?, 2015. (Accessed: May 16, 2015). → pages 127
[60] Google. Built with AngularJS, 2015. (Accessed: May 16, 2015).
→ pages 131, 158, 159
[61] K. Goseva-Popstojanova, S. Mazimdar, and A. D. Singh. Empirical study of session-based workload and reliability for web servers. In Proc. Intl. Symp. on Softw. Reliability Eng. (ISSRE), pages 403–414. IEEE Computer Society, 2004. → pages 1, 3, 46
[62] D. Graziotin and P. Abrahamsson. Making sense out of a jungle of JavaScript frameworks. Product-Focused Software Process Improvement (PROFES), pages 334–337, 2013. → pages 140
[63] F. Groeneveld, A. Mesbah, and A. van Deursen. Automatic invariant detection in dynamic web applications. Technical Report TUD-SERG-2010-037, Delft University of Technology, 2010. → pages 68
[64] S. Guarnieri and B. Livshits. Gatekeeper: mostly static enforcement of security and reliability policies for JavaScript code. In Proc. Conference on USENIX Security Symposium (SSYM), pages 151–168. ACM, 2009. → pages 3, 16, 78
[65] A. Guha, S. Krishnamurthi, and T. Jim. Using static analysis for AJAX intrusion detection. In Proc. Intl. Conference on World Wide Web (WWW), pages 561–570. ACM, 2009. → pages 16, 78
[66] B. Hackett and S.-y. Guo. Fast and precise hybrid type inference for JavaScript. In Proceedings of the ACM Conference on Programming Language Design and Implementation (PLDI), pages 239–250. ACM, 2012. → pages 45
[67] S. Hallé, T. Ettema, C. Bunch, and T. Bultan. Eliminating navigation errors in web applications via model checking and runtime enforcement of navigation state machines. In Proceedings of the International Conference on Automated Software Engineering (ASE), pages 235–244. ACM, 2010. → pages 140
[68] Q. Hanam, F. Brito, and A. Mesbah. Discovering bug patterns in JavaScript. In Proceedings of the International Symposium on the Foundations of Software Engineering (FSE), page 11 pages. ACM, 2016. → pages 46
[69] S. Hangal and M. S. Lam. Tracking down software bugs using automatic anomaly detection. In Proceedings of the International Conference on Software Engineering (ICSE), pages 291–301. ACM, 2002. → pages 167
[70] J.
Hewitt. Firebug, 2016. (Accessed: April 29, 2016). → pages 79
[71] A. Hidayat. Esprima, 2015. (Accessed: May 16, 2015). → pages 158
[72] Y. Hongping, S. Jiangping, and Z. Xiaorui. The update version development of "wiki message linking" system-integrated Ajax with MVC model. In Proceedings of the International Forum on Computer Science-Technology and Applications (IFCSTA), pages 209–212. IEEE Computer Society, 2009. → pages 114, 140
[73] D. Hovemeyer and W. Pugh. Finding bugs is easy. In Companion Proceedings of the International Conference on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA), pages 132–136. ACM, 2004. → pages 167
[74] C.-H. Hsiao, M. Cafarella, and S. Narayanasamy. Using web corpus statistics for program analysis. In Proceedings of the International Conference on Object Oriented Programming Systems Languages & Applications (OOPSLA), pages 49–65. ACM, 2014. → pages 167
[75] J. Huggins. Selenium, 2013. (Accessed: October 10, 2013). → pages 67
[76] Internet World Stats. Internet growth statistics, 2016. (Accessed: May 22, 2016). → pages 1
[77] D. Jang, R. Jhala, S. Lerner, and H. Shacham. An empirical study of privacy-violating information flows in JavaScript web applications. In ACM Conf. on Comp. and Communications Security (CCS), pages 270–283. ACM, 2010. → pages 3, 47
[78] S. Jensen, A. Møller, and P. Thiemann. Type analysis for JavaScript. Proceedings of the International Static Analysis Symposium (SAS), pages 238–255, 2009. → pages 44, 45, 143
[79] S. H. Jensen, M. Madsen, and A. Møller. Modeling the HTML DOM and browser API in static analysis of JavaScript web applications. In Proceedings of the Joint Meeting of the European Software Engineering Conference and the Symposium on the Foundations of Software Engineering (ESEC/FSE), pages 59–69. ACM, 2011. → pages 16, 78, 143
[80] S. H. Jensen, P. A. Jonsson, and A. Møller. Remedying the eval that men do. In Proc. Intl. Symposium on Software Testing and Analysis (ISSTA), pages 34–44. ACM, 2012.
→ pages 83, 112
[81] L. Jiang, Z. Su, and E. Chiu. Context-based detection of clone-related bugs. In Proceedings of the Joint Meeting of the European Software Engineering Conference and the Symposium on the Foundations of Software Engineering (ESEC/FSE), pages 55–64. ACM, 2007. → pages 167
[82] jindw. XMLDOM, 2016. (Accessed: April 29, 2016). → pages 158
[83] jjask. Scope variable not accessible (undefined) - AngularJS, 2014. (Accessed: May 16, 2015). → pages 116
[84] J. Jones and M. Harrold. Empirical evaluation of the Tarantula automatic fault-localization technique. In Proc. Intl. Conference on Automated Software Engineering (ASE), pages 273–282. ACM, 2005. → pages 4, 49, 50, 79
[85] M. Kalyanakrishnan, R. Iyer, and J. Patel. Reliability of internet hosts: a case study from the end user's perspective. Computer Networks, 31(1-2):47–57, 1999. ISSN 1389-1286. → pages 1, 3, 46
[86] K. Kambona, E. G. Boix, and W. De Meuter. An evaluation of reactive programming and promises for structuring collaborative web applications. In Proceedings of the Workshop on Dynamic Languages and Applications (DYLA), pages 15–23. ACM, 2013. → pages 115, 140
[87] Y. Katz. EmberJS, 2015. (Accessed: May 16, 2015). → pages 113
[88] A. Kiezun, V. Ganesh, P. J. Guo, P. Hooimeijer, and M. D. Ernst. HAMPI: a solver for string constraints. In Proc. of the Intl. Symposium on Software Testing and Analysis (ISSTA), pages 105–116. ACM, 2009. → pages 104
[89] S. Kim and E. J. Whitehead Jr. How long did it take to fix bugs? In Proceedings of the International Workshop on Mining Software Repositories (MSR), pages 173–174. ACM, 2006. → pages 42
[90] E. Koshelko. Why you should not use AngularJS, 2015. (Accessed: May 16, 2015). → pages 114
[91] G. E. Krasner and S. T. Pope. A cookbook for using the model-view controller user interface paradigm in Smalltalk-80. Journal of Object Oriented Programming (JOOP), 1(3):26–49, 1988. → pages 140
[92] lagos. Autopager jQuery extension, 2014. (Accessed: June 5, 2014). → pages 85
[93] L. Lan, B. Wang, L. Zhang, R.
Shi, and F. Li. An event-driven service-oriented architecture for the Internet of Things service execution. International Journal of Online Engineering (iJOE), 11(2):4–8, 2015. → pages 175
[94] A. Leff and J. T. Rayfield. Web-application development using the model/view/controller design pattern. In Proceedings of the International Enterprise Distributed Object Computing Conference (EDOC), pages 118–127. IEEE Computer Society, 2001. → pages 140
[95] Z. Li and Y. Zhou. PR-Miner: automatically extracting implicit programming rules and detecting violations in large software code. In Proceedings of the International Symposium on Foundations of Software Engineering (FSE), pages 306–315. ACM, 2005. → pages 167
[96] Z. Li, S. Lu, S. Myagmar, and Y. Zhou. CP-Miner: Finding copy-paste and related bugs in large-scale software code. Transactions on Software Engineering (TSE), 32(3):176–192, 2006. → pages 167
[97] Z. Li, L. Tan, X. Wang, S. Lu, Y. Zhou, and C. Zhai. Have things changed now?: an empirical study of bug characteristics in modern open source software. In Workshop on Architectural and System Support for Improving Software Dependability (ASID), pages 25–33. ACM, 2006. → pages 46
[98] Y. Liao, Z. Zhang, and Y. Yang. Web applications based on AJAX technology and its framework. In Proceedings of the International Conference on Communications and Information Processing (ICCIP), pages 320–326. Springer, 2012. → pages 140
[99] E. Lielmanis. JSBeautifier, 2014. (Accessed: June 5, 2014). → pages 67
[100] M. Madsen, B. Livshits, and M. Fanning. Practical static analysis of JavaScript applications in the presence of frameworks and libraries. In Proceedings of the Joint Meeting of the European Software Engineering Conference and the Symposium on the Foundations of Software Engineering (ESEC/FSE), pages 499–509. ACM, 2013. → pages 4, 5
[101] A. Marchetto, P. Tonella, and F. Ricca. State-based testing of AJAX web applications.
In Proceedings of the International Conference on Software Testing, Verification and Validation (ICST), pages 121–130. IEEE Computer Society, 2008. → pages 49, 143
[102] A. Marchetto, F. Ricca, and P. Tonella. An empirical validation of a web fault taxonomy and its usage for web testing. Journal of Web Engineering (JWE), 8(4):316–345, 2009. → pages 46
[103] MarionetteJS. Backbone Marionette, 2016. (Accessed: April 29, 2016). → pages 146
[104] L. Marks, Y. Zou, and A. E. Hassan. Studying the fix-time for bugs in large open source projects. In Proc. Intl. Conf. on Predictive Models in Softw. Eng. (PROMISE), page 11. ACM, 2011. → pages 22
[105] J. Martinsen, H. Grahn, and A. Isberg. A comparative evaluation of JavaScript execution behavior. In Proc. of Intl. Conf. on Web Engineering (ICWE), pages 399–402. Springer, 2011. → pages 47
[106] P. Mayer and A. Schroeder. Cross-language code analysis and refactoring. In Proceedings of the International Working Conference on Source Code Analysis and Manipulation (SCAM), pages 94–103. IEEE Computer Society, 2012. → pages 168
[107] F. Meawad, G. Richards, F. Morandat, and J. Vitek. Eval begone!: semi-automated removal of eval from JavaScript programs. In Proc. of the ACM Intl. Conference on Object Oriented Programming Systems Languages and Applications (OOPSLA), pages 607–620. ACM, 2012. → pages 112
[108] A. Mesbah and M. R. Prasad. Automated cross-browser compatibility testing. In Proceedings of the International Conference on Software Engineering (ICSE), pages 561–570. ACM, 2011. → pages 36
[109] A. Mesbah and A. van Deursen. Invariant-based automatic testing of AJAX user interfaces. In Proceedings of the International Conference on Software Engineering (ICSE), pages 210–220. IEEE Computer Society, 2009. → pages 4, 49, 79, 143
[110] A. Mesbah, A. van Deursen, and S. Lenselink. Crawling Ajax-based web applications through dynamic analysis of user interface state changes. ACM Trans. Web (TWEB), 6(1):3:1–3:30, 2012. → pages 67, 104
[111] J. Mickens, J.
Elson, and J. Howell. Mugshot: deterministic capture and replay for JavaScript applications. In USENIX Symposium on Networked Systems Design and Implementation (NSDI), pages 11–11. ACM, 2010. → pages 4, 67, 79
[112] Microsoft. TypeScript, 2015. (Accessed: May 16, 2015). → pages 10, 22
[113] S. Mirshokraie and A. Mesbah. JSART: JavaScript assertion-based regression testing. In Proc. Intl. Conference on Web Engineering (ICWE), pages 238–252. Springer, 2012. → pages 4, 79
[114] S. Mirshokraie, A. Mesbah, and K. Pattabiraman. Efficient JavaScript mutation testing. In Proc. Intl. Conference on Software Testing, Verification and Validation (ICST). IEEE Computer Society, 2013. → pages 21, 79
[115] S. Mirshokraie, A. Mesbah, and K. Pattabiraman. JSeft: Automated JavaScript unit test generation. In Proceedings of the International Conference on Software Testing, Verification and Validation (ICST). IEEE Computer Society, 2015. → pages 4, 79
[116] S. Mirshokraie, A. Mesbah, and K. Pattabiraman. Guided mutation testing for JavaScript web applications. Transactions on Software Engineering (TSE), 41(5):429–444, 2015. → pages 143
[117] S. Moon, Y. Kim, M. Kim, and S. Yoo. Ask the mutants: Mutating faulty programs for fault localization. In Proceedings of International Conference on Software Testing, Verification and Validation (ICST), pages 153–162. IEEE Computer Society, 2014. → pages 79
[118] R. Morales-Chaparro, M. Linaje, J. Preciado, and F. Sánchez-Figueroa. MVC web design patterns and rich internet applications. Proceedings of the Jornadas de Ingeniería del Software y Bases de Datos, pages 39–46, 2007. → pages 140
[119] Mozilla. Rhino, 2012. (Accessed: April 18, 2012). → pages 67
[120] H. A. Nguyen, T. T. Nguyen, N. H. Pham, J. Al-Kofahi, and T. N. Nguyen. Clone management for evolving software. Transactions on Software Engineering (TSE), 38(5):1008–1026, 2012. → pages 150
[121] H. V. Nguyen, H. A. Nguyen, T. T. Nguyen, and T. N.
Nguyen. Auto-locating and fix-propagating for HTML validation errors to PHP server-side code. In Proc. Intl. Conference on Automated Software Engineering (ASE), pages 13–22. IEEE Computer Society, 2011. → pages 83
[122] H. V. Nguyen, C. Kästner, and T. N. Nguyen. Cross-language program slicing for dynamic web applications. In Proceedings of the Joint Meeting of the European Software Engineering Conference and the Symposium on the Foundations of Software Engineering (ESEC/FSE), pages 369–380. ACM, 2015. → pages 168
[123] J. Nijjar and T. Bultan. Bounded verification of Ruby on Rails data models. In Proceedings of the International Symposium on Software Testing and Analysis (ISSTA), pages 67–77. ACM, 2011. → pages 140
[124] N. Nikiforakis, L. Invernizzi, A. Kapravelos, S. Van Acker, W. Joosen, C. Kruegel, F. Piessens, and G. Vigna. You are what you include: Large-scale evaluation of remote JavaScript inclusions. In Proc. of the Conf. on Computer and Communications Security (CCS). ACM, 2012. → pages 3, 47
[125] F. Ocariza. Projects, 2012. ∼frolino/projects/ (Accessed: April 18, 2012). → pages 170
[126] F. Ocariza. Vejovis, 2014. ∼frolino/projects/vejovis/ (Accessed: June 5, 2014). → pages 104, 105, 110
[127] F. Ocariza. Aurebesh, 2015. ∼frolino/projects/aurebesh/ (Accessed: May 16, 2015). → pages 131
[128] F. Ocariza, K. Pattabiraman, and B. Zorn. JavaScript errors in the wild: an empirical study. In Proceedings of the International Symposium on Software Reliability Engineering (ISSRE), pages 100–109. IEEE Computer Society, 2011. → pages 3, 10, 26, 47, 50, 86, 140, 142
[129] F. Ocariza, K. Pattabiraman, and A. Mesbah. AutoFLox: an automatic fault localizer for client-side JavaScript. In Proceedings of the International Conference on Software Testing, Verification and Validation (ICST), pages 31–40. IEEE Computer Society, 2012. → pages iii, 7, 26, 49, 83
[130] F. Ocariza, K. Bajaj, K. Pattabiraman, and A. Mesbah. An empirical study of client-side JavaScript bugs.
In Proceedings of the International Symposium on Empirical Software Engineering and Measurement (ESEM), pages 55–64. IEEE Computer Society, 2013. → pages iii, 7, 9, 18, 50, 143
[131] F. Ocariza, K. Pattabiraman, and A. Mesbah. Vejovis: suggesting fixes for JavaScript faults. In Proceedings of the International Conference on Software Engineering (ICSE), pages 837–847. ACM, 2014. → pages iv, 7, 80, 82
[132] F. Ocariza, K. Pattabiraman, and A. Mesbah. Detecting inconsistencies in JavaScript MVC applications. In Proceedings of the International Conference on Software Engineering (ICSE). IEEE Computer Society, 2015. → pages iv, 7, 113, 143
[133] F. Ocariza, K. Bajaj, K. Pattabiraman, and A. Mesbah. A study of causes and consequences of client-side JavaScript bugs. Transactions on Software Engineering (TSE), page 17 pages, 2016. → pages iii, 8, 9
[134] F. Ocariza, G. Li, K. Pattabiraman, and A. Mesbah. Automatic fault localization for client-side JavaScript. Software Testing, Verification and Reliability (STVR), 26(1):69–88, 2016. → pages iii, 8, 44, 49, 94
[135] S. O'Grady. The RedMonk Programming Language Rankings: January 2016, 2016. (Accessed: May 22, 2016). → pages 2
[136] V. N. Padmanabhan, S. Ramabhadran, S. Agarwal, and J. Padhye. A study of end-to-end web access failures. In Proc. Intl. Conf. on Emerging
InProceedings of the European Conference on Modelling Foundations andApplications (ECMFA), pages 312–328. Springer, 2011. → pages 167[140] T. Polychniatis, J. Hage, S. Jansen, E. Bouwers, and J. Visser. Detectingcross-language dependencies generically. In Proceedings of the EuropeanConference on Software Maintenance and Reengineering (CSMR), pages349–352. IEEE, 2013. → pages 167[141] M. Pradel, P. Schuh, and K. Sen. TypeDevil: Dynamic type inconsistencyanalysis for JavaScript. In Proceedings of the International Conference onSoftware Engineering (ICSE), pages 314–324. IEEE Computer Society,2015. → pages 15, 23, 47[142] P. Ratanaworabhan, B. Livshits, D. Simmons, and B. Zorn. JSMeter:Measuring JavaScript behavior in the wild. In Proceedings of the USENIXConference on Web Application Development (WebApps), pages 1–12.ACM, 2010. → pages 3, 46, 140[143] M. Renieris and S. Reiss. Fault localization with nearest neighbor queries.In Proc. Intl. Conference on Automated Software Engineering (ASE), pages30–39. IEEE Computer Society, 2003. → pages 79[144] G. Richards, S. Lebresne, B. Burg, and J. Vitek. An analysis of thedynamic behavior of JavaScript programs. In Proceedings of theInternational Conference on Programming Language Design andImplementation (PLDI), pages 1–12. ACM, 2010.doi: → pages 3, 47, 140[145] G. Richards, C. Hammer, B. Burg, and J. Vitek. The eval that men do: Alarge-scale study of the use of eval in JavaScript applications. In191Proceedings of the European Conference on Object-Oriented Programming(ECOOP), pages 52–78. Springer, 2011. → pages 3, 55, 143[146] C. Robinson. AngularJS: If you don’t have a dot, you’re doing it wrong!,2013. (Accessed: May16, 2015). → pages 116[147] V. Y. Rosales-Morales, G. Alor-Herna´ndez, and U. Jua´rez-Martı´nez. Anoverview of multimedia support into JavaScript-based frameworks fordeveloping rias. In Proceedings of the International Conference onElectrical Communications and Computers (CONIELECOMP), pages66–70. 
IEEE Computer Society, 2011. → pages 140[148] H. Samimi, M. Schafer, S. Artzi, T. Millstein, F. Tip, and L. Hendren.Automated repair of HTML generation errors in PHP applications usingstring constraint solving. In Proc. Intl. Conference on SoftwareEngineering (ICSE), pages 277–287. IEEE Computer Society, 2012. →pages 4, 46, 80, 83, 111[149] M. Scha¨fer, M. Sridharan, J. Dolby, and F. Tip. Effective smart completionfor JavaScript. Technical Report Technical Report RC25359, IBMResearch, 2013. → pages 4[150] N. Semenenko, M. Dumas, and T. Saar. Browserbite: Accuratecross-browser testing via machine learning over image features. InProceedings of the International Conference on Software Maintenance(ICSM), pages 528–531. IEEE Computer Society, 2013. → pages 36[151] K. Sen, S. Kalasapur, T. Brutch, and S. Gibbs. Jalangi: A selectiverecord-replay and dynamic analysis framework for javascript. InProceedings of the International Symposium on Foundations of SoftwareEngineering (FSE), pages 488–498. ACM, 2013. → pages 4, 79[152] K. Sera. apriori.js, 2016. (Accessed:April 29, 2016). → pages 158[153] U. Shaked. AngularJS vs. BackboneJS vs. EmberJS, 2014. (Accessed:May 16, 2015). → pages 114, 116, 158[154] G. Shu, B. Sun, A. Podgurski, and F. Cao. Mfl: Method-level faultlocalization with causal inference. In Proceedings of International192Conference on Software Testing, Verification and Validation (ICST), pages124–133. IEEE Computer Society, 2013. → pages 79[155] J. L. Singleton and G. T. Leavens. Verily: a web framework for creatingmore reasonable web applications. In Companion Proceedings of theInternational Conference on Software Engineering (ICSE), pages 560–563.ACM, 2014. → pages 140[156] S. Sprenkle, E. Gibson, S. Sampath, and L. Pollock. Automated replay andfailure detection for web applications. In Proceedings of the InternationalConference on Automated Software Engineering (ASE), pages 253–262.ACM, 2005. → pages 46[157] StackOverflow. 
StackOverflow Developer Survey 2015, 2015. (Accessed:May 16, 2015). → pages 2[158] D. Strein, H. Kratz, and W. Lo¨we. Cross-language program analysis andrefactoring. In Proceedings of the International Workshop on Source CodeAnalysis and Manipulation (SCAM), pages 207–216. IEEE ComputerSociety, 2006. → pages 168[159] A. Swartz. A brief history of Ajax, 2005. (Accessed: May 22, 2016). →pages 2, 35, 41[160] D. Synodinos. Top JavaScript MVC frameworks, 2013. (Accessed:May 16, 2015). → pages 144[161] Synopsys. Coverity, 2016. (Accessed: April 29,2016). → pages 143[162] B. Taraghi and M. Ebner. A simple MVC framework for widgetdevelopment. In Proceedings of the International Workshop on MashupPersonal Learning Environments (MUPPLE), pages 38–45. CEUR-WS,2010. → pages 114, 140[163] J. Tian, S. Rudraraju, and Z. Li. Evaluating web software reliability basedon workload and failure data extracted from server logs. IEEE Trans.Softw. Eng., 30:754–769, 2004. ISSN 0098-5589.doi: → pages 1, 3, 46193[164] Two Sigma. Beaker, 2016. April 29, 2016). → pages 146[165] I. Vessey. Expertise in debugging computer programs: A process analysis.International Journal of Man-Machine Studies, 23(5):459–494, 1985. →pages 4, 49[166] A. Vetro, F. Tomassetti, M. Torchiano, and M. Morisio. Languageinteraction and quality issues: an exploratory study. In Proceedings of theInternational Symposium on Empirical Software Engineering andMeasurement (ESEM), pages 319–322. ACM, 2012. → pages 167[167] W3C. Cascading style sheets level 2 revision 1 specification: Selectors,2011. (Accessed: June 5, 2014).→ pages 85[168] W3Techs. Usage statistics and market share of jQuery for websites, 2015. (Accessed: May22, 2016). → pages 5[169] W3Techs. Usage of JavaScript for websites, 2015. (Accessed:May 16, 2015). → pages 2[170] D. Wampler, T. Clark, N. Ford, and B. Goetz. Multiparadigm programmingin industry: A discussion with neal ford and brian goetz. IEEE Software,27(5):61, 2010. → pages 172[171] A. Wasylkowski, A. 
Zeller, and C. Lindig. Detecting object usageanomalies. In Proceedings of the Joint Meeting of the European SoftwareEngineering Conference and the Symposium on the Foundations ofSoftware Engineering (ESEC/FSE), pages 35–44. ACM, 2007. → pages167[172] Y. Wei, Y. Pei, C. A. Furia, L. S. Silva, S. Buchholz, B. Meyer, andA. Zeller. Automated fixing of programs with contracts. In Proc. Intl.Symposium on Software Testing and Analysis (ISSTA), pages 61–72. ACM,2010. → pages 111[173] W. Weimer and G. C. Necula. Mining temporal specifications for errordetection. In Proceedings of the International Conference on Tools andAlgorithms for the Construction and Analysis of Systems (TACAS), pages461–476. Springer, 2005. → pages 167194[174] W. Weimer, T. Nguyen, C. Le Goues, and S. Forrest. Automatically findingpatches using genetic programming. In Proc. Intl. Conference on SoftwareEngineering (ICSE), pages 364–374. IEEE Computer Society, 2009. →pages 111[175] J. Weinberger, P. Saxena, D. Akhawe, M. Finifter, R. Shin, and D. Song.An empirical analysis of XSS sanitization in web application frameworks.Technical Report EECS-2011-11, UC Berkeley, 2011. → pages 3, 47[176] C. Weiss, R. Premraj, T. Zimmermann, and A. Zeller. How long will it taketo fix this bug? In Proceedings of the International Workshop on MiningSoftware Repositories (MSR), pages 1–8. IEEE Computer Society, 2007.→ pages 42[177] J. Wojciechowski, B. Sakowicz, K. Dura, and A. Napieralski. MVC model,struts framework and file upload issues in web applications based on J2EEplatform. In Proceedings of the International Conference on ModernProblems of Radio Engineering, Telecommunications and ComputerScience (TCSET), pages 342–345. IEEE Computer Society, 2004. → pages140[178] A. Yildiz, B. Aktemur, and H. Sozer. Rumadai: A plug-in to record andreplay client-side events of web sites with dynamic content. In Proceedingsof Workshop on Developing Tools as Plug-ins (TOPI), pages 88–89. IEEEComputer Society, 2012. 
→ pages 4, 79[179] Z. Yin, M. Caesar, and Y. Zhou. Towards understanding bugs in opensource router software. ACM SIGCOMM Computer CommunicationReview, 40(3):34–40, 2010. ISSN 0146-4833. → pages 46[180] C. Yue and H. Wang. Characterizing insecure JavaScript practices on theweb. In Proc. Intl. Conf. on World Wide Web (WWW), pages 961–970.ACM, 2009. doi: → pages3, 47[181] A. Zagalsky. CSC485B: Startup Programming, 2015. (Accessed: May 16,2015). → pages 132[182] N. C. Zakas. Maintainable JavaScript. O’Reilly Media, Inc., 2012. ISBN1449327680. → pages 2195[183] L. Zhang, L. Zhang, and S. Khurshid. Injecting mechanical faults tolocalize developer faults for evolving software. In Proceedings of theInternational Conference on Object-Oriented Programming, Systems,Languages and Applications, pages 765–784. ACM, 2013. → pages 79[184] S. Zhang, H. Lu¨, and M. D. Ernst. Automatically repairing brokenworkflows for evolving gui applications. In Proc. Intl. Symposium onSoftware Testing and Analysis (ISSTA), pages 45–55. ACM, 2013. → pages111[185] X. Zhang, H. He, N. Gupta, and R. Gupta. Experimental evaluation ofusing dynamic slices for fault location. In Proc. Intl. Symposium onAutomated Analysis-Driven Debugging (AADEBUG), pages 33–42. ACM,2005. → pages 80[186] Y. Zheng, T. Bao, and X. Zhang. Statically locating web application bugscaused by asynchronous calls. In Proc. Intl. Conference on the World WideWeb (WWW), pages 805–814. ACM, 2011. → pages 16, 78[187] Y. Zheng, X. Zhang, and V. Ganesh. Z3-str: a z3-based string solver forweb application analysis. In Proceedings of the International Symposiumon Foundations of Software Engineering (FSE), pages 114–124. ACM,2013. → pages 112[188] J. Zhou, H. Zhang, and D. Lo. Where should the bugs be fixed? moreaccurate information retrieval-based bug localization based on bug reports.In Proceedings of the International Conference on Software Engineering(ICSE), pages 14–24. IEEE Computer Society, 2012. → pages 79[189] T. Zimmermann, R. 
Premraj, and A. Zeller. Predicting defects for eclipse.In Proc. Intl. Workshop on Predictor Models in Software Engineering(PROMISE), pages 9–9. IEEE Computer Society, 2007. → pages 46196

