@prefix vivo: .
@prefix edm: .
@prefix ns0: .
@prefix dcterms: .
@prefix dc: .
@prefix skos: .

vivo:departmentOrSchool "Science, Faculty of"@en, "Computer Science, Department of"@en ;
edm:dataProvider "DSpace"@en ;
ns0:degreeCampus "UBCV"@en ;
dcterms:creator "Zhou, Limin"@en ;
dcterms:issued "2008-10-22T19:06:30Z"@en, "1992"@en ;
vivo:relatedDegree "Master of Science - MSc"@en ;
ns0:degreeGrantor "University of British Columbia"@en ;
dcterms:description """To enhance the probability of interworking among protocol implementations, the implementations must be tested for conformance to the standards they purport to support. This thesis deals with several important aspects of protocol conformance testing based on ESTELLE.Y, a variant of the formal description technique ESTELLE.

A new method for generating test sequences is presented. It combines both control flow analysis and data flow analysis. Test case generation is done automatically by an extensive and systematic analysis of the formal protocol specification. The resulting test sequences have at least the same fault detection power as the characterizing sequences used for control flow analysis, and the same fault detection capability as the all-definitions-and-uses strategy for data flow analysis.

The constraint relations of input and output parameters are also generated automatically. The input constraints are used to guide the data selection process, while the output constraints are used to check the validity of the output behaviours.

The Abstract Test Case Relation Model (ATCRM) establishes ordering relations among the test cases in a test suite. These relations are used to dynamically select test cases for execution during testing and help to reduce the number of test cases executed by eliminating those that are guaranteed to fail, resulting in more efficient testing.
Methods for the combined data flow and control flow analysis and for generating the ATCRM from protocol specifications have been implemented. We also propose and implement a constraint generation method based on normal form specifications. The techniques and prototype implementations presented in this thesis form an important part of the UBC Protocol Testing Environment, which aims to enhance the efficiency and effectiveness of the conformance testing process."""@en ;
edm:aggregatedCHO "https://circle.library.ubc.ca/rest/handle/2429/2709?expand=metadata"@en ;
dcterms:extent "3565168 bytes"@en ;
dc:format "application/pdf"@en ;
skos:note "A New Approach to Generating and Selecting Test Sequences for Conformance Testing

by Limin Zhou
B.Sc., Tsinghua University

A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science in The Faculty of Graduate Studies, Department of Computer Science

We accept this thesis as conforming to the required standard

The University of British Columbia
© Limin Zhou, November 1992

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives.
It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

(Signature)
The University of British Columbia
Vancouver, Canada

Abstract

To enhance the probability of interworking among protocol implementations, the implementations must be tested for conformance to the standards they purport to support. This thesis deals with several important aspects of protocol conformance testing based on ESTELLE.Y, a variant of the formal description technique ESTELLE.

A new method for generating test sequences is presented. It combines both control flow analysis and data flow analysis. Test case generation is done automatically by an extensive and systematic analysis of the formal protocol specification. The resulting test sequences have at least the same fault detection power as the characterizing sequences used for control flow analysis, and the same fault detection capability as the all-definitions-and-uses strategy for data flow analysis.

The constraint relations of input and output parameters are also generated automatically. The input constraints are used to guide the data selection process, while the output constraints are used to check the validity of the output behaviours.

The Abstract Test Case Relation Model (ATCRM) establishes ordering relations among the test cases in a test suite. These relations are used to dynamically select test cases for execution during testing and help to reduce the number of test cases executed by eliminating those that are guaranteed to fail, resulting in more efficient testing.

Methods for the combined data flow and control flow analysis and for generating the ATCRM from protocol specifications have been implemented.
We also propose and implement a constraint generation method based on normal form specifications.

The techniques and prototype implementations presented in this thesis form an important part of the UBC Protocol Testing Environment, which aims to enhance the efficiency and effectiveness of the conformance testing process.

Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgement

1 Introduction
  1.1 Motivations
  1.2 Relation of work to UBC Protocol Testing Environment
  1.3 Layout of the thesis

2 Review of related work
  2.1 Test case generation methods
    2.1.1 Control flow testing
    2.1.2 Data flow testing
  2.2 Constraint Generation

3 Generating unified test sequences
  3.1 Testing Control Flow
    3.1.1 The concept of Cyclic Characterizing Sequences (CCS)
    3.1.2 Properties of CCS
  3.2 Testing Data Flow
    3.2.1 Data Object, def-use path and Data-flow testing strategies
  3.3 Generating Unified Test Sequences
  3.4 Implementation
    3.4.1 Protocol data structure
    3.4.2 Building the transition dependency relations
    3.4.3 Generating converging states and their CCS
    3.4.4 Generating Unified Testing Sequences (UTS)
    3.4.5 A TPO example

4 Generation of Data Constraints
  4.1 The model
  4.2 Constraint Derivation
    4.2.1 Symbolic execution
  4.3 Deriving input constraints for controllability
  4.4 Deriving output constraints

5 Abstract Test Case Relation Model
  5.1 Test case relations
  5.2 Ordering relation and conformance testing
    5.2.1 Property of the ordering relation
  5.3 The ATCRM model
  5.4 The derivation of ATCRM
  5.5 ATCRM model in conformance testing

6 Conclusion
  6.1 Summary
  6.2 Future work

A The results of a case study on TPO
B ESTELLE.Y Specification of TPO
C ASN.1 Specification of TPO Service Primitives (SPs) and Protocol Data Units (PDUs)
D The Test Case Generator User Interface

List of Tables

3.1 Def-use paths generated for TPO
3.2 Def-use paths generated for
TPO (continued)
3.3 Def-use paths without redundancy for TPO

List of Figures

1.1 UBC Testing Environment
1.2 Test Case Generation System Overview
3.1 Converging states
3.2 Cyclic Characterizing Sequence
3.3 Identifying converging states
3.4 Protocol Data Structure
3.5 Variables' definition and usage and the global du-path
3.6 Backtracking def-use paths of variable x defined in transition t
3.7 The corresponding transition paths in tree form
3.8 EFSM of TPO
4.1 One predicate, two different interpretations
4.2 Symbolic interpretation
4.3 Constraint generation example
4.4 Mapping between input parameters and output parameters
5.1 Finite state machine for testing
5.2 An example of a behavior tree for TPO
5.3 An example of an ATCRM tree for TPO
5.4 Data structure for each ATCRM tree node
5.5 Recursion unit for building the ATCRM tree
5.6 ATCRM tree in machine-readable form

Acknowledgement

I would like to sincerely thank my supervisor, Professor Samuel Chanson, for his guidance, support and commitment throughout my graduate studies at UBC. Without his help, this thesis would not have been possible.

I would also like to thank Dr. Son Vuong for his careful reading of the final draft.

Thanks to Sijian Zhang for his constructive suggestions and help during my study. Thanks to Jingsong Zhu for a lot of helpful discussions and exchange of ideas. Thanks to Christopher Healey and Mandeep Dhami for taking the time and effort to proofread this thesis.

I am indebted to my husband, Xun Li, for his love, patience and encouragement.

The generous and continuous support from CITR research funding and the Computer Science Department of UBC is gratefully acknowledged.

Thanks to all the friends who made my stay at UBC a most enjoyable experience.

Chapter 1
Introduction

Protocols are defined in terms of abstract models which are then implemented by various vendors. For correctness and compatibility, these protocol implementations need to conform exactly to their specifications. For this purpose, conformance testing of the implementation is carried out prior to deployment. In the ISO model, conformance testing is done using the black box approach [ISO9646]. It assumes that the implementation under test (IUT) cannot be directly accessed or observed. The tester observes and controls the interactions between the IUT and the environment through Points of Control and Observation (PCOs). A test case is designed as a set of input and output events at these PCOs. Conformance testing is then carried out by running a set of test cases against the IUT.

A test suite is a complete set of test cases for a given protocol. By ISO standards, a test suite is specified using the Tree and Tabular Combined Notation (TTCN) [ISO9646-3]. The IUT passes the test only if it passes all the test cases in the test suite.

Formal specifications have been increasingly used in software engineering, especially for communications protocols. ISO and CCITT have standardized three formal description techniques (FDTs): ESTELLE [ISO9074], SDL [Sdl] and LOTOS [ISO8807]. Test suites can be directly generated from formal specifications given in an FDT. This thesis is concerned with test case generation techniques and their implementations for both the control and data portions of protocols specified in an FDT. Our research is based on ESTELLE.Y [Vuong91] [Lu91], a dialect of ESTELLE. However, the techniques presented in the thesis may also be extended to cover specifications written in other FDTs.

1.1 Motivations

A lot of work has been done on deriving test cases from formal specifications. Most of the previous work has focused on test case derivation from control flow specifications only [Naito81] [Sabn88] [Gone70] [Chow78] [Vuong89] [Chan89] [Sidhu89]. Data flow has seldom been considered.
Even for the cases where data flow is discussed (see e.g., [Sari87] [Ural87] [Ural88] [Wu89] [Vuong91]), it is usually handled separately from the control flow, or some important aspects, such as data constraints, are not covered. By checking the control and data parts in the same test sequence, more comprehensive tests are applied than if each is tested separately. Furthermore, this can reduce the total length of the test sequences and reduce the time required for testing.

In order to execute each test case, representative input values need to be chosen. Only when the test data satisfy all the constraints of a test case can the entire test case be executed. In addition, for each input sequence, the corresponding output sequence with expected values has to be generated to compare against the observed outputs. The relations between input data and output data in each test case should be obtained automatically from the given protocol specification. Though the constraint generation problem has been studied in software testing [Clar76] [King76] [Ram76] [Clar81] [Korel90] [DeMi91], not much work has been done in the context of protocol testing.

A test suite may have a large number of test cases. Many of the test cases have dependence relations on one another, so that if one test case fails, a set of other test cases will also fail. It is useful to order the test cases to reduce the number of redundant tests. [Chanson91] [Chanson92-1] proposed an Abstract Test Case Relation Model (ATCRM) to solve this problem.

In this thesis, we report our experience in implementing a test case generation method using combined data flow and control flow analysis to overcome the drawbacks of conventional methods [Chanson92-2]. The ATCRM is also automatically generated from the protocol specifications during test case generation [Chanson91] [Chanson92-1].
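The test-case ordering idea can be illustrated with a small sketch. This is not the ATCRM itself, only a toy illustration of skipping test cases whose prerequisites have failed; the dependency map and test-case names are invented:

```python
# Minimal sketch (not the ATCRM algorithm) of the idea that ordering
# relations between test cases let a tester skip cases guaranteed to fail.
# 'depends_on' is a hypothetical dependency map: a test case fails whenever
# any test case it depends on fails.

def select_and_run(test_cases, depends_on, run):
    verdicts = {}
    for tc in test_cases:  # assume topological order: prerequisites first
        if any(verdicts.get(d) == 'fail' for d in depends_on.get(tc, [])):
            verdicts[tc] = 'skipped'  # guaranteed to fail; do not execute
        else:
            verdicts[tc] = 'pass' if run(tc) else 'fail'
    return verdicts

# Toy usage: tc2 and tc3 extend tc1, so a failure of tc1 makes them pointless.
deps = {'tc2': ['tc1'], 'tc3': ['tc1']}
v = select_and_run(['tc1', 'tc2', 'tc3'], deps, run=lambda tc: tc != 'tc1')
```

Here a single failure of the prerequisite case prunes both dependent cases without executing them, which is the saving the ATCRM aims for.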
We also propose a constraint generation method based on normal form specifications for ESTELLE.Y and discuss its implementation.

1.2 Relation of work to UBC Protocol Testing Environment

[Figure 1.1: UBC Testing Environment]

An integrated protocol testing environment currently under development in the Computer Science Department of the University of British Columbia is shown in Figure 1.1. This system consists of a test case generator, a test case manager, a tester and a trace analyzer.

Our model, shown in Figure 1.2, incorporates a test case generator called TESTGEN [Vuong90] [Lu90]. TESTGEN, and hence the new integrated test case generation system, is based on a protocol data structure called PDS, which is an internal representation of a protocol's ESTELLE.Y [Vuong91] [Lu91] specification.

[Figure 1.2: Test Case Generation System Overview]

We implemented the following three components of the test case generation system (see Figure 1.2):

1. Unified Test Sequence Generator. This generates test sequences using a combined control-flow and data-flow analysis approach [Chanson92-2].

2. Constraint Generator. It derives constraints for the input and output parameters of each test case in a test suite to guide the test data selection process.

3. ATCRM Generator. It derives the ATCRM [Chanson91] [Chanson92-1] from the protocol specification. The ATCRM serves as the basis for ordering relations among test cases and facilitates the dynamic selection of test cases for execution.

With the aid of these test tools, it is possible to do conformance testing automatically and more efficiently.

1.3 Layout of the thesis

Chapter 2 reviews the previous work that has been done in test case generation and selection for conformance testing.
Chapter 3 describes the model for generating test cases using combined control flow and data flow analysis [Chanson92-2] [Chanson92-1]. In Chapter 4, a method is proposed and implemented to generate data constraints using symbolic execution. Chapter 5 discusses the derivation of the ATCRM model. Chapter 6 concludes the thesis with some directions for future work.

Chapter 2
Review of related work

Test case generation is an essential aspect of protocol conformance testing. Research in this area has focused on two related problems: automatic test sequence generation and automatic test data selection. In this chapter, we review various techniques that have been developed to solve these problems.

2.1 Test case generation methods

Most test case generation methods are based on testing either the control flow or the data flow of the formal specification of protocols.

2.1.1 Control flow testing

The purpose of control flow testing is to ensure that the IUT behaves as specified by the Finite State Machine (FSM) representation of the protocol. It tries to find \"transition errors\" and \"transfer errors\" [Sidhu89] [Vuong89] in the IUT. The concepts of a transition tour and a characterizing sequence are used in control flow testing. A transition tour is used to traverse every transition of the FSM. A characterizing sequence for a state is defined as a unique input/output sequence not exhibited by any other state in the FSM [Sidhu89], and is used to check that the FSM has reached that state.

We assume that the FSM for a protocol specification is minimal and strongly connected, which means that from every state there is a path that reaches any other state in the FSM [Sidhu89].
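As a rough illustration of this assumption, an FSM can be represented as a transition table and checked for strong connectivity. The encoding and the two-state toy machine below are invented for illustration and are not taken from the thesis:

```python
# A sketch of the FSM assumption above: transitions as (state, input) ->
# (output, next_state), plus a strong-connectivity check. The tiny two-state
# machine here is invented for illustration.

def reachable(fsm, start):
    # States reachable from 'start' by following transitions.
    seen, stack = {start}, [start]
    while stack:
        s = stack.pop()
        for (src, _inp), (_out, dst) in fsm.items():
            if src == s and dst not in seen:
                seen.add(dst)
                stack.append(dst)
    return seen

def strongly_connected(fsm, states):
    # Every state must reach every other state.
    return all(reachable(fsm, s) == set(states) for s in states)

fsm = {
    ('idle', 'CONreq'): ('CR', 'wait'),
    ('wait', 'CC'): ('CONcnf', 'idle'),
    ('wait', 'DR'): ('DISind', 'idle'),
}
ok = strongly_connected(fsm, ['idle', 'wait'])
```

A machine failing this check would have states from which some other state is unreachable, and the test-generation methods below would not apply.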
Existing control flow testing methods include the T-method [Naito81], the D-method [Gone70], the W-method [Chow78], the UIO-method [Sabn88] [ChVu89], and the UIOv-method [Vuong89] [Chan89].

The T-method [Naito81] [Sidhu89] uses a transition tour sequence that covers all the transitions in an FSM to check for output errors, i.e. transition errors. This method is simple, but it does not check the tail state of each transition. Therefore transfer errors giving the same output cannot be detected by a transition tour.

Improvements on the T-method have been explored. Briefly, in addition to traversing each transition at least once, they apply a characterizing sequence at the end of each transition to check for transfer errors. Using this, both transition and transfer errors can be detected. The characterizing sequences can be derived in different ways, resulting in different test methods:

• The D-method assumes that the FSM has a distinguishing sequence DS, which is an input sequence that produces a different output sequence for each state in the FSM [Gone70] [Sidhu89].

• The W-method applies a characterization set of an FSM called W, which consists of a number of input sequences such that the last output symbols observed from applying the strings in W in the same order are different for each state of the FSM [Chow78] [Sidhu89].

• The UIO-method derives a unique input/output sequence for each state in the FSM [Sabn88] [Sidhu89]. A UIO exists for most practical FSMs. The UIOv-method is an improved UIO method with a verification phase, which checks to make sure the UIOs generated from the FSM are still valid for the IUT [Vuong89] [Chan89].

The W-method, the DS-method and the UIOv-method produce test sequences with slightly better fault coverage than the UIO-method [Ko90]. However, they produce longer test sequences [Ko90], which means more effort has to be put into testing. The
TheDS-method is not applicable to all FSMs, because a DS may not exist for some of them[Sidhu89]. Therefore, historically the UI0 method has been the most widely used, thoughin rare cases, a UI0 may not exist for some states in a FSM. In that case, state signaturessimilar to the W-set are used for those states.CHAPTER 2. REVIEW OF RELATED WORK^ 10Some tools have been developed based on the methods introduced above. A few aredescribed below:• TESTL [Testi] is a test design environment developed at the University of Montrealin which test cases can be generated by either the UIO-method or the UI0v method.• TENT [Sato89] is a test sequence generation tool for communication systems devel-oped at Mitsubish Electric Corporation. MT-method and SW-method are imple-mented in the TENT. MT-method uses the T-method to generate the test sequence.SW-method uses the W-method to identify the present state of the JUT while test-ing.The above methods are FSM-based approaches which analyze the control flow of thespecifications only. These methods leave the data part of the protocol untested.2.1.2 Data flow testingData flow testing originated from attempts in checking the effect of test data objectsin software systems. Data flow testing is usually based on a data flow graph which isa directed graph with the nodes representing the functional units of a program and theedges representing the flow of data objects. The functional unit could be a statement, atransition, a procedure, or even a program, depending on the granularity of the study.Data flow testing strategies are based on selecting subpaths that satisfy some character-istics of data flow for all data objects. There are many strategies for this, such as allCHAPTER 2. 
definition and use path strategy and all used path strategy [Ural87] [Beiz90] [Frank88]. Data flow testing is also used in conformance testing of communication protocols to examine invalid behaviour involving the use of data.

In recent years, work in this area includes the following:

• [Sari87] employs a functional program testing method in data flow testing. It claims that functional program testing gives the best results for detecting errors. This method uses a data flow graph (DFG) to model the flow of information in a normal form ESTELLE specification, excluding major state changes. Using a decomposition method and a functional partitioning method, they derive DFG blocks which are considered to be data flow functions (DFFs). Simultaneously, data flow dependencies are also obtained.

CONTEST-ESTL [Sari89] is a realization of the above test generation method. It takes a single-module specification and generates test sequences in a semi-automatic manner. A lot of interaction with the test designer is needed to identify the elementary blocks to be merged, since this process cannot be automated.

• [Ural87] [Ural91] generate test sequences from the normal form specification of ESTELLE to cover all definition and usage pairs of variables. A given specification is transformed into a flow graph which explicitly identifies the associations between definitions and usages of a variable. The resulting test sequences can check whether an IUT establishes the desired relations between the input parameters, internal variables and output parameters.

• [ChVu89] [Vuong92] suggested a hybrid technique which combines two test sequence generation methods to yield a test sequence that separately covers both the control and data flow aspects of protocols specified in ESTELLE. The control flow portion of the protocol is tested first, followed by the data flow portion.

• [Miller92] uses a limited version of ESTELLE.
Its corresponding EFSM is converted into an equivalent FSM. A data flow graph is constructed directly from the \"equivalent\" FSM. Test sequences are derived to cover all definition and observation paths. Control flow testing sequences are also combined into the definition and observation paths to form test cases.

• [Wu89] [Wu89-1] proposed a test sequence derivation method that backtracks the data dependency along each path of a behavior tree.

• TESTGEN [Vuong91] [Lu91] generates test sequences from specifications in ESTELLE.Y. The test cases are generated by first building the behavior tree and then tracing the definition and use paths in it.

• [Chun90] describes a technique for generating test cases for specifications written in a limited version of ESTELLE. The approach is quite similar to the UIO-method in control flow testing.

2.2 Constraint Generation

Comparatively speaking, much more work has been done on the generation of test sequences for control flow coverage and data flow coverage than on the generation and selection of data from specifications to make the test sequences executable.

Although test cases can be generated which consist of a sequence of input/output primitives, there are many conditions along the path which restrict the data that can be selected. These conditions are called constraints. The Constraint Part is one of the four major parts of a test suite in TTCN [ISO9646-3]. Constraint generation is required for the selection of data for test cases [Clar81] [Korel90] [DeMi91]. Yet, not much work has been done on constraint generation from protocol specifications.

[Chun90] represents one of the few efforts in this direction. A method similar to ours is used to obtain input constraints, but no output constraints are generated. The results of our method are closer to the ISO9646-3 specifications, which define both input and output
constraints.

Conventionally, symbolic execution is used to generate the constraints for each test case [King76] [Clar76]. These constraints can then be used to select test data using the well-known techniques of the constraint satisfaction problem (CSP) defined in AI (see e.g., [Jaff86] [Jaff87] [CLPR]).

Chapter 3
Generating unified test sequences

In this chapter, the issue of automatically generating Unified Test Sequences (UTS), which take into account both the control and data flow analyses [Chanson92-2], is addressed.

3.1 Testing Control Flow

The control flow part of the EFSM model is modeled as an FSM, and can be tested using the existing FSM testing methods [Gone70] [Sidhu89] [Chow78] [Vuong89] [Chan89] [Naito81]. As mentioned in Chapter 2, the fault model is based on transition faults and transfer faults. The basic idea of FSM testing is to check whether a transition generates the expected output and whether the final state is correctly reached after the transition.

Due to the limited observability of the IUT, the state reached is usually checked by a Characterizing Sequence (CS). As mentioned in Chapter 2, the most often used methods include the UIO, DS and W methods.

3.1.1 The concept of Cyclic Characterizing Sequences (CCS)

For the purpose of generating test sequences that cover both control and data flows, the concepts of converging states and Cyclic Characterizing Sequences (CCS) are used.

Definition 3.1 (Converging State)
States si1 and si2 are called converging states if, for two transitions (si1, t1, sj1; I1/O1) and (si2, t2, sj2; I2/O2) in an FSM, the following conditions are satisfied:

    si1 ≠ si2
    I1/O1 = I2/O2
    sj1 = sj2

As illustrated in Figure 3.1, si's outgoing transition t1 and sj's outgoing transition t2 have the same external behavior, and reach the same destination state s.
Thus si and sj are converging states.

Definition 3.2 (Cyclic Characterizing Sequence)
A cyclic characterizing sequence for state si, denoted CCS(si), is a concatenation of the characterizing sequence of si and a sequence that brings the FSM back to si.

For example, as shown in Figure 3.2, the characterizing sequence (CS) of state S ends at state Tail, and Bk is a sequence from state Tail back to state S. Therefore CCS(S) = CS@Bk, where @ is the concatenation operator.

[Figure 3.1: Converging states]
[Figure 3.2: Cyclic Characterizing Sequence]

Since the FSMs we are considering are strongly connected, a CCS exists for every state of the FSM which has a characterizing sequence. The strongly connected condition guarantees that there is a sequence to bring the FSM from state Tail back to the original state S. Since CCS(S) contains CS(S), it is also a characterizing sequence of state S. This is explained further below.

3.1.2 Properties of CCS

Property 1. CCS(s) is a state signature for state s.

Since CCS contains a CS as its first part, it is itself a CS of the state, and is thus a state signature for that state.

Property 2. If S is the tail state of a sequence

    SEQ0,p = {(s0, t0, s1; I0/O0), (s1, t1, s2; I1/O1), ..., (sp, tp, S; Ip/Op)}

and no converging states appear in SEQ, then SEQi,p@CCS(S) is a characterizing sequence of state si, i = 0, 1, ..., p.

Since there is no converging state along the sequence path, the sequence SEQi,p@CCS(S), (i = 1, 2, ..., p), cannot possibly be accepted by any other state and is thus able to distinguish state si from all other states. Hence all transitions and states along the path can be considered tested with SEQ0,p@CCS(S).
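A small sketch of Definition 3.2: CCS(s) is the characterizing sequence concatenated with a shortest path from the CS's tail state back to s. The FSM encoding and toy machine below are invented for illustration, and the UIO (used here as the CS) is assumed to be precomputed:

```python
from collections import deque

# Sketch of Definition 3.2: CCS(s) = CS(s) @ (shortest path from the CS's
# tail state back to s). The transition encoding (state, input, output,
# next_state) and the toy machine are invented for illustration.

def shortest_path_back(transitions, src, dst):
    # BFS over transitions; returns the I/O labels of a shortest path.
    q, seen = deque([(src, [])]), {src}
    while q:
        s, path = q.popleft()
        if s == dst:
            return path
        for (st, inp, out, nxt) in transitions:
            if st == s and nxt not in seen:
                seen.add(nxt)
                q.append((nxt, path + [(inp, out)]))
    return None  # cannot happen in a strongly connected FSM

def ccs(transitions, state, uio, uio_tail):
    return uio + shortest_path_back(transitions, uio_tail, state)

T = [('S', 'a', 'x', 'Tail'), ('Tail', 'b', 'y', 'S'), ('S', 'c', 'z', 'S')]
seq = ccs(T, 'S', [('a', 'x')], 'Tail')  # CS ends at Tail; Bk returns to S
```

Strong connectivity is what guarantees `shortest_path_back` always finds a return path, mirroring the argument in the text.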
Property 3. If S is the tail state of a sequence

    SEQ0,p = {(s0, t0, s1; I0/O0), (s1, t1, s2; I1/O1), ..., (sp, tp, S; Ip/Op)}

and converging states si1, si2, ..., sik appear along the sequence path (i1 < i2 < ... < ik), [...]

[Figure 3.6: Backtracking def-use paths of variable x defined in transition t]
[Figure 3.7: The corresponding transition paths in tree form]

The traversing procedure uses a backtracking algorithm:

    from the last node in the queue to the initial node
    while (Node not empty and not traversed and
           Node's father not empty and
           Node globally dependent on the previous transition
           with regard to variable x)
    begin
        Set Node to \"visited\";
        Node <- Node's father;
        Path length incremented by 1;
    end

In a real protocol, some of the definition and use paths generated can be redundant, so it is necessary to go through a checking process to eliminate the redundant paths.

It is possible for an FSM to have two different paths, going from the same start state and ending in the same end state, that exhibit the same I/O behavior. This means it is possible to have two different def-use paths with exactly the same external behaviors for a given input sequence. To an external observer, the I/O test sequences will be the same. The difference between the two transition sequences only becomes explicit when the execution condition of each transition is taken into account.
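The backtracking traversal above can be sketched as follows; the parent-pointer `Node` structure and the `depends` predicate are simplified stand-ins for the thesis's protocol data structures, not the actual implementation:

```python
# Sketch of the backtracking traversal: each node records the transition
# taken and a pointer to its father, and we walk from a use of variable x
# back toward its definition while the dependence on x holds.

class Node:
    def __init__(self, transition, father=None):
        self.transition = transition
        self.father = father
        self.visited = False

def backtrack_du_path(node, depends):
    # Collect transitions from the use back to the definition of x.
    path = []
    while node is not None and not node.visited and depends(node.transition):
        node.visited = True
        path.append(node.transition)
        node = node.father
    path.reverse()  # report the path in definition-to-use order
    return path

# Toy tree echoing Figure 3.7: T1 defines x; one branch uses it via T5, T6.
root = Node('T1')
leaf = Node('T6', father=Node('T5', father=root))
du = backtrack_du_path(leaf, depends=lambda t: t in {'T1', 'T5', 'T6'})
```

Marking nodes as visited is what keeps repeated traversals from re-walking shared prefixes of the tree.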
An example can be found in section 3.4.5. This issue will be addressed in the next chapter.

The definition and use paths generated are written to a file as an intermediate result. The paths can be read from the file when they need to be referenced.

3.4.3 Generating converging states and their CCS

In order to merge the data flow and control flow analyses, it is necessary to identify converging states and to generate their corresponding CCSs.

An algorithm to find all the converging states in a protocol follows:

    For every state sp in the specification,
    begin
        Compute all of its incoming transitions;
        Compare the I/O pattern between every pair of incoming
        transitions (si1, ti, sp) and (si2, tj, sp);
        if same, then si1 and si2 are converging states.
    end

The algorithm used to generate CCS(s), where s is a state in the EFSM specification, is given below:

    For each converging state s in the specification
    begin
        Generate UIO(s); let st be the tail state of UIO(s);
        Find the shortest path from st back to state s;
        Concatenate UIO(s) and this path to form CCS(s).
    end

The algorithm used to generate UIO(s) follows:

    Find_UIO(state_s)
    begin
        Length_of_UIO = 1;
        For (each outgoing transition t of state s)
        begin
            if (the I/O pattern of t is unique in the specification)
            begin
                UIO(s) <- transition t;
                return;
            end
            else
            begin
                push_queue(t);
                make a list of the transitions with the same I/O pattern;
            end
        end
        Continue_Search_UIO(Queue);
    end

    Continue_Search_UIO(Queue)
    begin
        While (Queue not empty)
            Node <- pop(Queue);
            for (each of Node's outgoing transitions t)
            begin
                I/O_pattern <- I/O_pattern @ I/O_pattern_of_t;
                if (I/O_pattern is unique)
                    UIO(s) <- transition_list_corresponding_to_I/O_pattern
                else
                    push(transition_list_corresponding_to_I/O_pattern, Queue);
            end
    end

3.4.4 Generating Unified Testing Sequences (UTS)

The generation of the UTS is straightforward using the merging policy described in section 3.3. The implementation follows the algorithm given below.
    Merge(Def_Use_Filename)
    begin
        For (each definition-use path DU (with start state S, tail
             state T) in the Def_Use_File)
        begin
            if ((no converging state appears on path DU) or
                (the CCSs of the converging states do not redefine x))
            begin
                Path <- Shortest_Path(Init_State, S) @ DU;
                Path <- Path @ Shortest_Path(T, Init_State);
                Path <- Path @ CCS(Init_State);
                For (each converging state C on Path)
                begin
                    insert CCS(C) into Path at point C;
                end
                Print Path into a file;
                Mark all transitions along Path C_Visited;
            end
            else if (the CCSs of the converging states redefine x)
            begin
                Path <- Shortest_Path(Init_State, S) @ DU;
                Path <- Path @ Shortest_Path(T, Init_State);
                Print Path into a file;
            end
        end
        Connect all transitions not in the C_Visited set with a few
        transition tours; wherever a converging state C appears,
        insert its CCS at the point of C.
    end

3.4.5 A TPO example

The ISO Transport Protocol, Class 0 (TPO) is a simple protocol for the transport layer. It is assumed that the underlying communication network is reliable and error-free. Its ESTELLE.Y + ASN.1 specification is given in Appendix B and Appendix C. Its EFSM is given in Figure 3.8, with transitions labelled as follows:

    T0 = TCREQ / CR          T9 = TCRES / DR, TDIND
    T1 = TCREQ / TDIND       T10 = TDREQ / DR
    T2 = CR / TCIND          T11 = TDATR / -
    T3 = CR / TCIND          T12 = - / DT
    T4 = CR / DR             T13 = DT / -
    T5 = CC / TCCON          T14 = - / TDATR
    T6 = DR / NDREQ, TDIND   T15 = TDREQ / NDREQ
    T7 = DR / NDREQ, TDIND   T16 = NDIND / TDIND
    T8 = TCRES / CC          T17 = NRIND / TDIND

[Figure 3.8: EFSM of TPO]

The definition and use paths generated for protocol TPO are shown in Table 3.1 and Table 3.2. Examples of redundant paths can be found. For instance, paths 4, 5, 6, 7 and path 9 are the same definition and use path for different variables. Also, paths 12, 13, 14, 15 and 17 share the same definition and use path for different variables. After eliminating the redundant paths, the resulting definition and use paths are shown in Table 3.3.
Although path 7 and path 4 traverse the same states and exhibit the same external behavior, they actually cover different transitions with different predicates.

The final Unified Testing Sequence for TP0 is listed in Appendix A. As shown, the number of test cases generated is 14, comprising 75 test steps, whereas the total number of test steps for TP0 by the hybrid method in [ChVu89] is 64, and the total number generated for TP0 by definition and use analysis in [Ural87] is 84. However, no general conclusion about test sequence length and test coverage can be drawn from a single example. Our method performs well for the practical protocols we have experimented with.

Table 3.1: Def-use paths generated for TP0

d-u path1: (TCREQ,-)/(-,CR), (-,DR)/(NDREQ,-), (-,-)/(TDIND,-), (-,CR)/(TCIND,-)
    states traversed: idle, wfcc, idle, wftr
    variable: localRef
d-u path2: (-,CR)/(TCIND,-), (TDREQ,-)/(-,DR)
    states traversed: idle, wftr, idle
    variable: remRef
d-u path3: (-,CR)/(TCIND,-), (TCRES,-)/(TDIND,DR)
    states traversed: idle, wftr, idle
    variable: remRef
d-u path4: (-,CR)/(TCIND,-), (TCRES,-)/(-,CC)
    states traversed: idle, wftr, data
    variable: remRef
d-u path5: (-,CR)/(TCIND,-), (TCRES,-)/(-,CC)
    states traversed: idle, wftr, data
    variable: tpduSize
d-u path6: (-,CR)/(TCIND,-), (TCRES,-)/(-,CC)
    states traversed: idle, wftr, data
    variable: callingAddr
d-u path7: (-,CR)/(TCIND,-), (TCRES,-)/(-,CC)
    states traversed: idle, wftr, data
    variable: calledAddr
d-u path8: (-,CR)/(TCIND,-), (TCRES,-)/(TDIND,DR)
    states traversed: idle, wftr, idle
    variable: qtsEst
d-u path9: (-,CR)/(TCIND,-), (TCRES,-)/(-,CC)
    states traversed: idle, wftr, data
    variable: qtsEst
d-u path10: (-,CR)/(TCIND,-), (TDREQ,-)/(-,DR)
    states traversed: idle, wftr, idle
    variable: remRef
Table 3.2: Def-use paths generated for TP0 (continued)

d-u path11: (-,CR)/(TCIND,-), (TCRES,-)/(TDIND,DR)
    states traversed: idle, wftr, idle
    variable: remRef
d-u path12: (-,CR)/(TCIND,-), (TCRES,-)/(-,CC)
    states traversed: idle, wftr, data
    variable: remRef
d-u path13: (-,CR)/(TCIND,-), (TCRES,-)/(-,CC)
    states traversed: idle, wftr, data
    variable: tpduSize
d-u path14: (-,CR)/(TCIND,-), (TCRES,-)/(-,CC)
    states traversed: idle, wftr, data
    variable: callingAddr
d-u path15: (-,CR)/(TCIND,-), (TCRES,-)/(-,CC)
    states traversed: idle, wftr, data
    variable: calledAddr
d-u path16: (-,CR)/(TCIND,-), (TCRES,-)/(TDIND,DR)
    states traversed: idle, wftr, idle
    variable: qtsEst
d-u path17: (-,CR)/(TCIND,-), (TCRES,-)/(-,CC)
    states traversed: idle, wftr, data
    variable: qtsEst
d-u path18: (-,CC)/(TCCON,-), (-,-)/(-,TDATR)
    states traversed: wfcc, data, data
    variable: inBuffer
d-u path19: (-,CC)/(TCCON,-), (-,-)/(-,DT)
    states traversed: wfcc, data, data
    variable: outBuffer
d-u path20: (TCRES,-)/(-,CC), (TDREQ,-)/(NDREQ,-), (-,CR)/(TCIND,-)
    states traversed: wftr, data, idle, wftr
    variable: localRef
Table 3.3: Def-use paths without redundancy for TP0

d-u path1: (TCREQ,-)/(-,CR), (-,DR)/(NDREQ,-), (-,-)/(TDIND,-), (-,CR)/(TCIND,-)
    states traversed: idle, wfcc, idle, wftr
    variable: localRef
d-u path2: (-,CR)/(TCIND,-), (TDREQ,-)/(-,DR)
    states traversed: idle, wftr, idle
    variable: remRef
d-u path3: (-,CR)/(TCIND,-), (TCRES,-)/(TDIND,DR)
    states traversed: idle, wftr, idle
    variables: remRef, qtsEst
d-u path4: (-,CR)/(TCIND,-), (TCRES,-)/(-,CC)
    states traversed: idle, wftr, data
    variables: remRef, tpduSize, callingAddr, calledAddr, qtsEst
d-u path5: (-,CR)/(TCIND,-), (TDREQ,-)/(-,DR)
    states traversed: idle, wftr, idle
    variable: remRef
d-u path6: (-,CR)/(TCIND,-), (TCRES,-)/(TDIND,DR)
    states traversed: idle, wftr, idle
    variables: remRef, qtsEst
d-u path7: (-,CR)/(TCIND,-), (TCRES,-)/(-,CC)
    states traversed: idle, wftr, data
    variables: remRef, tpduSize, callingAddr, calledAddr, qtsEst
d-u path8: (-,CC)/(TCCON,-), (-,-)/(-,TDATR)
    states traversed: wfcc, data, data
    variable: inBuffer
d-u path9: (-,CC)/(TCCON,-), (-,-)/(-,DT)
    states traversed: wfcc, data, data
    variable: outBuffer
d-u path10: (TCRES,-)/(-,CC), (TDREQ,-)/(NDREQ,-), (-,CR)/(TCIND,-)
    states traversed: wftr, data, idle, wftr
    variable: localRef

Chapter 4

Generation of Data Constraints

In Chapter 3, techniques were presented for generating test cases that cover the control flow paths using combined control and data flow analyses. The resulting test cases are in the form of input/output primitives, i.e., I1/O1, I2/O2, ..., In/On. Each primitive, say Ij or Oj, has some parameters P1, P2, ..., Pk (k >= 0). As specified in ISO 9646 [ISO9646-1], test cases with symbolic parameters to be substituted by actual values at run time are known as parameterized test cases. The input domain for these parameters may be infinite, so it is not practical, and sometimes not possible, to perform exhaustive testing.
Hence, we need representative and valid values for these parameters so that every control flow path can be executed.

In this chapter, we present a method to automatically generate data constraints for these parameterized test cases. The constraints can be used as guidelines for selecting values of input parameters and for verifying the correctness of output parameters during testing. A prototype of the proposed scheme has also been implemented.

4.1 The model

We assume that the specifications in our model are in normal form (NFS) [Sari87]. In NFS, a new transition is created for every distinct path in the control flow, and the PROVIDED clauses are modified to reflect the conditions required for taking these paths. Recursion and loops with variable bounds are not allowed.

Translating a general specification into NFS often produces more transitions because the loops are unrolled and the predicates of the if statements are combined with the predicates in the transitions. After this transformation, a transition segment contains only assignment statements, which makes symbolic substitution feasible in our implementation.

4.2 Constraint Derivation

The data objects in a specification can be divided into three classes: input parameters, output parameters, and internal variables. Constraints are defined as the valid ranges of the values of these parameters and variables. Since each IUT is viewed as a black box, its observability and controllability are greatly limited: only the input constraints can be set and only the output constraints can be observed externally. Under no circumstances can the constraints of internal variables be seen by the tester. The internal variables serve as a means to relate the output and input parameters.
Since the resulting relational constraints are to be expressed in terms of input and output parameters only, symbolic interpretation techniques must be used to express all the internal variables in the predicates in terms of the input parameters.

4.2.1 Symbolic execution

Symbolic execution is a useful technique for generating structural test constraints [Clar81]. Instead of supplying normal inputs to a specification, symbols representing arbitrary values are supplied. The execution proceeds as in normal execution except that the values of some variables may be symbolic formulas over the input symbols.

Symbolic execution can be performed in a forward expansion or backward substitution manner. Forward expansion essentially models the actions when transitions are actually executed: it begins with the start node and works toward the final node, while the backward substitution technique begins with the final node and works toward the start node [Clar76] [King76]. Forward expansion is normally used for path computation while backward substitution creates the path condition. Both techniques are used in our constraint generation procedure, which is discussed in the following two sections.

4.3 Deriving input constraints for controllability

In an Estelle specification, each transition is associated with a condition. The evaluation of the predicates in the condition determines whether the transition can be followed or not. A test case consists of a sequence of transitions, which in turn is a path in the control flow or data flow graph. In general, each transition forms a test step; for a test step to be executable, its condition must be satisfied by the input parameters. The constraint of a test case is the conjunction of the individual predicate conditions generated at each test step along the control path. Not all control paths that exist syntactically are executable.
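The conjunction of per-step predicate conditions can be sketched as follows. This is a minimal Python illustration, with constraint values echoing the TP0 example later in this chapter; encoding each step's condition as a callable over a dictionary of input parameters is an assumption of the sketch, not the thesis tool.

```python
def path_condition(step_checks):
    """Conjoin per-step relational constraints into a single
    executable path condition over a candidate input assignment."""
    def satisfied(inputs):
        return all(check(inputs) for check in step_checks)
    return satisfied

# Constraints of the kind generated for TP0: one predicate
# condition per test step along the control path.
cond = path_condition([
    lambda v: v["TCREQ.qtsReq"] >= 0,   # step 1
    lambda v: v["DR.disReason"] == 0,   # step 2
    lambda v: v["TCRES.qtsReq"] <= 26,  # step 4
])
```

An input assignment satisfying every step's condition makes the whole path executable; violating any single step's condition makes the conjunction, and hence the path, unsatisfiable.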
Only if input data exist which satisfy the path conditions will the control path be executable and usable in testing [DeMi91] [Kore190].

Usually, a predicate involves both input parameters and internal variables. However, in generating input data to satisfy a path condition, we must work with the input parameters. By replacing each internal variable in a predicate with its symbolic value in terms of the input parameters, the predicate is transformed into an equivalent predicate over input parameters only; this is called an interpretation of the predicate [Ram76]. Each path that uses the predicate defines one symbolic value for each variable in the predicate and thus provides an interpretation of the predicate.

Unless a test step contains no input statement, each test step should receive one input primitive with its related parameters. The same input primitive can appear in several different test steps. An input primitive and its parameters are local to the test step in which they reside, and the only way they can have global influence on other test steps is by assigning values to some global variable(s). To avoid confusion, it is necessary to specify the constraints with respect to a specific test step.

The symbolic execution here uses backward substitution from the use of a variable to its definition. The test input constraints are structured from bottom to top as the path is traversed in reverse order. The variables used in the predicate should have been defined somewhere earlier along the path. An advantage of this method is that no storage is required for the symbolic values of the variables. A recursive algorithm implementing this symbolic execution is given later in this section.
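The backward substitution idea can be sketched compactly. This is a toy illustration under assumed encodings (definitions and predicates as plain expression strings, substituted token by token); the thesis implementation operated on the specification itself and is not reproduced here.

```python
import re

def interpret(predicate, path_defs):
    """Backward substitution of internal variables.
    path_defs: one dict {variable: defining expression} per transition,
    listed in path order (first transition first). Each variable in
    `predicate` is replaced by its most recent upstream definition,
    recursively, until only input parameters and constants remain."""
    expr = predicate
    # Walk the path in reverse, from the use back toward the definitions.
    for defs in reversed(path_defs):
        for var, rhs in defs.items():
            expr = re.sub(rf"\b{re.escape(var)}\b", f"({rhs})", expr)
    return expr
```

With a first transition defining qtsEst as 26 and a later predicate TCRES.qtsReq <= qtsEst, the interpretation is TCRES.qtsReq <= (26), matching the TP0 constraint derived in this section; a definition that itself refers to an earlier variable is resolved by the continued backward walk.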
The result of this symbolic execution is a set of equality and inequality constraints on the input parameters for each step; it defines the subset of the input space that leads to the execution of the chosen path.

Note that different paths may interpret the same predicate differently. An interpretation of a predicate is obtained by replacing the variables of the predicate with symbols obtained upstream on the path specified by a test case. The symbols are the result of the previous substitutions, which makes the current interpretation dependent on a particular path. As shown in Figure 4.1, Test_case1 and Test_case2 share one transition with predicate g(x). In Test_case1, g(x) is interpreted as g(I12.ad), and in Test_case2, g(x) is interpreted as g(I21.sign). The interpretation of a predicate thus depends on the test case it is in and serves as a path condition for that test case.

Figure 4.1: One predicate, two different interpretations

An algorithm which symbolically interprets the variables in the predicates with the given input parameters and generates the path conditions for each transition is given below:

Input: All transition paths in the protocol specification.
Output: A set of constraints on the input parameters for every test case.

Algorithm:
/* Take one path i at a time. Use backtracking: start from the
   last transition on path i. */
for (each transition t from the last transition on the path up to
     the first transition on the path)
begin
    for (every variable x appearing in the predicate of transition t)
    begin
        take the transition t1 immediately before (upstream of)
        transition t;
AGAIN:  if (x is defined in t1)
        begin
            replace x with its definition (see Chapter 3) in t1;
        end
        else if (there is a transition t2 immediately before t1
                 along the path of this test case)
        begin
            set t1 = t2;
            go to AGAIN
        end
        else
            finish;
    end
end

Generally, the input parameters may be of different types, e.g.,
integer, boolean, etc. These are called type constraints, and the types must match during substitutions. Moreover, the symbolic execution described above generates path constraints in terms of input parameters; these are known as relational constraints. The type constraints together with the relational constraints define an allowable input domain for the test cases and can be used as part of the final documentation for the generated test suite. Obviously, the meaningful test data produced by a test data generator should lie within this input domain.

Figure 4.2: Symbolic interpretation

The resulting constraints depend heavily on how the control flow depends on its inputs. If the control flow is completely independent of the input parameters, no relational constraint will exist at any test step along the path. The more the control flow depends on the input parameters, the more input constraints there are.

As an example, the relational constraints generated for the test case shown in Figure 4.3 are the following set of equalities and inequalities:

step1: TCREQ.qtsReq >= 0
step2: DR.disReason == 0
step4: TCRES.qtsReq <= 26

Figure 4.3: Constraint generation example (transitions TCREQ/CR, DR/NDREQ,TDIND, CR/TCIND, TCRES/DR, with predicates TCREQ.qtsReq >= 0, DR.disReason == 0, localRef <> 0, TCRES.qtsReq <= qtsEst, and assignments localRef <- 300, qtsEst <- 26)

4.4 Deriving output constraints

With the constraints on input parameters, the values of the output parameters are also constrained. In TTCN [ISO9646-3], these constraints are specified with each test step.
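The input-to-output mapping functions discussed in this section can be sketched by forward substitution. This is a minimal illustration under assumed encodings (assignments as expression-string pairs); it mirrors the spirit of the derivation, not the thesis implementation.

```python
import re

def output_constraints(steps):
    """Forward symbolic execution over one test case.
    steps: list of (assignments, output_params) per test step;
    assignments are (lhs, rhs) expression-string pairs executed in
    order. Returns (step_index, output_param, symbolic_expression)
    triples, each expression over input parameters and constants."""
    env = {}  # variable -> fully expanded symbolic value

    def expand(rhs):
        # Replace every known variable by its (already expanded) value.
        for var, val in env.items():
            rhs = re.sub(rf"\b{re.escape(var)}\b", f"({val})", rhs)
        return rhs

    rows = []
    for i, (assigns, outputs) in enumerate(steps):
        for lhs, rhs in assigns:
            env[lhs] = expand(rhs)
        for p in outputs:
            rows.append((i, p, env.get(p, p)))
    return rows
```

For instance, if one step stores an input field in an internal variable (remRef <- CR.sourceRef) and a later step emits it (DR.destRefer <- remRef), the derived output constraint ties DR.destRefer back to CR.sourceRef, prefixed with the emitting step's index.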
The verdict of the test result takes these constraints into consideration.

One notable difference exists between the input constraints and the output constraints. While input constraints mostly consist of wide ranges of data values which the parameters may assume, each point in the output domain is calculated from the corresponding values in the input domain. The mapping functions from the input domain to the output domain are regarded as the output constraints; they can be derived from the specifications using the algorithm described below.

In a protocol, an output action takes place only after all of its related parameters have been assigned appropriate values through internal variables, constants, or local input parameters. If an output parameter is assigned from a local input parameter, the constraint relation can be observed within one test step and no global flow analysis is required. If this is not the case, symbolic interpretation is used to substitute each variable with the closest definition of that variable found backward along the path. If the variable under consideration is defined by another variable, this process continues recursively until the variable has been interpreted in terms of input parameters and constants only.

The mapping from the input domain to the output domain is described as follows. Suppose test suite TSuite consists of n test cases, and each test case TCase_i consists of m test steps:

TSuite = {TCase_0, TCase_1, ..., TCase_n-1},
TCase_i = {TStep_0, TStep_1, ..., TStep_m-1}.

For each test step TStep_j, there is an observable interaction between the IUT and the
environment, with the input stimulus and output action denoted by

I_j(pi_j1, pi_j2, ..., pi_jk)

and

O_j(po_j1, po_j2, ..., po_jl)

respectively.

As shown in Figure 4.4, each test case with k test steps under consideration has k input and output interactions:

I1/O1, I2/O2, ..., Ik/Ok.

If the test cases start from a known state (e.g., the idle state), then the values of the system variables at that state are known. In this case, the values of the output parameters for O1 depend only on the values of the input parameters for I1; the values of the output parameters for O2 depend on the values of the input parameters for I1 and I2; and the values of the output parameters for Ok depend on the values of the input parameters for I1 through Ik, and so on.

Figure 4.4: Mapping between input parameters and output parameters (the output parameter Pi at test step j is a function F of the input parameters supplied at steps 1 through j)

As explained in Section 4.3, the values of the same parameters may differ when interpreted by different test steps of the test case. Therefore, the final constraints should identify not only the input parameters which define the output parameters under consideration but also the test steps those input parameters are associated with, so as to avoid possible confusion.

The algorithm to obtain the output constraints employs symbolic execution as follows:
For each test case (tc) in a test suite,
begin
    From the first test step of tc down to the last test step
    begin
        1) assign initial symbolic values to the parameters
           in the input primitive of this test step
        2) for each assignment statement in the transition of this step
        begin
            substitute each appearance of a variable or primitive
            on the RHS of the assignment statement with the definition
            of that variable or expression;
        end
    end
    From the assignment statements after the above substitutions,
    for each output primitive in the test step,
    take the set of equations with output parameters on the LHS as
    the constraint for this output primitive, and prefix it with
    the identification of the test step.
end

The constraints aggregated for each output primitive of a test step by the above algorithm are the mapping functions from the input domain to the output domain. These mapping functions are used to derive the domain of the output parameters in the output primitives according to the values given for the input parameters in the input primitives. The output constraints and the input constraints will be used, together with the test sequences produced by the methods described in Chapter 3, to construct a test suite in TTCN.

To summarize, in this chapter we have introduced a method, and described its implementation, for deriving constraints on the input and output primitives of test cases from a normal form specification. An example can be found in Appendix A. To find concrete variable values that satisfy the constraints, Constraint Logic Programming (CLP) techniques can be used [Jaff86] [Jaff87] [CLPR].

Chapter 5

Abstract Test Case Relation Model

In Chapter 3 and Chapter 4, the test case and constraint generation methods have been introduced.
In this chapter, we address the problem of selecting test cases for execution and show how the abstract test case relation model (ATCRM) [Chanson91] [Chanson92-1] can be implemented for a given protocol.

Traditionally, selecting test cases for conformance testing is done statically (i.e., prior to execution). The test case management system (TCMS) developed at UBC in 1991 [Chanson92-1] automates the test case selection process and includes dynamic selection. Its design incorporates the ATCRM for static and dynamic test case selection. In order to use the TCMS to manage the test cases already generated, there is a need to study the relations among test cases and to derive the ATCRM for each protocol.

Figure 5.1: Finite state machine for testing (test case TC1 covers transition t1; test case TC2 covers t1 followed by t2)

5.1 Test case relations

Generally speaking, there are many different ways to relate test cases to one another. For example, we could relate the test cases with respect to some of their common features, such as test purposes or transitions. This way, the test cases can be organized into groups and subgroups, and the test selector can choose whatever group of test cases he would like to test according to the specific features he is interested in.

Ordering is another way to relate the test cases in the same test suite. For example, as shown in Figure 5.1, if test case TC1 is used to test transition t1, and test case TC2 is used to test transition t1 followed by transition t2, then if test case TC1 fails the test, test case TC2 will also fail. We denote this failure dependency ordering between TC1 and TC2 by TC1 << TC2.

5.2 Ordering relation and conformance testing

The ordering relations among the test cases in the same test suite are very useful in conformance testing. They can be used to select the test cases to be executed dynamically during testing, to speed up the testing process.
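Result-driven selection under << can be sketched as follows. This is a toy illustration with assumed data structures (a successor map and a pass/fail callback), not the TCMS implementation; it exploits the transitivity of << to skip every test case that is guaranteed to fail.

```python
def dynamic_select(exec_order, succ, run):
    """Execute test cases in the given order, skipping every case
    that is a (transitive) successor of a failed one under <<.
    succ: dict tc -> list of direct successors (tc << s).
    run: callable tc -> bool (True means the test case passed)."""
    def descendants(tc):
        # All test cases reachable from tc via <<, by transitivity.
        seen, stack = set(), [tc]
        while stack:
            for kid in succ.get(stack.pop(), []):
                if kid not in seen:
                    seen.add(kid)
                    stack.append(kid)
        return seen

    verdicts, skipped = {}, set()
    for tc in exec_order:
        if tc in skipped:
            continue  # guaranteed to fail; do not execute
        verdicts[tc] = run(tc)
        if not verdicts[tc]:
            skipped |= descendants(tc)
    return verdicts, skipped
```

With TC0 << TC1 << TC2 and a failure at TC1, the selector never executes TC2, which is exactly the saving described in the next section.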
For example, assume the set of test cases S = {TC0, TC1, ..., TCn} has the following relationship:

TC0 << TC1 << ... << TCn.

Then if i < j, test case TCi should be selected and executed before test case TCj, and if TCi fails the test, it is not necessary to test TCi+1, TCi+2, ..., TCn before the IUT is fixed, because they will also fail. This results in a more efficient testing procedure.

5.2.1 Property of the ordering relation

The ordering relation << is a partial order and possesses the transitive property:

if A << B and B << C, where A, B, C are test cases in the same test suite, then A << C.

This property will be used in the ATCRM representation and the mapping model in TCMS.

5.3 The ATCRM model

The ordering relations can be derived from the protocol specification itself or from the generated test suite. Deriving the ordering relations from the test suite is simple and straightforward; however, it only orders the test cases that appear in the given test suite.

An abstract test case relation model (ATCRM), which derives and expresses the ordering relations among the test cases from the protocol specification, is proposed in [Chanson91] [Chanson92-1] to help select test cases statically and dynamically.

The ATCRM is derived from the behavior tree to identify the relationships among test cases based on their test purposes. In a behavior tree, the states are taken as nodes and transitions as edges connecting the states. The tree begins with the idle state and ends at the same state by unfolding the control flow. A sample behavior tree is shown in Figure 5.2.

The ATCRM is also expressed in the form of a tree. The mapping from the behavior tree to the ATCRM tree is a one-to-one mapping: corresponding to each node E in the behavior tree, there is one distinct node E' in the ATCRM tree.
The path from the root of the tree to node E' is a test case TC_E' whose test purpose is specified by the sequence of transitions from the idle state to E'. A sample ATCRM tree, derived from the behavior tree in Figure 5.2, is shown in Figure 5.3.

Figure 5.2: An example of a behavior tree for TP0

Figure 5.3: An example of an ATCRM tree for TP0

The arrows in the tree indicate ordering relations: the test cases in the nodes should be executed in the order specified by the arrows.

The ATCRM tree expresses the ordering relation among the test cases in a hierarchical structure. The higher the level of a node (i.e., the closer to the root node), the higher the priority of its test case in the selection. In general, the high-level nodes influence the lower-level nodes reachable from them.

Suppose the test case associated with node E' is T_i,k, and the test cases associated with its children are T_i+1,k1, T_i+1,k2, ..., T_i+1,kn. Then the relation

T_i,k << T_i+1,kj

holds for j = 1, 2, ..., n. The above relation can be applied recursively to every node of the tree.

5.4 The derivation of the ATCRM

We present a recursive algorithm to build the ATCRM tree:

Set_Tree(Root, Nb_of_Siblings)
begin
    for (Root and every sibling of Root (denoted Root_Sibling))
    begin
        calculate the number of kids Root_Sibling has
        for (each kid of Root_Sibling)
        begin
            allocate memory for this kid and
            set proper values for the fields in the kid node;
        end
        Set_Tree(Root_Sibling's first kid,
                 Root_Sibling's number of kids);
    end
end

The data structure used to represent an ATCRM node is shown in Figure 5.4; its fields are Node_Identity, State_Key, Transition_Key, Nb_of_Kids, Father, and Kids_List. The recursion unit is shown in Figure 5.5.

Figure 5.4: Data structure for each ATCRM tree node

As shown in Figure 5.5, all sibling nodes are assigned consecutive locations dynamically
during execution.

Figure 5.5: Recursion unit for building the ATCRM tree

The number of siblings is determined by the outgoing transitions of their father and is calculated during execution.

The execution of the algorithm that generates the ATCRM tree must terminate. However, because the unfolding of a behavior tree can go on forever when loops exist, the terminating condition has to be set properly to ensure termination. In the ATCRM tree, each loop transition is allowed to appear in each path no more than once. The reason for this rule is that the main purpose of the ATCRM model is to represent the ordering among the test cases, so a test case need not appear more than once. For any self-loop transition Ts, and any transition Ti that would appear before or after Ts along the path, the ordering relation between Ts and Ti is already exhibited in the tree even though the loop transition Ts appears only once.

The tree-building algorithm is guaranteed to terminate because each path (from root to leaf) terminates when the leaf node's state appears more than once along the path. This condition must eventually hold since the number of states in an EFSM is finite.

This algorithm builds the tree in a machine-readable form. A sample of the tree is shown in Figure 5.6.

After the tree has been built, the test cases and their ordering relations are generated. A test case consists of the path from the root node to any node in the tree (i.e., the transitions along the path), with a postamble added leading back to the idle state as required by ISO 9646.
As the path is traversed, only the ordering relations between a father and its children need to be written out, because the rest of the relations between any two test cases can easily be derived using the transitive property.

To derive the relations between the test cases, all nodes in the ATCRM tree are traversed, and the test cases and the ordering relations between fathers and children are recorded along the way. A breadth-first traversal algorithm is suitable for this purpose: the nodes that are closer to the root are traversed first, so the test cases from these nodes are generated first. This is also the order in which these test cases should be executed. The test cases generated from these nodes should be tested before those generated from nodes further away from the root, because once the former fail, all test cases generated from their descendants can be discarded as they will also fail. A First-In-First-Out queue is used to drive the traversal:

Figure 5.6: ATCRM tree in machine-readable form

Traverse(Root)
begin
    Initiate(Queue);
    Push_Queue(Root);
    while (Queue not empty)
    begin
        Current_Node <- Pop_Queue(Queue);
        Push_Queue(all of Current_Node's kid nodes);
        backtrack from Current_Node to the Root;
        output this test case;
        output the ordering relation for this test case.
    end
end

5.5 The ATCRM model in conformance testing

To apply the ATCRM model in conformance testing, a mapping is needed to specify the relations among the test cases in a test suite independently of how they were generated. With the ATCRM tree, ordering the test cases becomes a matter of mapping the test cases of a test suite onto the nodes of the ATCRM by comparing their I/O sequences.

The ATCRM model is very useful in the process of automatically selecting the test cases to be executed.
The ATCRM of a protocol specification represents an ordering of the test cases to be tested. By selecting the next test case dynamically based on the result of the previous test case, the number of test cases that need to be executed during the early stages of testing is greatly reduced, resulting in higher efficiency and shorter testing time.

An added benefit of the ATCRM tree is that it provides a means to check whether a given test suite is missing any useful test case. When the test cases are mapped onto the ATCRM tree generated from a formal specification of the protocol, any missing or superfluous test cases can be detected.

Chapter 6

Conclusion

6.1 Summary

This thesis research has covered the following aspects:

1. Test sequence generation for combined control flow and data flow testing.
2. Constraint generation for test cases.
3. Abstract test case relation model generation.

A new method for generating test sequences was presented. It combines both control flow analysis and data flow analysis. The test case generation is done automatically by an extensive and systematic analysis of the formal specifications. The resulting test sequences have at least the same fault detection power as the UIO method used for control flow analysis, and also the fault detection capability of the all-definitions-and-uses strategy for data flow analysis (Chapter 3).

The constraint relations of the input and output parameters are also generated automatically. With these constraint relations, the TTCN form of the test suite can be generated. The input constraints are used to guide the data selection process by the tester; the output constraints are used to check the validity of the output behaviours.

The ATCRM generated from the formal specification provides an abstract model to establish the ordering relations among the test cases in a test suite.
This relation is used to dynamically or statically select the test cases for execution during testing, and it helps to reduce the number of test cases executed by eliminating those that are guaranteed to fail. This results in more efficient testing.

We implemented methods for the combined data flow and control flow analysis and for generating the ATCRM from protocol specifications. We also proposed and implemented a constraint generation method based on normal form specifications. All implementations run under the UNIX environment; the total number of lines of C code is about 6000. The user interface of the test case generator is shown in Appendix D.

6.2 Future work

Many areas remain open for future study in the investigation and improvement of automatic test sequence and constraint generation methodologies. The following are possible topics for future work:

• Optimization techniques can be employed to optimize the test sequences we have generated. However, the effect of the optimization on coverage should be carefully examined as well.

• Our current constraint relations are generated from the normal form of ESTELLE.Y specifications, which does not allow recursion or loops with variable bounds.
Constraints generation from more general ESTELLE specifications needto be studied.• While our methods produce input domain in terms of constraint relations, morework is needed for automatic generation of test data with good fault coverage fromthis information.Appendix AThe results of a case study on TPOThe following is the test cases generated from TPO specification in Estelle.Y by using theUnified Test Sequence Generation MethodTest Case 0 : idle, wfcc, idle, wftr, idle, wfcc, idlestep 0: (TCREQ,-)/(-,CR)step 1: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)step 2: (-,CR)/(TCIND,-)step 3: (TCRES,-)/(TDIND,DR)step 4: (TCREQ,-)/(-,CR)step 5: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)Test Case 1: idle, wftr, idle, wfcc, idlestep 0: (-,CR)/(TCIND,-)step 1: (TDREQ,-)/(-,DR)step 2: (TCREQ,-)/(-,CR)step 3: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)Test Case 2: idle, wftr, idle, wfcc, idlestep 0: (-,CR)/(TCIND,-)step 1: (TCRES,-)/(TDIND,DR)step 2: (TCREQ,-)/(-,CR)step 3: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)75Test Case 3: idle, wftr, data, idle, wfcc, idlestep 0: (-,CR)/(TCIND,-)step 1: (TCRES,-)/(-,CC)step 2: (TDREQ,-)/(NDREQ,-)step 3: (TCREQ,-)/(-,CR)step 4: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)Test Case 4: idle, wftr, idle, wfcc, idlestep 0: (-,CR)/(TCIND,-)step 1: (TDREQ,-)/(-,DR)step 2: (TCREQ,-)/(-,CR)step 3: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)Test Case 5: idle, wftr, idle, wfcc, idlestep 0: (-,CR)/(TCIND,-)step 1: (TCRES,-)/(TDIND,DR)step 2: (TCREQ,-)/(-,CR)step 3: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)Test Case 6: idle, wftr, data, idle, wfcc, idlestep 0: (-,CR)/(TCIND,-)step 1: (TCRES,-)/(-,CC)step 2: (TDREQ,-)/(NDREQ,-)step 3: (TCREQ,-)/(-,CR)step 4: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)Test Case 7: idle, wfcc, data, data, idle, wfcc, idlestep 0: (TCREQ,-)/(-,CR)step 1: (-,CC)/(TCCON,-)step 2: (-,-)/(-,TDATR)step 3: (TDREQ,-)/(NDREQ,-)step 4: (TCREQ,-)/(-,CR)step 5: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)Test Case 8: idle, wfcc, data, data, idle, wfcc, idle76step 0:step 1:step 2:step 3:step 4:step 5:(TCRED, - )/( - 
Test Case 9: idle, wftr, data, idle, wftr, idle, wfcc, idle
step 0: (-,CR)/(TCIND,-)
step 1: (TCRES,-)/(-,CC)
step 2: (TDREQ,-)/(NDREQ,-)
step 3: (-,CR)/(TCIND,-)
step 4: (TCRES,-)/(TDIND,DR)
step 5: (TCREQ,-)/(-,CR)
step 6: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)

Test Case 10: idle, idle, idle, wfcc, idle
step 0: (TCREQ,-)/(TDIND,-)
step 1: (-,CR)/(-,DR)
step 2: (TCREQ,-)/(-,CR)
step 3: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)

Test Case 11: idle, wfcc, idle, wfcc, idle
step 0: (TCREQ,-)/(-,CR)
step 1: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)
step 2: (TCREQ,-)/(-,CR)
step 3: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)

Test Case 12: idle, wfcc, data, idle, wfcc, data, idle, wfcc, data, idle, wfcc, idle
step 0: (TCREQ,-)/(-,CR)
step 1: (-,CC)/(TCCON,-)
step 2: (TDATR,-)/(-,-)
step 3: (TCREQ,-)/(-,CR)
step 4: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)
step 5: (-,DT)/(-,-)
step 6: (TCREQ,-)/(-,CR)
step 7: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)
step 8: (NDIND,-)/(TDIND,-)
step 9: (TCREQ,-)/(-,CR)
step 10: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)

Test Case 13: idle, wfcc, data, idle, wfcc, idle
step 0: (TCREQ,-)/(-,CR)
step 1: (-,CC)/(TCCON,-)
step 2: (NRIND,-)/(TDIND,-)
step 3: (TCREQ,-)/(-,CR)
step 4: (-,DR)/(NDREQ,-), (-,-)/(TDIND,-)

The following are the input constraints for all the above test cases.

Test Case 0
( step0: TCREQ.qtsReq ) >= ( 0 )
( step1: DR.disReason ) == ( 0 )
( step3: TCRES.qtsReq ) <= ( 26 )
( step4: TCREQ.qtsReq ) >= ( 0 )
( step5: DR.disReason ) == ( 0 )

Test Case 1
( step2: TCREQ.qtsReq ) >= ( 0 )
( step3: DR.disReason ) == ( 0 )

Test Case 2
( step1: TCRES.qtsReq ) <= ( 11 )
( step2: TCREQ.qtsReq ) >= ( 0 )
( step3: DR.disReason ) == ( 0 )

Test Case 3
( step1: TCRES.qtsReq ) <= ( 11 )
( step3: TCREQ.qtsReq ) >= ( 0 )
( step4: DR.disReason ) == ( 0 )

Test Case 4
( step2: TCREQ.qtsReq ) >= ( 0 )
( step3: DR.disReason ) == ( 0 )

Test Case 5
( step1: TCRES.qtsReq ) <= ( 26 )
( step2: TCREQ.qtsReq ) >= ( 0 )
( step3: DR.disReason ) == ( 0 )
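The input constraints above restrict the parameter values carried by the input events of each test case and are used to guide test data selection before execution. A minimal, hypothetical sketch of screening a candidate data assignment against Test Case 0's constraints (the tuple encoding and the `satisfies` helper are illustrative, not the thesis implementation):

```python
import operator

# Map the relation symbols used in the listing to comparison functions.
OPS = {">=": operator.ge, "<=": operator.le,
       "==": operator.eq, "!=": operator.ne}

# (step, parameter, relation, bound) -- transcribed from Test Case 0 above.
CONSTRAINTS_TC0 = [
    (0, "TCREQ.qtsReq", ">=", 0),
    (1, "DR.disReason", "==", 0),
    (3, "TCRES.qtsReq", "<=", 26),
    (4, "TCREQ.qtsReq", ">=", 0),
    (5, "DR.disReason", "==", 0),
]

def satisfies(data, constraints):
    """data maps (step, parameter) -> a chosen test data value."""
    return all(OPS[op](data[(step, param)], bound)
               for step, param, op, bound in constraints)

# One candidate assignment of input parameter values.
candidate = {(0, "TCREQ.qtsReq"): 5, (1, "DR.disReason"): 0,
             (3, "TCRES.qtsReq"): 26, (4, "TCREQ.qtsReq"): 0,
             (5, "DR.disReason"): 0}
print(satisfies(candidate, CONSTRAINTS_TC0))  # True
```

Only assignments that satisfy all of a test case's input constraints are valid inputs; the output constraints listed afterwards play the dual role of checking the validity of the observed output parameters.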
Test Case 6
( step1: TCRES.qtsReq ) <= ( 26 )
( step3: TCREQ.qtsReq ) >= ( 0 )
( step4: DR.disReason ) == ( 0 )

Test Case 7
( step0: TCREQ.qtsReq ) >= ( 0 )
( step4: TCREQ.qtsReq ) >= ( 0 )
( step5: DR.disReason ) == ( 0 )

Test Case 8
( step0: TCREQ.qtsReq ) >= ( 0 )
( step4: TCREQ.qtsReq ) >= ( 0 )
( step5: DR.disReason ) == ( 0 )

Test Case 9
( step1: TCRES.qtsReq ) <= ( 11 )
( step4: TCRES.qtsReq ) <= ( 26 )
( step5: TCREQ.qtsReq ) >= ( 0 )
( step6: DR.disReason ) == ( 0 )

Test Case 10
( step1: CR.option ) == ( -100 )
( step2: TCREQ.qtsReq ) >= ( 0 )
( step3: DR.disReason ) == ( 0 )

Test Case 11
( step0: TCREQ.qtsReq ) >= ( 0 )
( step1: DR.disReason ) != ( -1 )
( step2: TCREQ.qtsReq ) >= ( 0 )
( step3: DR.disReason ) == ( 0 )

Test Case 12
( step0: TCREQ.qtsReq ) >= ( 0 )
( step3: TCREQ.qtsReq ) >= ( 0 )
( step4: DR.disReason ) == ( 0 )
( step6: TCREQ.qtsReq ) >= ( 0 )
( step7: DR.disReason ) == ( 0 )
( step9: TCREQ.qtsReq ) >= ( 0 )
( step10: DR.disReason ) == ( 0 )

Test Case 13
( step0: TCREQ.qtsReq ) >= ( 0 )
( step3: TCREQ.qtsReq ) >= ( 0 )
( step4: DR.disReason ) == ( 0 )

The following are the output constraints for the above test cases.

Test Case 0
step0: CR.sourceRef = 300
step0: CR.option = 100
step0: CR.callingAddr = step0: TCREQ.fromAddr
step0: CR.calledAddr = step0: TCREQ.toAddr
step0: CR.maxTpdu = 80
step1: NDREQ.discReason = step1: DR.disReason
step1: TDIND.tsDiscReason = step1: DR.disReason
step1: TDIND.tsUserReason = step1: DR.addClearReason
step2: TCIND.toAddr = step2: CR.calledAddr
step2: TCIND.fromAddr = step2: CR.callingAddr
step2: TCIND.qtsPro = 26
step3: DR.destRefer = step2: CR.sourceRef
step3: DR.disReason = 0
step3: DR.addClearReason = 0
step3: TDIND.tsDiscReason = 0
step4: CR.sourceRef = 300
step4: CR.option = 100
step4: CR.callingAddr = step4: TCREQ.fromAddr
step4: CR.calledAddr = step4: TCREQ.toAddr
step4: CR.maxTpdu = 80
step5: NDREQ.discReason = step5: DR.disReason
step5: TDIND.tsDiscReason = step5: DR.disReason
step5: TDIND.tsUserReason = step5: DR.addClearReason

Test Case 1
step0: TCIND.toAddr = step0: CR.calledAddr
step0: TCIND.fromAddr = step0: CR.callingAddr
step0: TCIND.qtsPro = 11
step1: DR.disReason = 0
step1: DR.addClearReason = step1: TDREQ.tsUserReason
step1: DR.destRefer = step0: CR.sourceRef
step2: CR.sourceRef = 300
step2: CR.option = 100
step2: CR.callingAddr = step2: TCREQ.fromAddr
step2: CR.calledAddr = step2: TCREQ.toAddr
step2: CR.maxTpdu = 80
step3: NDREQ.discReason = step3: DR.disReason
step3: TDIND.tsDiscReason = step3: DR.disReason
step3: TDIND.tsUserReason = step3: DR.addClearReason

Test Case 2
step0: TCIND.toAddr = step0: CR.calledAddr
step0: TCIND.fromAddr = step0: CR.callingAddr
step0: TCIND.qtsPro = 11
step1: DR.destRefer = step0: CR.sourceRef
step1: DR.disReason = 0
step1: DR.addClearReason = 0
step1: TDIND.tsDiscReason = 0
step2: CR.sourceRef = 300
step2: CR.option = 100
step2: CR.callingAddr = step2: TCREQ.fromAddr
step2: CR.calledAddr = step2: TCREQ.toAddr
step2: CR.maxTpdu = 80
step3: NDREQ.discReason = step3: DR.disReason
step3: TDIND.tsDiscReason = step3: DR.disReason
step3: TDIND.tsUserReason = step3: DR.addClearReason

Test Case 3
step0: TCIND.toAddr = step0: CR.calledAddr
step0: TCIND.fromAddr = step0: CR.callingAddr
step0: TCIND.qtsPro = 11
step1: CC.destRefer = step0: CR.sourceRef
step1: CC.sourceRef = 124
step1: CC.callingAddr = step0: CR.callingAddr
step1: CC.calledAddr = step0: CR.calledAddr
step1: CC.maxTpdu = step0: CR.maxTpdu
step2: NDREQ.discReason = step2: TDREQ.tsUserReason
step3: CR.sourceRef = 300
step3: CR.option = 100
step3: CR.callingAddr = step3: TCREQ.fromAddr
step3: CR.calledAddr = step3: TCREQ.toAddr
step3: CR.maxTpdu = 80
step4: NDREQ.discReason = step4: DR.disReason
step4: TDIND.tsDiscReason = step4: DR.disReason
step4: TDIND.tsUserReason = step4: DR.addClearReason

Test Case 4
step0: TCIND.toAddr = step0: CR.calledAddr
step0: TCIND.fromAddr = step0: CR.callingAddr
step0: TCIND.qtsPro = 26
step1: DR.disReason = 0
step1: DR.addClearReason = step1: TDREQ.tsUserReason
step1: DR.destRefer = step0: CR.sourceRef
step2: CR.sourceRef = 300
step2: CR.option = 100
step2: CR.callingAddr = step2: TCREQ.fromAddr
step2: CR.calledAddr = step2: TCREQ.toAddr
step2: CR.maxTpdu = 80
step3: NDREQ.discReason = step3: DR.disReason
step3: TDIND.tsDiscReason = step3: DR.disReason
step3: TDIND.tsUserReason = step3: DR.addClearReason

Test Case 5
step0: TCIND.toAddr = step0: CR.calledAddr
step0: TCIND.fromAddr = step0: CR.callingAddr
step0: TCIND.qtsPro = 26
step1: DR.destRefer = step0: CR.sourceRef
step1: DR.disReason = 0
step1: DR.addClearReason = 0
step1: TDIND.tsDiscReason = 0
step2: CR.sourceRef = 300
step2: CR.option = 100
step2: CR.callingAddr = step2: TCREQ.fromAddr
step2: CR.calledAddr = step2: TCREQ.toAddr
step2: CR.maxTpdu = 80
step3: NDREQ.discReason = step3: DR.disReason
step3: TDIND.tsDiscReason = step3: DR.disReason
step3: TDIND.tsUserReason = step3: DR.addClearReason

Test Case 6
step0: TCIND.toAddr = step0: CR.calledAddr
step0: TCIND.fromAddr = step0: CR.callingAddr
step0: TCIND.qtsPro = 26
step1: CC.destRefer = step0: CR.sourceRef
step1: CC.sourceRef = 124
step1: CC.callingAddr = step0: CR.callingAddr
step1: CC.calledAddr = step0: CR.calledAddr
step1: CC.maxTpdu = 100
step2: NDREQ.discReason = step2: TDREQ.tsUserReason
step3: CR.sourceRef = 300
step3: CR.option = 100
step3: CR.callingAddr = step3: TCREQ.fromAddr
step3: CR.calledAddr = step3: TCREQ.toAddr
step3: CR.maxTpdu = 80
step4: NDREQ.discReason = step4: DR.disReason
step4: TDIND.tsDiscReason = step4: DR.disReason
step4: TDIND.tsUserReason = step4: DR.addClearReason

Test Case 7
step0: CR.sourceRef = 300
step0: CR.option = 100
step0: CR.callingAddr = step0: TCREQ.fromAddr
step0: CR.calledAddr = step0: TCREQ.toAddr
step0: CR.maxTpdu = 80
step1: TCCON.qtsRes = 27
step3: NDREQ.discReason = step3: TDREQ.tsUserReason
step4: CR.sourceRef = 300
step4: CR.option = 100
step4: CR.callingAddr = step4: TCREQ.fromAddr
step4: CR.calledAddr = step4: TCREQ.toAddr
step4: CR.maxTpdu = 80
step5: NDREQ.discReason = step5: DR.disReason
step5: TDIND.tsDiscReason = step5: DR.disReason
step5: TDIND.tsUserReason = step5: DR.addClearReason
Test Case 8
step0: CR.sourceRef = 300
step0: CR.option = 100
step0: CR.callingAddr = step0: TCREQ.fromAddr
step0: CR.calledAddr = step0: TCREQ.toAddr
step0: CR.maxTpdu = 80
step1: TCCON.qtsRes = 27
step3: NDREQ.discReason = step3: TDREQ.tsUserReason
step4: CR.sourceRef = 300
step4: CR.option = 100
step4: CR.callingAddr = step4: TCREQ.fromAddr
step4: CR.calledAddr = step4: TCREQ.toAddr
step4: CR.maxTpdu = 80
step5: NDREQ.discReason = step5: DR.disReason
step5: TDIND.tsDiscReason = step5: DR.disReason
step5: TDIND.tsUserReason = step5: DR.addClearReason

Test Case 9
step0: TCIND.toAddr = step0: CR.calledAddr
step0: TCIND.fromAddr = step0: CR.callingAddr
step0: TCIND.qtsPro = 11
step1: CC.destRefer = step0: CR.sourceRef
step1: CC.sourceRef = 124
step1: CC.callingAddr = step0: CR.callingAddr
step1: CC.calledAddr = step0: CR.calledAddr
step1: CC.maxTpdu = step0: CR.maxTpdu
step2: NDREQ.discReason = step2: TDREQ.tsUserReason
step3: TCIND.toAddr = step3: CR.calledAddr
step3: TCIND.fromAddr = step3: CR.callingAddr
step3: TCIND.qtsPro = 26
step4: DR.destRefer = step3: CR.sourceRef
step4: DR.disReason = 0
step4: DR.addClearReason = 0
step4: TDIND.tsDiscReason = 0
step5: CR.sourceRef = 300
step5: CR.option = 100
step5: CR.callingAddr = step5: TCREQ.fromAddr
step5: CR.calledAddr = step5: TCREQ.toAddr
step5: CR.maxTpdu = 80
step6: NDREQ.discReason = step6: DR.disReason
step6: TDIND.tsDiscReason = step6: DR.disReason
step6: TDIND.tsUserReason = step6: DR.addClearReason

Test Case 10
step0: TDIND.tsDiscReason = 0
step0: TDIND.tsUserReason = 0
step1: DR.destRefer = step1: CR.sourceRef
step1: DR.disReason = 0
step2: CR.sourceRef = 300
step2: CR.option = 100
step2: CR.callingAddr = step2: TCREQ.fromAddr
step2: CR.calledAddr = step2: TCREQ.toAddr
step2: CR.maxTpdu = 80
step3: NDREQ.discReason = step3: DR.disReason
step3: TDIND.tsDiscReason = step3: DR.disReason
step3: TDIND.tsUserReason = step3: DR.addClearReason

Test Case 11
step0: CR.sourceRef = 300
step0: CR.option = 100
step0: CR.callingAddr = step0: TCREQ.fromAddr
step0: CR.calledAddr = step0: TCREQ.toAddr
step0: CR.maxTpdu = 80
step1: NDREQ.discReason = step1: DR.disReason
step1: TDIND.tsDiscReason = step1: DR.disReason
step2: CR.sourceRef = 300
step2: CR.option = 100
step2: CR.callingAddr = step2: TCREQ.fromAddr
step2: CR.calledAddr = step2: TCREQ.toAddr
step2: CR.maxTpdu = 80
step3: NDREQ.discReason = step3: DR.disReason
step3: TDIND.tsDiscReason = step3: DR.disReason
step3: TDIND.tsUserReason = step3: DR.addClearReason

Test Case 12
step0: CR.sourceRef = 300
step0: CR.option = 100
step0: CR.callingAddr = step0: TCREQ.fromAddr
step0: CR.calledAddr = step0: TCREQ.toAddr
step0: CR.maxTpdu = 80
step1: TCCON.qtsRes = 27
step3: CR.sourceRef = 300
step3: CR.option = 100
step3: CR.callingAddr = step3: TCREQ.fromAddr
step3: CR.calledAddr = step3: TCREQ.toAddr
step3: CR.maxTpdu = 80
step4: NDREQ.discReason = step4: DR.disReason
step4: TDIND.tsDiscReason = step4: DR.disReason
step4: TDIND.tsUserReason = step4: DR.addClearReason
step6: CR.sourceRef = 300
step6: CR.option = 100
step6: CR.callingAddr = step6: TCREQ.fromAddr
step6: CR.calledAddr = step6: TCREQ.toAddr
step6: CR.maxTpdu = 80
step7: NDREQ.discReason = step7: DR.disReason
step7: TDIND.tsDiscReason = step7: DR.disReason
step7: TDIND.tsUserReason = step7: DR.addClearReason
step8: TDIND.tsDiscReason = 0
step9: CR.sourceRef = 300
step9: CR.option = 100
step9: CR.callingAddr = step9: TCREQ.fromAddr
step9: CR.calledAddr = step9: TCREQ.toAddr
step9: CR.maxTpdu = 80
step10: NDREQ.discReason = step10: DR.disReason
step10: TDIND.tsDiscReason = step10: DR.disReason
step10: TDIND.tsUserReason = step10: DR.addClearReason

Test Case 13
step0: CR.sourceRef = 300
step0: CR.option = 100
step0: CR.callingAddr = step0: TCREQ.fromAddr
step0: CR.calledAddr = step0: TCREQ.toAddr
step0: CR.maxTpdu = 80
step1: TCCON.qtsRes = 27
step2: TDIND.tsDiscReason = 0
step3: CR.sourceRef = 300
step3: CR.option = 100
step3: CR.callingAddr = step3: TCREQ.fromAddr
step3: CR.calledAddr = step3: TCREQ.toAddr
step3: CR.maxTpdu = 80
step4: NDREQ.discReason = step4: DR.disReason
step4: TDIND.tsDiscReason = step4: DR.disReason
step4: TDIND.tsUserReason = step4: DR.addClearReason

Appendix B

ESTELLE.Y Specification of TPO

Specification TPO;

CONST
  NORMAL = 100 : int;
  UNNORMAL = -100 : int;
  ZERO = 0 : int;
  LZERO = -1 : int;
  GZERO = 4 : int;

VAR
  localRef:    int;
  callingAddr: int;
  calledAddr:  int;
  tpduSize:    int;
  remRef:      int;
  qtsEst:      int;
  inBuffer:    int;
  outBuffer:   int;

ISP
  TCREQ  TSAP;
  TCRES  TSAP;
  TDREQ  TSAP;
  TDATR  TSAP;
  NDIND  NSAP;
  NRIND  NSAP;

OSP
  TDIND  TSAP;
  TCIND  TSAP;
  TCCON  TSAP;
  TDATI  TSAP;
  NDREQ  NSAP;

PDU
  CR  sent_in TCIND, recv_in TCREQ;
  DR  sent_in TDIND, recv_in TDREQ;
  CC  sent_in TCCON, recv_in TCRES;
  DT  recv_in TDATR;

TIMER
  THT 230;

STATE
  idle, wftr, wfcc, data;

INITIALIZATION
  TO idle
  BEGIN
    localRef := 0;
  END;

TRANS
from idle
to wfcc
  when TCREQ
  provided (TCREQ.qtsReq >= ZERO)
  OUTPUT CR
  begin
    localRef := 300;
    tpduSize := 80;
    callingAddr := TCREQ.fromAddr;
    calledAddr := TCREQ.toAddr;
    CR.sourceRef := localRef;
    CR.option := NORMAL;
    CR.callingAddr := callingAddr;
    CR.calledAddr := calledAddr;
    CR.maxTpdu := tpduSize;
  end;
to idle
  when TCREQ
  OUTPUT TDIND
  begin
    TDIND.tsDiscReason := ZERO;
    TDIND.tsUserReason := ZERO;
  end;
to wftr
  when CR
  OUTPUT TCIND
  begin
    remRef := CR.sourceRef;
    tpduSize := CR.maxTpdu;
    callingAddr := CR.callingAddr;
    calledAddr := CR.calledAddr;
    qtsEst := 11;
    TCIND.toAddr := calledAddr;
    TCIND.fromAddr := callingAddr;
    TCIND.qtsPro := qtsEst;
  end;
to wftr
  when CR
  provided (localRef <> ZERO)
  OUTPUT TCIND
  begin
    remRef := CR.sourceRef;
    tpduSize := 100;
    callingAddr := CR.callingAddr;
    calledAddr := CR.calledAddr;
    qtsEst := 26;
    TCIND.toAddr := calledAddr;
    TCIND.fromAddr := callingAddr;
    TCIND.qtsPro := qtsEst;
  end;
to idle
  when CR
  provided (CR.option = UNNORMAL)
  OUTPUT DR
  begin
    DR.destRefer := CR.sourceRef;
    DR.disReason := ZERO;
  end;

TRANS
from wfcc
to data
  when CC
  OUTPUT TCCON
  begin
    qtsEst := 27;
    TCCON.qtsRes := qtsEst;
    inBuffer := GZERO;
    outBuffer := GZERO;
  end;
to idle
  when DR
  provided (DR.disReason = ZERO)
  OUTPUT NDREQ, TDIND
  begin
    NDREQ.discReason := DR.disReason;
    TDIND.tsDiscReason := DR.disReason;
    TDIND.tsUserReason := DR.addClearReason;
  end;
to idle
  when DR
  provided (DR.disReason <> LZERO)
  OUTPUT NDREQ, TDIND
  begin
    NDREQ.discReason := DR.disReason;
    TDIND.tsDiscReason := DR.disReason;
  end;

TRANS
from wftr
to data
  when TCRES
  provided (TCRES.qtsReq <= qtsEst)
  OUTPUT CC
  begin
    localRef := 124;
    CC.destRefer := remRef;
    CC.sourceRef := localRef;
    CC.callingAddr := callingAddr;
    CC.calledAddr := calledAddr;
    CC.maxTpdu := tpduSize;
  end;
to idle
  when TCRES
  provided (TCRES.qtsReq <= qtsEst)
  OUTPUT DR, TDIND
  begin
    DR.destRefer := remRef;
    DR.disReason := ZERO;
    DR.addClearReason := ZERO;
    TDIND.tsDiscReason := ZERO;
  end;
to idle
  when TDREQ
  OUTPUT DR
  begin
    DR.disReason := ZERO;
    DR.addClearReason := TDREQ.tsUserReason;
    DR.destRefer := remRef;
  end;

TRANS
from data
to data
  when TDATR;
to data
  provided (outBuffer <> ZERO)
  OUTPUT DT;
to data
  when DT;
to data
  provided (inBuffer <> ZERO)
  OUTPUT TDATR;
to idle
  when TDREQ
  OUTPUT NDREQ
  begin
    NDREQ.discReason := TDREQ.tsUserReason;
  end;
to idle
  when NDIND
  OUTPUT TDIND
  begin
    TDIND.tsDiscReason := ZERO;
  end;
to idle
  when NRIND
  OUTPUT TDIND
  begin
    TDIND.tsDiscReason := ZERO;
  end;
end.

Appendix C

ASN.1 Specification of TPO Service Primitives (SPs) and Protocol Data Units (PDUs)

TPO DEFINITIONS ::=
BEGIN

TSAP ::= CHOICE {
  TCREQ,
  TCRES,
  TDREQ,
  TDATR,
  TDIND,
  TCIND,
  TCCON,
  TDATI
}

TCREQ ::= SEQUENCE {
  toAddr    INTEGER,
  fromAddr  INTEGER,
  qtsReq    INTEGER
}

TCRES ::= SEQUENCE {
  qtsReq  INTEGER
}

TDREQ ::= SEQUENCE {
  tsUserReason  INTEGER
}

TDATR ::= SEQUENCE {
  tsduFragment  INTEGER
}

TDIND ::= SEQUENCE {
  tsDiscReason  INTEGER,
  tsUserReason  INTEGER
}

TCIND ::= SEQUENCE {
  toAddr    INTEGER,
  fromAddr  INTEGER,
  qtsPro    INTEGER
}

TCCON ::= SEQUENCE {
  qtsRes  INTEGER
}

TDATI ::= SEQUENCE {
  tsduFragment  INTEGER
}

NSAP ::= CHOICE {
  NDIND,
  NRIND,
  NDREQ
}

NDREQ ::= SEQUENCE {
  discReason  INTEGER
}

NDIND ::= SEQUENCE {
  dummy  INTEGER
}

NRIND ::= SEQUENCE {
  dummy  INTEGER
}

Tpdu ::= CHOICE {
  CR,
  DR,
  CC,
  DT
}

CR ::= SEQUENCE {
  sourceRef    INTEGER,
  option       INTEGER,
  callingAddr  INTEGER,
  calledAddr   INTEGER,
  maxTpdu      INTEGER
}

DR ::= SEQUENCE {
  destRefer       INTEGER,
  disReason       INTEGER,
  addClearReason  INTEGER
}

CC ::= SEQUENCE {
  destRefer    INTEGER,
  sourceRef    INTEGER,
  callingAddr  INTEGER,
  calledAddr   INTEGER,
  maxTpdu      INTEGER
}

DT ::= SEQUENCE {
  userData  INTEGER
}

END

Appendix D

The Test Case Generator User Interface

The following is the user interface for the Test Case Generator.

#############################################
#  Test Suite Generation - Prototype V.2    #
#############################################
#                                           #
#  TSG Main Menu                            #
#                                           #
#  Enter:                                   #
#    p to access the parser menu,           #
#    v to access the PDS verification menu, #
#    c to set or modify the TSG and SPP     #
#      constraints,                         #
#    g to access the test case generation   #
#      menu.                                #
#                                           #
#    - to return to previous menu,          #
#    h for help,                            #
#    e to exit TSG.                         #
#                                           #
#############################################

#############################################
#  Test Suite Generation - Prototype V.2    #
#############################################
#  === Test Suite Generation ===            #
#                                           #
#  The subtour identification can be run    #
#  with the following options:              #
#                                           #
#  Enter:                                   #
#    1 to count the number of identified    #
#      subtours                             #
#    2 to print the identified subtours     #
#    3 to select identified subtours        #
#    4 to generate Unified Test Sequences   #
#    5 to generate Unified Test Sequences   #
#      and Constraints                      #
#    6 to generate ATCRM                    #
#    7 to generate ATCRM, Unified Test      #
#      Sequences and Constraints            #
#    9 to trace the backtracking algorithm  #
#      (enter 'q' to quit trace)            #
#                                           #
#  -:previous, h:help, m:main, p:parser,    #
#  c:constraints, g:gen... e:exit           #
#############################################