A CROSS-LAYER APPROACH TO REDUCING PACKET LATENCY IN MOBILE AD HOC NETWORKS

by

OLIVIER GERVAIS-HARREMAN
B.Eng., McGill University, 2002

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA

July 2005

© Olivier Gervais-Harreman, 2005

Supervisor: Dr. Vijay K. Bhargava

ABSTRACT

Mobile Ad Hoc Networks (MANETs) are known for providing connectivity between devices quickly and easily, without requiring an underlying infrastructure. Nevertheless, they are not popular commercially because they cannot effectively support applications that are delay-sensitive. This drawback stems from the fact that most of their routing protocols choose routes according to the shortest-hop criterion. When congestion is present in the network, the shortest path from source to destination is not necessarily the one that minimizes packet latency. To expedite packets to their destination, link-layer information must be available at the routing layer so that routing protocols can avoid regions that are congested.

This thesis characterizes congestion, in terms of end-to-end delay, as the sum of expected contention delays at transmitting nodes. By integrating this additive link-layer metric into their decision-making process, existing on-demand routing protocols become better equipped to choose routes that reduce the average end-to-end delay of packets. Simulation results demonstrate that this metric can produce significant gains in end-to-end delay performance for both Dynamic Source Routing (DSR) and Ad Hoc On-Demand Distance Vector (AODV) routing. This increase in performance allows more user applications to be supported by MANETs, which is sure to drive commercial demand for devices that can connect in this way.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgement
Dedication
1 Introduction
  1.1 Significance of Research
  1.2 Thesis Outline
2 Background
  2.1 Sources of Delay
    2.1.1 Propagation and Transfer Times
    2.1.2 Buffering During Route Discovery
    2.1.3 Delays at the Link Layer
      2.1.3.1 Delays at the Interface Queue
      2.1.3.2 Delays from Link-Layer Retransmissions
  2.2 Related Work
3 Characterizing Congestion in Terms of End-to-End Delay
  3.1 A Framework for Evaluating End-to-End Delay Performance
    3.1.1 Introducing the Contention Profile
    3.1.2 Applying the Active Path
    3.1.3 Building Congestion
    3.1.4 Finding the Ideal Path
    3.1.5 Evaluating End-to-End Delay Performance
    3.1.6 Experiment 1: Varying Congestion Levels by Changing the Packet Injection Rate
      3.1.6.1 Description of Network Layout and Attributes
      3.1.6.2 Simulation Results and Analysis
    3.1.7 Experiment 2: Varying Congestion by Changing the Number of Contention Nodes
      3.1.7.1 Simulation Results and Analysis
    3.1.8 Summary of Experiments 1 and 2
    3.1.9 Experiment 3: Validating the Metric
      3.1.9.1 Experimental Setup
      3.1.9.2 Simulation Results and Analysis
4 Integrating the Delay Metric with Existing Routing Protocols
  4.1 AODV with CCDM
  4.2 DSR with CCDM
  4.3 Experiment 4: Integrating the Link-Layer Metric at the Routing Layer
    4.3.1 Experimental Setup
    4.3.2 Simulation Results
      4.3.2.1 AODV Route Performance (Scenario A)
      4.3.2.2 DSR Route Performance (Scenario A)
      4.3.2.3 Performance Evaluation of CCDM with AODV and DSR (Scenario A)
    4.3.3 Simulation Results
      4.3.3.1 AODV Route Performance (Scenario B)
      4.3.3.2 DSR Route Performance (Scenario B)
      4.3.3.3 Performance Evaluation of CCDM with DSR and AODV (Scenario B)
    4.3.4 Simulation Results (Scenario C)
      4.3.4.1 AODV Route Performance (Scenario C)
      4.3.4.2 DSR Route Performance (Scenario C)
      4.3.4.3 Performance Evaluation of CCDM with AODV and DSR (Scenario C)
  4.4 Discussion and Suggestions for Future Work
    4.4.1 Discussion of Results
    4.4.2 Areas for Further Work
5 Conclusion
Bibliography

List of Tables

Table 3.1 Description of the different contention profiles used in Experiment 1.
Table 3.2 GloMoSim network parameters used in Experiment 1.
Table 3.3 Description of the different contention profiles used in Experiment 2.
Table 3.4 Parameters for the calculation of contention delay in Experiment 3.
Table 3.5 CCDM calculation of a route for a packet injection rate of 10 pkts/s.
Table 3.6 CCDM calculation of a route for a packet injection rate of 1600 pkts/s.
Table 4.1 Route performance of AODV for Scenario A.
Table 4.2 Route performance of DSR for Scenario A.
Table 4.3 Route performance of AODV for Scenario B.
Table 4.4 Route performance of DSR for Scenario B.
Table 4.5 Route performance of AODV for Scenario C.
Table 4.6 Route performance of DSR for Scenario C.

List of Figures

Figure 2.1 The effectiveness of the RTS/CTS handshake due to transmitter-receiver distance d, from [15].
Figure 2.2 Various carrier-sensing ranges and their effectiveness, from [15].
Figure 3.1 An illustration of a contention profile.
Figure 3.2 Network topology used for Experiment 1.
Figure 3.3 Delay performance of AODV and DSR in Experiment 1.
Figure 3.4 Potential gain of AODV and DSR in Experiment 1.
Figure 3.5 Average hop count of AODV and DSR in Experiment 1.
Figure 3.6 Illustration of congestion in the network for Experiment 1.
Figure 3.7 Delay performance of AODV and DSR in Experiment 2.
Figure 3.8 Potential gain of AODV and DSR in Experiment 2.
Figure 3.9 Average hop count of AODV and DSR in Experiment 2.
Figure 3.10 Transition state diagram of the IEEE 802.11 DCF, from [11].
Figure 3.11 Expected contention delay at a node.
Figure 4.1 Node layout for Scenario A.
Figure 4.2 Instantaneous gain of routes at the destination node in Scenario A.
Figure 4.3 Cumulative average gain at the destination node for Scenario A.
Figure 4.4 Cumulative average gain at the source node for Scenario A.
Figure 4.5 Instantaneous gain of routes at the source node in Scenario A.
Figure 4.6 Cumulative average gain at the source node for Scenario A.
Figure 4.7 Delay performance of AODV and DSR when using CCDM (Scenario A).
Figure 4.8 Performance gain of AODV and DSR when using CCDM (Scenario A).
Figure 4.9 Average added delay in route discovery for AODV and DSR when using CCDM (Scenario A).
Figure 4.10 Node layout for Scenario B.
Figure 4.11 Instantaneous gain of routes at the destination node in Scenario B.
Figure 4.12 Cumulative average gain at the destination node for Scenario B.
Figure 4.13 Cumulative average gain at the source node for Scenario B.
Figure 4.14 Instantaneous gain of routes at the source node in Scenario B.
Figure 4.15 Cumulative average gain at the source node for Scenario B.
Figure 4.16 Delay performance of AODV and DSR when using CCDM (Scenario B).
Figure 4.17 Performance gain of AODV and DSR when using CCDM (Scenario B).
Figure 4.18 Average added delay in route discovery for AODV and DSR when using CCDM (Scenario B).
Figure 4.19 Node layout for Scenario C.
Figure 4.20 Instantaneous gain of routes at the destination node in Scenario C.
Figure 4.21 Cumulative average gain at the destination node for Scenario C.
Figure 4.22 Cumulative average gain at the source node for Scenario C.
Figure 4.23 Instantaneous gain of routes at the source node in Scenario C.
Figure 4.24 Cumulative average gain at the source node for Scenario C.
Figure 4.25 Delay performance of AODV and DSR when using CCDM (Scenario C).
Figure 4.26 Performance gain of AODV and DSR when using CCDM (Scenario C).
Figure 4.27 Average added delay in route discovery for AODV and DSR when using CCDM (Scenario C).

List of Abbreviations

ACK Acknowledgement
AODV Ad Hoc On-Demand Distance Vector
CACP Contention-Aware Admission Control Protocol
CCDM Cumulative Contention Delay Metric
CSLAR Contention-Sensitive Load-Aware Routing
CSMA/CA Carrier-Sense Multiple Access with Collision Avoidance
CTS Clear-to-Send
DCF Distributed Coordination Function
DIFS DCF Inter-Frame Space
D-LAOR Delay-Based Load-Aware On-Demand Routing
DLAR Dynamic Load-Aware Routing
DOSPR Delay-Oriented Shortest-Path Routing
DSR Dynamic Source Routing
FIFO First-In-First-Out
IEEE Institute of Electrical and Electronics Engineers
LBAR Load-Balanced Routing
LSR Load-Sensitive Routing
MAC Medium Access Control
MANET Mobile Ad Hoc Network
NAV Network Allocation Vector
PCF Point Coordination Function
QoS Quality of Service
RERR Route Error
RREP Route Reply
RREQ Route Request
RRT Route Request Tail
RTS Request-to-Send
SIFS Short Inter-Frame Space

Acknowledgement

I would like to acknowledge my mentor and friend, Dr. Vijay Bhargava, who guided me throughout the past years and gave me a second chance when I needed it. A big thank you also to Dr. Ekram Hossain for helping me develop this idea. To Jahangir, Hugues, Chandika, as well as all of the other members of the Telecommunications Lab at UVic and UBC, thank you for answering my plethora of questions. Tim, thank you for keeping the home safe while I was living at school. Thank you to Stephane and Michelle, for keeping their brother from going crazy when he was trapped at the lab. Finally, to Shazia... thank you for coming into my life when you did. You kept me focused and motivated until my thesis was complete. Love and thanks to you all.

Dedication

To My Parents ... without whose love and support I could never have written this, and who have always been there for me no matter what. Thank you for everything.

Chapter 1
Introduction

Mobile ad hoc networks (MANETs) will gain popularity in the years to come. This inevitability is due to an alignment of user needs with design feasibility. Ad hoc networks provide connectivity amongst users in areas where it was previously not possible, and many wireless devices already come equipped with the technology that makes ad hoc networking possible.
In other words, companies can increase the functionality of their devices by improving their networking capabilities.

Devices in ad hoc networks communicate directly with one another without requiring an underlying infrastructure. The nodes organize themselves into clusters and manage themselves accordingly to provide connectivity between devices. This decentralized coordination in MANETs produces flexible networks that are easily deployed and maintained. Because devices can be quickly connected without planning and without building an infrastructure, MANETs are ideal for military and disaster scenarios.

MANETs are already in use today, though their functionality is best suited to time-insensitive applications such as file transfers. However, ad hoc networks would prove more effective for today's uses if they could support real-time inter-communication applications such as live video feeds and voice communications. Ideally, ad hoc networks will be able to rival infrastructure networks in their ability to support time-sensitive applications. Once ad hoc networks are able to support both time-sensitive and time-insensitive applications effectively, they will not only become more useful in the aforementioned disaster-relief and military scenarios, but will also gain popularity in civilian applications such as conferencing and interactive gaming. Ultimately, the ability to connect in an ad hoc fashion adds functionality to any wireless device, provides added flexibility to users, and enhances user applications.

Without the benefit of central management, meeting the individual delay requirements of applications is a difficult task. This is especially true when users are free to roam randomly: this causes frequent changes in network topology and congestion levels, both of which affect the transfer time of packets between source and destination. Consequently, quality of service (QoS) is almost impossible to guarantee for MANETs; an alternative is a best-effort approach that expedites packets to their destination by finding the route that minimizes end-to-end delay.

1.1 Significance of Research

This thesis demonstrates that the end-to-end delay of packets improves by adopting a cross-layer approach to routing. A link-layer metric based on the IEEE 802.11 Distributed Coordination Function (DCF) is integrated into the route-discovery process of on-demand routing protocols. The gain in performance is demonstrated using the DSR and AODV routing protocols as examples; however, what makes this metric valuable is that it can improve routing protocols that are in existence today.

1.2 Thesis Outline

This thesis is organized as follows: Chapter 2 identifies the components of end-to-end delay, provides background information on the routing and MAC layers, and presents previous research that is relevant to this thesis. Chapter 3 presents a framework for evaluating the performance of on-demand routing protocols and introduces the link-layer metric. Chapter 4 demonstrates how the metric can be integrated with DSR and AODV, shows that these protocols improve when they base their routing decisions on the metric, and proposes ways in which this metric can be further improved. Chapter 5 concludes the thesis.

Chapter 2
Background

Cross-layer design in routing has only recently been getting attention. Historically, protocol designers aimed to keep separation between layers in the protocol stack.
This modular approach was initially embraced for ad hoc networks because it worked well for wireline networks and for wireless networks with infrastructure. However, in a self-managed network, the routing protocol cannot find the best route without information from other layers. Therefore, separation needs to be sacrificed in order to obtain better performance in terms of end-to-end delay. In [1], the authors suggest that the performance of MANETs would improve by combining the routing and MAC layers. In other words, the routing protocol should make its routing decisions based on information provided by the link layer. The next section investigates delays at various layers in the protocol stack. This analysis is useful for understanding how to combine the routing and MAC layers to reduce the average end-to-end delay of transmitted packets.

2.1 Sources of Delay

The paper by J.-H. Song et al. [13] identifies the main contributors to packet end-to-end delay as:

• Propagation and transfer times (physical layer)
• Buffering during route discovery (routing layer)
• Queuing at the interface queue (link layer)
• Retransmissions (link layer)

2.1.1 Propagation and Transfer Times

A packet incurs a trivial delay at the physical layer due to propagation and transfer for every hop in its route to the destination. The propagation time is the amount of time the packet is airborne between nodes. The transfer time is the time it takes for the node to place all of the frame's bits onto the link.

2.1.2 Buffering During Route Discovery

The routing layer is responsible for quickly finding a route between source and destination nodes in the network. Most ad hoc routing protocols are designed with the goal of finding the shortest path to the destination. Proactive (table-driven) routing protocols use updated routing tables to supply routing information on request. These tables are kept up to date by exchanging routing information about various nodes in the network. Since the shortest routes to destinations are calculated a priori, packets do not suffer a delay due to a route discovery process. Nevertheless, extensive research shows that when networks become large or when there are frequent topology changes, packets suffer from high end-to-end delay due to excessive overhead and stale route information [3].

For better end-to-end delay performance, networks use reactive (on-demand) routing. Reactive routing schemes obtain routes to a destination on demand, when there are data packets to send, by using a query-reply mechanism called route discovery. This process occurs when the route to a specific destination is unknown. Route maintenance is the process of collecting additional information about routes to the destination in order to shorten the existing route or to recover from link failures.

Seeing as reactive protocols need to perform route discovery, packets suffer from an initial delay before the source can start sending packets to the destination. However, after a route has been obtained by the protocol and as long as the network is stable, the end-to-end delay of packets is not affected by the transmission of subsequent routing information. When a large number of packets use this route, the impact of the route discovery delay diminishes, and the end-to-end delay becomes more dependent on the choice of route rather than on the time it takes to find it.
On the whole, reactive routing protocols discover routes with tolerable latency, provide paths with good end-to-end delay, and offer quick adaptation to dynamic link conditions. The most common on-demand routing algorithms in the literature are Ad Hoc On-Demand Distance Vector routing (AODV) [9] and Dynamic Source Routing (DSR) [2]. They are often included along with the new routing protocol in research papers for comparison purposes. Likewise, this thesis illustrates how cross-layer information improves the routing choices made by routing protocols using AODV and DSR as examples. The ways in which a metric does this can be carried over to other routing protocols.

AODV and DSR perform route discovery and maintenance using three control packets: route requests (RREQs), route replies (RREPs), and route errors (RERRs). When a data packet arrives at a source node that does not have a functional route to the destination in its routing table, the source floods the network with a RREQ. The RREQ propagates through the network until it finds the destination. The destination subsequently responds to the first RREQ it receives by sending a RREP along the same path the RREQ followed. Once the source node receives this reply packet, the route is said to have been discovered and the source begins to send the packets that have been buffered awaiting transmission. The delay for route discovery is the time it takes for the source to receive the RREP after sending a RREQ. Reactive routing protocols reduce this delay by allowing a node to respond to a RREQ with a RREP if it has a valid route to the destination.

AODV and DSR differ in the way they execute routing: AODV uses forwarding tables whereas DSR uses source routes. In AODV, the forwarding tables are compiled using backwards learning: an intermediate node that forwards the RREQ temporarily stores the last hop along with the source node in its forwarding table so that, when it receives the corresponding RREP, it knows the next node towards the source. This RREP is sent as a response to the first RREQ received by the destination. Forwarding tables provide a node with the next hop along the path and the number of hops to the destination, along with other information. In DSR, on the other hand, the header of a data packet includes the entire route to the destination. During route discovery, the RREQ propagates through the network compiling a list of visited intermediate nodes into a source route. When the destination receives the RREQ, it unicasts a RREP for each RREQ that is received back towards the source by including a copy of the reversed source route in the header. Once the source node receives the replies, it stores the entire paths in its routing table and uses the first route that arrives. An intermediate node obtains forwarding information by examining the source route in the header rather than by looking up forwarding tables.

Proper route maintenance is crucial to the performance of ad hoc networks as it dictates the time it takes to recover from problems in the chosen path. Nodes update their route caches when they receive new routing information about destinations. Since wireless transmission is broadcast in nature, this can be done by promiscuously overhearing RREP packets as well as DSR data packets.
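To make the query-reply mechanism described above concrete, the following sketch floods a RREQ hop by hop over a static topology and returns the path taken by the first copy to reach the destination, which is the path along which the RREP would be unicast back to the source. This is an illustrative simplification, not GloMoSim code or an actual AODV/DSR implementation; the node names and topology are hypothetical.

```python
from collections import deque

def discover_route(adjacency, source, destination):
    """Flood a RREQ (breadth-first) and return the route recorded by the
    first copy to reach the destination, i.e. the path the RREP retraces."""
    visited = {source}
    queue = deque([[source]])              # each entry is the route accumulated so far
    while queue:
        route = queue.popleft()
        node = route[-1]
        if node == destination:
            return route                   # RREP is unicast back along this route
        for neighbour in adjacency.get(node, ()):
            if neighbour not in visited:   # a node rebroadcasts a given RREQ only once
                visited.add(neighbour)
                queue.append(route + [neighbour])
    return None                            # destination unreachable; source keeps buffering

# Toy topology: the RREQ copy that arrives first at "D" defines the discovered route.
topology = {"S": ["A", "B"], "A": ["C"], "B": ["C"], "C": ["D"]}
print(discover_route(topology, "S", "D"))  # e.g. ['S', 'A', 'C', 'D']
```

With hop count as the implicit criterion, this first-arrival behaviour is exactly why shortest-hop routes win by default, which is the behaviour the cumulative contention delay metric later replaces.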
When link-breaks occur due to changes in network topology or congestion levels, route maintenance is accomplished using RERR packets. The error originates at the MAC layer of the node that does not successfully deliver a frame to the next node in the route. This node reacts by unicasting a RERR towards the source, which informs the source node and all nodes that overhear it of the broken link. Nodes that receive a RERR remove the routes that use this broken link from their routing tables and attempt to repair the route locally by checking their own routing tables for another viable route to the destination. If an intermediate node is successful, it uses this route to salvage the packet and then sends information about the new route towards the source so that nodes can update their routing information. If intermediate nodes are not successful, en-route packets are dropped until the source receives the RERR. Subsequent packets incur a buffering delay as the source initiates another route discovery.

The end-to-end delay of packets depends on the route that is discovered by the routing protocol. The route that the routing protocol chooses can place an unnecessary burden on the MAC protocol if it goes through congested areas, possibly dropping the throughput of the application to an unacceptable level and causing intolerable end-to-end delays. Since delays at the MAC layer can affect the end-to-end delay of packets significantly, they are discussed in the following section.

2.1.3 Delays at the Link Layer

The MAC layer, which is part of the link layer, coordinates access to the medium when multiple devices share a single broadcast link. For a MANET, the shared communication medium is part of the radio spectrum. Having more than one device transmitting frames simultaneously within the same vicinity would result in collisions, which would ultimately cause higher end-to-end delay and possibly packet loss. Consequently, the MAC protocol strives to provide collision-free transmission by ensuring that only one device transmits at a time, and the other devices must delay their transmissions to avoid collisions. Understanding how medium access affects end-to-end delay at individual nodes along a chosen route is essential in making appropriate routing decisions.

In order for the MAC protocol to coordinate access to the medium efficiently and in a timely manner, it must have low overhead, to reduce the delays at the interface queue, and capture the medium effectively, to reduce the number of retransmissions. The most popular scheme is from the IEEE 802.11 standard: the Distributed Coordination Function (DCF). This popularity arises from the commercial adoption of this technology for connecting wirelessly to local area networks. The DCF of the MAC sublayer satisfactorily fulfills all of the above requirements for ad hoc networks by providing asynchronous access using Carrier-Sense Multiple Access with Collision Avoidance (CSMA/CA), which includes a three-way handshake that acts as a virtual carrier sense. The 802.11 MAC has an optional Point Coordination Function (PCF), but it is not discussed here because it is typically used with wireless networks that have an infrastructure.

2.1.3.1 Delays at the Interface Queue

The delays at the interface queue can be divided into two components: queuing delay and contention delay. The queuing delay is the time a packet must remain in a FIFO (first-in-first-out) buffer while the node services the packets that arrived before it.
The magnitude of the queuing delay depends on the relative arrival and service rates of data packets. The contention delay is the time the packet spends at the front of the queue, waiting for the node to successfully capture the medium. Its magnitude depends on the number of active neighbours competing for the shared medium. In congested networks, both of these delays are major contributors to packet end-to-end delay.

A packet is delayed at the interface queue until the transmitting node successfully captures the medium for collision-free transmission. This procedure comes in the form of an RTS/CTS/ACK exchange. When a node senses the medium free, it sends a Request-To-Send (RTS) control frame to the next node in the link. In doing so, all nodes within transmission range remain silent for the duration specified in the RTS, called the Network Allocation Vector (NAV). Upon receiving this RTS, the destination node replies with a Clear-To-Send (CTS), which also silences its surrounding nodes for the same duration. When the source node receives the CTS, it transmits the data frame. Once the frame is received at the destination and checked free of errors, the destination sends an acknowledgment (ACK) to confirm that it was transmitted successfully. The inclusion of the three-way handshake adds a contention delay at each node along the path even if there are no other stations competing for the medium.

Time gaps at the link layer allow for the asynchronous transmission of data and control frames, and these contribute to the contention delay a packet encounters at the interface queue. In the 802.11 DCF, these delays come from two inter-frame spaces of different lengths, the DCF Inter-Frame Space (DIFS) and the Short Inter-Frame Space (SIFS), as well as from a random component due to backoff and deferring. DIFS is the amount of time a node senses the channel when it has a data frame to send. If the medium is sensed busy during DIFS, the node defers its transmission until it senses the channel idle for DIFS, at which point it waits a random backoff period. If the medium is sensed idle for DIFS, the node enters a stage where it attempts to transmit. The transmission of control information only has to wait SIFS. Because SIFS is shorter, control frames gain access to the medium before any data frames [4].

This RTS/CTS reservation scheme silences only the nodes that are within the transmission range of the transmitter and receiver, since the NAV information is carried in the RTS and CTS frames. As a result, the RTS/CTS/ACK exchange is unable to prevent collisions from nodes that are outside the transmission range of the receiver but still within its interference range. Retransmissions due to collisions degrade network performance by increasing end-to-end delay, jitter, and packet losses.
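The fixed inter-frame spaces and the random backoff translate directly into a per-attempt contention overhead, even on an idle channel. The sketch below illustrates that overhead using nominal 802.11b DSSS timing (slot 20 µs, SIFS 10 µs, CWmin 31, CWmax 1023); these constants are assumptions made for illustration only, not parameters taken from the simulations in this thesis.

```python
import random

# Assumed nominal 802.11b DSSS timing; verify against the PHY actually modelled.
SLOT_US, SIFS_US = 20, 10
DIFS_US = SIFS_US + 2 * SLOT_US          # 50 us
CW_MIN, CW_MAX = 31, 1023

def backoff_slots(retry):
    """Uniform backoff drawn from a contention window that doubles on every
    failed attempt (binary exponential backoff), capped at CW_MAX."""
    cw = min((CW_MIN + 1) * 2 ** retry - 1, CW_MAX)
    return random.randint(0, cw)

def handshake_overhead_us(retry=0):
    """Fixed MAC overhead of one RTS/CTS/DATA/ACK exchange: DIFS, the random
    backoff, and the three SIFS gaps (frame transmission times excluded)."""
    return DIFS_US + backoff_slots(retry) * SLOT_US + 3 * SIFS_US

print(handshake_overhead_us())           # overhead on a first attempt
print(handshake_overhead_us(retry=3))    # overhead after three collisions
```

The growth of the contention window with each retry is one reason a route through a congested region pays a compounding delay penalty rather than a fixed one.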
2.1.3.2 Delays from Link-Layer Retransmissions

A node retransmits a frame if a CTS or ACK is not received within an allotted time period. This absence arises if the transmitter and receiver move outside of their transmission range or if a collision occurs with either a control or data frame. This thesis assumes a packet reception model based on the received Signal-to-Noise Ratio (SNR): the receiver drops packets if they are received below the specified threshold of 10 dB. Regardless of the cause, the node must repeat the process of capturing the medium and waiting for an ACK. In the 802.11 DCF, the frame is dropped and the network is notified of a link-break if the node is unsuccessful in transmitting the data frame after a number of attempts.

The collision-causing nodes, which are located within the interference area that is not monitored by the RTS/CTS reservation scheme, are called hidden nodes. The hidden node problem is studied extensively in [15]. The presence of hidden nodes reduces the effectiveness of the RTS/CTS exchange, which is defined as

\[ E_{RTS/CTS} = \frac{A_i^{RTS/CTS}}{A_i}, \qquad (2.1) \]

where A_i is the total interference area and A_i^{RTS/CTS} is the part of the interference area in which nodes can successfully receive an RTS or CTS.

Figure 2.1. The effectiveness of the RTS/CTS handshake due to transmitter-receiver distance d, from [15].

With a node's signal-to-noise threshold set to 10 dB, nodes within the interference radius R_i = 1.78 x d, where d is the transmitter-receiver separation distance, are capable of interfering with a received signal. In other words, if d < 0.56 x R_tx, where R_tx is the transmission range, the RTS/CTS exchange is 100% effective. When the distance between the transmitter and receiver is greater than 0.56 x R_tx, the effectiveness drops rapidly because the interference range exceeds the transmission range, as illustrated in Figure 2.1. Fortunately, the 802.11 DCF uses physical carrier sensing (CSMA/CA) to further prevent collisions.

Increasing the carrier-sensing range helps to reduce the interference area that is not covered by the RTS/CTS exchange. Typically, the carrier-sensing range is slightly larger than the transmission range, as illustrated by R_cs1 in Figure 2.2. Although interference occurs at the receiver and carrier sensing detects transmitters, carrier sensing cannot help much unless it is increased past a range of R_cs2 = d + R_tx. With a range of R_cs3 = d + 1.78 x d = 2.78 x d, the entire interference area is covered by carrier sensing. Figure 2.2 represents the various sensing ranges.

Figure 2.2. Various carrier-sensing ranges and their effectiveness, from [15].

Although the number of collisions is reduced by increasing the carrier-sensing range above d + R_tx, all nodes within this range now compete against one another for network resources, which can lead to increased delays due to greater channel contention. A tradeoff thus materializes between reducing delays due to retransmissions and reducing delays at the interface queue. Mobility in MANETs causes relative distances between nodes to vary constantly, which creates difficulties in calculating an optimum carrier-sensing range that minimizes packet end-to-end delay.
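The geometric relationships quoted above from [15] can be checked numerically. The sketch below only restates those figures (interference radius 1.78 d for a 10 dB SNR threshold, full RTS/CTS coverage for d <= 0.56 R_tx, and full carrier-sense coverage at d + 1.78 d); the function name and the example values are illustrative, with the 100 m spacing and 173 m transmission range borrowed from the Experiment 1 setup in Chapter 3.

```python
def hidden_node_geometry(d, r_tx):
    """Hidden-node geometry for one link, using the 10 dB SNR-threshold
    figures from [15]: interference radius R_i = 1.78 d, RTS/CTS fully
    effective only when d <= 0.56 R_tx, and the whole interference area
    covered by carrier sensing once R_cs >= d + R_i = 2.78 d."""
    r_i = 1.78 * d
    return {
        "interference_radius_m": r_i,
        "rtscts_fully_effective": d <= 0.56 * r_tx,
        "carrier_sense_for_full_coverage_m": d + r_i,
    }

# 100 m node spacing and 173 m transmission range, as in Experiment 1.
print(hidden_node_geometry(100.0, 173.0))
# {'interference_radius_m': 178.0, 'rtscts_fully_effective': False,
#  'carrier_sense_for_full_coverage_m': 278.0}
```

For that spacing the RTS/CTS exchange alone is not fully effective, which is why the experiments later set the carrier-sense range to 278 m to suppress hidden-node effects.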
With this analysis of the delays at the various layers of the protocol stack, an appropriate cross-layer delay metric can be chosen for use in existing reactive routing protocols. In doing so, routing protocols are able to choose routes that reduce the average end-to-end delay of packets, which enhances the overall performance of MANETs.

2.2 Related Work

Using cross-layer information to enhance reactive routing protocols has received increased attention in the past few years. When these protocols use link-layer information to expedite packets to the destination, they are usually called load- or congestion-based routing schemes. This section outlines these proposed routing protocols.

Many load-based schemes use queue lengths as their route selection criteria. In [6], for instance, the authors introduce Dynamic Load-Aware Routing (DLAR) schemes 1, 2, and 3, which use the number of packets buffered at each interface queue as their only route selection criterion. Their on-demand route discovery and forwarding-table routing approaches resemble AODV. Intermediate nodes append their metric calculations to the RREQ as it propagates to the destination. Instead of immediately responding with a RREP, the destination node waits a specified amount of time so that it can compare the metric to those included in subsequent RREQs. DLAR sends a RREP along the path that provides the minimum metric received.

Although queuing delay at the interface does contribute to the average end-to-end delay of packets, there are complications in using it as the metric because it neglects the second delay at the interface queue: contention delay. Nodes that are not part of any active routes, but are in regions where there are many active neighbours, would provide relatively good metrics. By ignoring contention delays, buffers at congested nodes would quickly fill up, which results in high end-to-end delay and potential packet loss.

The Load-Sensitive Routing (LSR) [14] and Load-Balanced Routing (LBAR) [5] protocols try to resolve the problems associated with DLAR by adding the metric from neighbours as well. Whether the metric is queue length (LSR) or the number of sessions that a node services (LBAR), these metrics incorrectly represent the delay because they improperly characterize contention [12]. For example, two nodes may provide the same metric but have a different number of active neighbours. In this case, the node with the higher number of neighbours causes a higher contention delay for the packet, which means that the other node should route the packet. The metric should better characterize contention delay at the MAC layer.

Contention-Sensitive Load-Aware Routing (CSLAR) [7] uses three components in calculating its metric for routing decisions: the number of packets in the queue, the number of hops, and contention information from the MAC layer. Whereas the first two components are used in the previous approaches, the third component provides information about the contention and traffic situation around a node. This information is a moving average of the NAV that a node endures over a long period of time. Because this estimate inherently provides the routing layer with contention-delay information, this routing protocol performs better than the previously mentioned protocols in terms of end-to-end delay. However, based on the discussion of RTS/CTS effectiveness, the effectiveness of this protocol also varies with transmitter-receiver separation, as illustrated in Figure 2.1. The presence of interfering nodes can impact the delay significantly, causing poor routing decisions.

To circumvent the problems of discovering hidden nodes, the authors of [12] follow another approach in Delay-Based Load-Aware On-Demand Routing (D-LAOR). This protocol uses two metrics: hop count and estimated delay. This estimate is an average combining queuing, contention, and retransmission delays, the sum of which equals the total link-layer delays described at the beginning of this chapter. As far as metrics are concerned, this one is the most comprehensive.
Because the metric in D-LAOR is pre-calculated at each node, it considers the state of the network before the introduction of the new session. According to [10], this inaccurately represents contention by ignoring the effects of intra-flow contention. Since multiple nodes of a flow can be within the carrier-sensing range of the transmitter, these nodes compete with each other for the medium. Intra-flow contention is an inevitability if the carrier-sensing range is greater than twice the transmission range, which is the recommended value in [15]. Because D-LAOR uses hop count as one of its criteria to shorten routes, it reduces the number of nodes within carrier-sensing range. Nevertheless, in order to make better routing decisions, the inclusion of intra-flow contention information is essential during route discovery.

From the previous discussion of protocol strengths and weaknesses, a metric would further improve average end-to-end delay if it contained information about delays due to:

• queuing
• contention
• intra-flow contention
• retransmission.

To accurately convey this information in a metric, a deeper understanding of these delays is necessary to isolate the factors that reduce end-to-end delay. This is the objective of Chapter 3.

Chapter 3
Characterizing Congestion in Terms of End-to-End Delay

In order to find the route through the network that minimizes the average end-to-end packet delay, the routing protocol must minimize the sum of all sources of delay. In other words, the routing protocol finds the ideal route by choosing the route r for which the average end-to-end delay, \bar{D}_r, is minimized:

\[ \bar{D}_r = \frac{\sum_{p=1}^{P} D_r^p}{P}, \qquad (3.1) \]

where P is the total number of packets delivered and D_r^p is the end-to-end delay of a packet p that uses route r. According to the discussion of delays in Section 2.1, the end-to-end delay that a packet p encounters along route r can be defined as

\[ D_r^p = \sum_{n=1}^{n_r-1} \left[ d_{prop+trans}^p(n) + d_{cont}^p(n) + d_{queue}^p(n) + d_{retx}^p(n) \right], \qquad (3.2) \]

where n_r represents the number of nodes along route r, d_{prop+trans}^p(n) is the delay due to propagation and packet transfer at node n, d_{cont}^p(n) is the contention delay at the interface queue, d_{queue}^p(n) is the buffering delay at the interface queue, and d_{retx}^p(n) is the delay due to retransmissions at the MAC layer.

In theory, the routing protocol would find this ideal route if it had all of the network information at its disposal when choosing a route. However, this is not possible in MANETs because network information is decentralized and constantly varying. In order to reduce the transfer of network information, it is necessary to isolate the principal cause of end-to-end delay. Examining these delay components at the various layers of the protocol stack shows that the delays at the link layer contribute the most to packet end-to-end delay. When there is congestion present in the network, it is assumed that the delays due to the propagation and transfer of packets become small in comparison to the other delays. As such, the end-to-end delay of a packet can be approximated as

\[ D_r^p \approx \sum_{n=1}^{n_r-1} \left[ d_{cont}^p(n) + d_{queue}^p(n) + d_{retx}^p(n) \right]. \qquad (3.3) \]

Of the remaining components, the contention delay at the interface queue, although perhaps not the greatest in magnitude, still impacts the overall delay the most. In [11], the authors suggest that the contention delay can be roughly treated as the buffer delay.
They defend this reasoning by suggesting that nodes that are not congested have small contention delays, which results in fast servicing of buffered packets. Conversely, small increases in contention delays can cause intolerable increases in queuing delays. According to [8], the probability of a collision is proportional to the number of stations competing for the medium. Since a high contention delay reveals that there are many active nodes in the vicinity, it can also indicate that a node may have a high probability of retransmission. Consequently, the end-to-end delay of a packet becomes proportional to the contention delay:

\[ D_r^p \approx \sum_{n=1}^{n_r-1} \left[ d_{cont}^p(n) \right]. \qquad (3.4) \]

The main objective of a routing protocol should be to minimize the sum of the contention delays that are incurred at individual nodes along a route. In other words, the routing protocol should choose the route r that minimizes

\[ \bar{D}_r^{approx} = \frac{\sum_{p=1}^{P} \sum_{n=1}^{n_r-1} d_{cont}^p(n)}{P}. \qquad (3.5) \]

In doing so, the route avoids congested regions, which decreases the number of collisions and retransmissions, the medium contention delay, and the time packets wait in buffers; in other words, the average end-to-end delay of packets decreases, which enhances the overall performance of MANETs. It should be mentioned that, because route discovery uses flooding, it is not guaranteed that the routing protocol finds the ideal route.

On-demand routing protocols that use minimum hop count as their only route selection criterion are unable to avoid congested regions in the network, as this metric does not contain any information regarding contention. By dropping the shortest-path criterion and instead making decisions based on contention delay, routing protocols avoid regions that are congested, which decreases buffering at nodes, contention delay, and retransmissions. By characterizing congestion as the contention delay at transmitting nodes, routing protocols can avoid areas that increase end-to-end delay by avoiding nodes that cause a high contention delay. The objective of the next section is to accurately define congestion in terms of contention delay so that this information can be used as a metric at the routing layer.

3.1 A Framework for Evaluating End-to-End Delay Performance

3.1.1 Introducing the Contention Profile

Incorrectly defining congestion, whether with queue lengths or number of flows, hinders load-based routing protocols from performing to their potential. The previous section suggests that the level of congestion is directly related to the amount of contention at a transmitting node. To represent contention delay, a nodal representation is insufficient because it conveys only relative distances and hop counts. One solution is to use weights on the edges of a link graph. This thesis introduces a more illustrative option: using a contention profile.

A contention profile is a spatial representation of contention through a network. The contention profile resembles a landscape whose hills represent congestion that should be avoided. The height of the contention is used in calculating the metric that is passed to the routing protocol. Visualizing congestion in this way is helpful in determining why some routes have better end-to-end delay performance than others. An example of a contention profile is illustrated in Figure 3.1.

Figure 3.1. An illustration of a contention profile.
Furthermore, contention profiles can also give insight into other QoS performance issues such as jitter, throughput, and packet drops, although these are not the objective of this thesis.

The illustration in Figure 3.1 represents the circumstance under which the shortest path experiences the most congestion. When such congestion levels occur in a network, shortest-path algorithms no longer provide the routes that perform best in terms of end-to-end delay. In order for routing protocols to choose better-performing routes, they require information from the link layer.

The contention delay depends on the probability of sensing the channel idle in a certain time, along with multiple MAC and packet constants [11]. If all traffic on the network follows a Poisson distribution with an average rate λ, the probability that transmitting node i senses the channel idle in time t is

\[ P_{idle}(t) = e^{-(N(i)\times\lambda)t}, \qquad (3.6) \]

where N(i) is the number of neighbours within its carrier-sensing range. Λ = N(i) × λ represents the total packet arrival rate at node i, obtained by simplifying the packet arrivals from multiple sources into a joint Poisson process under the homogeneous traffic condition. By definition, contention delay is inversely proportional to the probability of finding the channel idle. Consequently, it follows that

\[ d_{cont}(i) \propto \frac{1}{e^{-(N(i)\times\lambda)t}} \propto (N(i)\times\lambda)\,t. \qquad (3.7) \]

In [11], the authors define Λ = |Adj(i)| × λ, where |Adj(i)| is the number of neighbouring stations in a cluster. The importance of considering all neighbours within the carrier-sensing range of a transmitter was emphasized in Section 2.1.3.2. In this thesis, the equation is altered to include all neighbours within the carrier-sensing range of transmitter i, namely N(i). Since contention delay is proportional to N(i) × λ, congestion can be characterized as Λ. As such, Λ can be used to represent the height of the contention profile, which provides the transmitting node with sufficient information to calculate contention delay. It is important to note that none of the load-based routing protocols considered the number of active neighbours within the carrier-sensing range as one of their route-selection criteria. In a network with heterogeneous traffic, this information would be especially useful, although additional criteria would have to be used. For the sake of this thesis, traffic is limited to the homogeneous case.

A contention profile in a MANET is not a fixed entity; through time, the profile changes shape according to varying traffic characteristics, node movement, and the introduction of new flows. Its height changes with packet injection rate and active-node density, whereas its span depends on node location. Both the height and the span are affected by new sessions, as these blanket regions of the network. This overlay is particularly important as it represents the intra-flow contention that was never included in previous load-based routing protocols.

It is helpful to examine how end-to-end delay is affected when congestion changes due to these different circumstances. As such, the next section uses various contention profiles in order to show that congestion is accurately defined as the contention delay at a transmitting node. Routing protocols can subsequently use the height of the contention profile at various nodes as a decision-making strategy in choosing high-performance routes through a network.
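Because the profile height Λ = N(i) × λ is additive along a route, comparing candidate routes reduces to summing per-node heights, in the spirit of Eq. (3.5). The sketch below illustrates that idea on a hypothetical five-node topology; the neighbour counts, node names, and routes are invented for illustration and are not taken from the experiments in this thesis.

```python
def profile_height(neighbour_counts, pkt_rate):
    """Contention-profile height Lambda = N(i) * lambda for each node i,
    assuming homogeneous Poisson traffic at rate pkt_rate (Eqs. 3.6-3.7)."""
    return {node: n * pkt_rate for node, n in neighbour_counts.items()}

def route_metric(route, heights):
    """Additive metric in the spirit of Eq. 3.5: sum the congestion level over
    the transmitting nodes of the route (the destination does not transmit)."""
    return sum(heights[node] for node in route[:-1])

# Hypothetical example: node C sits in the congested region.
neighbours = {"S": 3, "A": 4, "B": 4, "C": 9, "D": 3}
heights = profile_height(neighbours, pkt_rate=33.3)  # 33.3 pkts/s, as in Experiment 2

short_route = ["S", "C", "D"]       # fewest hops, straight over the peak
detour = ["S", "A", "B", "D"]       # one extra hop, around the peak
print(route_metric(short_route, heights), route_metric(detour, heights))
# The lower sum marks the route a contention-aware protocol should prefer.
```

In this toy case the one-hop-longer detour has the smaller summed height, which is exactly the trade-off the experiments below examine.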
All traffic generated on the network is homogeneous and follows a Poisson distribution with exponentially distributed inter-arrival times. Poisson traffic is chosen instead of CBR to reflect the assumptions made earlier when defining congestion. This thesis uses GloMoSim version 2.03 [16] in all simulations to accurately model the intricate cross-communication between multiple layers of a protocol stack. Although GloMoSim does not yet include code for generating Poisson traffic, appropriate changes were made to the code to provide this option.

3.1.2 Applying the Active Path

To test the end-to-end performance of a route through the network, packets are sent between a source and a destination. In this thesis, the route these packets follow is called the active path; nodes that are available to the routing protocol for this path are termed routing nodes.

3.1.3 Building Congestion

To confirm the presumption that contention for the medium is the principal factor in end-to-end delay, it is necessary to set up a contention profile that exposes packets along the active path to contention delays only. That is, packets along the active path should not be delayed by the servicing of other flows, since such servicing affects queuing delays as well as contention delay. Disallowing other flows at the interface queue better demonstrates the dependence of queuing delay on contention delay, an assumption that was used when deriving a definition for congestion. More importantly, it illustrates the extent to which the contention delay impacts end-to-end delay.

The network introduces contention using nodes whose sole purpose is to compete for the medium, not to route packets. These are termed contention nodes to differentiate them from the routing nodes used for the active path. Just as contention nodes cannot be part of the active path, routing nodes cannot route contention packets. Contention is established by transmitting packets between contention nodes. GloMoSim does not have provisions for contention nodes since it assumes that all nodes are equal. As a result, the GloMoSim routing code is edited to permit these contention nodes to coexist among routing nodes but not actually participate in the routing process. To distinguish between routing nodes and contention nodes, the edited code forces nodes to drop route query packets if they are not of the same genre. In other words, when a contention node receives a RREQ, it ignores the request by neither propagating it nor responding to it.

For analysis purposes, it is necessary to keep the contention profile as constant as possible between tests. When contention nodes send packets to one another, they use routes that are hardcoded in the reactive routing code. The GloMoSim code is edited to prevent the contention packets from switching routes, which would change the contention profile. The routes are inserted into the routing tables at initiation instead of using route discovery, and the simulation stops if a contention node initiates a route error.
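The contention traffic itself is simple to describe: each contention flow emits packets with exponentially distributed inter-arrival times. The modified GloMoSim generator is not reproduced here; the sketch below is a hypothetical stand-in that shows the sampling involved, using the profile-F rate of one packet every 0.03 s on average over a 30 s run.

```python
import random

def poisson_send_times(mean_interarrival_s, sim_time_s, seed=1):
    """Packet send times with exponentially distributed inter-arrival times,
    i.e. a Poisson packet stream of rate 1/mean_interarrival_s."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_interarrival_s)
        if t >= sim_time_s:
            return times
        times.append(t)

# Contention profile F: one packet every 0.03 s on average, over a 30 s run.
sends = poisson_send_times(0.03, 30.0)
print(len(sends), sends[:3])
```

Unlike CBR traffic, the bursty arrivals produced this way match the Poisson assumption behind the channel-idle probability of Eq. (3.6).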
3.1.4 Finding the Ideal Path

To demonstrate the effectiveness of a routing protocol in expediting packets to their destination, the average end-to-end delays of the paths chosen by the routing protocol, called the discovered paths, are compared to those that exhibit the best end-to-end delays. These optimum routes are called ideal paths. In order to compare these end-to-end delays, the code is edited so that it can evaluate the end-to-end delays of specified routes.

The ideal path is defined per contention profile per seed, due to the random processes that occur at the link layer when nodes experience contention. In other words, even though the contention profile remains constant with every seed, the ideal path changes according to which route acquires the minimum end-to-end delay for that seed. When examining the performance of routing protocols in congested networks, focus is shifted entirely to the end-to-end delays they achieve rather than the routes they choose; a group of ideal paths collectively yields the minimum average end-to-end delay per contention profile. The ideal paths are found by testing all routes from source to destination for their end-to-end delays. To reduce simulation time, certain criteria and heuristics are employed to reduce the number of routes to test for minimum end-to-end delay.

Testing individual routes through the network to obtain their end-to-end delays requires inserting routes into routing tables at initiation. This method of hard-coding active routes is the same as for hard-coding contention: the path is inserted into the source's routing table at initiation so that the node does not perform a route discovery. Further editing of GloMoSim was required to disable the mechanism for shortening active routes. Furthermore, the simulation stops once the active path experiences a link-break rather than initiating a route discovery. These alterations provide more accurate end-to-end delay characteristics for a specified route by maximizing the number of packets over which to take the average.

For the tested routes to be useful in analysis, it is crucial that they be compatible with the routes chosen by the original version of the routing code. In other words, a particular route should exhibit the same end-to-end delay characteristics whether it is inserted into the routing table or discovered by the routing protocol. In order to determine compatibility, the average end-to-end delays of routes are compared for networks with and without congestion.

In networks with only one active session, the end-to-end delays of the chosen and tested routes differ minimally. This negligible difference is due to the route discovery process. Even though packets are buffered during the route discovery process, the average end-to-end delay is minimally affected if a large number of packets is delivered. When the end-to-end delays of the first few packets are ignored, there is no discrepancy between the end-to-end delays, nor between any of the other recorded statistics.

In networks using a contention profile, the end-to-end delays of the chosen and tested routes differ noticeably. This discrepancy is caused by the random component introduced at the link layer when a node experiences contention. The difference is considerably reduced when the end-to-end delays are averaged over 5 seeds and weighted by the number of received packets. Nevertheless, since the characteristics of routes are consistent in cases where there was no contention profile, it is assumed that the route characteristics are the same whether or not the routing protocol chooses the tested path.

3.1.5 Evaluating End-to-End Delay Performance

The focus of this thesis is to improve existing routing protocols so that they provide better end-to-end delay performance.
As such, it is helpful to develop metrics that evaluate the performance of a routing protocol and that quantify the possible gain. In doing so, it is possible to see how well protocols perform in terms of end-to-end delay in congested networks.

The performance of the routing protocol is determined by finding the potential gain. The potential gain describes how much better a routing protocol could do in terms of end-to-end delay given an ideal scenario where it has all the information necessary to choose the best route. This potential-gain metric is calculated by comparing the end-to-end delays of the discovered routes with those of the ideal routes using the following formula:

\[ G_p = \frac{\bar{D}^{disc} - \bar{D}^{ideal}}{\bar{D}^{disc}}. \qquad (3.8) \]

This indicates the improvement in terms of average end-to-end delay that is possible for a certain contention profile. Although some scenarios may produce high end-to-end delays, the routing protocol does well in finding routes that minimize end-to-end delay if G_p is low. In other words, a low result indicates that the routing protocol is good at finding routes that minimize contention along their paths. The G_p metric can be altered slightly to determine the relative performance of routes when comparing route 1 to route 2:

\[ G_r = \frac{\bar{D}^{r1} - \bar{D}^{r2}}{\bar{D}^{r1}}. \qquad (3.9) \]

Although this thesis examines only DSR and AODV, it provides a framework that can be used to evaluate the end-to-end delay performance of other routing protocols.
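As a quick worked illustration of Eqs. (3.8) and (3.9), the helpers below compute the two gains from a pair of measured average delays; the numbers used in the calls are made up and are not results from the experiments that follow.

```python
def potential_gain(d_discovered, d_ideal):
    """Potential gain G_p of Eq. (3.8): the fraction by which the average
    end-to-end delay could still be reduced relative to the discovered routes."""
    return (d_discovered - d_ideal) / d_discovered

def relative_gain(d_route1, d_route2):
    """Relative route gain G_r of Eq. (3.9), comparing route 1 against route 2."""
    return (d_route1 - d_route2) / d_route1

# Illustrative values only: discovered routes average 120 ms, ideal routes 80 ms.
print(potential_gain(0.120, 0.080))   # 0.333... -> about a third of the delay is avoidable
print(relative_gain(0.120, 0.095))    # ~0.21   -> route 2 is about 21% faster than route 1
```

A G_p near zero therefore means the protocol is already finding near-ideal routes for that contention profile, regardless of how large the absolute delays are.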
3.1.6 Experiment 1: Varying Congestion Levels by Changing the Packet Injection Rate

In this experiment, the performance of AODV and DSR is evaluated in terms of their end-to-end delays and the potential-gain metric for multiple contention profiles.

The experiment employs a static uniform network to systematically analyze the impact of changes in the contention profile. By keeping node locations constant, it is possible to determine the effects of changing the packet injection rate on the end-to-end delay performance of both routing protocols. Moreover, the number of routes for a specific source-destination pair remains constant in a static network. This simplifies the analysis in finding the ideal end-to-end delay, which is crucial: this result determines the performance of the routing protocols and shows that their performance can be improved.

Table 3.1. Description of the different contention profiles used in Experiment 1.

Profile Name                  A      B      C      D      E      F      G      H
Inter-Packet Arr. Times (s)   0.1    0.05   0.045  0.04   0.035  0.03   0.025  0.02
No. of Contention Nodes       8      8      8      8      8      8      8      8
Active Contention Nodes       63-70  63-70  63-70  63-70  63-70  63-70  63-70  63-70

3.1.6.1 Description of Network Layout and Attributes

The experiments test the performance of the DSR and AODV protocols with the IEEE 802.11 DCF MAC protocol. The quality metric is calculated for 8 different contention profiles whose heights differ because the average packet generation rate is altered. These contention profiles are described in Table 3.1. The network contains 61 routing nodes and 12 contention nodes, which are organized in the hexagonal mesh network illustrated in Figure 3.2. The terrain dimensions are 800 m by 700 m in order to be sufficiently large for a separation distance of 100 m between all nodes. The transmission range is set to 173 m to guarantee that nodes can only transmit to their closest neighbours. The carrier-sense range is set to 278 m, which eliminates the effects of hidden nodes according to R_cs = d + 1.78d [15].

The static hexagonal topology is used to isolate the ability of the routing protocol to avoid regions that are congested. In a mobile scenario, the presence of hidden terminals depends on node movement and carrier-sensing range; if movement is random, routes will suffer from hidden nodes only on some occasions. Also, by increasing the carrier-sensing range to include hidden terminals, a node in the hexagonal configuration competes with fewer neighbours than in a traditional square lattice.

Each simulation is run for 30 seconds unless a route discovery preemptively stops the simulator while testing individual routes. The ideal routes are found by testing plausible routes from source to destination, which in this case is limited to the top 100 most likely routes. Source node 26 waits 1 second before sending packets to destination node 34 in order to make sure the contention profile is relatively constant. For accuracy, the discovered routes are run over 200 seeds and the inserted routes are run over 5 seeds. Table 3.2 indicates the traffic parameters for both the contention profiles and the routes under test; the traffic is homogeneous.

Figure 3.2. Network topology used for Experiment 1.

Table 3.2. GloMoSim network parameters used in Experiment 1.

Parameter                                 Value
Simulation Time                           30 s
Terrain Dimensions                        800 m x 700 m
Number of Nodes                           73
Mobility Position Granularity             0.5
Propagation Limit                         -111 dBm
Noise Figure                              10 dB
Temperature                               290 K
Radio Type                                RADIO-ACCNOISE
Radio Frequency                           2.4 GHz
Radio Bandwidth                           2 Mbps
Radio Tx Type                             SNR-Bounded
Radio Rx SNR Threshold                    10 dB
Radio Tx Power                            10.047 dBm
Radio Antenna Gain                        0 dB
Radio Rx Sensitivity                      -80.66 dBm (277.8 m)
Radio Rx Threshold                        -74.16 dBm (173.0 m)
MAC Protocol                              802.11
Network Protocol                          IP
Network Output Queue Size Per Priority    100
Routing Protocols                         DSR and AODV

3.1.6.2 Simulation Results and Analysis

Figure 3.3 illustrates the average end-to-end delays of the discovered, ideal, and shortest paths for the different contention profiles. From the graph, a decrease in inter-packet arrival time causes an increase in average end-to-end delay for all paths through the network. DSR performs better than AODV for lower levels of congestion, which can be seen from DSR's lower average end-to-end delay. When the inter-packet arrival time decreases past 0.035 s, on the other hand, AODV performs better than DSR.

Figure 3.3. Delay performance of AODV and DSR in Experiment 1.

The graph also shows that, ideally, AODV should perform better than DSR regardless of the level of congestion. This is shown by the performance of the ideal paths through the network for both routing protocols.

Figure 3.4. Potential gain of AODV and DSR in Experiment 1.
Since AODV should perform better in terms of end-to-end delay, it has more potential to gain from using a link-layer metric to help in making routing decisions for low to moderate levels of congestion. The potential gain is plotted as a function of inter-packet arrival time in Figure 3.4. At high levels of congestion resulting from an inter-packet arrival time of 0.03 s and lower, both DSR and AODV can gain by approximately the same amount. At this point, however, AODV's performance increases relative to DSR's performance.

Figure 3.5 reveals the average hop counts for the discovered and ideal routes. The average hop count of the ideal route through the network increases from 8 to 12 hops as the level of congestion increases. Because of the network topology, this indicates that the ideal route shifts from the shortest path to the path that follows the outside edge of the network in order to avoid the more congested regions, which are indicated by the darker-shaded areas in Figure 3.6. The behavior of the ideal routes reinforces the definition of congestion that is adopted in this thesis: congestion can be characterized as the amount of contention delay at a node.

Figure 3.5. Average hop count of AODV and DSR in Experiment 1.

Figure 3.5 indicates that both routing protocols encounter difficulties in finding the shortest path, since on average they find routes with 10 hops instead of 8 hops. By finding the shortest paths, DSR and AODV would increase their performance by an average of 22% and 30%, respectively, for the lowest congestion levels of contention profile A, and by 69% and 79%, respectively, for the congestion levels of contention profile F. For higher congestion levels, packets cannot be delivered using the shortest path.

The average hop count of the discovered routes increases slightly as the level of congestion increases. This suggests that the routing protocols initially choose routes that avoid the congested area unnecessarily, since the contention delay is not sufficiently large to cause a variation between the ideal path and the shortest path. Conversely, when the congestion levels are high, the routing protocols do not add a sufficient number of hops in order to adequately avoid the area; the added nodes still suffer from high contention delay.

Figure 3.6. Illustration of congestion in the network for Experiment 1.

The experiment just described investigates the effects of changing the height of the contention profile by varying the packet injection rate. Since node mobility changes the height as well as the width of the contention profile, the next experiment examines the effects of increasing the width by increasing the number of contention nodes.

3.1.7 Experiment 2: Varying Congestion by Changing the Number of Contention Nodes

The width of the contention profile changes by varying the number of contention nodes.

Table 3.3. Description of the different contention profiles used in Experiment 2.
Profile Name | F4 | F8 | F12
Inter-Packet Arrival Times (s) | 0.03 | 0.03 | 0.03
Number of Contention Nodes | 4 | 8 | 12
Active Contention Nodes | 65-68 | 63-70 | 61-72
The experiment is performed using the same procedure as in Experiment 1. The packet injection rate is constant at 33.3 pkts/s (1 packet every 0.03 seconds). Table 3.3 describes the different contention profiles used for this experiment.

3.1.7.1 Simulation Results and Analysis

Figure 3.7 illustrates the average end-to-end delay of the discovered, ideal, and shortest paths for the different contention profiles. The results for both end-to-end delay and potential gain are similar to those of Experiment 1.

Figure 3.7. Delay performance of AODV and DSR in Experiment 2.

The average end-to-end delay of packets increases with increasing congestion levels. DSR performs better than AODV for low congestion levels. Similarly to the Experiment 1 results, AODV performs better than DSR for higher levels of congestion. This strengthens the earlier observation that AODV avoids nodes that experience high contention delays better than DSR.

For low congestion levels, AODV has more potential gain than DSR, as demonstrated in Figure 3.8. When the level of congestion is increased to 8 and 12 contention nodes, the potential gain becomes approximately the same for both AODV and DSR. This is the same observation that is made for Experiment 1 when the congestion is increased by reducing the inter-packet arrival time below 0.03 s.

Figure 3.8. Potential gain of AODV and DSR in Experiment 2.

Even though the level of congestion increases with the addition of contention nodes, the average hop count for both DSR and AODV does not increase much. Interestingly, when the ideal hop count is 12 hops, the shortest path (8 hops) performs better than the discovered routes that average between 10 and 11 hops. This means that although the routing protocols add nodes in order to avoid the congested area, they did not sufficiently circumvent the area to reduce the contention delay at individual nodes. In other words, they add extra nodes that experience significant contention delay.

Figure 3.9. Average hop count of AODV and DSR in Experiment 2.

3.1.8 Summary of Experiments 1 and 2

These experiments establish that contention delay alone is enough to influence the end-to-end delay of routes through the network. As a result, this information is sufficient to improve the end-to-end delay performance; it should be included in the metric that is provided at the routing layer during route discovery. These results indicate that as congestion increases, whether by adding more nodes in a region or by increasing the packet injection rate, routes should avoid the peaks to reduce end-to-end delay. However, this may not necessarily be the case if the magnitude of the peak is not high enough relative to the height of the profile at the added nodes; the route should avoid a node or a group of nodes if their combined contention delay is greater than the combined contention delay of circumventing nodes.
Rather than basing routing decisions on the congestion level at individual nodes, the decisions should be made by considering the congestion levels along the entire route. This suggests that the link-layer metric should be additive in nature so that its magnitude can reveal the performance of a route. A mathematical relationship between congestion level and contention delay is required so that the metric can accurately convey the relative end-to-end delay performance of routes through a congested network. For the metric to improve existing routing protocols, it should accurately rank the routes in terms of expected end-to-end delay.

In [11], the authors derive the expected contention delay at a node by analyzing the IEEE 802.11 DCF. For simplicity, the authors opt to use this information in their table-driven routing protocol: Delay-Oriented Shortest Path Routing (DOSPR). This thesis, on the other hand, adopts this derivation for use as a metric in existing on-demand routing protocols.

Figure 3.10. Transition state diagram of the IEEE 802.11 DCF, from [11].

The authors use the simplified transition state diagram of Figure 3.10 to model the contention delay that a packet encounters. This delay is defined as the time it takes for a packet, once it arrives at the front of the queue, to be transmitted successfully to the intended receiver. As such, the metric includes the contention and transmission delays, but ignores delays due to queuing. When the packet arrivals are modeled using a Poisson distribution with expected arrival rate λ, the probability that a node senses the medium idle during a time t is:

P_idle(t) = e^(-λt),    (3.10)

where λ is the total expected arrival rate of packets from neighbours within the carrier-sensing range. From Figure 3.10, the expected delays for the attempt and backoff states, E_A(i) and E_B(i) respectively, for node i are equal to:

E_A(i) = P_idle(slot) x (RTS + 2 SIFS + CTS) + (1 - P_idle(slot)) x (RTS + 2 SIFS + E_B(i))    (3.11)

and

E_B(i) = (1/2) x [P_idle(DIFS) x (DIFS + b + RTS + 2 SIFS + P_idle(slot) x CTS) + (1 - P_idle(DIFS)) x B],    (3.12)

where B = RTS + 3 SIFS + CTS + PacketDuration + ACK. This is the average delay until a packet attempts to be transmitted after entering backoff. In the formula, b is the mean backoff time for a maximum of 5 retransmissions, and is calculated as:

b = sum_{n=0}^{4} [P_idle(slot) x (1 - P_idle(slot))^n x 2^n x W] + (1 - P_idle(slot))^5 x 2^4 x W,    (3.13)

where n is the current retransmission (0 <= n <= 5) and W = 32 x slot is the initial contention window size. From the transition state diagram, the expected contention, propagation, and transfer delay is calculated to be:

D_cont(i) = P_idle(DIFS) x (DIFS + b + E_A(i)) + (1 - P_idle(DIFS)) x (SIFS + E_B(i)) + PacketDuration.    (3.14)

The Cumulative Contention Delay Metric (CCDM) is the summation of the average contention, propagation, and transfer delays at every node along the path from source to destination, as derived by Equation 3.14. It is shown subsequently that it can be used by the routing protocol to determine the relative performance of routes in terms of end-to-end delay.
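A minimal sketch of the per-node delay calculation is given below. It assumes the reconstructed forms of Equations 3.10 to 3.14 above; the parameter names (slot, sifs, difs, rts, cts, ack, pkt) are illustrative inputs in seconds, and lam is the aggregate packet arrival rate sensed from neighbours within the carrier-sensing range.

```python
import math

def p_idle(lam, t):
    # Eq. 3.10: probability that the medium is sensed idle for a period t
    return math.exp(-lam * t)

def expected_node_delay(lam, slot, sifs, difs, rts, cts, ack, pkt):
    w = 32 * slot                        # initial contention window (Eq. 3.13)
    p_slot = p_idle(lam, slot)
    p_difs = p_idle(lam, difs)

    # Eq. 3.13: mean backoff time over a maximum of 5 retransmissions
    b = sum(p_slot * (1 - p_slot) ** n * (2 ** n) * w for n in range(5))
    b += (1 - p_slot) ** 5 * (2 ** 4) * w

    # Eq. 3.12: expected delay in the backoff state
    big_b = rts + 3 * sifs + cts + pkt + ack
    e_b = 0.5 * (p_difs * (difs + b + rts + 2 * sifs + p_slot * cts)
                 + (1 - p_difs) * big_b)

    # Eq. 3.11: expected delay in the attempt state
    e_a = (p_slot * (rts + 2 * sifs + cts)
           + (1 - p_slot) * (rts + 2 * sifs + e_b))

    # Eq. 3.14: expected contention plus transfer delay at this node
    return p_difs * (difs + b + e_a) + (1 - p_difs) * (sifs + e_b) + pkt
```

Summing this per-node quantity over every transmitting node along a candidate path yields the CCDM for that path.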
Table 3.4. Parameters for the calculation of contention delay in Experiment 3.
Channel Bit Rate | 2 Mbps
slot | 20 x 10^-6 s
SIFS | 10 x 10^-6 s
RTS | 192 x 10^-6 s + RTS frame transmission time at 2 Mbps
CTS | 192 x 10^-6 s + CTS frame transmission time at 2 Mbps
ACK | 192 x 10^-6 s + ACK frame transmission time at 2 Mbps
header | 192 x 10^-6 s + header transmission time at 2 Mbps
Packet Duration | (128 x 8)/(2 x 10^6) s
prop | 1 x 10^-6 s
Ts | RTS + 3 x SIFS + CTS + header + Packet Duration + ACK + 4 x prop + DIFS
Tc | RTS + DIFS + prop
n | 0-15

3.1.9 Experiment 3: Validating the Metric

This experiment validates the metric by comparing the metric values of candidate routes under both low and high congestion conditions.

3.1.9.1 Experimental Setup

The network layout of Experiments 1 and 2 is used to show the transition of the ideal path from the shortest path (route 1218) to the edge path (route 1). The metric is calculated using the parameters shown in Table 3.4, which are consistent with the parameters used by GloMoSim. As was discussed at the beginning of this chapter, congestion is a function of the number of nodes competing for the medium and the packet injection rate. As such, the calculations are run for up to 15 active neighbours and varying packet injection rates from 100 pkts/s to 1600 pkts/s. The summation of the metric at each node along the active path should provide an insight into the relative performance of one route compared to another.

Figure 3.11. Expected contention delay at a node.

3.1.9.2 Simulation Results and Analysis

Figure 3.11 shows the expected contention delay at a node as a function of the number of neighbours and the average packet injection rate. From the graph, it is possible to see the impact of both the packet injection rate and the number of neighbours on the metric. Contention delay is proportional to N(i) and λ, as suggested earlier.

The CCDM is first calculated for low congestion levels using the contention profile of Experiment 1 with an average packet injection rate of 10 pkts/s. The total number of neighbours is included in Table 3.5, indicating the number of contention nodes and routing nodes within the carrier-sensing range of each node along the path. The calculated metric and simulated average end-to-end delay are compared in the table to establish the ability of the metric to predict the relative performance of routes.

Table 3.5. CCDM calculation of a route for a packet injection rate of 10 pkts/s.

Active Route 1 (12 hops), nodes 26-18-11-5-0-1-2-3-4-10-17-25-34:
Contention Neighbours | 2, 2, 2, 1, 0, 0, 0, 0, 0, 1, 2, 2
Intra-Path Neighbours | 2, 3, 5, 5, 4, 5, 6, 5, 4, 5, 4, 2
Total Active Neighbours | 4, 5, 7, 6, 4, 5, 6, 5, 4, 6, 6, 4
Metric per Node (ms) | 1.423 at each of the 12 transmitting nodes
CCDM | 0.017076654
Simulated End-to-End Delay (s) | 0.030295424

Active Route 1218 (8 hops), nodes 26-27-28-29-30-31-32-33-34:
Contention Neighbours | 2, 4, 6, 8, 8, 8, 6, 4
Intra-Path Neighbours | 2, 3, 4, 4, 4, 4, 3, 2
Total Active Neighbours | 4, 7, 10, 12, 12, 12, 9, 6
Metric per Node (ms) | 1.423, 1.423, 1.424, 1.425, 1.425, 1.425, 1.424, 1.423
CCDM | 0.01139086
Simulated End-to-End Delay (s) | 0.021795479

Route 1218 incurs a lower metric, as shown in Table 3.5, which indicates that it provides a better end-to-end delay performance than active route 1. In this case, the congestion from the contending neighbours along the shortest path is not significant enough to warrant the addition of 4 extra nodes.
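The comparison in Table 3.5 amounts to summing the per-node metric values along each candidate path and preferring the smaller total. A toy reproduction using the table's own per-node values (in milliseconds) is:

```python
# Per-node metrics from Table 3.5 (ms); node IDs are omitted for brevity.
route_1 = [1.423] * 12                                                  # 12-hop edge route
route_1218 = [1.423, 1.423, 1.424, 1.425, 1.425, 1.425, 1.424, 1.423]   # 8-hop shortest path

ccdm_1 = sum(route_1) / 1000.0        # ~0.01708 s
ccdm_1218 = sum(route_1218) / 1000.0  # ~0.01139 s
preferred = "route 1218" if ccdm_1218 < ccdm_1 else "route 1"
print(ccdm_1, ccdm_1218, preferred)   # the lower sum identifies the better route
```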
The simulation results indicate that the metric was correct in predicting the relative performance of the routes. Next, the CCDM is calculated for the same routes using a contention profile with an average packet injection rate of 1600 pkts/s. The results are included in Table 3.6. The congestion caused by a packet injection rate of 1600 pkts/s is high enough to cause route 1, which contains 12 hops, to become the ideal route. This change in ideal routes is caused by an increase in congestion along the shortest path. These results agree with those of Experiment 1. However, the Experiment 1 results indicate that a packet injection rate of 33.3 pkts/s is enough to cause this event. Based on CCDM calculations, route 1 has a lower expected end-to-end delay than route 1218 when the packet injection rate is greater than 1400 pkts/s. Even though the discrepancy is significant, the CCDM is still tested in the next experiment, which provides a more realistic scenario.

Table 3.6. CCDM calculation of a route for a packet injection rate of 1600 pkts/s.

Active Route 1 (12 hops), nodes 26-18-11-5-0-1-2-3-4-10-17-25-34:
Contention Neighbours | 2, 2, 2, 1, 0, 0, 0, 0, 0, 1, 2, 2
Intra-Path Neighbours | 2, 3, 5, 5, 4, 5, 6, 5, 4, 5, 4, 2
Total Active Neighbours | 4, 5, 7, 6, 4, 5, 6, 5, 4, 6, 6, 4
Metric per Node (ms) | 1.775, 1.943, 2.394, 2.148, 1.775, 1.943, 2.148, 1.943, 1.775, 2.148, 2.148, 1.775
CCDM | 0.023916383

Active Route 1218 (8 hops), nodes 26-27-28-29-30-31-32-33-34:
Contention Neighbours | 2, 4, 6, 8, 8, 8, 6, 4
Intra-Path Neighbours | 2, 3, 4, 4, 4, 4, 3, 2
Total Active Neighbours | 4, 7, 10, 12, 12, 12, 9, 6
Metric per Node (ms) | 1.775, 2.394, 3.420, 4.399, 4.399, 4.399, 3.025, 2.148
CCDM | 0.025960685

Chapter 4

Integrating the Delay Metric with Existing Routing Protocols

In Chapter 3, it is shown that reactive routing protocols have room for improvement. For better end-to-end delay performance, reactive routing protocols must be capable of avoiding regions of congestion in a network. This chapter demonstrates that the CCDM is accurate enough in predicting the relative performance of routes to significantly reduce the average end-to-end delay of existing on-demand routing protocols. It is assumed that nodes know the precise number of active nodes within their carrier-sensing range.

Most on-demand routing protocols choose their route according to the path that is discovered first. Since a packet experiences a random delay when it contends for the medium, the first RREQ packet to arrive at the destination does not necessarily identify the route with the best average end-to-end delay. Furthermore, the end-to-end delay of a single RREQ packet does not include the effects from intra-flow contention, which impacts the average end-to-end delay of packets along a route.

The CCDM, on the other hand, provides a steady-state insight into the delay characteristics of routes, which supplies more accurate information for routing decisions. To ensure that the metric contains up-to-date information, intermediate nodes are not permitted to respond to RREQs or RERRs, as outlined in [7]. A routing protocol uses the metric at either the source or the destination, depending on its route discovery method.

4.1 AODV with CCDM

In AODV, the delay metric is used at the destination node. Instead of responding to the first RREQ it receives, the destination delays for a certain period of time and then sends the RREP along the route that has the lowest corresponding metric.
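A sketch of this destination-side behaviour is given below. It is not the thesis implementation; collect_rreqs and the ccdm field on a RREQ are hypothetical placeholders for however the accumulated metric is carried and delivered to the destination.

```python
import time

def choose_route(collect_rreqs, wait_s=0.060):
    """Defer the RREP for wait_s seconds and answer along the lowest-CCDM route."""
    best = None
    deadline = time.time() + wait_s              # added delay at the destination
    for rreq in collect_rreqs(until=deadline):   # RREQs received within the window
        if best is None or rreq.ccdm < best.ccdm:
            best = rreq                          # keep the lowest-metric route so far
    return best                                  # the RREP is sent along this route
```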
This added delay can be chosen in accordance with the application's QoS requirements for route discovery.

4.2 DSR with CCDM

In DSR, the destination node responds to all RREQ packets it receives, which makes it a good candidate for using the metric at the source. As a result, the source node does not have to delay its transmission; it can use the first discovered route, the primary route, until a route with a lower CCDM replaces it. The source node can be allowed to shorten the route if this process does not introduce new nodes.

4.3 Experiment 4: Integrating the Link-Layer Metric at the Routing Layer

This experiment shows how the metric can improve the end-to-end delay performance of DSR and AODV when there are varying degrees of congestion in the network.

4.3.1 Experimental Setup

The experiment involves 3 different scenarios in which 75 nodes are placed randomly in a 600 m x 800 m rectangle. Since the nodes are arranged differently for each scenario, the nodes experience different levels of contention. Although the nodes are not mobile, these scenarios represent quasi-static cases, where the state of the network does not change in the period of observation.

Each node has a transmission range of 100 m and a carrier-sensing range of 278 m, which eliminates hidden nodes according to R_cs3 in Figure 2.2. Three or four active sessions are established, depending on the scenario. One of these sessions is the route under test, which has source and destination nodes 26 (S) and 34 (D), respectively. The remaining sessions are used for contention purposes. All traffic on the network is homogeneous with 128-byte packets. The source nodes have an average packet injection rate of either 100 or 200 pkts/s for low traffic conditions, 400 pkts/s for moderate traffic conditions, or 800 pkts/s for high traffic conditions.

The original version of AODV is run over 15 seeds, where each seed can cause a route to be replaced if subsequent routes arrive with lower CCDMs. There are at least 15 opportunities for possible route replacement per congestion level per scenario. The DSR protocol is tested over 12 seeds, creating 12 opportunities for route replacement. When active routes are inserted into the simulator for both AODV and DSR, they are tested over 25 seeds. In other words, there are 3 quasi-static scenarios with 4 different congestion levels, where each of these 12 cases produces 12 or 15 opportunities for route replacement. All other parameters are the same as for the experiments in Chapter 3, which are included in Tables 3.2 and 3.4.

The routes are discovered by running GloMoSim over different seeds and traffic injection rates. These routes are then tested for their end-to-end delay performance by inserting them into the routing tables at initiation using the edited code from Experiment 1. Results that involve routes that deliver fewer than 5 packets are dismissed because they do not provide an accurate average end-to-end delay and can therefore distort performance results.

The CCDM is calculated assuming that nodes have perfect knowledge of their active neighbours. The qualities of the different paths are ranked according to this metric so that when the routing protocol decides between two routes, it chooses the one with the lower metric. The gain in performance when route 2 replaces route 1 is evaluated using Equation 3.9. The cumulative average gain is plotted as a function of added delay at the source or destination.
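A sketch of the bookkeeping behind these plots is shown below, under the assumption that each route-discovery opportunity is recorded as a tuple of (added delay at which the replacing route arrived, delay of route 1, delay of route 2), and that the cumulative curve averages the gain over all opportunities, counting zero where no replacement has yet arrived. This is an interpretation of the figures, not code from the thesis.

```python
def instantaneous_gain(d_route1, d_route2):
    # Eq. 3.9: fractional reduction in average end-to-end delay when
    # route 2 replaces route 1
    return (d_route1 - d_route2) / d_route1

def cumulative_average_gain(opportunities, added_delay):
    # opportunities: list of (arrival_delay_s, d_route1, d_route2) tuples
    gains = [instantaneous_gain(d1, d2) if t <= added_delay else 0.0
             for (t, d1, d2) in opportunities]
    return sum(gains) / len(gains) if gains else 0.0
```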
In the case of DSR, the added delay is used only for reference, since the source can start sending packets using the primary route.

4.3.2 Simulation Results (Scenario A)

The network topology for Scenario A is plotted in Figure 4.1. With this layout, AODV and DSR discovered 16 and 11 routes, respectively, under the different traffic conditions. Congestion is introduced by using three sessions, which affect the nodes within the dotted circles in the figure. The gains in route performance for both AODV and DSR due to the CCDM are analyzed in the next two sections, whereas the overall performances of both AODV and DSR are demonstrated in the third.

Figure 4.1. Node layout for Scenario A.

4.3.2.1 AODV Route Performance (Scenario A)

Figure 4.2 and Table 4.1 summarize the route performance results for Scenario A. The instantaneous gain in the average end-to-end delay performance for a chosen route is plotted as a function of added delay at the destination node.

Table 4.1. Route performance of AODV for Scenario A.
Traffic Injection Rate | Average Gain | Max Inst. Gain | % Replaced
100 pkts/s | 0.665 | 0.799 | 55.6
200 pkts/s | 0.367 | 0.674 | 83.3
400 pkts/s | 0.028 | 0.028 | 55.673
800 pkts/s | 0.131 | 0.131 | 14.3

These results show that the CCDM can find routes that reduce packet latency by up to 80% and 67% under low congestion conditions. On average, the routes chosen according to the CCDM are 67%, 35%, 3%, and 13% faster than the primary routes that AODV discovers, for traffic injection rates of 100 pkts/s, 200 pkts/s, 400 pkts/s, and 800 pkts/s, respectively. The CCDM is more active for low to moderate congestion levels, which suggests that the routing protocols may gain more under these conditions.

Figure 4.2. Instantaneous gain of routes at the destination node in Scenario A.

By using the metric, AODV considers routes whose RREQs incur smaller CCDMs than the initial RREQ that is received at the destination. If better-performing routes do not subsequently arrive, the average end-to-end delay performance of the routing protocol is not affected by using the CCDM; rather, it is the route discovery time that increases because the routing protocol delays its RREP. As such, it is useful to plot the average gain in route performance that accumulates over the added delay at the destination. This relation is illustrated in Figure 4.3. If the destination waits 60 ms after receiving its first RREQ in this scenario, the routing protocol finds routes that can reduce the average end-to-end delay of the packets by an average of 20%, 23%, 1%, and 0%, for increasing contention profiles; by delaying 125 ms, the protocol can reduce the respective average end-to-end delays by an average of 37%, 31%, 2%, and 2%.

The graph also demonstrates that delaying the RREP at the destination increases the route performance under low congestion conditions more than under high congestion conditions. Because the source buffers its packets until the RREP is received, it is necessary to examine how the added delay at the destination node impacts route discovery time.
The tradeoff between average end-to-end delay performance and added route discovery time is illustrated in Figure 4.4, where the cumulative average gain is plotted as a function of added delay at the source node. Interestingly, the added delay at the source is often less than the added delay at the destination. This means that the RREP corresponding to the path with the better CCDM experiences a lower end-to-end delay. For instance, the average gain that is experienced at the destination node after 120 ms is experienced at the source node after only 102 ms, which means that the discovery time is increased by 102 ms, not 120 ms. Furthermore, positive gains at negative added source delays indicate that the CCDM can improve the average end-to-end delay performance and reduce route discovery time simultaneously.

Figure 4.3. Cumulative average gain at the destination node for Scenario A.

Figure 4.4. Cumulative average gain at the source node for Scenario A.

4.3.2.2 DSR Route Performance (Scenario A)

Table 4.2. Route performance of DSR for Scenario A.
Traffic Injection Rate | Average Gain | Max Inst. Gain | % Replaced
100 pkts/s | 0.302 | 0.519 | 63.6
200 pkts/s | -0.003 | 0.673 | 43.8
400 pkts/s | 0.194 | 0.220 | 20.0
800 pkts/s | 0.125 | 0.225 | 37.5

Figure 4.5 and Table 4.2 summarize the route performance when DSR uses the CCDM. From the graph, the metric finds routes that reduce the average end-to-end delay by up to 52% and 67% for low congestion conditions of 100 pkts/s and 200 pkts/s, respectively. Under moderate-to-high congestion levels, the average end-to-end delay of some routes can be reduced by up to approximately 22%. The routes that are considered by the CCDM are on average 30%, 0%, 19%, and 13% faster than the routes chosen without delay, as demonstrated in Table 4.2 for increasing contention profiles.

While these average gains in average end-to-end delay are respectable, the performance for a packet injection rate of 200 pkts/s requires examination. Here, DSR incorrectly considers routes because there is a discrepancy between the ranking of routes by the CCDM and by the simulated results; their RREPs arrive with a CCDM that is low in comparison to the CCDM of the initial route. Since DSR bases its routing decision on the CCDM, such a route may replace the initial route. These errors cause the negative gain shown in Figure 4.5.

The cumulative average gains in route performance are illustrated in Figure 4.6. If the source waits an additional 193 ms after receiving its first RREP, the route performance increases by an average of 19% for the 100 pkts/s case, 0% for 200 pkts/s, 3% for 400 pkts/s, and 2% for 800 pkts/s. If routes are replaced within 55 ms of receiving the RREP in the 200 pkts/s case, there is a possibility of a degradation in performance.
Figure 4.5. Instantaneous gain of routes at the source node in Scenario A.

Figure 4.6. Cumulative average gain at the source node for Scenario A.

4.3.2.3 Performance Evaluation of CCDM with AODV and DSR (Scenario A)

The end-to-end delay performances of DSR and AODV under the various congestion conditions are illustrated in Figure 4.7. These average end-to-end delays are weighted according to the number of delivered packets in order to compare the average performance of the routing protocols. By integrating the CCDM in their routing decisions, both AODV and DSR reduce the average end-to-end delay of packets across all four congestion levels.

Figure 4.7. Delay performance of AODV and DSR when using CCDM (Scenario A).

The gain in average end-to-end delay performance is plotted in Figure 4.8. The CCDM proves more useful for low congestion levels for both AODV and DSR. Under these conditions, AODV reduces its average end-to-end delay by 40% and 14%, whereas DSR reduces it by 19% and 13%. For higher congestion levels, the routing protocols still experience a gain, but this gain is limited to approximately 2%.

Figure 4.8. Performance gain of AODV and DSR when using CCDM (Scenario A).

The minimum average end-to-end delays of the discovered paths are also included in Figure 4.7. By comparing the average end-to-end delays of the primary routes to these ideal delays, it is possible to determine how well a routing protocol finds routes that minimize average end-to-end delay. AODV performs better than DSR in terms of actual average end-to-end delay performance when the packet injection rate is greater than 100 pkts/s; however, because DSR achieves an average end-to-end delay that is similar to the ideal delay, it does better than AODV at finding the routes that minimize average end-to-end delay. Although under some congestion conditions the potential for gain is small for DSR, the CCDM still manages to further increase its end-to-end delay performance by 2%-3%.

The route discovery times are plotted as a function of traffic injection rate in Figure 4.9. In this scenario, the discovery time for AODV is typically an order of magnitude larger than for DSR. Therefore, relative to their respective average discovery times, the added delays have a smaller effect on AODV than they do on DSR.

Figure 4.9. Average added delay in route discovery for AODV and DSR when using CCDM (Scenario A).
The highest average gains in performance were enjoyed at low congestion levels, where AODV and DSR were able to decrease their average end-to-end delay by 40% and 19%, respectively, by increasing their discovery time by an average of 35 ms.

4.3.3 Simulation Results (Scenario B)

The network topology for Scenario B is plotted in Figure 4.10. With this layout, AODV and DSR discovered 52 and 31 routes, respectively, under the different traffic conditions. Congestion was introduced by using two sessions, which affect the nodes within the dotted circles.

Figure 4.10. Node layout for Scenario B.

4.3.3.1 AODV Route Performance (Scenario B)

The instantaneous gains in route performance are plotted as a function of added delay at the destination in Figure 4.11. The CCDM is more active in this scenario than in Scenario A, based on the higher number of opportunities for route replacement shown in Table 4.3. The CCDM considers routes that can reduce the average end-to-end delay of packets by 87%, 65%, and 66% for 100 pkts/s, 200 pkts/s, and 800 pkts/s, respectively. Routes that are considered by the CCDM increase the performance of the routes by an average of 56% for 100 pkts/s, 43% for 200 pkts/s, and 60% for 800 pkts/s. In the simulation for 400 pkts/s, many of the discovered routes involved a low number of received packets, which made the performance of the routes hard to evaluate. As a result, the results for this packet injection rate are not considered.

Figure 4.11. Instantaneous gain of routes at the destination node in Scenario B.

Table 4.3. Route performance of AODV for Scenario B.
Traffic Injection Rate | Average Gain | Max Inst. Gain | % Replaced
100 pkts/s | 0.561 | 0.870 | 95.7
200 pkts/s | 0.429 | 0.645 | 90.5
400 pkts/s | N/A | N/A | N/A
800 pkts/s | 0.601 | 0.601 | 54.5

The cumulative average gains in route performance are plotted as a function of added delay at the destination and at the source in Figures 4.12 and 4.13. By waiting an additional 90 ms after the initial RREQ is received at the destination, the routes that the metric considers improve the end-to-end delay performance by an average of 54% for 100 pkts/s, by 35% for 200 pkts/s, and by 16% for 800 pkts/s. Moreover, under high congestion conditions, an added delay at the receiver can reduce the route discovery time by up to 500 ms.

Figure 4.12. Cumulative average gain at the destination node for Scenario B.

Figure 4.13. Cumulative average gain at the source node for Scenario B.

Table 4.4. Route performance of DSR for Scenario B.
Traffic Injection Rate | Average Gain | Max Inst. Gain | % Replaced
100 pkts/s | 0.463 | 0.729 | 66.7
200 pkts/s | 0.445 | 0.565 | 50.0
400 pkts/s | 0.422 | 0.574 | 53.8
800 pkts/s | 0.349 | 0.478 | 23.1
4.3.3.2 DSR Route Performance (Scenario B)

The instantaneous gains in route performance are plotted in Figure 4.14 as a function of added delay at the source. From the graph, it is possible to see that some routes considered by the metric decrease the average end-to-end delay of packets by up to 73%, 57%, 58%, and 48% for congestion with packet injection rates of 100 pkts/s, 200 pkts/s, 400 pkts/s, and 800 pkts/s, respectively. Compared to AODV, DSR has fewer opportunities for route replacement. The routes that are considered are on average 46%, 45%, 42%, and 35% faster than the initial route, for increasing levels of congestion. These results are summarized in Table 4.4.

Figure 4.14. Instantaneous gain of routes at the source node in Scenario B.

The cumulative average gain in route performance is shown as a function of added delay at the source in Figure 4.15. By using the CCDM, DSR can decrease the average end-to-end delay of packets by an average of 25%, 22%, 19%, and 8% for 100 pkts/s, 200 pkts/s, 400 pkts/s, and 800 pkts/s, respectively, within 200 ms of receiving a first RREP.

Figure 4.15. Cumulative average gain at the source node for Scenario B.

4.3.3.3 Performance Evaluation of CCDM with DSR and AODV (Scenario B)

The end-to-end delay performances of DSR and AODV under the various congestion conditions are illustrated in Figure 4.16. Without link-layer information, DSR performs better than AODV for all four contention profiles. However, the average end-to-end delays of the ideal paths indicate that AODV has the potential to increase its performance by approximately 70% to 90%. When the CCDM is included in both routing protocols, AODV outperforms DSR under all presented congestion conditions.

Figure 4.16. Delay performance of AODV and DSR when using CCDM (Scenario B).

The average gains in the end-to-end delay performance of packets are demonstrated in Figure 4.17. From the figure, it is possible to see that the gains for this scenario behave similarly to the gains in Scenario A: both AODV and DSR experience their greatest gain under low congestion levels and experience a reduction as congestion levels increase. AODV experiences remarkable gains of 79%, 53%, and 31% for packet injection rates of 100 pkts/s, 200 pkts/s, and 800 pkts/s, respectively. DSR's gains are significant at low levels of congestion at 19%, but decrease to 1% as congestion levels increase.
Interestingly, both DSR and AODV manage to reduce their average end-to-end delays as the congestion levels increase. This suggests that shortest-path routing algorithms can perform relatively well in terms of end-to-end delay if they can avoid areas of high congestion. In Scenario A, for instance, DSR manages to almost attain the ideal end-to-end delay for high levels of congestion. This can be explained by the route discovery process of these protocols. The RREQ that is sent through a congested network will be delayed at nodes that experience a high contention delay. Consequently, these routes will not be discovered first. Instead, the RREQ is expedited through nodes that have less contention delay. Thus, as congestion increases, the first routes to be discovered may adequately avoid the areas of high congestion to provide better end-to-end delay characteristics.

Figure 4.17. Performance gain of AODV and DSR when using CCDM (Scenario B).

The average route discovery delays for AODV and DSR are provided in Figure 4.18. Under low congestion conditions, where the routing protocols enjoyed their highest gains, the route discovery time is increased by an average of 50 ms. For high congestion levels, the average added delay becomes negligible for DSR; for AODV, on the other hand, the average added delay becomes negative, indicating a decrease in route discovery time of approximately 20 ms.

Figure 4.18. Average added delay in route discovery for AODV and DSR when using CCDM (Scenario B).

4.3.4 Simulation Results (Scenario C)

The network topology for Scenario C is plotted in Figure 4.19. With this layout, AODV and DSR discovered 27 and 38 routes, respectively, under the different traffic conditions. Congestion was introduced by using three sessions, which affect the nodes within the dotted circles in the figure. Although simulations were run for congestion levels using 800 pkts/s, the low number of received packets made the end-to-end delay results inaccurate. Consequently, only contention profiles from 100 pkts/s, 200 pkts/s, and 400 pkts/s are used for the analysis.

Figure 4.19. Node layout for Scenario C.

4.3.4.1 AODV Route Performance (Scenario C)

The instantaneous gains in route performance are plotted as a function of added delay at the destination node in Figure 4.20. The CCDM considers routes that reduce the average end-to-end delay of packets by 56% and 71% for packet injection rates of 100 pkts/s and 200 pkts/s, respectively, but does not consider different routes for the case of 400 pkts/s. With congestion levels of 200 pkts/s, large negative gains indicate that the metric makes incorrect decisions. The low number of route replacement opportunities suggests that the CCDM is not as useful in this scenario as it is in Scenarios A and B. The routes that are considered by the CCDM produce an average gain in route performance of 46% and -23%, as illustrated in Figures 4.21 and 4.22.
Fortunately, because the percentage of routes that can be replaced is low, the error-causing routes only decrease the performance of routes by an average of 5% after 35 ms. Conversely, when network traffic has an average injection rate of 100 pkts/s, the CCDM reduces the average end-to-end delay of AODV's routes by an average of 6% after 55 ms.

Table 4.5. Route performance of AODV for Scenario C.
Traffic Injection Rate | Average Gain | Max Inst. Gain | % Replaced
100 pkts/s | 0.463 | 0.557 | 13.3
200 pkts/s | -0.225 | 0.705 | 23.1
400 pkts/s | 0.000 | 0.000 | 0.0

Figure 4.20. Instantaneous gain of routes at the destination node in Scenario C.

Figure 4.21. Cumulative average gain at the destination node for Scenario C.

Figure 4.22. Cumulative average gain at the source node for Scenario C.

4.3.4.2 DSR Route Performance (Scenario C)

The instantaneous gains in route performance for DSR when it uses the CCDM are shown in Figure 4.23. The CCDM provides DSR with routes that can reduce the average end-to-end delay of packets by 61%, 19%, and 11% for increasing congestion levels. These gains, however, are again offset by negative gains from incorrect route considerations by the CCDM. On average, the gain in route performance is still positive, with values of 11%, 3%, and 10% for congestion from 100 pkts/s, 200 pkts/s, and 400 pkts/s, respectively. The CCDM is more active with DSR than it is with AODV, as it has more route-replacement opportunities.

Figure 4.23. Instantaneous gain of routes at the source node in Scenario C.

Table 4.6. Route performance of DSR for Scenario C.
Traffic Injection Rate | Average Gain | Max Inst. Gain | % Replaced
100 pkts/s | 0.115 | 0.609 | 20.0
200 pkts/s | 0.029 | 0.190 | 29.4
400 pkts/s | 0.103 | 0.108 | 27.3

The cumulative average gains in route end-to-end delay are plotted in Figure 4.24. Because the number of route replacement opportunities is relatively small for congestion from 100 pkts/s and 200 pkts/s injection rates, the CCDM improves the average end-to-end delay of routes by an average of 2% or 3% after 20 ms. Although the average end-to-end delay performance actually degrades initially by using the CCDM for a traffic injection rate of 200 pkts/s, the average performance gain of the routes becomes 0% after 26 ms.

Figure 4.24. Cumulative average gain at the source node for Scenario C.

4.3.4.3 Performance Evaluation of CCDM with AODV and DSR (Scenario C)

The average end-to-end delay performances of DSR and AODV are compared in Figure 4.25.
By including the CCDM in its routing decisions, DSR is able to reduce its average end-to-end delay by 6%, 1%, and 3% for congestion from packet injection rates of 100 pkts/s, 200 pkts/s, and 400 pkts/s, respectively. AODV also experiences its largest gain in performance at the lowest congestion level, which is equal to 11%. When the packet injection rate increases to 200 pkts/s, AODV's end-to-end delay performance degrades by 3%.

Because DSR discovers routes that perform relatively well in terms of the ideal end-to-end delay, there is not much potential for gain. Nevertheless, the CCDM manages to further improve the end-to-end delay performance of DSR by 6%, 1%, and 3% for the three contention profiles, as demonstrated in Figure 4.26. AODV, on the other hand, has a relatively large potential gain ranging between 65% and 75%. For the lowest congestion level, the CCDM manages to improve it by 11%. The performance of AODV degrades by an average of 3% when the packet injection rate is 200 pkts/s due to the incorrect route replacements described in Section 4.3.4.1. Because no extra routes are considered for route replacement at 400 pkts/s, there is no gain in performance.

Figure 4.25. Delay performance of AODV and DSR when using CCDM (Scenario C).

Figure 4.26. Performance gain of AODV and DSR when using CCDM (Scenario C).

The route discovery times for both AODV and DSR are shown in Figure 4.27. These gains cause an increase in route discovery time for AODV and DSR of 5 ms and 32 ms, respectively, which is small compared to their route discovery times.

Figure 4.27. Average added delay in route discovery for AODV and DSR when using CCDM (Scenario C).

4.4 Discussion and Suggestions for Future Work

4.4.1 Discussion of Results

Overall, both AODV and DSR profited from integrating the CCDM into their routing decision process. The inclusion of this link-layer information was able to cause large reductions in average end-to-end delay with small increases in route discovery time. This was best demonstrated in Scenario B, where AODV and DSR made route replacements that reduced the average end-to-end delay by up to 79% and 19%, respectively, for an increase in discovery time of approximately 55 ms.

When using the CCDM, the routing protocols experienced their largest gains when the packet injection rate was 100 pkts/s. As the packet injection rate increased, the gain generally decreased, which suggests that the CCDM is most effective for lower levels of congestion. Nevertheless, based on the observation that all but one case produced positive average gains, it can be expected that the CCDM increases network performance in terms of average end-to-end delay.
The CCDM improves the performance of MANETs by considering routes that routing protocols otherwise ignore. If a specific route discovery provides the CCDM with many routes to consider, it increases the probability that the routing protocol finds the ideal route. In the last experiment, it is observed that the CCDM produces relatively little gain at higher congestion levels even though there is a high potential for gain. This suggests that the routing protocol finds the ideal route in only some runs of the simulation, even though network conditions are kept as constant as possible.

4.4.2 Areas for Further Work

Increasing the Number of Discovered Routes

The routing protocol may not find the ideal route for some route discoveries because of the flooding procedure that on-demand routing undertakes. A node does not forward a RREQ if it has already forwarded a RREQ from the same route discovery. In doing so, the routing protocol risks not finding the ideal route. The performance of MANETs may improve by allowing sources to trigger multiple route discoveries instead of just one. Intermediate nodes would treat these subsequent RREQs as they do the first, since they contain a different route discovery ID. Although multiple RREQs could possibly increase the end-to-end delay performance of the session at hand, their impact on existing active sessions should be investigated.

Finding the Number of Neighbours

In order to use the CCDM, nodes must have a way of determining the number of active neighbours within their carrier-sensing range. Currently, there are two methods that could possibly be modified to meet this objective. One option is the Contention-Aware Admission Control Protocol (CACP). Nodes use high-power transmissions to communicate directly with their carrier-sensing neighbours, which has detrimental effects on energy consumption and spatial reuse [10]. Decreasing spatial reuse decreases network throughput, which ultimately increases the average end-to-end delay of routes. Using this method would be counter-productive since the objective is to decrease average end-to-end delay. A better option is Route Request Tail (RRT), proposed in [10], where the authors have developed a means of using carrier-sensed packet lengths to deduce the number of interfering neighbours. RRT was developed for finding the intra-flow contention count but may be modified to find the total number of contending neighbours within the carrier-sensing range of a transmitter. In terms of performance, RRT is as accurate as CACP in calculating the number of contention nodes, and outperforms it and other proposed solutions in terms of overhead, power, and delay.

Determining the Impact of Inaccurate Information

In this thesis, nodes are supplied with exact information regarding the number of active neighbours. In reality, neighbour discovery may not be as accurate, which may cause the CCDM to make errors when contemplating route replacement. It would be useful to determine the extent to which this error impacts the performance of the CCDM. The multiple triggering of route discoveries may help performance by allowing the CCDM to be averaged when the same route is found several times.

Developing a More Practical Metric

The CCDM is designed assuming homogeneous traffic; it would be more practical to design a metric that considers traffic that is heterogeneous.
Examining the Effects of Node Mobility

It can be expected that the performance of the CCDM also degrades when nodes are permitted to move in a network. This supposition is based on the fact that node movement causes changes in network topology, which dictates the number of neighbours within a node's interference range. It is difficult to determine the effects of mobility without implementing the CCDM in GloMoSim.

Implementing the CCDM in GloMoSim

If the CCDM were available in GloMoSim, it would allow for a thorough analysis of its performance. Firstly, the effects of inaccurate information and node mobility could be examined, as discussed previously. Secondly, the ability of routing protocols to recover from link breaks could be investigated. Lastly, it would allow for the analysis of simultaneous active paths to see how the CCDM impacts the performance of the entire network.

Developing a More Accurate Metric

Even when exact neighbour information is used in the calculation of the metric, the CCDM still makes incorrect routing choices. These errors are illustrated by the negative gain in Figure 4.26. The metric can be made more accurate by including additional information regarding the state of the network. If the metric is improved to the extent that it adequately predicts expected end-to-end delay, it would be useful in providing MANETs with QoS capabilities.

Chapter 5

Conclusion

This thesis proposes a best-effort approach to satisfying QoS requirements in MANETs. A link-layer metric based on the IEEE 802.11 DCF is used at the routing layer of on-demand routing protocols to make routing decisions based on link-layer contention rather than the number of hops. By using cross-layer information, routing protocols can avoid regions of congestion and provide routes with better end-to-end delay performance.

The potential-gain calculations of Chapter 3 indicate that DSR and AODV could be improved to cope with congested networks. Previous load-spreading routing protocols have not been able to adequately achieve this because they are unable to define congestion accurately. Chapter 3 establishes that congestion can be accurately defined as the contention delay. As a result, an additive metric is presented that only requires the packet injection rate of the traffic and the number of active neighbours within the carrier-sensing range of the transmitter.

Chapter 4 demonstrates that this metric can be easily integrated with existing on-demand routing protocols. By using DSR and AODV as examples, simulation results show that this metric can significantly improve end-to-end delay performance. This metric reduces average end-to-end delay by avoiding the congested regions of a network. This improvement permits more types of applications to be supported by MANETs, which, in turn, adds functionality to devices that can connect wirelessly. As ad hoc networking establishes itself as a flexible and reliable alternative to more conventional networking methods, it is sure to become a valuable extension to the wireless networking solutions available today.

Bibliography

[1] C. Barrett, M. Drozda, A. Marathe, and M. V. Marathe. Characterizing the interaction between routing and MAC protocols in ad-hoc networks. In Proc. of the 3rd ACM International Symposium on Mobile Ad Hoc Networking and Computing, pages 92-103, 2002.
[2] J. Broch, D. B. Johnson, and D. A. Maltz. The dynamic source routing protocol for mobile ad hoc networks. Internet-Draft, draft-ietf-manet-dsr-03.txt, RFC 2026. Available from http://www.monarch.cs.rice.edu/internet-drafts/draft-ietf-manet-dsr-03.txt, 1999.

[3] J. Broch, D. Maltz, D. Johnson, Y.-C. Hu, and J. Jetcheva. A performance comparison of multi-hop wireless ad hoc routing protocols. In Proc. of the IEEE/ACM Mobicom, pages 85-87, 1998.

[4] P. Chatzimisios, A. C. Boucouvalas, and V. Vitsas. Packet delay analysis of IEEE 802.11 MAC protocol. Electronics Letters, 39(18):1358-1359, Sept 2003.

[5] H. Hassanein and A. Zhou. Routing with load balancing in wireless ad hoc networks. In Proc. of the ACM MSWiM, pages 89-96, July 2001.

[6] S.-J. Lee and M. Gerla. Dynamic load-aware routing in ad hoc networks. In Proc. of the IEEE International Conference on Commun. (ICC), volume 10, pages 3206-3210, June 2001.

[7] Y. Li and H. Man. Three load metrics for routing in ad hoc networks. In Proc. of the IEEE Vehicular Technology Conference (VTC-Fall), volume 4, pages 2764-2768, Sept 2004.

[8] D. Malone, K. Duffy, and D. J. Leith. Modeling the 802.11 distributed coordination function with heterogeneous finite load. In Workshop on Resource Allocation in Wireless Networks, April 2005.

[9] C. E. Perkins, E. M. Belding-Royer, and S. R. Das. Ad hoc on-demand distance vector (AODV) routing. Internet-Draft, draft-ietf-manet-aodv-13.txt, RFC 3561. Available from http://bgp.potaroo.net/ietf/idref/rfc3561/, 2003.

[10] K. Sanzgiri, I. D. Chakeres, and E. M. Belding-Royer. Determining intra-flow contention along multihop paths in wireless networks. In Proc. of the First International Conference on Broadband Networks (BROADNETS), pages 611-620, 2004.

[11] S.-T. Sheu and J. Chen. A novel delay-oriented shortest path routing protocol for mobile ad hoc networks. In Proc. of the IEEE International Conference on Commun. (ICC), volume 6, pages 1930-1934, June 2001.

[12] J.-H. Song, V. Wong, and V. C. M. Leung. Load-aware on-demand routing (LAOR) protocol for mobile ad hoc networks. In Proc. of the IEEE Vehicular Technology Conference (VTC-Spring), pages 1753-1757, April 2003.

[13] J.-H. Song, V. W. S. Wong, and V. C. M. Leung. Efficient on-demand routing for mobile ad hoc wireless access networks. IEEE Journal on Selected Areas in Commun., 22(7):1374-1383, Sept 2004.

[14] K. Wu and J. Harms. Load-sensitive routing for mobile ad hoc networks. In Proc. of the 10th IEEE ICCCN, pages 540-546, Oct 2001.

[15] K. Xu, M. Gerla, and S. Bae. How effective is the IEEE 802.11 RTS/CTS handshake in ad hoc networks? In Proc. of the IEEE Global Telecommunications Conference (GLOBECOM), pages 72-76, 2002.

[16] X. Zeng, R. Bagrodia, and M. Gerla. GloMoSim: A library for parallel simulation of large-scale wireless networks. In Proc. of the Workshop on Parallel and Distributed Simulation, volume 1, pages 154-161, 1998.
