UBC Theses and Dissertations

Dynamic resource allocation in buffer-aided relay-assisted cellular networks. Hajipour, Javad. 2015.
Dynamic Resource Allocation in Buffer-Aided Relay-Assisted Cellular Networks

by

Javad Hajipour

B.Sc., Electrical Engineering, Iran University of Science and Technology, Iran, 2005
M.Sc., Electrical Engineering, Sharif University of Technology, Iran, 2007

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate and Postdoctoral Studies (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2015

© Javad Hajipour, 2015

Abstract

The increasing interest in wireless connectivity to the Internet has led to new technologies in cellular networks to provide ubiquitous access to users. One of the promising solutions is to deploy wireless relays, equipped with buffers, in different parts of cellular networks, to improve both coverage and capacity. In this thesis, the goal is to investigate resource allocation in such networks, considering different challenges, system constraints and users' service requirements.

First, based on simple reasoning, analytical investigations and intuitive generalizations, we show that the use of buffers at relays improves both throughput and average end-to-end packet delay. Extensive computer simulations confirm the validity of the presented discussions and the derived results.

Subsequently, we propose Channel-, Queue-, and Delay-Aware (CQDA) resource allocation policies, which provide quality of service (QoS) for both delay-sensitive and delay-tolerant users in a multiuser orthogonal frequency division multiple access (OFDMA) network enhanced with buffering relays. Numerical results demonstrate significant improvements in providing QoS through the proposed resource allocation policies compared with the existing algorithms.

Moreover, we introduce a perspective based on which we divide the network area, in a relay-assisted OFDMA system, into smaller cells served by the base station (BS) and the relays.
Using convex optimization and dual decomposition, we derive closed-form expressions for iterative signaling among the serving nodes to decide about resource allocation. The resulting framework provides insights for designing efficient algorithms for practical systems.

Next, we introduce important parameters to be considered in the instantaneous problem formulation for data admission and resource allocation in relay-assisted OFDMA cellular networks. Taking into account several practical constraints, we propose novel and efficient algorithms for deciding about time slot, subchannel and power allocation in a distributed manner. Numerical results confirm the effectiveness of the proposed parameters and algorithms in reaching the objectives and satisfying the constraints.

Finally, we propose a novel scheduling policy, which provides queue stability and is efficient and fair in terms of delay. The proposed policy can be used in scenarios with shared or independent channels at the BS and relays, leads to less overhead, and facilitates decentralized resource allocation.

Preface

This thesis is based on the research I have conducted under the supervision of Dr. Victor Leung. This research resulted in several articles that have been either accepted or published, or are under review. I developed the ideas for these articles and wrote them under the supervision of Dr. Leung, who also helped in reviewing them. Except for the conference publication of Chapter 5, all the articles are also co-authored by Dr. Amr Mohamed from Qatar University. He provided valuable comments and feedback on the works and reviewed them for submission. For the article related to Chapter 2, Rukhsana Ruby helped in the analysis as well as in conducting the simulations and writing the manuscript. For the conference publication of Chapter 5, Dr. Ghasem Naddafzadeh helped in conducting the simulations and Dr. Peyman Talebifard helped in preparing the Introduction section of the article.
The list of these publications is provided below.

Publication related to Chapter 2
• Javad Hajipour, Rukhsana Ruby, Amr Mohamed and Victor C. M. Leung, “Buffer-Aided Relaying Improves Both Throughput and End-to-End Delay,” submitted to a peer-reviewed journal.

Publication related to Chapter 3
• Javad Hajipour, Amr Mohamed and Victor C. M. Leung, “Channel-, Queue-, and Delay-Aware Resource Allocation in Buffer-Aided Relay-Enhanced OFDMA Networks,” accepted for publication in IEEE Trans. Veh. Technol., Mar. 2015.

Publication related to Chapter 4
• Javad Hajipour, Amr Mohamed and Victor C. M. Leung, “Dynamic Distributed Resource Allocation in Relay-Assisted OFDMA Networks,” in Proc. ICWMC, June 2012.

Publications related to Chapter 5
• Javad Hajipour, Amr Mohamed and Victor C. M. Leung, “Utility-Based Efficient Dynamic Distributed Resource Allocation in Buffer-Aided Relay-Assisted OFDMA Networks,” submitted to a peer-reviewed journal.
• Javad Hajipour, Ghasem Naddafzadeh, Peyman Talebifard and Victor C. M. Leung, “Power Efficient High-Rate Service Provisioning in Vehicular Networks,” in Proc. ACM DivaNet, Sep. 2014.

Publication related to Chapter 6
• Javad Hajipour, Amr Mohamed and Victor C. M. Leung, “Efficient and Fair Throughput-Optimal Scheduling in Buffer-Aided Relay-Based Cellular Networks,” accepted for publication in IEEE Commun. Lett., May 2015.

Table of Contents

Abstract
Preface
Table of Contents
List of Figures
List of Abbreviations
Acknowledgments
Dedication

1 Introduction
  1.1 Related Works and Motivation
  1.2 Research Questions and Objectives
  1.3 Contributions
  1.4 Thesis Organization

2 Buffer-Aided Relaying Improves Both Throughput and End-to-End Delay
  2.1 Introduction
  2.2 Background
  2.3 Effect of Buffer-Aided Relaying on the End-to-End Delay
    2.3.1 Relaying Systems with Deterministic Data Arrivals and Bernoulli Channel Conditions
    2.3.2 Relaying Systems with Bernoulli Data Arrivals and Channel Conditions
    2.3.3 General Relaying Systems
  2.4 Numerical Results
    2.4.1 Bernoulli Data Arrivals and Channel Conditions
    2.4.2 General Scenario
  2.5 Summary

3 Channel-, Queue-, and Delay-Aware Resource Allocation
  3.1 Introduction
  3.2 System Model
  3.3 Quality-of-Service-Aware Resource Allocation Problem
    3.3.1 The Main Objectives
    3.3.2 Challenges of IRAP Formulation
  3.4 Quality-of-Service-Aware Cross Layer Scheduling and Resource Allocation
    3.4.1 CQDA Policies
    3.4.2 Enhanced CQDA Policies
    3.4.3 Practical Considerations
  3.5 Performance Evaluation and Discussion
  3.6 Summary

4 Dynamic Distributed Resource Allocation Framework
  4.1 Introduction
  4.2 System Model
  4.3 Cross Layer Scheduling and Resource Allocation
    4.3.1 Problem Formulation
    4.3.2 Dual Problem Formulation
    4.3.3 Dynamic Distributed Resource Allocation
    4.3.4 Solution of Main Dual Problem at the BS
  4.4 Numerical Results
  4.5 Summary

5 Utility-Based Efficient Dynamic Distributed Resource Allocation
  5.1 Introduction
  5.2 Preliminaries
    5.2.1 System Model
    5.2.2 Stochastic Problem Formulation
    5.2.3 Transformed Problem and Virtual Queues
  5.3 Cross Layer Traffic Control and Resource Allocation
    5.3.1 Instantaneous Problem
    5.3.2 Traffic Control and Data Admission
    5.3.3 Resource Allocation Challenges
    5.3.4 Efficient Dynamic Distributed Resource Allocation
    5.3.5 Efficient Dynamic Centralized Resource Allocation
  5.4 Performance Evaluation and Discussion
  5.5 Summary

6 Efficient and Fair Throughput-Optimal Scheduling
  6.1 Introduction
  6.2 Preliminaries
    6.2.1 System Model
    6.2.2 Background and Problem Statement
  6.3 MMW Policy
    6.3.1 Motivation and the Main Idea
    6.3.2 Stability Analysis
  6.4 Distributed and Semi-Distributed Implementations
    6.4.1 Case 1: Shared Channel
    6.4.2 Case 2: Independent Channels and Highly Scalable Framework
  6.5 Performance Evaluation
  6.6 Summary

7 Conclusions and Future Work
  7.1 Conclusions
  7.2 Suggestions for Future Work

Bibliography

Appendices
A Assumptions for Channel Models
B Proof of Theorem 2.1
C Proof of Theorem 6.1

List of Figures

1.1 (a) Relay-assisted cellular network (b) OFDMA system
2.1 Simple queueing system
2.2 Queueing model for (a) conventional relaying system (b) buffer-aided relaying system; (c) joint channel conditions
2.3 Markov chain for the number of packets in the BS buffer
2.4 Average end-to-end packet delay in the case of Bernoulli channel distribution with s1 = s2 = 0.9
2.5 Average end-to-end packet delay in the case of Bernoulli channel distribution with s1 = 0.5, s2 = 0.9
2.6 Average end-to-end packet delay in the case of Bernoulli channel distribution with s1 = s2 = 0.5
2.7 (a) BS queue size over time (b) relay queue size over time; at the arrival rate of 50 packets/slot
2.8 CDF of end-to-end packet delays at the arrival rate of 50 packets/slot
2.9 Effect of packet arrival rate at the BS on (a) average throughput in each time slot (b) average end-to-end packet delay
2.10 (a) BS queue size over time (b) relay queue size over time; at the arrival rate of 100 packets/slot
2.11 CDF of end-to-end packet delays at the arrival rate of 100 packets/slot
3.1 System model
3.2 Flowchart of the IRAP formulation
3.3 (a) Average PDR (b) CDF of the delay of the received packets; with 3 subchannels, 6 close and 6 far users, all delay-sensitive
3.4 CDF of the delay of the received packets; with 3 allocatable subchannels from a pool of subchannels, 6 close and 6 far users, all delay-sensitive
3.5 (a) Average throughput (b) average queue size of the delay-tolerant users; with 4 subchannels, 6 close and 6 far users, half of the users in each group delay-sensitive and the other half delay-tolerant
3.6 (a) Average PDR (b) CDF of the delay of the received packets for the delay-sensitive users; with 4 subchannels, 6 close and 6 far users, half of the users in each group delay-sensitive and the other half delay-tolerant
3.7 (a) Maximum packet delay of delay-sensitive users (b) average total throughput of delay-tolerant users; N = 10, K = 22, random distribution of users in the cell area
3.8 Effect of data arrival rate of delay-sensitive users on (a) average PDR of delay-sensitive users (b) average total throughput of delay-tolerant users
3.9 Effect of data arrival rate of delay-tolerant users on (a) average PDR of delay-sensitive users (b) average total throughput of delay-tolerant users
4.1 System model
4.2 Similarity of the model to a multicell network
4.3 CDF of system average throughput in each time slot, K = 10
4.4 System average queue size over time, K = 10
4.5 Effect of increase in the number of users on the system average throughput
5.1 System model
5.2 Effect of parameter I on (a) average power consumption of BS and relays (b) virtual power queue size of BS over time; N = 7, |K0| = 6, |Km| = 1, m ∈ M
5.3 CDF of (a) system utility (b) total overflow from the buffers of relays; at the data arrival rate of 20 packets/second
5.4 CDF of the system throughput at the data arrival rate of 20 packets/second
5.5 Effect of increase in the packet arrival rate at the BS on (a) the system average utility (b) average overflow from the buffers of relays
5.6 Effect of increase in the packet arrival rate at the BS on the system average throughput
5.7 Effect of increase in the packet arrival rate on the amount of data admitted for direct and indirect users
5.8 Effect of parameter We on providing fair data admission for direct and indirect users, at the arrival rate of 140 packets/second for every user
6.1 System model
6.2 (a) Parallel queues (b) combination of parallel and tandem queues
6.3 Multihop relay-based cellular network. Square, circle and triangle represent the BS, RS and user, respectively
6.4 (a) Percentage of time slots each queue in the RS exceeds the queue threshold (b) maximum queue sizes in the RS, over time, when Q̂ = 14 kbits
6.5 (a) CDF of the average queue size for direct and indirect users (b) average bit delay of all the users; the case of shared channel
6.6 CDF of Jain's fairness index for average bit delays of the users; the case of shared channel
6.7 CDF of the average queue size for direct and indirect users; the case of shared channel, where direct users are located close to the BS and indirect users on the cell edge
6.8 Average bit delay for direct and indirect users; the case of shared channel, where direct users are located close to the BS and indirect users on the cell edge
6.9 (a) System average queue size over time (b) system average throughput in each time slot; the case of shared channel, where indirect users are located close to the RS
6.10 (a) CDF of the average queue size for direct and indirect users (b) average bit delay of all the users; the case of independent channels
6.11 CDF of Jain's fairness index for average bit delays of the users; the case of independent channels

List of Abbreviations

AF Amplify and Forward
BE Best Effort
BER Bit Error Rate
BS Base Station
CDF Cumulative Distribution Function
CF Compress and Forward
CQDA Channel-, Queue-, and Delay-Aware
CSI Channel State Information
DDRA Dynamic Distributed Resource Allocation
DF Decode and Forward
EDCRA Efficient Dynamic Centralized Resource Allocation
EDDRA Efficient Dynamic Distributed Resource Allocation
EE Energy Efficiency
EJUMR Enhanced Joint Utility and Minimum Rate
ESUMR Enhanced Separate Utility and Minimum Rate
FDRA Frequency Domain Resource Allocation
FHDR Fixed Half-Duplex Relaying
HoL Head of Line
IRAP Instantaneous Resource Allocation Problem
JUMR Joint Utility and Minimum Rate
LTE Long Term Evolution
LTE-A LTE-Advanced
MDB Maximum Differential Backlog
MSB Maximum Sum Backlog
MMW Modified Max-Weight
MW Max-Weight
OFDMA Orthogonal Frequency Division Multiple Access
PDR Packet Drop Ratio
PF Proportional Fair
QAM Quadrature Amplitude Modulation
QCSI Queue and Channel State Information
QoS Quality of Service
QSI Queue State Information
RC Rate Constrained
RRM Radio Resource Management
RS Relay Station
RT Real Time
SA Subchannel Allocation
SE Spectral Efficiency
SNR Signal-to-Noise Ratio
SPAS Subchannel and Power Allocation Strategy
SSD Subchannel Sets Determination
STD Slot Type Determination
SUMR Separate Utility and Minimum Rate
TDS Time Domain Scheduling
TPA Total Power Adjustment
WiMAX Worldwide Interoperability for Microwave Access

Acknowledgments

First of all, I would like to express my deep gratitude to my supervisor, Dr. Victor Leung, for providing the opportunity to pursue the PhD program at UBC, which helped me to learn a lot about my field of study and also about myself. I also want to thank him for his invaluable support and guidance over the past years, and for his patience with my delay in deciding on a research topic.

I would like to express special thanks to Dr.
Amr Mohamed for providing the opportunity to visit Qatar University for a period of time during my research. I am grateful for his comments and advice throughout my research, which helped in preparing my thesis.

Also, I would like to extend my appreciation to my colleagues and friends for their help and support during the past years.

Most importantly, I feel indebted to my great family, my father, my mother and my brothers, for their everlasting love and support not only during my studies at UBC but also throughout my entire life.

This work was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada.

Dedication

To my family

Chapter 1

Introduction

Cellular networks have experienced a tremendous increase in the demand for wireless connectivity in the past decade. The exponential growth of the data available on the Internet, and the desire and need to access it anywhere and at any time, have shifted wireless services from voice-dominant to data-dominant ones. In particular, the advent of smartphones has accelerated this trend due to their promising capabilities and the growing market of mobile applications. In order to address this trend, industrial and standardization sectors have devised new solutions through several generations of cellular networks such as 3G and 4G. In addition to allocating more spectrum, novel transmission techniques have been standardized, considering the fact that frequency resources are very scarce and expensive.
Among those, Orthogonal Frequency Division Multiple Access (OFDMA) is the key enabling factor in the evolution towards high-speed data services, and it has been widely adopted in contemporary wireless cellular networks such as IEEE 802.16 Worldwide Interoperability for Microwave Access (WiMAX) and Long Term Evolution (LTE) [1–4]. In OFDMA, the system bandwidth is divided into multiple carriers (subchannels) and the data is transmitted through these parallel channels, leading to robustness against multipath fading, high spectral efficiency, multiuser diversity and flexibility in radio resource allocation. However, users who are far away from the Base Station (BS), or have blockages between the BS and themselves, suffer from low data rates due to their weak wireless links. Wireless relaying is an attractive mechanism to overcome this limitation, and it has gained significant attention among both industrial and academic researchers due to the cost-effective and fast deployment possibility of Relay Stations (RSs) [5, 6]. RSs use the air interface for backhaul connectivity and are able to enhance coverage, capacity and energy efficiency without the need for costly wired backhaul deployment. They perform this through relaying protocols such as amplify-and-forward (AF), compress-and-forward (CF), or decode-and-forward (DF).

Scheduling and resource allocation are deciding factors for the efficient use of wireless resources. Although the aforementioned advanced technologies lead to enhanced system capacity and coverage, intelligent scheduling and resource allocation algorithms are also needed to exploit the potential of such systems and address the challenges in providing satisfactory service for users. In this thesis, our goal is to identify these opportunities and challenges and to address them through suitable policies.

The rest of this chapter is organized as follows.
In Section 1.1, we present the related work on resource allocation in relay-assisted wireless networks and discuss the motivation of this thesis. Section 1.2 discusses the research questions and objectives, and Section 1.3 briefly summarizes the contributions. Finally, Section 1.4 describes the thesis organization.

1.1 Related Works and Motivation

Resource allocation and scheduling are important issues in wireless networks due to the increasing demand of users for data traffic and the scarcity of radio resources. They become more complicated and challenging in relay-enhanced OFDMA networks, as the resources should be shared between the BS and relays [7, 8]. To clarify this, Fig. 1.1 shows a relay-assisted cellular network and an OFDMA system. The main resources in such systems are time slots, frequency subchannels and the power used on the subchannels. Depending on the objectives of the network operators and the constraints imposed by system limitations as well as the users' service requirements, different policies are needed for allocating these resources to different links of the network.

Figure 1.1: (a) Relay-assisted cellular network (b) OFDMA system

There has been extensive research in this area in the past years, and remarkable work has been done to address the challenges [9–31]. In particular, [9] suggested adaptive resource usage by utilizing access hop reuse and adaptive frame segmentation to improve the system capacity. In [10], cross layer scheduling in a relay network was studied as an optimization problem for maximizing the received goodput and, based on dual decomposition, a distributed algorithm was proposed. [11] proposed a distributed algorithm for power and subchannel allocation in a system with full-duplex and hybrid relaying methods.
[12] studied the resource allocation problem in the presence of cognitive relays, and [13] proposed a semi-distributed algorithm for resource allocation with low overhead and low computational complexity. In [14], the authors investigated the selection of the relays and the subcarrier and power allocation for them, taking into account link asymmetry and imperfection in Channel State Information (CSI). The authors of [15] analyzed the effect of opportunistic scheduling and spectrum reuse on the system spectral efficiency and provided insights about the tradeoffs in the design of resource allocation and interference management algorithms. Several other works studied frequency reuse schemes to improve the system capacity [16, 17]. In [18], the authors considered subchannel reuse for transmissions from relays and, based on a game-theoretic framework, designed distributed resource allocation schemes for those transmissions. The game-theoretic approach is also used in [19–22] for channel allocation, interference control and joint consideration of relay selection and power allocation.

Providing quality of service (QoS) is another important concern for meeting the diverse requirements of user services in broadband wireless networks and has been addressed in [23–25]. [23] considered Best Effort (BE) and Rate-Constrained (RC) services and, based on convex optimization and dual problem formulation, introduced a QoS price concept for relay selection and subchannel allocation. In a similar service environment, the authors of [25] studied joint optimization of the BS and relay power allocation in addition to relay selection and subchannel assignment; accordingly, they introduced power and QoS prices and solved the problem using a two-level dual decomposition method.
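The QoS-price mechanism behind these dual-decomposition formulations can be sketched briefly. The toy Python example below is only illustrative (all sizes, the fading model and the minimum-rate target are hypothetical, and it is not the exact algorithm of [23] or [25]): each subchannel is assigned to the user with the highest price-weighted rate, and a user's price is raised by a subgradient step whenever its minimum-rate constraint is violated.

```python
import math
import random

random.seed(0)

N_SUBCH, N_USERS = 16, 4          # hypothetical system size
R_MIN, STEP = 6.0, 0.05           # minimum rate (bits/s/Hz) and subgradient step size

# one snapshot of per-subchannel fading power gains for each user
gains = [[random.expovariate(1.0) for _ in range(N_USERS)]
         for _ in range(N_SUBCH)]

def rate(g, snr=8.0):
    """Shannon rate on one subchannel at a fixed nominal SNR."""
    return math.log2(1.0 + snr * g)

prices = [0.0] * N_USERS          # dual variables ("QoS prices")
rates = [0.0] * N_USERS
for _ in range(200):              # dual subgradient iterations
    rates = [0.0] * N_USERS
    for c in range(N_SUBCH):
        # each subchannel goes to the user with the best price-weighted rate
        k = max(range(N_USERS),
                key=lambda u: (1.0 + prices[u]) * rate(gains[c][u]))
        rates[k] += rate(gains[c][k])
    # raise the price of any user below its minimum rate, relax it otherwise
    prices = [max(0.0, p + STEP * (R_MIN - r))
              for p, r in zip(prices, rates)]

print("per-user rates:", [round(r, 1) for r in rates])
```

The appeal of this structure is that, once the scalar prices are exchanged, each subchannel can be assigned locally; the minimum-rate coupling across users is handled only through the prices, which is what makes distributed implementations attractive.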
The work in [24], on the other hand, considered dynamic resource allocation for supporting the BE and Real-Time (RT) services simultaneously and, using a utility-based optimization problem, proposed an efficient relay selection and subchannel allocation algorithm to meet the stringent delay requirements of the RT users.

The common assumption in most of the literature in this area is that the relays do not have buffers to store packets for later transmission. Therefore, they have to operate in a "prompt" manner and forward their received data immediately in the following transmission interval. However, using relays that have buffering capability can make the resource allocation more flexible and enhance the system capacity. This has been investigated in several works and the throughput gain has been proved [32–38]. This gain is achieved due to the fact that the relay can store a user's data packets in a buffer when the channel condition between the relay and the user is poor, and forward them when the channel becomes good. Motivated by this, several other works have considered the application of buffer-aided relaying in different areas [39–45].

Based on the above, the combination of OFDMA and buffer-aided relaying is a promising solution for ubiquitous high-data-rate coverage in cellular networks. However, any improvement in the system performance always comes at a cost. In the case of buffering relays, this cost has been investigated in [32–35, 38] and described as the increased delay of data packets in the relays' buffers. The arguments in these works are based on the assumption that the users have infinitely backlogged buffers at the BS, meaning that they always have data to transmit. However, this approach does not take into account the fact that in realistic scenarios, especially with the emerging data services, data packets arrive in a random and bursty pattern at the BS buffers.
Therefore, the effect of this on the packet delays needs more investigation to clarify the overall tradeoffs that need to be considered in the system design.

On the other hand, employing buffer-aided relays brings new challenges for scheduling and resource allocation in cellular networks and necessitates further investigation, to identify the issues and design sophisticated algorithms for addressing them. Recently there has been growing interest in this area [46–53]. [46] proposed a throughput-optimal policy, called Maximum Sum Backlog (MSB), to provide delay fairness in buffer-aided relay-assisted cellular networks. The authors of [47] considered quasi full-duplex relaying, where a relay can receive and transmit simultaneously on orthogonal channels. Based on the Queue and Channel State Information (QCSI) and using the Max-Weight (MW) scheduling policy [54, 55], they proposed a fairness-aware resource allocation algorithm to provide ubiquitous coverage as well as load balancing. Similar work was done in [48], with the difference that it considered half-duplex relaying and proposed iterative algorithms for transmissions over two consecutive subframes (BS transmission in the first one and relay transmission in the second one) using queue-length coupling. In [49], mobile-relay-enhanced networks were discussed and a channel-and-queue-aware algorithm was proposed to utilize the system potentials. [51] studied joint time, subchannel and power allocation in LTE-A systems, where each time slot can either be used for transmissions on the BS-to-relays and BS-to-users links or on the BS-to-users and relays-to-users links. [52] proposed a three-step algorithm for joint optimization of long-term fairness and overall network throughput.
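The throughput gain from buffering that motivates this line of work can be reproduced with a toy two-hop simulation. The sketch below uses a deliberately simplified model (Bernoulli on/off links with hypothetical success probabilities, a half-duplex relay, one packet per slot), not the exact system model studied in this thesis: conventional relaying must forward in the slot right after reception, while the buffer-aided relay serves whichever hop happens to be available.

```python
import random

random.seed(7)

SLOTS = 100_000
S1, S2 = 0.7, 0.7   # hypothetical success probabilities of the BS-relay / relay-user links

# Conventional (prompt) relaying: a rigid two-slot cycle; a packet is delivered
# only if the first hop succeeds in slot t and the second hop in slot t+1.
prompt_delivered = 0
for _ in range(SLOTS // 2):
    if random.random() < S1 and random.random() < S2:
        prompt_delivered += 1

# Buffer-aided relaying: in each slot, serve whichever hop is currently good,
# preferring to drain the relay buffer when the second hop is available.
buffered_delivered, relay_queue = 0, 0
for _ in range(SLOTS):
    first_hop_good = random.random() < S1
    second_hop_good = random.random() < S2
    if relay_queue > 0 and second_hop_good:
        relay_queue -= 1
        buffered_delivered += 1
    elif first_hop_good:
        relay_queue += 1

print(f"prompt: {prompt_delivered / SLOTS:.3f} packets/slot, "
      f"buffer-aided: {buffered_delivered / SLOTS:.3f} packets/slot")
```

Under these settings the prompt scheme averages about S1*S2/2 ≈ 0.245 packets per slot, while the buffer-aided relay, by decoupling the two hops, delivers noticeably more; the corresponding end-to-end delay comparison is the subject of Chapter 2.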
In [53], the authors considered a three-node network, with one source, one relay and one destination, and, based on a Markov Decision Process (MDP), studied the optimal link selection policy for maximizing the throughput in a system with a finite relay buffer.

However, none of the above works considered service provisioning in buffer-aided relay-assisted wireless networks in the presence of users with stringent QoS requirements such as delay guarantees. Several other objectives and issues, which have already been addressed in relay networks without buffering as described earlier in this section, remain unanswered in scenarios with buffering relays. In this thesis, we aim at identifying these issues and addressing them through sophisticated algorithms, which take into account the stochastic nature of data arrivals at the BS and RSs' buffers in addition to the randomness of wireless channel conditions. In the next section, we describe in detail the research questions and objectives addressed in this thesis.

1.2 Research Questions and Objectives

Research on resource allocation in buffer-aided relay-assisted cellular networks is new and needs time to address the challenges that arise in this area. One of the important issues is to identify the main tradeoffs related to using buffers at relays. This is important as it affects the decision on using relays with or without buffers in practical systems. In the works in this area such as [32, 35, 38], only the queueing delay in the relays is investigated, and it is deemed that the use of buffers improves the throughput at the cost of increased delay. However, the question is: what is the effect on the end-to-end packet delay, i.e.,
What are the tradeoffs considering this whole picture? We address this question in Chapter 2 and provide a ground for discussing the performance of resource allocation algorithms in buffer-aided relay-assisted cellular networks, which are investigated in the subsequent chapters.

The main concerns of resource allocation in cellular networks can be classified into two categories. One is service oriented, which aims at efficient use of system resources and providing QoS for the users with specific service requirements. The other one is implementation oriented and is concerned with efficiency and low complexity in implementing the resource allocation algorithms.

To address the aforementioned concerns, we first consider service provisioning in buffer-aided relay-assisted networks for the users with stringent service requirements. In this context, we treat user satisfaction as more important than the low-complexity concerns for the algorithms. We note that resource allocation in a multiuser system for providing QoS is a complicated and challenging task, taking into account the strict delay requirements of delay-sensitive services and the average throughput requirements of delay-tolerant ones. This is especially important due to the fact that the queueing delay in the relays needs to be taken into account in meeting the deadlines of the delay-sensitive packets. Because of this, the QoS-aware algorithms already proposed for prompt relays in works such as [23–25] cannot be easily extended to the cases with buffering relays. Also, in the literature on resource allocation, the Instantaneous Resource Allocation Problem (IRAP) is usually assumed given and the goal is to design suitable algorithms for solving it. However, in a multihop network with buffering relays, the IRAP formulation for QoS provisioning is itself a challenge.
For this purpose, the definition of the utility function, for average throughput provisioning, and the imposed constraints, for meeting packet transmission deadlines, need to be investigated in addition to the algorithm used later for solving the formulated problem. In Chapter 3, we discuss this in detail and propose sophisticated policies to address it.

Next, we consider the services with less stringent requirements and address the low-complexity and efficiency concerns about resource allocation algorithms. Noting the computational burden of these algorithms, it is always desirable to divide resource allocation tasks among the entities involved in serving the users, whenever the service requirements of the users allow. Distributed resource allocation needs a suitable framework for identifying the affecting variables and the required messaging between the serving entities. In the literature, there have been many works for the scenarios with prompt relays. However, for buffer-aided relaying systems, such a framework still needs to be derived, taking into account the flexibilities brought by the use of buffers at the relays. We address this in Chapter 4.

Moreover, the constraints that might be imposed on resource allocation in practical systems need to be taken into account. In particular, the time limitations for deciding about the allocation of the resources in the presence of fast-varying channel conditions should be considered in designing efficient distributed algorithms. While the distributed frameworks derived based on mathematical tools provide insight into the affecting factors, low-complexity resource allocation algorithms are necessary for practical considerations when the users' services are not very demanding, as in the case of BE services. Furthermore, fair data admission for BE services and also constraints on the average power consumption of the serving nodes in cellular networks require careful attention to the resource allocation problem formulation.
Even though the Lyapunov drift-plus-penalty policy, studied in [51, 55], provides a useful framework for data admission and for satisfying the constraints defined in an average sense, it needs more consideration for use in cellular networks. We investigate these issues and challenges in Chapter 5.

Finally, noting that the users connected to relays experience more delay than the ones connected to the BS, it is important to design efficient scheduling algorithms for providing fairness in terms of delay among the users with different numbers of hops. The well-known MW scheduling policy [54, 55], which is usually used for stabilizing the queues and achieving the optimum throughput, has mostly been considered in multihop mesh networks, where a packet can be routed through different paths to reach the destination. However, in relay-based cellular networks with one path for packet transmission from the source BS to the destination, MW can lead to discrimination, in terms of delay, between the users at one-hop and two-hop distance from the backbone network. Although MSB [46] tries to address this, it can lead to instability in the scenarios with shared channels for the BS and relays. Therefore, new policies are needed to address fairness in terms of delay in multihop cellular networks. We study this in Chapter 6.

1.3 Contributions

In this section, we outline the contributions in this thesis to address the above mentioned issues. The main contributions are summarized as follows:

• In Chapter 2, we provide insights on the effect of buffering relays on the end-to-end delay of users' data, from the time they arrive at the source until delivery to the destination. We also analyze end-to-end packet delay in relay networks with Bernoulli data arrivals and channel conditions, and prove that the data packets experience lower average end-to-end delay in the buffer-aided relaying system compared with the conventional one.
Furthermore, using intuitive generalizations, we clarify that the use of buffers in relays improves not only the throughput but, perhaps counterintuitively, also the average end-to-end packet delay. Through computer simulations, we validate our analytical results for the systems in which the data arrival and channel condition processes follow a Bernoulli distribution. Moreover, via extensive simulations under the settings of practical systems, we confirm our intuition for general scenarios.

• In Chapter 3, we propose novel Channel-, Queue-, and Delay-Aware (CQDA) policies for providing QoS in OFDMA networks enhanced with buffering relays. CQDA policies take into account the QoS requirements of both delay-sensitive users, with the goal of meeting packet deadline constraints, and delay-tolerant users, who need guarantees on their average throughput. We provide a framework for “time domain scheduling” and “frequency domain resource allocation”, based on which the proposed CQDA policies formulate the IRAP and decide about routing and resource allocation. These policies take different approaches to determine the set of users considered in the utility function, the delay budget division between the BS and relays, the routing path of delay-sensitive users' packets, as well as the values of the minimum rate requirements for serving their queues. At the end, they use an iterative algorithm to solve the resulting IRAPs. Numerical results show significant improvements in the throughput and delay performance of the proposed resource allocation mechanisms compared with the existing algorithms.

• In Chapter 4, we propose a novel framework for distributed resource allocation in a buffer-aided relay-assisted OFDMA system. This framework models the network as a multicell scenario with small serving areas, where each of the relays and the BS serves one of these areas using shared subchannel and power resources.
It provides an insight for reducing the signaling overhead and the computational burden on the BS in practical systems with buffering relays. We formulate the power and subchannel allocation problem as a convex optimization problem and propose the Dynamic Distributed Resource Allocation (DDRA) algorithm, where the BS and relays decide about the allocation of the system resources by passing messages among themselves and based on the QCSI. Simulation results confirm that the proposed algorithm is able to utilize the system resources well, make the system queues stable and lead to high throughput.

• In Chapter 5, we propose effective parameters for instantaneous problem formulation, which adapt the well-known Lyapunov drift-plus-penalty policy to buffer-aided relay-assisted cellular networks. One of these parameters is the extra weight for the links of the relayed users from the BS and relays. The other parameter is the importance coefficient for the virtual power queues corresponding to the constraints on the average power consumption of the BS and relays. These parameters enable fair data admission for BE services and facilitate satisfying the average power constraints. Moreover, we identify the challenges that arise even when equal power allocation of the BS and relays is considered on the subchannels of an OFDMA system. We introduce a low-complexity strategy to break the ties in power and subchannel allocation. Using that, we design efficient and low-complexity distributed and centralized resource allocation methods for buffer-aided relay-assisted OFDMA networks, which take into account several practical constraints. Specifically, the proposed Efficient Dynamic Distributed Resource Allocation (EDDRA) scheme is suitable for use in practice, as it imposes less overhead on the system and splits the resource allocation tasks among the BS and relays.
Extensive simulation results show the effectiveness of the proposed parameters in meeting the objective and the constraints of the studied problem. We also show that the proposed EDDRA scheme has close performance to the proposed centralized one and outperforms an existing centralized algorithm.

• Finally, in Chapter 6, we propose a novel throughput-optimal scheduling policy which stabilizes the system queues and at the same time is efficient and fair in terms of queueing delay. We show that MW and MSB, proposed in the literature, are either unfair or unstable in the shared channel scenarios. We modify the MW policy and propose a new version of throughput-optimal algorithms, which we refer to as Modified Max-Weight (MMW). In MMW, by defining a suitably large threshold, a link's weight is proportional to just the corresponding local queue size, either in the BS or RS, almost all the time. This makes MMW suitable for use in both shared and independent channel scenarios, with either centralized or decentralized network implementations. MMW alleviates the need to report any information about the queue sizes of relayed users, almost all the time. Also, it can adjust a parameter to further improve delay fairness between the relayed and direct users. Numerical results confirm that MMW leads to similar delay for direct and relayed users and also has low signaling overhead.

1.4 Thesis Organization

This thesis is organized as follows. First, we discuss the effect of buffering relays on the end-to-end packet delay, through mathematical analysis and intuitive generalizations, in Chapter 2. The goal is to dispel the concern that the use of buffers in relays would increase the delay in the system. In Chapter 3, we discuss the challenge in IRAP formulation for providing QoS in the presence of users with heterogeneous service requirements. We propose novel CQDA policies for deciding about the parameters of the utility function and the problem constraints.
In Chapter 4, we provide a novel perspective for resource allocation and use convex optimization to derive closed-form equations for distributed power and subchannel allocation. Data admission control and efficient distributed resource allocation are proposed in Chapter 5, where we take into account several practical constraints. Chapter 6 discusses efficient throughput-optimal algorithms in relay-based cellular networks, taking into account the fairness in terms of queueing delay. The conclusion and some potential future work are presented in Chapter 7. Finally, the Appendices present the assumptions for the channel models and the proofs for the theorems.

Chapter 2

Buffer-Aided Relaying Improves Both Throughput and End-to-End Delay

2.1 Introduction

Wireless relays are promising solutions for enhancing the capacity and coverage of cellular networks. Usually in the literature in this area, it is assumed that the relaying is performed in two consecutive subslots of a transmission interval; i.e., in the first subslot, the BS transmits to the relay and in the second one, the relay forwards the received data to the destination. Recently, it has been shown that using the buffering technique at the relay can improve the system throughput [32, 35, 38, 56]. This is achieved due to the fact that the buffering capability allows the relay to store the packets when the channel condition is bad and transmit when it is good. The drawback of this capability is usually deemed to be the increase in the packet delays due to queueing in the relay, and the works in [32, 35, 38, 56] have tried to investigate and discuss the tradeoff between throughput and delay. These investigations are based on the assumption of infinitely backlogged buffers in the source, i.e., the BS, and consider the queueing delay only at the relay buffer without taking into account the queue dynamics at the BS. However, the perceived delay at the destination is
affected by the queueing both at the BS and at the relay. Therefore, the delay needs to be investigated from the time the packets arrive at the BS until their delivery to the destination.

In this chapter, we aim at filling the above-mentioned gap and clarifying the tradeoffs in using buffer-aided relaying. For this, we first present simple reasoning and discuss the cause of queue formation in a simple queueing system. Based on that, we provide an insight on the delay performance in buffer-aided and conventional relaying. Then, we study the delay performance in the case of Bernoulli data arrivals and channel conditions and derive closed-form equations for the average end-to-end packet delay, based on which we prove that the average end-to-end packet delay in buffer-aided relaying is in fact lower than that in conventional relaying. Then, we discuss general scenarios and, through intuitive generalizations, we conclude that buffering relays in fact improve the throughput as well as the average end-to-end packet delay. Using simulations, we verify our analysis and demonstrate the validity of the presented perspective. To the best of our knowledge, this is the first work that discusses the effect of buffering relays on the overall waiting time in a relaying network and provides the aforementioned insight and conclusion.

The rest of this chapter is organized as follows. Section 2.2 provides a background on the queueing delay based on a simple queueing system. In Section 2.3, we study the end-to-end delay performance of conventional and buffer-aided relaying networks. Section 2.4 provides numerical results and, finally, the conclusion is presented in Section 2.5.

2.2 Background

In this section, we study a simple queueing system and discuss the cause of packet delays, to provide a basis for the next section, which studies the end-to-end packet delay in relaying networks.

Consider a single buffer, as shown in Fig.
2.1, which is fed by a deterministic data arrival process and served by a single server.

Figure 2.1: Simple queueing system

We assume that time is divided into slots of equal length, indexed by t ∈ {1, 2, ...}. The total number of data packets that arrive at the buffer is N. Starting from t = 1, one packet arrives per time slot. Therefore, the last packet arrives at t = N. For simplicity, we assume that the arrivals occur at the beginning of time slots. The server might be active or inactive in each time slot. When it is active, it can serve only one packet per time slot, where the service implies delivering the packet to the destination.

We note that if the server is active in each time slot t ∈ {1, ..., N}, each packet will be served immediately after its arrival. In this case, no queue is formed in the buffer and, consequently, each packet experiences an overall delay of one time slot, which is due to the time spent in the server. Accordingly, the packets will arrive at the destination at the beginning of time slots t ∈ {2, ..., N + 1}. However, if the server is inactive in the first time slot, the first packet has to wait in the buffer until time slot 2 to get served. Then, in time slot 2, when the second packet arrives, the server is busy serving the first packet. Therefore, the second packet also experiences one slot of delay in the queue and one slot of delay in the server. In a similar manner, all the following packets incur the same queueing and service delays. In other words, the delayed operation of the server causes the nonzero queueing delay of the first packet, which is transferred to the subsequent packets as well.

Based on the above discussion, if the server is inactive in time slot x ∈ {1, ..., N}, it adds one slot to the queueing delay (and the overall waiting time) of every packet that is still in the system in slot x or arrives afterward.
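The slot-by-slot bookkeeping above can be checked with a short simulation. The sketch below is not part of the thesis; the inactive-slot pattern and the packet count are arbitrary choices. It delivers each packet at the start of the slot after the one in which it is served, and verifies that packet t leaves in slot t + n_t + 1, where n_t counts the inactive slots that occur before the slot in which packet t is finally served:

```python
def simulate(N, active, horizon=100):
    """Slotted single-server queue of Fig. 2.1: one packet arrives at the
    start of each slot 1..N; an active server delivers the head-of-queue
    packet during the slot, so it reaches the destination at slot + 1."""
    queue, delivered = [], {}
    for t in range(1, horizon):
        if t <= N:
            queue.append(t)                  # deterministic arrival in slot t
        if active(t) and queue:
            delivered[queue.pop(0)] = t + 1  # served during slot t
        if len(delivered) == N:
            break
    return delivered

inactive = {2, 5}                            # arbitrary interruption pattern
N = 8
d = simulate(N, lambda t: t not in inactive)
active_slots = [t for t in range(1, 20) if t not in inactive]
for t in range(1, N + 1):
    a_t = active_slots[t - 1]                # packet t is served in the t-th active slot
    n_t = sum(1 for x in inactive if x < a_t)
    assert d[t] == t + n_t + 1               # delivery slot claimed in the text
```

Because the arrival rate equals the service rate here, a queue formed by one interruption never drains, so every later interruption delays all backlogged and future packets as well.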
In general, the packet which arrived in time slot t will experience a queueing delay of n_t and will be delivered in time slot t + n_t + 1, where n_t indicates the number of slots in which the server was inactive before that packet is served. It is clear that the cause of queue formation in such systems is the interruption in the operation of the server, which is translated into queueing delays of the data packets.

2.3 Effect of Buffer-Aided Relaying on the End-to-End Delay

In this section, we first consider that data arrives in a deterministic manner and that the availability of the channels follows a Bernoulli distribution, and provide an insight on the end-to-end delay performance of conventional and buffer-aided relaying systems. Then, we analytically derive the end-to-end delay for these systems, where both the data arrival process and the availability of the channels follow a Bernoulli distribution. Finally, we discuss general cases and present the intuitions about the end-to-end delay performance. Note that in all the following discussions, it is assumed that the channel conditions of the BS and relay are uncorrelated.

2.3.1 Relaying Systems with Deterministic Data Arrivals and Bernoulli Channel Conditions

Consider a relay network with one source node, i.e., the BS, one relay node and one destination (or user) node, where the relay works based on the DF technique. It is assumed that there is no direct link between the BS and the user, and the transmissions are done only through the relay. There is only one channel in the system, which can be used for either transmissions from the BS to the relay or from the relay to the user. Each time slot is divided into two subslots, where the BS and relay can transmit in the first and second subslots, respectively.
We use c_1 and c_2 to indicate the BS channel condition (for the link between the BS and the relay) and the relay channel condition (for the link between the relay and the user), respectively. These variables can either be “Good” or “Bad”, meaning, respectively, that it is possible to transmit one or zero packets on the corresponding channel. The probability of being “Good” is s_1 and s_2 for the BS and relay channels, respectively. Fig. 2.2(a) shows the queueing model for a conventional relaying system, where the relay does not have a buffer and, therefore, has to transmit its received data immediately in the next subslot. Server 1 and server 2 indicate the wireless channel from the BS to the relay and from the relay to the user, respectively. On the other hand, Fig. 2.2(b) indicates a relaying network where the relay has a buffer, which allows it to store the data packets and transmit whenever its channel is good. In both figures, the rectangle enclosed around the servers abstracts the overall serving behavior of the system from the time that the BS starts to transmit data packets until their delivery to the user. Note that the works in [32, 35, 38, 56] in fact study the delay by considering only the time a packet spends inside this rectangle, and do not take into account the waiting time in the BS queue, which occurs before the transmission from the BS to the relay.

In the following, we consider the data arrivals in the BS buffer as the deterministic process, with N packets, mentioned in the previous subsection. Taking the overall service behavior of the systems into account, we discuss the overall waiting time of data packets in both the conventional and buffer-aided relaying systems. The overall waiting time is in fact the end-to-end delay, from the time that a packet arrives at the BS buffer until it is delivered to the user.

Fig. 2.2(c) shows the different states for the joint conditions of the BS and relay channels, in which G and B indicate “Good” and “Bad” conditions, respectively.
We note that the system with conventional relaying serves the packets only when c_1c_2 = GG, which happens with probability s = s_1s_2.

Figure 2.2: Queueing model for (a) conventional relaying system; (b) buffer-aided relaying system; (c) joint channel conditions:

c_1c_2 | probability
GG    | s_1s_2
GB    | s_1(1 − s_2)
BG    | (1 − s_1)s_2
BB    | (1 − s_1)(1 − s_2)

In the other three cases, i.e., when either or both of c_1 and c_2 are “Bad”, the packets remain in the BS buffer and are not transmitted. Therefore, based on the discussions in the previous subsection, the overall server in the system is inactive with the probability of

$$u_{nb} = P(GB) + P(BG) + P(BB) = 1 - s = 1 - s_1 s_2 \tag{2.1}$$

where u_nb indicates the interruption probability of the overall server in the system without buffering in the relay. Considering this, in each time slot, the probability of an “increase of one slot” in the overall waiting time of the packets present in that time slot or arriving after it is u_nb = 1 − s_1s_2. Here, the increase in the overall waiting time is due to the increase in the BS queueing delay of those packets.

Now consider the system where the relay has a buffer. We note that if the channel conditions are BB in time slot x, similar to the system with conventional relaying, there will be an increase of one slot in the overall waiting time of the packets present in time slot x or arriving afterward. However, for the channel conditions GB and BG, the case is different. In order to clearly investigate these states, we first consider the following example:

• In time slot t = 1, the channel conditions are GB. Therefore, in the first subslot,
packet 1 will be transmitted from the BS to the relay; but due to the “Bad” channel condition of the relay, it will not be transmitted to the user in the second subslot and will be stored in the relay's buffer.

• In time slot t = 2, the channel conditions are BG. Therefore, in the first subslot, there will not be any transmission from the BS to the relay, and the overall waiting time of packets 2, ..., N will be increased by one slot. However, due to the good condition of the relay channel, packet 1 will be transmitted from the buffer of the relay to the user in the second subslot.

In the above example, it is observed that packet 1 is served by the relay in time slot t = 2 and, therefore, it is delivered to the user at time slot t = 3. This has become possible due to the queueing of that packet in the relay's buffer. Note that with conventional relaying, however, in the above example, packet 1 would remain in the BS queue in both time slots t = 1 and t = 2, and the overall waiting time would increase by two slots for all the packets. Based on the above discussion, and considering the nonzero probability of having the channel conditions GB and BG in two consecutive time slots, it can be concluded that u_b < u_nb, where u_b is the interruption probability of the overall server in the buffer-aided relaying system. In other words, the buffering capability in the relay reduces the overall waiting time for the data packets. This is achieved due to the fact that the queue size in the BS is reduced, and the data packets transferred to the relay buffer enable the efficient use of the relay channel.

2.3.2 Relaying Systems with Bernoulli Data Arrivals and Channel Conditions

Now, we consider relaying networks where both the data arrivals and the channel conditions follow a Bernoulli distribution. We assume that in each time slot, the probability of one packet
arrival at the BS buffer is a and, as before, the probability of a “Good” channel condition for the BS and relay is equal to s_1 and s_2, respectively. It is assumed that a < s_1s_2 and, therefore, the system queues are stable in the case of conventional and buffer-aided relaying [55, Chapter 2]. In the following, when we use the subscripts b and nb for the variables, we refer to them in the case with buffering and without buffering in the relay, respectively.

Buffer-Aided Relaying System

Based on [57, Section 7.5], Fig. 2.3 shows the Markov chain model for the queue dynamics at the BS buffer in the buffer-aided relaying network, where each state represents the number of packets in the queue.

Figure 2.3: Markov chain for the number of packets in the BS buffer

Let p_n, n ∈ {0, 1, · · · }, denote the probability that, in steady state, there are n packets in the BS queue. Note that due to equilibrium in the steady state, we have:

$$p_0 = [1 - a(1-s_1)]\,p_0 + s_1(1-a)\,p_1,$$
$$p_n = a(1-s_1)\,p_{n-1} + [1 - \{a(1-s_1) + s_1(1-a)\}]\,p_n + s_1(1-a)\,p_{n+1}, \quad n = 1, 2, \ldots$$

Based on the above equations, the probability of each state can be written as

$$p_n = \rho^n p_0, \quad n = 0, 1, 2, \ldots, \tag{2.2}$$

where

$$\rho = \frac{a(1-s_1)}{s_1(1-a)}. \tag{2.3}$$

Considering the fact that $\sum_{n=0}^{\infty} p_n = 1$, we have:

$$p_0 = 1 - \rho. \tag{2.4}$$

Therefore, when a new packet arrives at the BS buffer, the expected number of packets it will see in the queue can be expressed by

$$E(Q_b^B) = \sum_{n=0}^{\infty} n\,p_n = \frac{\rho}{1-\rho} = \frac{a(1-s_1)}{s_1 - a}. \tag{2.5}$$

Note that when a new packet arrives at the BS buffer, its expected delay until departing can be split into two parts. The first part is the expected time that it has to wait until the packets already in the queue are served, i.e., E(Q_b^B)E(T_b^B), where E(T_b^B) is the expected delay imposed due to the service of each packet when it is at the head of the queue.
The second part is the expected time from when the packet itself gets to the head of the queue until its service is completed, which is denoted as E(T_b^{B*}). Therefore, the waiting time of a packet in the BS, E(D_b^B), can be written as E(D_b^B) = E(Q_b^B)E(T_b^B) + E(T_b^{B*}). This is in fact the well-known mean-value approach, which holds for queueing systems with memoryless data arrival processes [58, Section 4.3].

The interpretation of E(T_b^B) is as follows. The delay caused due to the service of a packet at the head of the queue is 1 slot with probability s_1 (this is the case in which the BS channel is good at the time that the packet gets to the head of the queue). It is (1 + 1) slots with probability (1 − s_1)s_1, (2 + 1) slots with probability (1 − s_1)²s_1, (k + 1) slots with probability (1 − s_1)^k s_1, and so on. Therefore, the expected delay caused due to the service of a packet at the head of the queue is given by

$$
\begin{aligned}
E(T_b^B) &= s_1 + (1+1)(1-s_1)s_1 + (2+1)(1-s_1)^2 s_1 + \cdots \\
&= \sum_{k=0}^{\infty} (1-s_1)^k s_1 (k+1) \\
&= s_1 \sum_{k=0}^{\infty} k(1-s_1)^k + s_1 \sum_{k=0}^{\infty} (1-s_1)^k \\
&= s_1(1-s_1)\left[\frac{d}{ds_1}\left(-\sum_{k=0}^{\infty} (1-s_1)^k\right)\right] + s_1 \frac{1}{1-(1-s_1)} \\
&= -s_1(1-s_1)\frac{d}{ds_1}\frac{1}{s_1} + s_1\frac{1}{s_1} \\
&= \frac{1-s_1}{s_1} + 1 \\
&= \frac{1}{s_1}.
\end{aligned} \tag{2.6}
$$

On the other hand, we can compute E(T_b^{B*}) as follows. Considering that the packet is at the head of the queue, its delay until the departure from the BS is equal to 0.5 with probability s_1, (1 + 0.5) with probability (1 − s_1)s_1, (2 + 0.5) with probability (1 − s_1)²s_1, (k + 0.5) with probability (1 − s_1)^k s_1, and so on. Hence, the expected waiting time of the packet when it has no queue in front of it is

$$
\begin{aligned}
E(T_b^{B*}) &= 0.5\,s_1 + (1+0.5)(1-s_1)s_1 + (2+0.5)(1-s_1)^2 s_1 + \cdots \\
&= \sum_{k=0}^{\infty} (1-s_1)^k s_1 (k+0.5) \\
&= s_1 \sum_{k=0}^{\infty} k(1-s_1)^k + 0.5\,s_1 \sum_{k=0}^{\infty} (1-s_1)^k \\
&= \frac{1-s_1}{s_1} + 0.5.
\end{aligned} \tag{2.7}
$$
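The expectations in (2.6) and (2.7) can be cross-checked by directly sampling the geometric service process. The quick sketch below is not from the thesis; the value of s_1 and the sample size are arbitrary:

```python
import random

def service_samples(s1, n=200000, seed=3):
    """Sample K, the number of slots until (and including) the first
    "Good" BS-channel slot, for a head-of-queue packet."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        k = 1
        while rng.random() >= s1:   # "Bad" slot: keep waiting
            k += 1
        out.append(k)
    return out

s1 = 0.6
samples = service_samples(s1)
mean_k = sum(samples) / len(samples)
# eq. (2.6): a head-of-queue packet occupies the server for E[K] = 1/s1 slots
assert abs(mean_k - 1 / s1) < 0.02
# eq. (2.7): the packet itself departs mid-slot, i.e. after K - 0.5 slots
assert abs((mean_k - 0.5) - ((1 - s1) / s1 + 0.5)) < 0.02
```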
Based on the above discussions, the expected delay of a packet in the BS is equal to

$$
\begin{aligned}
E(D_b^B) &= E(Q_b^B)E(T_b^B) + E(T_b^{B*}) \\
&= \frac{a(1-s_1)}{s_1-a}\,\frac{1}{s_1} + \frac{1-s_1}{s_1} + 0.5 \\
&= \frac{1}{s_1}\left[\frac{a(1-s_1)}{s_1-a} + 1\right] - 0.5 \\
&= \frac{1-a}{s_1-a} - 0.5.
\end{aligned} \tag{2.8}
$$

We note that in each time slot, either one or zero packets depart the BS. Therefore, the packet departures from the BS can be modeled as a Bernoulli process. Due to the stability of the queues, the data departure rate from the BS is equal to the data arrival rate at its buffer. Consequently, the probability that one packet departs the BS, or, equivalently, the probability that one packet arrives at the relay buffer, is equal to a. As a result, the average delay that a packet experiences in the relay can be computed in a similar manner as the average delay in the BS buffer, and is expressed by

$$E(D_b^R) = \frac{1-a}{s_2-a} - 0.5. \tag{2.9}$$

Based on (2.8) and (2.9), the average waiting time of a packet in the buffer-aided relaying system is given by

$$E(D_b) = E(D_b^B) + E(D_b^R) = \frac{1-a}{s_1-a} + \frac{1-a}{s_2-a} - 1. \tag{2.10}$$

Conventional Relaying System

Note that in the conventional relaying system, there is no buffering delay at the relay, and the service probability for serving the BS buffer is s_1s_2. Therefore, the average number of packets in the BS can be obtained by replacing s_1 with s_1s_2 in (2.5). Similarly, the average delay caused for a packet due to the service of the packets in front of it can be computed based on (2.6), using s_1s_2 instead of s_1. On the other hand, the delay that a packet experiences when it gets to the front of the queue can be obtained based on (2.7), by replacing 0.5 with 0.5 + 0.5. This is because it takes the whole time slot for the packet to be transmitted from the BS to the destination. Considering these, the end-to-end delay in the conventional relaying system can be expressed by

$$E(D_{nb}) = \frac{1-a}{s_1 s_2 - a}. \tag{2.11}$$

In order to compare the delay performance of the conventional and buffer-aided relaying systems, Theorem 2.1 states the main result of this subsection.

Theorem 2.1 Consider a relaying network where the data arrival process at the BS and the channel availability process follow a Bernoulli distribution. Then, the average end-to-end packet delay in the buffer-aided relaying system is less than or equal to that in the conventional one. In other words, we have:

$$E(D_b) \le E(D_{nb}), \tag{2.12}$$

where the equality holds only in the case that the channels are always in “Good” condition, i.e., s_1 = s_2 = 1.

The proof of Theorem 2.1 is given in Appendix B.

2.3.3 General Relaying Systems

Now consider a general scenario, where the data arrival and channel condition processes follow general distributions. We use r_br(t), r_rd(t), and r_bd(t) to denote the achievable transmission rates in time slot t between the BS and the relay, the relay and the destination, and the BS and the destination, respectively. Without buffering, the BS needs to transmit to the relay in the first subslot and then the relay has to forward the data immediately in the next subslot. We know that in this case, the end-to-end achievable rate between the BS and the user is r_bd(t) = (1/2) min{r_br(t), r_rd(t)}. Due to this, the transmission rate in each slot is limited by the link with the worst channel condition in that time slot.

However, when the relay has a buffer, there is no necessity for the immediate forwarding of the data and the above-mentioned limitation is relaxed; therefore, the BS has the opportunity to transmit continuously to the relay when the channel condition from the BS to the relay is good. Then, the relay can store the packets in the buffer and transmit when the channel from the relay to the user is good. Because of this, buffering makes it possible to improve the system throughput [32, 35, 38, 56].
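As a quick numerical check of Theorem 2.1 (a sketch, not part of the thesis; the grid spacing is an arbitrary choice), the closed-form delays (2.10) and (2.11) can be compared over feasible parameter combinations, i.e., those with a < s_1s_2:

```python
def delay_buffered(a, s1, s2):
    """Average end-to-end delay with a buffering relay, eq. (2.10)."""
    return (1 - a) / (s1 - a) + (1 - a) / (s2 - a) - 1

def delay_conventional(a, s1, s2):
    """Average end-to-end delay without relay buffering, eq. (2.11)."""
    return (1 - a) / (s1 * s2 - a)

grid = [i / 20 for i in range(1, 21)]          # 0.05, 0.10, ..., 1.00
for s1 in grid:
    for s2 in grid:
        for a in (x for x in grid if x < s1 * s2):   # stability condition
            db = delay_buffered(a, s1, s2)
            dnb = delay_conventional(a, s1, s2)
            assert db <= dnb + 1e-9            # Theorem 2.1

# Equality at s1 = s2 = 1: both systems reduce to a one-slot pipeline.
assert abs(delay_buffered(0.5, 1.0, 1.0) - delay_conventional(0.5, 1.0, 1.0)) < 1e-12
```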
Improvement in the throughput is equivalent to improvement in the end-to-end service rate of the data arriving at the BS buffer. In other words, an increase in the system throughput means that more data is transferred from the BS to the user, or, equivalently, that the same data is transferred from the BS to the user in less time. Therefore, on average, packets experience lower end-to-end delay, i.e., the delay from their arrival at the BS until delivery to the destination.

Based on the above discussion, we draw the following conclusion. Although buffer-aided relaying introduces queueing delay at the relay, it also facilitates data transfer from the BS to the user and leads to a large reduction in the queueing delay at the BS. Therefore, the overall effect is an improvement of the average end-to-end packet delay. In summary, we state this as follows.

Proposition: Using a buffer at the relay improves the system throughput, and therefore reduces the average end-to-end packet delay.

We note that when buffering is used at the relay, a scheduling policy is required to stabilize the system queues. Specifically, in each subslot, this policy should decide on allocating the channel to the BS or the relay such that the system queues remain bounded. For this, throughput-optimal algorithms should be considered. A scheduling policy is throughput-optimal if it makes the system queues stable whenever stability is feasible at all under any other policy [59]. Note that this definition assumes infinite capacities for the system buffers and takes into account the fact that data arrive finitely and in a random pattern at the BS buffer. Therefore, having stable queues ensures that the average data departure rates of all the buffers are equal to their average data arrival rates, and consequently the packets arriving at the BS are delivered to their destination with a finite average delay [60].
As a result, the maximum possible throughput is obtained, which is equal to the average data arrival rate at the BS (assuming no packets are lost or dropped). An important throughput-optimal policy in wireless networks with a fixed number of queues and stationary ergodic data arrival processes is the well-known MW method [54, 55, 60]. MW aims at maximizing the weighted rates of the links, where the weight of a link is taken proportional to the difference between the queue sizes at the two ends of the link.¹ MW is an attractive scheduling policy for stabilizing the queues in buffer-aided relay networks, as it works by utilizing just the instantaneous QCSI and does not require information about the probability distributions of the packet arrival processes and channel states. Considering the above, we summarize the costs of buffer-aided relaying in the following remark.

Remark: Note that the costs for the improvements brought by buffer-aided relaying are the requirement for memory to buffer data at the relay, and the necessity of a scheduling algorithm to keep the queues stable.

¹ It is assumed that the data packets exit the system at the destination and therefore the queue size at the destination is zero.

2.4 Numerical Results

To verify the presented discussions, we have conducted extensive Matlab simulations over 10000 time slots. We have investigated the cases where the data arrival and channel condition processes follow a Bernoulli distribution, as well as general cases with the settings of a practical system. We present the simulation results in the following and discuss the effect of buffer-aided relaying on the end-to-end packet delay.
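For concreteness, the per-subslot MW decision used for the buffer-aided results in Section 2.4.2 can be sketched as follows; this is an illustrative fragment with hypothetical names, not the exact simulator code. The scheduler serves whichever link has the larger product of differential backlog and instantaneous rate, with the destination's queue taken as zero:

```python
# Backpressure (MW) link selection for the two-hop topology: weight each link
# by (queue differential) * (instantaneous rate) and serve the larger one.
# Function and argument names are illustrative, not from the thesis.

def max_weight_choice(q_bs, q_relay, r_br, r_rd):
    """Return 'bs' to serve the BS->relay link, 'relay' for relay->user."""
    w_feeder = (q_bs - q_relay) * r_br   # differential backlog of the feeder link
    w_access = q_relay * r_rd            # destination queue is taken as zero
    return "bs" if w_feeder >= w_access else "relay"

# A congested BS with an empty relay pushes data toward the relay ...
print(max_weight_choice(q_bs=10_000, q_relay=0, r_br=1.0, r_rd=1.0))     # -> bs
# ... while a backlogged relay with a good user link gets to drain.
print(max_weight_choice(q_bs=3_000, q_relay=5_000, r_br=1.0, r_rd=1.0))  # -> relay
```

Note how the rule needs only the current queue sizes and rates (the instantaneous QCSI), with no knowledge of the arrival or channel statistics.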
As stated before, the end-to-end delay of a packet is considered as the time elapsed from the packet's arrival at the BS buffer until its delivery to the user.

2.4.1 Bernoulli Data Arrivals and Channel Conditions

In order to validate the analysis provided in subsection 2.3.2, in Figs. 2.4, 2.5 and 2.6 we present the average packet delay obtained from both the mathematical analysis and the simulation. In each of these figures, we have fixed the values of $s_1$ and $s_2$ and have evaluated the effect of an increase in a on the average end-to-end packet delay. In order to maintain the stability of the system queues, we have considered $a < s_1 s_2$. The graphs of the mathematical analysis are plotted using (2.10) and (2.11), while the graphs of the simulation results are plotted based on the delays of packets in the simulated conventional and buffer-aided relaying systems with Bernoulli data arrivals and channel conditions. Fig. 2.4 displays the case of a high probability of good channel conditions at the BS and relay, i.e., $s_1 = s_2 = 0.9$. It is clear that the simulation results are quite close to the analytical ones, which confirms the validity of the mathematical analysis. Moreover, the results confirm that buffer-aided relaying has lower packet delays compared with conventional relaying. As expected, both systems incur larger delay as the packet arrival probability increases. However, the delay in conventional relaying increases faster compared with that in buffer-aided relaying.

Figure 2.4: Average end-to-end packet delay in the case of Bernoulli channel distribution with $s_1 = s_2 = 0.9$

Furthermore, Fig. 2.5 and Fig.
2.6 show the results for the cases where either or both of the channels have a relatively lower probability of being in good condition. It is observed that when the channels have a lower probability of being in good condition, conventional relaying results in significantly higher delays even at the lower data arrival rates. In particular, the performance difference of the two relaying systems is larger in Fig. 2.5 than in Fig. 2.4, and is the largest in Fig. 2.6. This is because, when the probability of good channel conditions is low, in the case of conventional relaying the BS has to wait a long time before both channels are favorable for transmission. However, in the case of buffer-aided relaying, the BS can transmit to the relay even when the relay's channel is bad. The relay can then buffer the received data and transmit in its subslots whenever its channel is good.

Figure 2.5: Average end-to-end packet delay in the case of Bernoulli channel distribution with $s_1 = 0.5$, $s_2 = 0.9$

Figure 2.6: Average end-to-end packet delay in the case of Bernoulli channel distribution with $s_1 = s_2 = 0.5$

2.4.2 General Scenario

Note that the mathematical analysis presented in subsection 2.3.2 and the numerical results shown in subsection 2.4.1 are for Bernoulli data arrivals and channel conditions, and provide an insight into the effect of using a buffer at the relay on the average end-to-end packet delay.
In order to verify the discussions presented in subsection 2.3.3 for general data arrival and channel condition processes, we consider a scenario with more realistic settings. The simulation parameters are selected as follows. We consider a single cell with radius 1000 m where the BS is located at the center, the relay is located at a distance of 1/2 cell radius from the BS, and the user is on the cell edge (at a distance of 1/2 cell radius from the relay). The BS, relay and user antenna heights are 15 m, 10 m and 1.5 m, respectively, and the path loss model is based on [61]. The carrier frequency is 1900 MHz, the channel bandwidth is 180 kHz, and the time slot duration is set to 1 ms. The transmission power at the BS and relay is 23 dBm, and the noise power spectral density is assumed to be -174 dBm/Hz. User data traffic is assumed to be Poisson with packet sizes equal to 1 kbit. It is assumed that the channel fading is flat over the system bandwidth and constant during each time slot; however, it can vary from one slot to another. For the link between the relay and user, the Rayleigh channel model is used, and for the link from the BS to the relay, the Rician channel model is used with a κ factor equal to 6 dB [62]. In the case of conventional relaying, the transmissions at the BS and relay are done in consecutive subslots. For buffer-aided relaying, we have used the MW policy [54, 55, 60] to decide, in an adaptive way, about the transmission in each subslot either from the BS or the relay buffer.

Fig. 2.7 shows the BS and relay queue sizes over time at the data arrival rate of 50 packets/second. It is observed that with buffer-aided relaying, although there is queueing in the relay buffer, the BS queue size in each time slot is reduced significantly. This is because the BS has more flexibility to use its channel and transmit to the relay when it is in
good condition, which leads to more data departures from its queue compared with the case of conventional relaying. On the other hand, having a buffer at the relay allows it to queue the received data when the channel from the relay to the user is not good and to transmit when it is favorable. On the whole, the channel variations are used more opportunistically, and this results in faster transfer of data from the BS to the user and lower end-to-end packet delays in buffer-aided relaying compared with conventional relaying.

Figure 2.7: (a) BS queue size over time; (b) relay queue size over time, at the arrival rate of 50 packets/second

The aforementioned effect can be observed through the cumulative distribution function (CDF) of the end-to-end packet delays depicted in Fig. 2.8. Fig. 2.8 indicates that even though some packets experience higher end-to-end delay in the buffer-aided relaying system compared with the conventional one, the average end-to-end packet delay is lower in the buffer-aided relaying system. In particular, in this scenario, the average end-to-end packet delays are 11 ms and 30 ms in the buffer-aided and conventional relaying systems, respectively.

Figure 2.8: CDF of end-to-end packet delays at the arrival rate of 50 packets/second

Fig. 2.9 displays the effect of an increase in the packet arrival rate on the throughput
and delay performance. It is observed that, up to the arrival rate of 60 packets/second, conventional relaying is able to serve the arriving data and achieves the same amount of throughput. Beyond that rate, however, its low capacity leads to queue instability. The effect of this is that the data departure rates of the queues are no longer equal to their data arrival rates; therefore, the average throughput is less than the average data arrival rate at the BS, and the packets experience large end-to-end delays. In contrast, buffer-aided relaying is able to provide an average throughput equal to the average data arrival rate at the BS, at all the packet arrival rates, and therefore leads to very low end-to-end packet delays.

Figure 2.9: Effect of the packet arrival rate at the BS on (a) average throughput in each time slot; (b) average end-to-end packet delay

In order to have a complete picture, we also present the system performance at the arrival rate of 100 packets/second. Fig. 2.10(a) shows that in conventional relaying the BS queue grows unbounded; this is due to the low capacity of the relaying channel, which is unable to serve all the arriving data. This leads to large end-to-end packet delays, as depicted in Fig. 2.11. On the other hand, as shown in Fig. 2.10(b), buffer-aided relaying leads to queueing in the relay buffer, which helps to utilize the channel variations efficiently. It allows the data to be transferred from the BS buffer to the relay buffer, and from the relay buffer to the user, when the corresponding channels are in good condition, and therefore leads to low end-to-end packet delays.
In particular, in this scenario, the average end-to-end packet delays are 22 ms and 1250 ms in the buffer-aided and conventional relaying systems, respectively.

The above results confirm that using a buffer at the relay improves the throughput as well as the average end-to-end packet delay in the system.

Figure 2.10: (a) BS queue size over time; (b) relay queue size over time, at the arrival rate of 100 packets/second

Figure 2.11: CDF of end-to-end packet delays at the arrival rate of 100 packets/second

2.5 Summary

In this chapter, we have studied the effect of buffering at the relay on the end-to-end delay performance of the system. Through the discussions about queueing delay, we have explained the cause of delay in a simple queueing system. Based on that, we have provided an insight into the overall queueing delay in conventional and buffer-aided relaying networks. Then, through analytical investigation and intuitive generalization, we have concluded that employing a buffer at the relay improves the average end-to-end packet delay. Using numerical results, we have verified our analysis and discussions, and have shown that using buffers at the relay leads to higher system throughput and lower average end-to-end packet delay.

Chapter 3

Channel-, Queue-, and Delay-Aware Resource Allocation

3.1 Introduction

In the previous chapter, we showed that exploiting a buffer at the relay node leads to improvements in the system throughput as well as the average end-to-end packet delay.
Due to these advantages, it is expected that buffer-aided relays will attract a lot of attention for use in cellular networks. Recently, there has been some research on resource allocation in such networks, and novel algorithms for scheduling and subchannel allocation have been proposed [47–49]. However, it is still necessary to investigate, in such scenarios, QoS provisioning for delay-sensitive services with strict delay requirements together with delay-tolerant users having average throughput requirements. QoS-aware algorithms have already been proposed for prompt relays [23–25]; for buffer-aided relays, however, new algorithms need to be designed that take the queueing delay in the relays into account in meeting the deadlines of delay-sensitive packets. Also, works such as [23–25] have considered only one of the above QoS requirements, and therefore scenarios where users with different QoS requirements are present in the network necessitate more investigation. As will be discussed in Section 3.3, in order to meet the deadlines of delay-sensitive packets, minimum transmission rates need to be considered over the links; however, depending on the routing path from the BS to the end users, the links with minimum rate requirements will differ. Due to this, and also depending on the method used to compute these minimum rate requirements, different constraints will be imposed on the network in each scheduling period. All of these factors affect the capacity available for providing average throughput guarantees to delay-tolerant users. In this chapter, our goal is to investigate the above issues and propose potential policies for addressing them.

In particular, we study QoS-aware routing and subchannel allocation in the downlink of time-slotted OFDMA networks enhanced with buffering relays.
The main contributions of our work are summarized as follows:

• We consider a heterogeneous service environment, where the goal is to meet the packet deadlines of delay-sensitive users and, at the same time, maintain queue stability in the system, to ensure that the average throughput seen by each delay-tolerant user matches the average data arrival rate at the BS.

• We present a framework for Time Domain Scheduling (TDS) and Frequency Domain Resource Allocation (FDRA), through which we show that the IRAP formulation for meeting the stated QoS objectives is itself a challenge. Based on this framework, we identify the issues that need to be addressed in formulating the IRAP and in the design of scheduling and routing algorithms.

• We propose novel CQDA policies to define the parameters needed for the IRAP formulation in each time slot. Specifically, these policies take several steps to determine the utility function and the minimum rate constraints over the links and, as the last step, use an iterative algorithm to solve the problem they have formulated.

• Using extensive simulations, we evaluate the performance of the proposed policies and show that they are able to utilize the system resources well to reach the stated objectives.

The rest of this chapter is organized as follows. Section 3.2 describes the system model for a buffer-aided relay-enhanced OFDMA network. In Section 3.3, we outline the resource allocation objectives and discuss the challenges of formulating the IRAP. Section 3.4 presents the proposed CQDA policies and practical considerations. Simulation results are provided in Section 3.5, and Section 3.6 gives the conclusions.

3.2 System Model

As shown in Fig. 3.1, we consider the downlink of a time-slotted OFDMA system in a single cell with K users, M relays and N subchannels. Each subchannel consists of several subcarriers, to reduce the overhead of signaling about the channel conditions and assignments.
Users, relays and subchannels are indexed by k ∈ {1, ..., K}, m ∈ {1, ..., M} and n ∈ {1, ..., N}, respectively. Users who are close to the BS and have good link quality are served only by the BS. We refer to these users as "close" users and group them in the set K̀. On the other hand, the users far from the BS are also assigned to one or more relays, from which they can receive high signal strength and, therefore, a high data rate. We refer to these users as "far" users and group them in the set Ḱ. These terms are used instead of "direct" and "relayed" because we assume that all users have a direct connection to the BS, whereas "relayed user" might be misinterpreted as a user that has no direct connection to the BS. Further, we use the term "direct" in the following to specify the link type, and it would cause ambiguity if the term "direct user" were used. We use $\mathcal{K}_m$ to denote the set of users that can receive service from node m = 0, 1, ..., M, where m = 0 indicates the BS and therefore $\mathcal{K}_0$ = {1, ..., K}. Similarly, we use $\mathcal{M}_k$, k ∈ {1, ..., K}, to refer to the serving nodes of user k, which could be one or more relays and/or the BS; e.g., $\mathcal{M}_2$ = {0, 1, 6} indicates that user 2 is able to receive data from the BS and relays 1 and 6. We assume that the sets of users and relays defined above are determined at the beginning of the users' connections to the network and remain unchanged.

Figure 3.1: System model

The transmission links are uniquely classified into two types. The first are the links between a serving node m, m = 0, ..., M, and the users that can receive data from it, i.e., k ∈ $\mathcal{K}_m$. The variables defined for this type of link have the superscript d beside the letter m, which indicates that the link is a "direct" one from node m to the user; e.g., $l_k^{d,m}$ denotes the direct link from node m to user k, and $e_{kn}^{d,m}$ indicates the channel gain of this link on subchannel n.
The second type of links are the far users' feeder links between the BS and the relays, for which we use the superscript f to indicate that the link is a "feeder" link between the BS and relay m, m = 1, ..., M. It is important to note that in this case the letter m does not indicate the transmitting node but the receiving one; e.g., $l_k^{f,m}$ denotes the feeder link of user k from the BS to relay m, and $e_{kn}^{f,m}$ denotes the channel gain of this link on subchannel n. We assume that the channel gains of the links vary over time and frequency, but remain constant on each subchannel during a time slot.

It is assumed that the BS and relays are equipped with buffers; the BS buffers are fed by exogenous packet arrival processes and served by transmissions from the BS to users or relays. The buffers in the relays, on the other hand, are fed by the BS transmissions on the feeder links and are served by transmissions to their users. We use $Q_k^m(t)$, k ∈ $\mathcal{K}_m$, to denote the number of bits queued in the buffer of user k in node m, m = 0, ..., M, at time slot t. We also assume that the relays are quasi full-duplex and have the ability to receive and transmit in the same time slot but on different subchannels. As a result, in each time slot some subchannels can be used for transmissions on direct links (from the BS or relays to users) and some for feeder links (from the BS to relays). However, there is no frequency reuse inside the cell, and each subchannel can only be assigned to one link.²

For simplicity of the system model, it is assumed that a fixed power p is used for transmission on any subchannel by any node, and that this power is equally distributed over the subcarriers forming the subchannel.
Assuming that M-ary quadrature amplitude modulation (QAM) is used for the transmissions, the achievable transmission rate in each time slot between node m and user k on subchannel n can be computed as follows [63]:

\[
r_{kn}^{d,m} = BT \log_2\left(1 + \frac{p\, e_{kn}^{d,m}}{\Gamma_k B \nu_0}\right), \tag{3.1}
\]

where B and T are the bandwidth of a subchannel and the time slot duration, respectively, $\nu_0$ denotes the noise spectral density, and $\Gamma_k$ is the signal-to-noise ratio (SNR) gap due to the limited number of coding and modulation schemes, which is related to the bit error rate (BER) of user k ($\mathrm{BER}_k$) through $\Gamma_k = -\ln(5\,\mathrm{BER}_k)/1.5$ [63]. In a similar way, based on $e_{kn}^{f,m}$ and $\Gamma_k$, we can define $r_{kn}^{f,m}$ as the achievable transmission rate of the feeder link of user k between the BS and relay m on subchannel n. Note that, due to the different BER requirements of the users, the achievable rates on the feeder link from the BS to relay m can differ between two users. Using (3.1), the number of bits transmitted in each slot on the link from node m to user k is equal to

\[
r_k^{d,m} = \sum_{n=1}^{N} x_{kn}^{d,m} r_{kn}^{d,m}, \tag{3.2}
\]

where $x_{kn}^{d,m}$ denotes the subchannel allocation indicator, which equals one if subchannel n is used for transmission on the link from node m to user k, and zero otherwise. Similarly, $r_k^{f,m}$ can be computed based on $x_{kn}^{f,m}$ and $r_{kn}^{f,m}$. Note that in the following, depending on the context, we may use $e_{ln}$, $r_{ln}$, $x_{ln}$ to denote $e_{kn}^{d,m}$, $r_{kn}^{d,m}$, $x_{kn}^{d,m}$ when $l \in \{l_k^{d,m}\}$, or $e_{kn}^{f,m}$, $r_{kn}^{f,m}$, $x_{kn}^{f,m}$ when $l \in \{l_k^{f,m}\}$.

² Although in cooperative transmissions the BS and relays can transmit simultaneously on the same subchannel, for simplicity we do not consider this possibility.

We consider a heterogeneous traffic scenario, where some users have delay-tolerant traffic and others have delay-sensitive traffic.
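As a quick numerical illustration of (3.1) (our sketch; the default parameter values echo the simulation settings of Chapter 2, with 23 dBm approximated as 0.2 W):

```python
# Sketch of the per-subchannel rate (3.1): r = B*T*log2(1 + p*e / (Gamma*B*nu0)),
# with the SNR gap Gamma_k = -ln(5*BER_k)/1.5. Parameter defaults follow the
# simulation settings quoted earlier (180 kHz subchannel, 1 ms slot,
# -174 dBm/Hz noise density); p = 0.2 W approximates 23 dBm.
import math

def snr_gap(ber):
    return -math.log(5 * ber) / 1.5

def rate_bits_per_slot(gain, ber, p=0.2, bw=180e3, slot=1e-3, noise_dbm_hz=-174.0):
    nu0 = 10 ** (noise_dbm_hz / 10) / 1000  # noise spectral density in W/Hz
    snr = p * gain / (snr_gap(ber) * bw * nu0)
    return bw * slot * math.log2(1 + snr)
```

A stricter BER target enlarges the gap $\Gamma_k$ and lowers the achievable rate, which is why the same feeder link can carry different rates for different users.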
Delay-tolerant traffic does not have strict deadlines for its packets, but requires an average throughput guarantee to make sure its packets are delivered to the destination with a finite average delay. On the other hand, the packets of delay-sensitive traffic have a maximum allowed delay, after which they expire and are therefore dropped from the corresponding buffers. We assume that the packet arrivals of all traffic streams at the BS are stationary and ergodic processes with finite mean and variance. We use $\mathcal{K}_a$ and $\mathcal{K}_b$ to denote the sets of delay-sensitive and delay-tolerant users, respectively. For any k ∈ $\mathcal{K}_a$, $\hat{W}_k$ indicates the maximum allowed delay for its packets. It is assumed that each packet of delay-sensitive traffic is tagged with a number that is updated every time slot and shows the packet's delay since its arrival in the BS queue. We use $I_k^m(t)$, $W_{ki}^m$, and $L_{ki}^m$ to denote, respectively, the number of packets at time slot t, the delay, and the size of the i-th packet in the queue of user k in node m. The packets are indexed in the order of their arrival; therefore, the packet with index 1 is also referred to as the Head of Line (HoL) packet. It is clear that $Q_k^m(t) = \sum_{i=1}^{I_k^m(t)} L_{ki}^m$. We assume that the buffers have infinite capacity and therefore no packets are dropped due to buffer overflow. As a result, the number of packet drops is always zero for delay-tolerant users.

We assume that a centralized scheduler in the BS has perfect knowledge of the queue sizes and packet delays in the BS and relay buffers, as well as the channel states of all the links, based on which it decides about the allocation of subchannels.
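The per-queue bookkeeping just described — packets tagged with size $L$ and a delay counter $W$ updated every slot, queue size $Q = \sum L$, and expired packets dropped — can be sketched as follows (the class and method names are ours, for illustration only):

```python
# Minimal sketch of a delay-sensitive queue: each packet carries (size L,
# delay W); every slot the delays are incremented and packets with W > W_hat
# are dropped. Names are illustrative, not from the thesis.
from collections import deque

class DelaySensitiveQueue:
    def __init__(self, max_delay):
        self.max_delay = max_delay          # W_hat_k, in slots
        self.packets = deque()              # (L_bits, W_slots); HoL at the left

    def arrive(self, size_bits):
        self.packets.append((size_bits, 0))

    def tick(self):
        """Advance one slot: age every packet, drop expired ones, return drops."""
        aged = [(L, W + 1) for (L, W) in self.packets]
        self.packets = deque((L, W) for (L, W) in aged if W <= self.max_delay)
        return len(aged) - len(self.packets)

    def queue_size(self):
        return sum(L for (L, _) in self.packets)  # Q_k^m(t) = sum_i L_ki^m
```

A delay-tolerant queue would use the same structure with the expiry check disabled, consistent with its drop count always being zero.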
3.3 Quality-Of-Service-Aware Resource Allocation Problem

In this section, we first explain the main objectives and then discuss the challenges of formulating the corresponding IRAP.

3.3.1 The Main Objectives

As explained in the previous section, we consider both delay-tolerant and delay-sensitive users. Delay-sensitive users have strict delay constraints for their packets; therefore, one of the objectives of the scheduler is to allocate enough resources to these users to make sure that their packets arrive at the receiver on time, or at least to keep the number of expired packets low. On the other hand, although the delay-tolerant users do not have strict deadlines for their packets, the scheduler aims at providing them with an average throughput equal to the average data arrival rates at their queues in the BS. This is equivalent to guaranteeing that their packets experience finite average queueing delays in the buffers of the BS and relays [55]. Therefore, the other objective is to make sure that the queues belonging to delay-tolerant users are stable, i.e., that their sizes remain bounded.³ Based on the above, the main objectives can be summarized as providing the following, subject to the system's physical constraints (mentioned in the previous section, i.e., the limitations on the system resources and their usage):

\[
1)\quad W_{ki}^m \le \hat{W}_k, \qquad m \in \{0, \dots, M\},\ k \in \mathcal{K}_m \cap \mathcal{K}_a,\ i \in \{1, \dots, I_k^m(t)\},\ t = 0, 1, \dots \tag{3.3a}
\]
\[
2)\quad \limsup_{\tau \to \infty} \frac{1}{\tau} \sum_{t=0}^{\tau-1} E[Q_k^m(t)] < \infty, \qquad m \in \{0, \dots, M\},\ k \in \mathcal{K}_m \cap \mathcal{K}_b \tag{3.3b}
\]

³ Note that the queues of delay-sensitive users are always stable due to the drops of expired packets.
where (3.3a) means that the delay of any packet of a delay-sensitive user k in its queues in the BS and any serving relay should be less than the threshold $\hat{W}_k$, and (3.3b), which is based on the definition of strong stability [55], states its satisfaction for the queues of delay-tolerant users in the BS and relays. In these equations, E[·] indicates the expected value. Note the underlying assumption that an admission control entity in a higher layer exists to make decisions about admitting users while taking their QoS requirements into account. Based on those, it defines the above objectives, and the scheduler then aims at provisioning them. Therefore, if, for example, an admitted user's data arrival rate is higher than another's, the scheduler will need to provide more service to it to keep its queues stable; or, if a deterministic delay bound is also desired for admitted delay-tolerant users, the objective (3.3a) will be applied to them instead of (3.3b), and the scheduler will need to treat them as delay-sensitive users by tagging their packets with their delay. Also, since our goal is to investigate service provisioning for users with QoS requirements, we do not consider BE users and the subject of fair service provisioning for them. These types of users can be served based on PF scheduling, by considering dedicated subchannels for them, or with the resources left unused after allocating the subchannels to the users with QoS requirements.

3.3.2 Challenges of IRAP Formulation

The objectives stated above are to be satisfied in the long term. For this purpose, these objectives need to be translated into IRAPs over the time slots to come.
In other words, a dynamic scheduling mechanism should monitor the channel states, queue sizes and packet delays in each time slot and, based on their instantaneous values, first formulate the specific problem to be addressed in that time slot and then use an algorithm to solve it.

Regarding the second objective, it has been shown in [54, 55] that in a multi-hop wireless network, applying the well-known MW policy can make all the queues stable, as long as it is feasible to do so with any other algorithm. MW reaches this objective by considering the IRAP as maximizing the sum of the weighted rates of the links in each time slot, subject to the constraints of the system. The weight of a link is considered equal to the difference of the corresponding queue sizes at the beginning and end points of that link.⁴

However, in our system the challenge is how to define and include the constraints that the delay-sensitive users impose on the network (through the first objective), while reserving enough capacity for stabilizing the queues of delay-tolerant users. In the literature on single-hop networks, in order to meet the deadlines of delay-sensitive packets and maximize the throughput of delay-tolerant users, resource allocation is usually divided into time domain and frequency domain stages: a time domain scheduler determines a minimum transmission rate for the delay-sensitive users in each slot, and a frequency domain scheduler then allocates the subchannels first to satisfy the minimum rate requirements and then to maximize the throughput of the delay-tolerant users [64].
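The time-domain stage of such single-hop designs can be sketched with a simple, purely illustrative earliest-deadline rule (this is not the CQDA policy proposed later): the bits whose remaining lifetime falls within a lookahead horizon must be drained within that horizon, which yields the kind of per-slot minimum rate the frequency-domain stage then has to honor:

```python
# Illustrative time-domain rule (not the CQDA policy proposed later): bits of
# packets whose remaining lifetime is at most `horizon` slots must leave the
# queue within that horizon, giving a per-slot minimum rate requirement.

def min_rate_bits_per_slot(packets, max_delay, horizon=1):
    """packets: iterable of (size_bits, delay_slots); returns the minimum rate."""
    urgent_bits = sum(L for (L, W) in packets if max_delay - W <= horizon)
    return urgent_bits / horizon

# Example: a 1000-bit packet one slot from expiry dominates the requirement.
print(min_rate_bits_per_slot([(1000, 9), (500, 3)], max_delay=10, horizon=1))  # -> 1000.0
```

Even this toy rule already exposes the two-hop difficulty: for a relayed user, it is unclear which link's queue the rule should be applied to, and with what share of the deadline.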
However, those works do not consider the queue stability of delay-tolerant users, and it is also not clear how to extend their policies to two-hop networks.

To clarify the above, we present the QoS-aware cross-layer scheduling and resource allocation with the following framework for formulating an optimization problem at the BS, and discuss the challenges. In fact, for translating the objectives in (3.3a) and (3.3b) into IRAPs over the time slots, the BS considers this framework in each time slot:

1. Determine $\mathcal{K}_m^o \subseteq \mathcal{K}_m$, ∀m, and $R_k^{y,m}$, for y = d, m = 0, ..., M, k ∈ $\mathcal{K}_m$, and for y = f, m = 1, ..., M, k ∈ $\mathcal{K}_m$, for the following FDRA problem, in a way that meets the packet deadlines and provides queue stability. $\mathcal{K}_m^o$ is the set of users to be considered in the utility function (3.4a), and $R_k^{y,m}$ is the minimum rate constraint for the transmission on link $l_k^{y,m}$ (from the queue at the beginning point of the link), also denoted by $R_l$, $l \in \{l_k^{y,m}\}$, for simplicity.

2. Solve the instantaneous frequency domain resource allocation problem stated in (3.4):

\[
\max_{x}\ \sum_{m=0}^{M} \sum_{k \in \mathcal{K}_m^o} \sum_{n \in \mathcal{N}} x_{kn}^{d,m} w_k^{d,m} r_{kn}^{d,m} + \sum_{m=1}^{M} \sum_{k \in \mathcal{K}_m^o} \sum_{n \in \mathcal{N}} x_{kn}^{f,m} w_k^{f,m} r_{kn}^{f,m}, \tag{3.4a}
\]
subject to
\[
C1:\ \sum_{n \in \mathcal{N}} x_{kn}^{y,m} r_{kn}^{y,m} \ge R_k^{y,m}, \tag{3.4b}
\]
\[
C2:\ \sum_{n \in \mathcal{N}} \Big[ x_{kn}^{d,0} r_{kn}^{d,0} + \sum_{m \in \mathcal{M}_k} x_{kn}^{f,m} r_{kn}^{f,m} \Big] \le Q_k^0, \quad \forall k, \tag{3.4c}
\]
\[
C3:\ \sum_{n \in \mathcal{N}} x_{kn}^{d,m} r_{kn}^{d,m} \le Q_k^m, \quad m \in \{1, \dots, M\},\ k \in \mathcal{K}_m, \tag{3.4d}
\]
\[
C4:\ \sum_{m=0}^{M} \sum_{k \in \mathcal{K}_m} x_{kn}^{d,m} + \sum_{m=1}^{M} \sum_{k \in \mathcal{K}_m} x_{kn}^{f,m} \le 1, \quad \forall n, \tag{3.4e}
\]
\[
C5:\ x_{kn}^{d,m} \in \{0, 1\}, \quad \forall n,\ \forall m \in \{0, \dots, M\},\ k \in \mathcal{K}_m, \tag{3.4f}
\]
\[
C6:\ x_{kn}^{f,m} \in \{0, 1\},\ \forall k \in \mathcal{K}_m;\quad x_{kn}^{f,m} = 0,\ \forall k \notin \mathcal{K}_m,\ \forall m \in \{1, \dots, M\},\ \forall n. \tag{3.4g}
\]

In (3.4), $x = \{x_{kn}^{d,m}\} \cup \{x_{kn}^{f,m}\}$ is the set of all indicator variables.

⁴ It is assumed that the data packets exit the system when they reach their users and therefore the queue sizes in the user nodes are zero.
We set the weights to $w^{d,m}_k = \alpha^{d,m}_k \beta_k Q^m_k$ and $w^{f,m}_k = \alpha^{f,m}_k \beta_k (Q^0_k - Q^m_k)$ to provide stability (achieved by including the queue backlogs) and also to prioritize users' links based on their locations and QoS requirements (captured by the $\alpha^{y,m}_k \beta_k$ coefficients). C1 states the minimum rate constraints that need to be determined in order to meet the packet deadlines; C2 and C3 state the maximum number of bits that can actually be sent from the queues of the BS and relays, respectively; C4 ensures that each subchannel is allocated exclusively to a single link; and C5 and C6 define the possible values of the channel allocation indicators of the direct and feeder links, respectively.

It is seen that, to formulate the IRAP, its parameters need to be determined first. We note that the utility function (3.4a) has been chosen based on the MW policy (also known as the backpressure routing method) [54]. This way, subchannels are allocated to the links that have good channel conditions and/or large differential queue backlogs at their two ends. As a result, data packets are automatically routed through the links that have good channel conditions and/or less queue congestion at the next hop. Also, if a queue does not get serviced for a while (e.g., when the channel conditions of its serving links are poor) and its size increases, its weight grows large; this leads to it being served, which in turn reduces its size. The issue here is how the set of users $\mathcal{K}^o_m$, $\forall m$, in the utility function (3.4a) should be determined: should it include only delay-tolerant users, or delay-sensitive users as well?

The other issue is how the minimum rate requirements of a delay-sensitive user should be defined.
Should they be defined in every time slot, or only in some specific time slots? Also, we note that the queues of users exist both in the BS and in the relays, and it is not known beforehand how long the packets will be delayed in each queue. Therefore, another important decision is the route of the delay-sensitive packets of far users. In other words, should these packets be transmitted directly from the BS or through relays? This decision determines on which links the minimum rate constraints (3.4b) should be imposed. A further challenge is how to divide the delay budget between the BS and relays for the delay-sensitive users, in case their packets are routed through the relays. This is important for computing the values of the minimum transmission rate requirements mentioned above.

All of the above issues are inter-related and affect the IRAP formulation. To the best of our knowledge, due to the individual delay requirements of delay-sensitive packets, the stochastic nature of data arrivals and channel variations, and the complexity of the network architecture, the design and analysis of an optimal method is highly intractable. Therefore, in the next section we propose different suboptimal policies for addressing the stated issues in the IRAP formulation and for solving the resulting IRAPs. Then, in Section 3.5, we use simulations to verify the performance of the proposed policies.

3.4 Quality-of-Service-Aware Cross Layer Scheduling and Resource Allocation

In this section, we propose novel QoS-aware resource allocation policies for addressing the challenges mentioned in the previous section.
We present in detail the steps that these policies take to formulate and solve the IRAP, and discuss the practical considerations for them.

3.4.1 CQDA Policies

CQDA policies first decide on the parameters needed for the utility function and the minimum rate constraints, in order to formulate the IRAP in each time slot; owing to their different approaches, each of these policies results in different parameters and, therefore, a different instance of problem (3.4). Then, the policies use a similar iterative subchannel allocation algorithm to solve their formulated problem. Fig. 3.2 shows the main steps and substeps that these policies take in every time slot. Based on the QCSI and packet delays in each time slot, the BS can use CQDA policies to decide on the routing and scheduling of the users' packets, in order to reach the main objectives stated in (3.3).

1. Determine the set of users for the utility function, i.e., $\mathcal{K}^o_m$, and the weights of the links.
2. Define the minimum rate constraints:
   • Determine the delay budgets over the links of delay-sensitive users from the BS and relays.
   • Determine the routing path for the packets of the delay-sensitive users in the BS.
   • Compute the minimum rate requirements on the links of delay-sensitive users.
3. Formulate the IRAP and solve it.

Figure 3.2: Flowchart of the IRAP formulation

The approaches that the proposed policies take for parameter decision and IRAP formulation can be summarized as follows.

a) Policy 1: Separate Utility and Minimum Rate (SUMR) Definitions. In this policy, the utility function in every time slot is defined based only on the channel and queue states of the delay-tolerant users in that time slot. On the other hand, the minimum rate requirements of the delay-sensitive users are computed and applied whenever there are packets in their queues at the BS or relays.
In fact, the problem formulation approach of this policy targets the provisioning of (3.3a) and (3.3b) separately: it guarantees the service of delay-sensitive users through the minimum rate constraints only, and the service of the delay-tolerant users through the utility maximization.

b) Policy 2: Joint Utility and Minimum Rate (JUMR) Definitions. This policy defines the utility function in each time slot based on the channel and queue states of all the users in that time slot, but with higher priorities for delay-sensitive users. In this policy, the minimum rate requirements are defined and considered only when there are packets that have reached the thresholds determined in step 2 and have not yet been transmitted under the effect of utility maximization in earlier time slots. The approach this policy uses for problem formulation thus aims at provisioning (3.3a) and (3.3b) jointly: it guarantees the service of all the users through the utility maximization, except when delay-sensitive packets are due, in which case the minimum rate constraints are also applied.

In the following, we explain in detail all the steps that these policies take to formulate the IRAP in each time slot, as well as the specific differences between them in each step.

Step 1) Determine the set of users for the utility function and the weights of the links.

a) In the SUMR policy, $\mathcal{K}^o_m = \mathcal{K}_m \cap \mathcal{K}_b$, $\forall m$. This way, the links considered in the utility function are exactly those belonging to delay-tolerant users.
For the $\alpha$ coefficients in the weights of the links in (3.4a), this policy uses

$$\alpha^{y,m}_k = \begin{cases} \dfrac{1}{\bar{r}^{d,m}_k}, & y=d,\; m=0,\; \forall k \\[6pt] \dfrac{1}{2\min\big(\bar{r}^{f,m}_k,\, \bar{r}^{d,m}_k\big)}, & y \in \{d,f\},\; m=1,\dots,M,\; k \in \mathcal{K}_m \end{cases} \tag{3.5}$$

where $\bar{r}^{y,m}_k$ is the average achievable rate of link $l^{y,m}_k$ on a single subchannel (considering the mean channel gain, i.e., without the effect of small-scale fading). By adjusting the weights of the links through these values of $\alpha$, we try to compensate for the effect of distance on transmission rates and provide some fairness. For example, if user $k$ is far from the BS and the channel condition for direct transmission from the BS to it is poor on average, it will have a larger $\alpha^{d,0}_k$ and will therefore be selected for service more often. Note that, according to the second line of (3.5), the feeder link from the BS to a relay and the direct link from that relay to the user are assigned the same value of $\alpha$. This value is computed from the average achievable rate of whichever link has the poorer channel conditions on average, as it is the one that limits the overall two-hop transmission rate; the coefficient $\frac{1}{2}$ is
Furthermore, in order to prevent the cases in which the HoL packets in severalqueues reach a deadline and require minimum rates that might not be met altogether,the delay-sensitive users’ queues should be provided with higher service rates thanthose of delay-tolerant users. For this purpose, other than setting α coefficients asin (3.5), a higher priority is also considered for them, which is applied on the weightsof their links through setting their β coefficients to a value several times larger than1. In our simulations, we have found that using the inverse of the maximum alloweddelay of each user, i.e., βk = 1Wˆk , ∀k ∈ Ka, gives a good performance. This is becauseWˆk is usually in the order of about one hundred millisecond, the inverse of whichgives delay-sensitive users more than five times priority over delay-tolerant users.Step 2) Define the minimum rate constraints. For defining the minimum rateconstraints, each of the policies need to decide about the delay thresholds for transmissionsover the links, the routing paths and the values of minimum rate requirements for delay-sensitive packets. Therefore, this step can be further split into the substeps explained inthe following.Substep 2-1) Determine the delay budgets over the links of delay-sensitive users fromthe BS and relays5The β coefficients can be considered differently, in cases that users are also classified based on othermetrics than the traffic classes, like different service plans.52Chapter 3. Channel-, Queue-, and Delay-Aware Resource AllocationIn this substep, the goal is to consider a rule for delay budget division of delay-sensitivepackets on the two hops. This is to make sure that in the case of routing through relays,the delay-sensitive packets will have enough time left for transmissions from the relay totheir users before expiring. 
Based on this, in the next substeps, the policies decide on the routing and the values of the minimum rate requirements.

For this purpose, we consider a delay threshold corresponding to the links of the users: define $D^{y,m}_k$ as the delay threshold for transmissions on link $l^{y,m}_k$, from the queue at the beginning point of the link; depending on the context, we also denote this by $D_l$. Since packets are tagged with their delay since arrival at the BS (not arrival at the current queue, which may be in a relay), the thresholds for direct transmissions from the BS and relay queues are equal to the packets' own thresholds, i.e., $D^{d,m}_k = \hat{W}_k$ for $m = 0,\dots,M$, $k \in \mathcal{K}_m \cap \mathcal{K}_a$. Consequently, the delay budget division can be specified by defining a delay threshold for the feeder links between the BS and relays, as explained in the following.

a) SUMR shares the delay budget between the BS and relays in two-hop transmissions based on the average transmission rates of the BS-to-relay and relay-to-user links. Specifically, we have

$$D^{f,m}_k = \operatorname{round}\!\left(\hat{W}_k \frac{\bar{r}^{d,m}_k}{\bar{r}^{f,m}_k + \bar{r}^{d,m}_k}\right), \quad m = 1,\dots,M,\; k \in \mathcal{K}_m \cap \mathcal{K}_a. \tag{3.6}$$

The intuition behind (3.6) is that the transmission time of a packet is proportional to the inverse of its transmission rate: if the length of packet $i$ in the queue of user $k$ at the BS is $L^0_{ki}$, an estimate of the average time for its (continuous) transmission from the BS to relay $m \in \mathcal{M}_k$, and from relay $m$ to user $k$, on a single (dedicated) subchannel is $\tau_1 = \frac{L^0_{ki}}{\bar{r}^{f,m}_k}$ and $\tau_2 = \frac{L^0_{ki}}{\bar{r}^{d,m}_k}$, respectively. Noting that $\frac{\tau_1}{\tau_1+\tau_2} = \frac{\bar{r}^{d,m}_k}{\bar{r}^{f,m}_k + \bar{r}^{d,m}_k}$, SUMR uses this estimated ratio to compute the delay budget share of the feeder link.
This is due to the fact that JUMR tries to serve the queues ofthe delay-sensitive users mostly through the utility maximization (as described inthe previous step), and therefore, the routing path and the transmission rates areautomatically specified by subchannel allocation to the links (this will be clarifiedlater). Consequently, it does not need to work based on the delay budget division,except in the last moments of packet deadline; i.e., based on the above setting forDf,mk , JUMR starts to decide about the routing and the minimum rates for the queuesof far delay-sensitive users in the BS, one slot before the deadline. This way, if thetwo-hop transmission is decided in the next substep, there will be time for that: onetime slot for BS to relay transmission and one time slot for relay to user transmission.Substep 2-2) Determine the routing path for the packets of the delay-sensitive users inthe BS 6The main goal in this substep is to decide if the packets in the queues of far delay-sensitive users in the BS should be transmitted through the direct or feeder links fromthe BS. Based on this decision, the minimum rate requirements of those packets are thenimposed on the direct or feeder links from the BS. For the packets in the queues of closedelay-sensitive users in the BS and the queues of far delay-sensitive users in the relays,there is no need for routing and the transmissions are done on the corresponding directlinks between the BS/relay and those users. We show this bysmk = ld,mk , m = 0, k ∈`K ∩ Ka; m = 1, ...,M, k ∈ Km ∩ Ka (3.7)6Note that for the delay-tolerant users, packets are always routed automatically as a result of thesubchannel allocations based on the utility maximization.54Chapter 3. Channel-, Queue-, and Delay-Aware Resource Allocationwhere smk indicates the link over which a minimum rate will be determined for transmissionsfrom the queue of user k in node m. 
To propose a routing method for the packets in the BS queues of far delay-sensitive users, we define the following routing metric:

$$\rho^m_k = \frac{\bar{r}^{d,m}_k}{\bar{W}^m_k}, \quad k \in \acute{\mathcal{K}} \cap \mathcal{K}_a,\; m \in \mathcal{M}_k \tag{3.8}$$

where $\acute{\mathcal{K}}$ denotes the set of far users and $\bar{W}^m_k$ is the average delay of the data bits present in the queue of user $k$ at node $m$ in the current time slot, defined as

$$\bar{W}^m_k = \frac{\sum_{i=1}^{I^m_k(t)} L^m_{ki} W^m_{ki}}{\sum_{i=1}^{I^m_k(t)} L^m_{ki}}, \quad k \in \acute{\mathcal{K}} \cap \mathcal{K}_a,\; m \in \mathcal{M}_k. \tag{3.9}$$

Based on this, the routing algorithm of each policy is presented in the sequel.

a) SUMR performs the routing of the far delay-sensitive users in every time slot, based on their packet delays in the BS queues, as follows:

– If $\hat{W}_k - W^0_{k1} = 0$, $k \in \acute{\mathcal{K}} \cap \mathcal{K}_a$, consider direct transmission from the BS and set $s^0_k = l^{d,0}_k$.

– If $\hat{W}_k - W^0_{k1} > 0$, $k \in \acute{\mathcal{K}} \cap \mathcal{K}_a$, find $\hat{m} = \arg\max_m \rho^m_k$. If $\hat{m} = 0$, consider direct transmission from the BS and set $s^0_k = l^{d,0}_k$; otherwise, consider forwarding to relay $\hat{m}$ and set $s^0_k = l^{f,\hat{m}}_k$.

The logic behind the routing metric and algorithm is as follows. If the HoL packet in the BS queue of user $k$ has not been sent yet and has reached its deadline (which may happen under high network load or poor channel conditions on the user's links), there is no point in applying the routing metric and forwarding it to a relay, because it would then be dropped from the relay in the next slot. Therefore, the only chance to transmit the packet is in the current time slot, through direct transmission from the BS. On the other hand, if there is still time to transmit the packet, then, based on the routing metric, every path has a chance to be selected.
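The SUMR routing rule above can be sketched as follows. The queue contents and rates are hypothetical, and the `+ 1.0` in the denominator is an assumption of this sketch (to keep the metric finite for an empty or zero-delay queue); it is not part of (3.8):

```python
def avg_queue_delay(packets):
    # Eq. (3.9): bit-weighted average delay; packets is a list of
    # (size_bits, delay_slots) pairs for the queue of user k at node m.
    total = sum(L for L, W in packets)
    return sum(L * W for L, W in packets) / total if total else 0.0

def sumr_route(W_hat, hol_delay, candidates):
    """candidates: {node_index: (avg_direct_rate, queued_packets)}; node 0 is
    the BS. Returns the node whose link will carry the minimum-rate constraint."""
    if W_hat - hol_delay == 0:
        return 0  # deadline reached: only direct BS transmission can make it
    # Eq. (3.8): rho = average achievable rate / average queuing delay
    # (with a +1 regularizer, an assumption of this sketch).
    def rho(node):
        rate, packets = candidates[node]
        return rate / (avg_queue_delay(packets) + 1.0)
    return max(candidates, key=rho)

# Hypothetical far user: the BS queue is long and slow, relay 1 is faster
# and less congested, so the relay path wins while time remains.
candidates = {0: (1.0, [(1000, 30), (1000, 20)]), 1: (3.0, [(1000, 10)])}
print(sumr_route(150, 20, candidates))   # -> 1 (relay path)
print(sumr_route(150, 150, candidates))  # -> 0 (deadline: direct only)
```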
The routing metric accounts for both the average transmission rate and the queuing delay, giving higher priority to serving nodes that have larger average transmission rates and/or lower average delay.

b) In the JUMR policy, except when the packets of far delay-sensitive users have reached their corresponding delay thresholds on the feeder links, packets are routed in the same manner as for delay-tolerant users, i.e., through the utility maximization (e.g., when a subchannel is allocated to a feeder link, the packet is automatically routed through the corresponding relay). Only when $\hat{W}_k - W^0_{k1} = 1$, $k \in \acute{\mathcal{K}} \cap \mathcal{K}_a$, does JUMR use the routing metric, following the same method as described above for SUMR.

Substep 2-3) Compute the minimum rate requirements on the links of delay-sensitive users.

a) In the SUMR policy, whenever there are packets in the queues of delay-sensitive users at any serving node, a minimum rate constraint is imposed on the links selected (in the previous substep) to serve them. This minimum rate requirement is dynamically computed in every time slot, based on the delay threshold corresponding to the selected transmission link (i.e., feeder or direct) and on the sizes and delays of the packets in those queues. The details are as follows.

– If $\hat{W}_k - W^0_{k1} = 0$, $k \in \acute{\mathcal{K}} \cap \mathcal{K}_a$, use the size of the HoL packet as the minimum rate requirement on the direct link from the BS to that user, i.e., $R_{s^0_k} = L^0_{k1}$.

– For transmissions on the other links selected to serve the delay-sensitive users' queues, use

$$R_{s^m_k} = \max_{i \in \{1,\dots,I^m_k\}} z^m_{k,c}(i), \quad \text{where } z^m_{k,c}(i) = \frac{\sum_{j=1}^{i} L^m_{kj}}{D_{s^m_k} - W^m_{ki} + 1}. \tag{3.10}$$

This equation is based on the one proposed in [65] for guaranteeing packet delays with low energy consumption in a single-channel system; in our system, since the power is fixed over the subchannels, it leads to a low number of needed subchannels.
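Equation (3.10) can be transcribed directly; the queue is assumed to be a list of (size in bits, current delay in slots) pairs in arrival order, and `D` is the delay threshold of the selected link (the numbers below are hypothetical):

```python
def min_rate_sumr(packets, D):
    # Eq. (3.10): z(i) is the fixed per-slot rate that drains packets 1..i
    # (cumulative size) within the D - W_i + 1 slots left for packet i;
    # taking the max over i guarantees every packet meets its own deadline.
    R, cum = 0.0, 0
    for L, W in packets:  # HoL packet first
        cum += L
        R = max(R, cum / (D - W + 1))
    return R

# Hypothetical queue: the HoL packet has 2 slots left, a second packet has 11;
# the HoL packet dominates the requirement.
print(min_rate_sumr([(1000, 149), (500, 140)], D=150))  # -> 500.0 bits/slot
```

The maximum over the prefix rates is what distinguishes this from a naive sum-over-deadline rule: an urgent HoL packet can raise the requirement even when the total backlog is small.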
In equation (3.10), $z^m_{k,c}(i)$ is the fixed transmission rate that can serve packets 1 to $i$ (with total size $\sum_{j=1}^{i} L^m_{kj}$ bits) in the queue of user $k$ at node $m$ within the time left until the deadline of the $i$-th packet on the selected link (i.e., $D_{s^m_k} - W^m_{ki} + 1$ time slots). Taking the maximum of $z^m_{k,c}(i)$ over all the packets in that queue ensures that each individual packet is transmitted before its deadline.

b) In the JUMR policy, similarly to SUMR, if there is a packet of a far delay-sensitive user in the BS whose deadline has arrived, $R_{s^0_k} = L^0_{k1}$. For transmissions on any other link (specified in substep 2-2), the following is used:

$$R_{s^m_k} = \begin{cases} L^m_{k1}, & \text{if } D_{s^m_k} - W^m_{k1} = 0 \\ 0, & \text{otherwise.} \end{cases} \tag{3.11}$$

Step 3) Formulate the IRAP and solve it. After determining the values of the $\mathcal{K}^o_m$'s, $w$'s, and $R_l$'s through the aforementioned steps in each time slot, an instance of problem (3.4) is obtained. This is a mixed-integer nonlinear programming problem; obtaining the optimal solution requires an exhaustive search, whose computational cost may be prohibitive. Instead, we propose an iterative algorithm to allocate subchannels to the links of the users. The detailed procedure is shown in the FDRA algorithm (Algorithm 3.1).
Algorithm 3.1 FDRA
1: Initialize $\mathcal{N} = \{1,\dots,N\}$ and $q^m_k = Q^m_k$, $m \in \{0,\dots,M\}$, $k \in \mathcal{K}_m$
2: While $\mathcal{N} \neq \emptyset$ and $\sum_{l \in \{l^{y,m}_k\}} R_l > 0$
3:   Find $(m^*, k^*) = \arg\min_{m,k} (\hat{W}_k - W^m_{k1})$
4:   $l^* = s^{m^*}_{k^*}$
5:   Initialize $r_{l^*} = 0$
6:   While $\mathcal{N} \neq \emptyset$ and $r_{l^*} \le R_{l^*}$
7:     Find $n^* = \arg\max_{n \in \mathcal{N}} r_{l^*n}$
8:     $x_{l^*n^*} = 1$
9:     $q^{m^*}_{k^*} = q^{m^*}_{k^*} - \min(r_{l^*n^*}, q^{m^*}_{k^*})$
10:    $\mathcal{N} = \mathcal{N} - \{n^*\}$, $r_{l^*} = r_{l^*} + r_{l^*n^*}$
11:  End
12:  $R_{l^*} = 0$
13: End
14: While $\mathcal{N} \neq \emptyset$ and $\sum_{m=0}^{M} \sum_{k \in \mathcal{K}^o_m} q^m_k > 0$
15:   Compute $u^{d,m}_{kn} = \alpha^{d,m}_k \beta_k q^m_k r^{d,m}_{kn}$, $m = 0,\dots,M$, $k \in \mathcal{K}^o_m$
16:   Compute $u^{f,m}_{kn} = \alpha^{f,m}_k \beta_k (q^0_k - Q^m_k) r^{f,m}_{kn}$, $m = 1,\dots,M$, $k \in \mathcal{K}^o_m$
17:   Find $(y^*, m^*, k^*, n^*) = \arg\max_{y,m,k,n} u^{y,m}_{kn}$
18:   $x^{y^*,m^*}_{k^*n^*} = 1$
19:   If $y^* = d$
20:     $q^{m^*}_{k^*} = q^{m^*}_{k^*} - \min(q^{m^*}_{k^*}, r^{y^*,m^*}_{k^*n^*})$
21:   Else
22:     $q^0_{k^*} = q^0_{k^*} - \min(q^0_{k^*}, r^{y^*,m^*}_{k^*n^*})$
23:   End If
24:   $\mathcal{N} = \mathcal{N} - \{n^*\}$
25: End

This algorithm initializes the variables $q^m_k$ from $Q^m_k$ and updates them in each iteration, so that subsequent iterations account for the effects of earlier subchannel allocations. The reason for not using $Q^m_k$ directly is to avoid ambiguity: the actual update of the queue sizes in the system should be done only after the subchannel allocations are complete, the transmission rates over the links are determined, and packets are dropped from the queues.

Note that the FDRA algorithm consists of two main blocks. Block 1 (lines 2 to 13) allocates subchannels to meet the specified minimum rates, and block 2 (lines 14 to 25) allocates subchannels to maximize the utility function. Since SUMR serves delay-sensitive users by defining minimum rate requirements whenever they have packets in their queues at the BS or relays, and defines the utility function based on the delay-tolerant users, it uses block 1 for subchannel allocation to delay-sensitive users and block 2 for delay-tolerant users.
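The two blocks can be prototyped as follows; this is a simplified sketch with hypothetical inputs (the real algorithm draws feeder and direct links from shared queues, which this sketch collapses into one backlog per link):

```python
def fdra(subchannels, rate, q, w, R, slack):
    """Toy FDRA. rate[l][n]: rate of link l on subchannel n; q[l]: backlog of
    the queue that link l serves; w[l]: utility weight (the link's alpha*beta);
    R[l]: minimum rate; slack[l]: W_hat - HoL delay. Returns {(l, n): 1}."""
    free, x = set(subchannels), {}
    # Block 1 (lines 2-13): satisfy minimum rates, most urgent link first.
    for l in sorted((l for l in R if R[l] > 0), key=lambda l: slack[l]):
        got = 0.0
        while free and got < R[l]:
            n = max(free, key=lambda n: rate[l][n])  # best subchannel for l
            x[(l, n)] = 1
            q[l] = max(0.0, q[l] - rate[l][n])
            free.discard(n)
            got += rate[l][n]
    # Block 2 (lines 14-25): assign remaining subchannels by maximum weighted
    # utility u = w * backlog * rate, updating backlogs after each choice.
    while free and any(b > 0 for b in q.values()):
        l, n = max(((l, n) for l in q if q[l] > 0 for n in free),
                   key=lambda p: w[p[0]] * q[p[0]] * rate[p[0]][p[1]])
        x[(l, n)] = 1
        q[l] = max(0.0, q[l] - rate[l][n])
        free.discard(n)
    return x

# Hypothetical two links, three subchannels; link "a" carries a minimum rate.
alloc = fdra({1, 2, 3},
             rate={"a": {1: 2.0, 2: 1.0, 3: 1.0}, "b": {1: 1.0, 2: 3.0, 3: 1.0}},
             q={"a": 2.0, "b": 3.0}, w={"a": 1.0, "b": 1.0},
             R={"a": 2.0}, slack={"a": 0})
print(sorted(alloc))  # -> [('a', 1), ('b', 2)]
```

Note the backlog update after every assignment: once a queue is (virtually) drained, it stops attracting subchannels, mirroring the role of $q^m_k$ in the listing.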
On the other hand, JUMR allocates subchannels to all users through block 2 and uses block 1 only when there are packets that have reached the defined deadlines on their links and for which a minimum rate has therefore been determined.

In all the policies, after the resources are allocated and the data transmitted, any packet that has reached its deadline without being transmitted (due to lack of resources) is dropped from its queue. Then, based on the arrived, transmitted, and dropped packets, the sizes and delays of the remaining packets are updated.

3.4.2 Enhanced CQDA Policies

In this subsection, we propose further variants of the SUMR and JUMR policies, obtained by modifying some of their steps. As illustrated in the next section, these versions show improved performance over the original SUMR and JUMR policies. The original policies remain useful, however, as they outperform the existing methods while requiring less computation or information than the enhanced policies.

a) Enhanced SUMR (ESUMR). This policy is identical to SUMR in steps 1 and 2; in step 3, however, it uses a different prioritization rule to provide the minimum rates. Specifically, it replaces line 4 of the FDRA algorithm with

$$l^* = \begin{cases} s^{m^*}_{k^*}, & \text{if } \hat{W}_{k^*} - W^{m^*}_{k^*1} = 0 \\[4pt] \arg\max_l \Big\{ \dfrac{\tilde{r}_l}{\bar{r}_l} R_l \Big\}, & \text{otherwise} \end{cases} \tag{3.12}$$

where $\tilde{r}_l = \frac{1}{N}\sum_{n=1}^{N} r_{ln}$ is the average of the achievable rates of link $l$ over the subchannels in the current time slot (and therefore measures the current average fading conditions of link $l$ across the subchannels).

ESUMR aims at being more opportunistic.
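The ESUMR link-selection rule (3.12) can be sketched as follows (the per-link tuples are a data-structure assumption of this sketch; the numbers are hypothetical):

```python
def esumr_pick_link(links):
    """links: {name: (slack, r_tilde, r_bar, R)}; slack = W_hat - HoL delay,
    r_tilde = current-slot mean rate over subchannels, r_bar = long-term mean,
    R = minimum rate from (3.10)."""
    due = [l for l, v in links.items() if v[0] == 0]
    if due:
        return due[0]  # first line of (3.12): a due HoL packet wins outright
    # Second line of (3.12): favor links currently faded up (r_tilde/r_bar)
    # with larger minimum-rate demand R.
    return max(links, key=lambda l: links[l][1] / links[l][2] * links[l][3])

links = {"u1": (5, 1.8, 1.0, 200.0), "u2": (3, 0.9, 1.0, 300.0)}
print(esumr_pick_link(links))  # -> 'u1' (1.8 * 200 = 360 beats 0.9 * 300 = 270)
```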
By using (3.12), it provides the minimum rates first for the links serving packets that have reached their deadlines; after allocating subchannels to those links, it assigns priority according to the second line of (3.12), exploiting favorable channel states (through $\frac{\tilde{r}_l}{\bar{r}_l}$) while accounting for the queue and delay states (through $R_l$, which is computed by (3.10) from the packets' sizes and delays).

b) Enhanced JUMR (EJUMR). JUMR serves the delay-sensitive users mostly by maximizing the utility and giving them higher priority than delay-tolerant users. However, if the data arrival rates of the delay-tolerant users are large, their queues become backlogged more frequently, and their weights can overshadow the priority of the delay-sensitive users. EJUMR prevents this by taking into account the data arrival rates of all the traffic classes. In particular, it defines the $\beta$ coefficients in step 1 as

$$\beta_k = \frac{1}{\hat{W}_k} \max\!\left(1, \frac{\hat{\lambda}_b}{\lambda_k}\right), \quad k \in \mathcal{K}_a \tag{3.13}$$

where $\lambda_k$ is the average arrival rate into the queue of user $k$ at the BS, and $\hat{\lambda}_b = \max_{k \in \mathcal{K}_b} \lambda_k$. According to (3.13), when the arrival rates of the delay-tolerant users exceed that of a delay-sensitive user $k$, EJUMR increases $\beta_k$ in proportion to the ratio of $\hat{\lambda}_b$ and $\lambda_k$. This way, user $k$ is able to maintain its priority through $\frac{1}{\hat{W}_k}$.

3.4.3 Practical Considerations

In this subsection, we address concerns that might arise about the assumptions and computational complexity of the proposed policies. First, we note that relays with buffering capability have been considered in other works [33, 47], and in practice they are no different from the nodes with store-and-forward capabilities commonly employed in wireless mesh or ad hoc networks.
Also, quasi-full-duplex relaying can be realized in practice by using two antennas and two radios at the relay: a directional antenna for the feeder links from the BS and an omnidirectional antenna for the direct links to users [47]. Proper configuration and installation of these two independent radio frequency chains can minimize any potential interference between transmissions and receptions on subchannels over the two links at either side of the relay, which are close in frequency. Furthermore, the proposed policies can easily be extended to half-duplex relays; one simple extension is to follow steps 1 and 2 of the proposed policies to formulate the problem and then, in step 3, set $x^{d,m}_{kn} = 0$, $m = 1,\dots,M$, $k \in \mathcal{K}_m$ for the first half of the time slot and $x^{f,m}_{kn} = 0$, $m = 1,\dots,M$, $k \in \mathcal{K}_m$ for the second half, to exclude, respectively, the direct links from relays and the feeder links to relays in the FDRA algorithm.

As seen throughout the proposed policies, the BS uses the information about queue sizes, packet delays, and channel states to formulate and solve the resource allocation problem. Queue sizes and packet delays at the BS are easily obtained in practice from local information at the BS. The BS also has the information about queue sizes and packet delays in the relay buffers, as it knows the transmission sizes from all the queues and over all the links in the previous time slot. For CSI, on the other hand, the users and relays need to send feedback to the BS about the quality of their links on the different subchannels. In practice, this is done by reporting the index of the achievable modulation and coding scheme on each subchannel; owing to the limited number of such schemes, the overhead is lower than that of reporting over a continuous range.
Also, to further reduce the overhead, the BS can set a threshold and ask the users and relays to report the channel states of only those subchannels whose quality exceeds the threshold. We note that, under the assumption of perfect channel information, our policies can serve as a benchmark against which other policies can be compared. With imperfect channel information, the performance of the proposed policies may degrade; however, the relative improvement over the existing algorithms would hold, as perfect information is also assumed in the performance evaluations of the existing algorithms.

Define $\Lambda = \max_{k \in \mathcal{K}} |\mathcal{M}_k| \le M$ as the maximum number of serving nodes a user may have. The computations of steps 1 and 2, which define the parameters of the IRAP formulation, and of lines 2-13 of the FDRA algorithm in step 3, are relatively insignificant and may be ignored. The computational complexity of the proposed policies is therefore dominated by lines 14-25 of the FDRA algorithm and is of $O(K \Lambda N^2)$. This polynomial time complexity makes the proposed policies suitable for practical implementation.

3.5 Performance Evaluation and Discussion

In this section, we investigate the performance of the policies described in the previous section. We consider a system with a cell radius of 1000 m and 6 relays located at a distance of 2/3 of the cell radius from the BS, with uniform angular spacing. The BS, relay, and user antenna heights are 15 m, 10 m, and 1.5 m, respectively, and the path loss model is based on [61]. The noise power spectral density is assumed to be $-174$ dBm/Hz, and the BER requirements for delay-sensitive and delay-tolerant users are $10^{-4}$ and $10^{-6}$, respectively.
We consider a large subchannel bandwidth, 180 kHz, equal to that of a resource block in the LTE standard, as it leads to lower overhead for channel state reporting by receivers, fewer computations for resource allocation, and lower overhead for announcing subchannel allocation decisions. For similar reasons, we consider a time slot duration equal to that of an LTE subframe, i.e., 1 ms. The carrier frequency is 1900 MHz. For the transmit power of the BS or a relay on each subchannel, we down-scale by 100 times (20 dB) the 43 dBm that is typical for a BS and is the maximum for a relay in LTE [66]; this corresponds to distributing the power equally over 100 LTE resource blocks or subchannels. Note that several works in the literature have considered equal maximum power for the BS and relays [23, 24, 66]. Even when they do not have equal maximum power, however, our policies can be applied in two ways. First, the BS and relays can allocate the same power to serve the users with QoS requirements and use the remaining power for BE services. Second, our policies can be applied based on the relays' lowest power, so that subchannel allocations are made with underestimated achievable rates; the BS and relays can then each distribute their maximum power over the subchannels allocated to them, which results in better performance than the underestimated one.

For the links from the BS or relays to users, we consider a channel model with Rayleigh fading, and for the links between the BS and relays, a Rician channel model with a $\kappa$-factor of 6 dB is used [62]. Packet arrivals at the BS queues of delay-tolerant users are modeled as Poisson processes with packet sizes of 1 kbit and various arrival rates that will be specified later.
For the delay-sensitive users, we use the "MPEG Video" traffic model based on [64]: the frame interarrival time is exponentially distributed with a mean of 40 time slots, the number of packets in a frame is 4, and the packet interarrival time within a frame is 2 ms. Packet sizes are exponentially distributed with a mean of 1 kbit, which results in a video data rate of 100 kbps, and their delay constraint is 150 ms.

For comparison purposes, we also consider QoS provisioning in a system with no relays, as well as in a system with prompt relays (i.e., relays without buffers), in which the BS transmits to the relays in the first half of a time slot and the relays forward the received data to the users in the second half. For providing QoS in the system with prompt relays, we adapt the dynamic resource allocation algorithm of [24] for non-cooperative transmissions, referred to as DRT in the figures. In the DRT algorithm, relay selection and subchannel allocation are performed through utility maximization, where the utility is defined based on the packet delays in the BS queues and the transmission rates of the end-to-end relay channels, together with minimum rate provisioning. For the case with no relays, referred to as "DRT-norel" in the figures, the algorithm of [24] is adapted such that all users can receive data only from the BS. We have also considered Proportional Fair (PF) scheduling in a system with buffering capability at the relays. For this purpose, we adapt the algorithm of [67] such that the first and second halves of the time slots are allocated to the BS or relays according to their need to serve their queues, and subchannels are assigned to users based on the users' end-to-end PF metrics.

To investigate the effectiveness of the proposed policies in providing delay guarantees for delay-sensitive users and average throughput for delay-tolerant users, we first consider some special scenarios with extreme limiting factors.
Then we provide general results based on more typical scenarios. Note that, except in the last scenario, the EJUMR policy is not illustrated in the figures, as it performs identically to JUMR in these scenarios; this will become clearer as we explain the results.

In order to examine the delay guarantee performance for users with both strong and weak wireless links, we consider a scenario with 3 subchannels and 12 users, all delay-sensitive. Six of the users are close to the BS, at 50 m, and are connected directly to the BS; the other six are far from the BS, at 1000 m, and located at equal angular distances from each other, so that each has the same distance to its two nearest relays. Therefore, in addition to the BS, each of the 6 far users is also connected to two relays, which provide signals with higher average SNR than that from the BS. Fig. 3.3 shows the average Packet Drop Ratio (PDR) of these close and far users, as well as the CDF of the delay of the received packets (i.e., dropped packets are not included). It is observed from Fig. 3.3(a) that the cases with no relays, with prompt relays, and with PF scheduling over buffering relays keep the PDR reasonably low or at zero for the close users, but suffer from high PDRs for the far users. SUMR reduces the PDR remarkably for the far users at the cost of a small increase in the PDRs of close users; overall, SUMR improves the PDR performance. ESUMR and JUMR perform even better, achieving zero PDR for all users. Also, Fig.
3.3(b) shows that the packets that are successfully received by their destinations (i.e., those that have not been dropped) experience lower delays under the proposed policies, especially with ESUMR and JUMR.

Figure 3.3: (a) Average PDR; (b) CDF of the delay of the received packets; with 3 subchannels, 6 close and 6 far users, all delay-sensitive.

These results might look strange at first glance: with two-hop transmissions, the packets might experience some queuing delay in the buffers of the relays, so they might be expected to have higher delays and drop ratios compared with the prompt relays. However, the opposite happens, for the following reason. Since the buffering capability in the relays makes it possible to take advantage of the channel variations opportunistically, such a system has a larger capacity. The proposed policies are able to exploit this well and provide high service rates for the queues, so the overall effect is a faster transfer of packets from the BS to the users or, equivalently, lower PDRs and delays. Specifically, for routing the packets of far delay-sensitive users, whenever those packets have not reached their deadlines, SUMR and ESUMR decide based on (3.8); therefore, they are able to transmit the packets through the nodes that have higher transmission rates and/or lower queuing delays. Also, in the case of routing through relays, using (3.6), they give the largest portion of the delay budgets to the link with the lower average rate. Based on these, and using (3.10), they compute the minimum rate constraints of the links dynamically.
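The PDR and delay-CDF statistics plotted in these figures can be computed from a per-packet log along the following lines; this is a minimal sketch with invented names, not the thesis's own simulator:

```python
def pdr_and_delay_cdf(packets, deadline_ms=150.0):
    """Given (arrival_ms, delivery_ms or None) per packet, compute the
    Packet Drop Ratio and the empirical CDF of received-packet delays.
    `delivery_ms is None` marks a dropped packet; packets exceeding the
    150 ms deadline are also counted as dropped."""
    delays = [d - a for a, d in packets
              if d is not None and d - a <= deadline_ms]
    pdr = 1.0 - len(delays) / len(packets)
    delays.sort()
    cdf = [(delay, (i + 1) / len(delays)) for i, delay in enumerate(delays)]
    return pdr, cdf

# 4 packets: delivered in 10 ms, delivered in 35 ms, dropped, late (191 ms)
pdr, cdf = pdr_and_delay_cdf([(0, 10), (5, 40), (8, None), (9, 200)])
# -> PDR = 0.5, CDF points at 10 ms and 35 ms
```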
All of the above make it possible to formulate IRAP in accordance with the potentials of the system with buffering relays. Using the FDRA algorithm, they are then able to utilize the system subchannels efficiently. However, since SUMR starts the provisioning of minimum rates based on line 4 of the FDRA algorithm, it restricts the utilization of link diversity. On the other hand, ESUMR replaces line 4 with (3.12), which enables it to consider the channel conditions and the minimum rate requirements together (except when there is an urgent situation for packet transmission) and to function even more opportunistically. This way, it first transmits packets on the links with better conditions and/or larger rate requirements. Therefore, it does not encounter cases with many due packets that cannot all be served in a single time slot. As the above figures show, JUMR has the best performance among the proposed policies. This is achieved because, in formulating the instantaneous problem, JUMR works mostly through utility maximization and activates the minimum rate constraints only when necessary. As a consequence, routing and scheduling of the delay-sensitive packets is done automatically based on the subchannel allocations through lines 14-25 of the FDRA algorithm. Since the utility function is based on the MW policy, JUMR is able to perform the routing and subchannel allocations in a manner that takes advantage of the maximum-capacity property of MW as much as possible, which translates into lower delays and PDRs over time.

Note that the improvement in the proposed algorithms is partly due to the fact that having buffers makes it possible to utilize the channel diversities better. Therefore, the gains are expected to diminish as the channel diversities increase. To show this, we
have conducted simulations for the scenario described above, with the difference that the 3 subchannels that can be allocated to the users are selected from a pool of Np subchannels. Fig. 3.4 shows the delay CDF of the received packets with the proposed policies for different values of Np. The improvement in the delays, especially for SUMR, is largest when Np changes from 3 to 4. After that, while there is still a reduction in the delays, this reduction becomes smaller as Np increases.

Figure 3.4: CDF of the delay of the received packets; with 3 allocatable subchannels from a pool of subchannels, 6 close and 6 far users, all delay-sensitive.

Next, we consider a similar scenario with the difference that there are 4 subchannels and also half of the users in each group, close and far, have delay-tolerant traffic with average rates equal to 100 kbps. Fig. 3.5 illustrates the average throughput in each time slot and the average queue size over time, for the delay-tolerant users. It is observed that the proposed policies are able to keep the queues of the delay-tolerant users stable and, therefore, provide them with an average throughput equal to the average data arrival rates at their queues in the BS. With DRT-norel and DRT, the average queue sizes increase
steadily, which leads to instability and, therefore, lower throughput for the close and far users.

Figure 3.5: (a) Average throughput; (b) average queue size of the delay-tolerant users; with 4 subchannels, 6 close and 6 far users, half of the users in each group delay-sensitive and the other half delay-tolerant.

With PF, while the queues of the close users remain stable and they get an average throughput equal to their data arrival rates at the BS, the queues of the far users suffer from instability, resulting in lower throughput for these users. In order to get a complete picture of the results, the PDR and the delay CDF of the received packets for the delay-sensitive users are also shown in Fig. 3.6. All the policies, except for DRT-norel (which has the smallest capacity) and PF (which is not QoS-aware), are able to keep the PDR at zero in this scenario. DRT achieves this at the cost of queue instability for the delay-tolerant users (especially the far users, as shown in Fig. 3.5(a)), but the proposed policies utilize the buffer-aided relaying system's capacity well to satisfy both the delay guarantee and the queue stability objectives, for both close and far users. In particular, considering the product of the delay-tolerant users' queue sizes and transmission rates in the utility function (3.4a) enables the CQDA policies to serve the queues of the delay-tolerant users in the BS and relays when their queue sizes are large and/or their links have good channel conditions, which contributes to their stability. The delay guarantees of the delay-sensitive packets are also obtained for the same reasons as in Fig.
3.3.

Figure 3.6: (a) Average PDR; (b) CDF of the delay of the received packets for the delay-sensitive users; with 4 subchannels, 6 close and 6 far users, half of the users in each group delay-sensitive and the other half delay-tolerant.

In the following, we consider some general scenarios where the users are randomly distributed in the cell area with a uniform distribution. The number of users is set to K = 22 (as before, half of them delay-sensitive and half delay-tolerant) and the number of subchannels is set to N = 10. We have conducted simulations for 100 different distributions of the user locations (resulting in a total of 1100 different individual locations for a delay-sensitive user and the same number for a delay-tolerant one), each over 10000 time slots. The simulation results obtained in this scenario showed that, except for DRT-norel and PF (which resulted in user PDRs in the ranges [0, 0.8] and [0, 1], respectively), DRT and our proposed policies were able to keep the PDR equal to 0. We have not shown the figure for this, as it did not give much more information than what has been mentioned. Instead, Fig. 3.7 displays the CDF of the maximum delay of the received packets for the delay-sensitive users as well as the CDF of the average total throughput of the delay-tolerant users in each time slot.
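Dropping users uniformly over the cell area can be sketched as follows; the square root on the radial draw is what makes the points uniform in area rather than clustered near the BS (names are invented for illustration):

```python
import math
import random

def drop_users(num_users, radius=1000.0, seed=0):
    """Place users uniformly over a disk-shaped cell of the given radius,
    returning (x, y) coordinates relative to the BS at the origin."""
    rng = random.Random(seed)
    users = []
    for _ in range(num_users):
        r = radius * math.sqrt(rng.random())   # sqrt -> uniform in area
        theta = 2 * math.pi * rng.random()
        users.append((r * math.cos(theta), r * math.sin(theta)))
    return users

locs = drop_users(22)  # one realization of the K = 22 user locations
```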
It is observed that the proposed policies are able to meet the deadlines of all the delay-sensitive packets and, at the same time, provide queue stability and, therefore, an average total throughput equal to 1100 bits per time slot, which is the sum of the average data arrival rates at the queues of the 11 delay-tolerant users in the BS. On the contrary, DRT-norel and DRT lead to instability and, therefore, less throughput than what is injected into the network, as they favor the delay-sensitive users and try to meet the deadlines of their packets. PF is able to provide stability in some realizations of the users' locations in the cell area, but in other realizations it leads to instability and throughputs far below the data arrival rates. This is firstly because PF does not use the system resources efficiently, and secondly because it tries to provide fair service to all the users without taking their different requirements into account.

Fig. 3.7(a) also shows that the packets of the delay-sensitive users experience more delay with JUMR than with SUMR, ESUMR or even DRT. The reason is that the IRAP formulations of SUMR, ESUMR and DRT consider the serving of the delay-sensitive users by defining minimum rate constraints; this causes the FDRA algorithm (through lines 2-13) to allocate the subchannels in every time slot first to the delay-sensitive users and then, if any are left, to the delay-tolerant users. On the other hand, with the JUMR problem formulation, the service of the delay-sensitive users is mostly handled through utility maximization; therefore, when the utility of the delay-tolerant users gets higher than that of the delay-sensitive users in some time slots (because of possibly larger queue sizes or channel gains), the subchannels are allocated to them first (through lines 14-25 in FDRA). This postpones the service of the delay-sensitive packets and results in larger delays for them.
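The 1100 bits-per-slot figure follows directly from the arrival rates, assuming the 1 ms slot duration used in the simulations:

```python
num_dt_users = 11            # delay-tolerant users (half of K = 22)
arrival_rate_kbps = 100      # arrival rate per delay-tolerant user
slot_ms = 1                  # assumed slot duration

# kbps * ms = bits; 100 kbps delivers 100 bits in each 1 ms slot
bits_per_slot_per_user = arrival_rate_kbps * slot_ms
total_bits_per_slot = num_dt_users * bits_per_slot_per_user
# -> 1100 bits/slot, the stable total throughput seen in Fig. 3.7(b)
```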
Figure 3.7: (a) Maximum packet delay of the delay-sensitive users; (b) average total throughput of the delay-tolerant users; N = 10, K = 22, random distribution of users in the cell area.

Note that PDR = 0 for some policies in the results above only shows the relative effectiveness of these policies compared with the other policies and is not an indicator of optimality. As the system load increases, the PDR might become nonzero. This is clarified through the following evaluations. In the same scenario as above, we fix the data arrival rate of the delay-tolerant users at 100 kbps and investigate the effect of increasing the traffic load of the delay-sensitive users. Fig. 3.8 shows that as the arrival rate of the delay-sensitive packets increases, DRT-norel leads to higher PDR and almost stops providing data throughput for the delay-tolerant users at video rates larger than 125 kbps. With PF, due to its QoS unawareness and inefficiency, the PDR of the delay-sensitive users increases and the throughput of the delay-tolerant users decreases; however, thanks to the buffering capability in the relays and the larger system capacity, the slope of these changes is small. On the other hand, with the increase in the data arrival rate of the delay-sensitive users, DRT, SUMR and ESUMR define larger values for the minimum rate requirements, and use a larger amount of resources (subchannels over time) to meet the packet deadlines and keep the PDR equal to zero.
The consequence is that DRT (which is unstable even at a video arrival rate of 100 kbps) quickly reduces the service rates of the delay-tolerant users and provides very low throughput to them at video rates of 225 kbps. However, SUMR and ESUMR are able to keep the queues of the delay-tolerant users stable and support their data arrival rate (i.e., provide a throughput equal to 100 kbps) as long as the data arrival rate of the delay-sensitive users is not larger than 150 kbps. JUMR responds differently, due to its different approach. Although it results in the instability of the delay-tolerant users' queues as the video rate gets larger than 150 kbps, the reduction in the throughput of the delay-tolerant users is smaller compared with the other policies. Instead, JUMR penalizes the delay-sensitive users by dropping their packets.

Figure 3.8: Effect of the data arrival rate of the delay-sensitive users on (a) the average PDR of the delay-sensitive users; (b) the average total throughput of the delay-tolerant users.

Now we consider a similar scenario, with the difference that the data arrival rates of the delay-sensitive users are fixed at 100 kbps and the effect of an increase in the traffic load of the delay-tolerant users is investigated. In this case, it is also possible to illustrate the performance of the EJUMR policy more clearly. Fig. 3.9 shows that as the data arrival rate of the delay-tolerant users increases, PF results in higher PDRs for the delay-sensitive users.
This is due to the fact that PF takes the queue sizes into account when allocating the resources, which leads to more service for the delay-tolerant users (due to the increase in their data arrival rates and queue sizes) and, consequently, to an increase in these users' throughputs. DRT-norel shows no change in the PDR of the delay-sensitive users or in the throughput of the delay-tolerant users. This happens because the arrival rates of the video traffic are fixed and DRT-norel uses a fixed amount of resources for them. Therefore, a fixed amount is left for the delay-tolerant users, which is not enough to stabilize the system even under a data arrival rate of 100 kbps. As a result, it provides a fixed throughput for the delay-tolerant users, no matter how much higher the arrival rate of their traffic is. For similar reasons, DRT, SUMR and ESUMR consume a fixed (but different) amount of resources for the delay-sensitive users and, thanks to the larger capacities achieved from using the relays, they are able to meet the deadlines of all the delay-sensitive packets. However, while DRT is unstable even at an arrival rate of 100 kbps, SUMR and ESUMR are able to keep the system stable up to an arrival rate of 150 kbps by matching the throughput of the delay-tolerant users to the arrival rates of their traffic.

With JUMR and EJUMR, the system also remains stable and keeps the PDR equal to zero up to a data arrival rate of 150 kbps; but as the arrival rate increases further, they provide higher throughput than SUMR and ESUMR and instead drop some of the delay-sensitive packets as well. As mentioned in Section 3.3, EJUMR takes into account the data arrival rates of the users when defining their weights in the utility function; therefore, it is able to maintain the priority of the delay-sensitive users better than JUMR, which results in lower PDRs. Based on this and the previous observations, it is inferred that EJUMR
(which has the same performance as JUMR in the previous scenarios) is better able to utilize the system resources to provide system stability and throughput for the delay-tolerant users, while providing delay guarantees for the delay-sensitive users. When the system starts to become unstable due to traffic overload, EJUMR degrades the service of both the delay-sensitive and the delay-tolerant users; however, to return the system to a stable state, it needs a smaller reduction in the service quality of the users compared with the other policies, since it has a larger capacity.

Figure 3.9: Effect of the data arrival rate of the delay-tolerant users on (a) the average PDR of the delay-sensitive users; (b) the average total throughput of the delay-tolerant users.

3.6 Summary

In this chapter, we have provided a novel framework for formulating and solving QoS-aware resource allocation in OFDMA networks enhanced with buffering relays, with the objective of guaranteeing throughput and stringent packet delays for delay-tolerant and delay-sensitive users, respectively. We have shown that, due to the multihop structure of the network and the heterogeneous requirements of user services, the formulation of the frequency-domain resource allocation problem is not straightforward. We have presented the challenges and proposed CQDA policies to address them using different approaches, namely SUMR and JUMR.
SUMR aims at serving the delay-sensitive users by defining minimum rate constraints and the delay-tolerant users by defining utilities for them, whereas JUMR defines utilities for all the users and activates minimum rate constraints only when getting close to the packet deadlines of the delay-sensitive users. We have also proposed enhanced versions of these policies, namely ESUMR and EJUMR. Using extensive simulations, we have evaluated the performance of the proposed policies in different scenarios. The results show significant improvements in terms of packet delay guarantees and throughput provisioning, compared with systems without relays, systems assisted by relays without buffering capability, as well as systems with buffering relays but QoS-unaware scheduling.

Chapter 4

Dynamic Distributed Resource Allocation Framework

4.1 Introduction

Resource allocation in relay-assisted cellular networks might be implemented in a centralized or a distributed way. In the case of centralized implementation, the scheduler requires information about the users' queue states in the BS and the channel conditions of all the system links. It naturally has the information about the queue states in the relays, as it knows the history of the transmissions from all the queues in the system. On the other hand, distributed resource allocation is of more interest in cellular networks, firstly because of the reduced computational burden on the BS, and secondly because the signaling overhead of CSI is lower in distributed scenarios.

In the previous chapter, due to the heterogeneous service requirements of the users and the strict deadlines of delay-sensitive packets, we were concerned more with QoS provisioning than with implementation complexity and, therefore, we proposed centralized policies. In this chapter, we consider only delay-tolerant services and aim at designing a framework for distributed resource allocation.
We formulate power and subchannel allocation as an optimization problem and, by introducing some concepts, we show its similarity to a multicell OFDMA scenario with smaller cells. Moreover, to make the problem tractable, we transform it into a convex optimization problem and, using dual decomposition, we propose a dynamic distributed iterative algorithm, called DDRA, in which the BS and the relays each solve their own problem based on their links' QCSI and some global variables exchanged among them. DDRA provides a novel framework for exploiting the system's power and subchannel resources in an adaptive way over time, with a lower overhead of CSI feedback and a lower computational complexity at the BS compared with optimal centralized scheduling, which requires global CSI at the BS.

The rest of this chapter is organized as follows. In Section 4.2, we outline the system model for a relay-assisted OFDMA network. In Section 4.3, we formulate the resource allocation algorithm design as an optimization problem and solve it by dual decomposition, where distributed closed-form solutions for power and subcarrier allocation are derived. Simulation results are studied in Section 4.4, and finally, the conclusion is presented in Section 4.5.

4.2 System Model

We consider a single-cell time-slotted OFDMA system in the downlink, with K users and M relays. Users are randomly distributed in the cell area, K1 of them being served directly by the BS while the others receive data through one of the relays. As shown in Fig. 4.1, we assume that each user has been assigned to either the BS or one of the relays based on a criterion such as average SNR, distance from the BS and relays, etc.

The BS and the relays are equipped with buffers, where the BS has one buffer for each user, but each relay has buffers only for the users connected to it.
Users' packets arrive at the BS buffers according to their traffic models and are queued until transmission to the directly connected users or to the relays serving the other users. The relays do not need to transmit the received packets immediately in the next time slot; it is possible to keep them in the buffers and serve them based on the scheduling policy. This gives the scheduler the flexibility to utilize the resources more opportunistically by postponing a transmission until the user gets a higher priority or a better channel.

Figure 4.1: System model

We use $Q_k^B$, $k = 1, \ldots, K$, to denote the queue size of user $k$ in the BS, and $Q_k^{R(k)}$, $k = K_1 + 1, \ldots, K$, to denote the queue size of user $k$ in its serving relay, $R(k)$.

We assume that the transmission bandwidth is divided into $N$ subchannels, where each subchannel can be used exclusively by the BS or the relays in one of the groups of links, i.e., BS-to-users, BS-to-relays and relays-to-users. Any relay has the ability to transmit on some subchannels and at the same time receive data from the BS on other subchannels. The channel conditions of all the links are assumed time-variant and frequency-selective, but constant during one time slot and over one subchannel. We use $e_{kn}^B$ to denote the channel gain-to-noise ratio of the link between the BS and user $k$ on subchannel $n$. Similarly, $e_{kn}^{R(k)}$ and $e_n^{BR(k)}$ denote the channel gain-to-noise ratio on subchannel $n$ for the link between $R(k)$ and user $k$ and the link between the BS and $R(k)$, respectively. Assuming that M-ary QAM modulation is used for transmission, the achievable transmission rates can be computed as follows [63]:

$$r_{kn}^B = x_{kn}^B \log_2\!\left(1 + \frac{p_{kn}^B e_{kn}^B}{\Gamma_k}\right), \quad k = 1, \ldots, K_1, \qquad (4.1)$$

where, without loss of generality, the bandwidth of a subchannel has been assumed equal to 1.
Here, $r_{kn}^B$ is the achievable transmission rate between the BS and user $k$ on subchannel $n$; $x_{kn}^B$ denotes the subchannel allocation indicator, which is one if subchannel $n$ is used by the BS to transmit data to user $k$, $k = 1, \ldots, K_1$, and zero otherwise; and $p_{kn}^B$ is the power allocated by the BS to user $k$ on subchannel $n$. $\Gamma_k$ is the SNR gap due to the limited number of coding and modulation schemes and is related to the bit error rate of user $k$ ($\mathrm{BER}_k$) through $\Gamma_k = -\ln(5\,\mathrm{BER}_k)/1.5$. In a similar way, we can define $x_{kn}^{R(k)}$, $p_{kn}^{R(k)}$ and $r_{kn}^{R(k)}$ for the links from the relays to the users, and $x_{kn}^{BR(k)}$, $p_{kn}^{BR(k)}$ and $r_{kn}^{BR(k)}$ for the links from the BS to the relays. In each time slot, a resource allocation algorithm aims at the efficient use of the system resources, i.e., power and subchannels, while considering the QoS of the users in terms of BER and queue stability. This is discussed in detail in the next section.

4.3 Cross Layer Scheduling and Resource Allocation

In this section, we formulate the cross layer scheduling and resource allocation problem and then, using some definitions and modifications, we propose a new perspective with a simplified convex optimization problem.

4.3.1 Problem Formulation

In each time slot, the resource allocation policy considers the optimization problem in (4.2) to decide about the allocation of the system power and subchannels. In this problem, $w_k^B$, $w_k^{BR(k)}$ and $w_k^{R(k)}$ are the weights of the users over the links from the BS to the users, the links from the BS to the relays and the links from the relays to the users, respectively. C1 states the total power constraint for the BS and the relays, where $P_t \geq 0$ is the total available power in the system. C2 indicates that each subchannel can be allocated to only one link. C3 and C4 define the feasible values for the subchannel allocation indicators and the powers used on the subchannels, respectively.
$$\mathrm{P}: \max_{\mathbf{p},\mathbf{x}} \sum_{k=1}^{K_1} \sum_{n=1}^{N} w_k^B r_{kn}^B + \sum_{k=K_1+1}^{K} \sum_{n=1}^{N} w_k^{BR(k)} r_{kn}^{BR(k)} + \sum_{k=K_1+1}^{K} \sum_{n=1}^{N} w_k^{R(k)} r_{kn}^{R(k)}, \qquad (4.2a)$$

$$\text{s.t. } C1: \sum_{n=1}^{N} \left( \sum_{k=1}^{K_1} p_{kn}^B + \sum_{k=K_1+1}^{K} \left( p_{kn}^{BR(k)} + p_{kn}^{R(k)} \right) \right) \leq P_t, \qquad (4.2b)$$

$$C2: \sum_{k=1}^{K_1} x_{kn}^B + \sum_{k=K_1+1}^{K} \left( x_{kn}^{BR(k)} + x_{kn}^{R(k)} \right) \leq 1, \quad \forall n, \qquad (4.2c)$$

$$C3: x_{kn}^B,\, x_{kn}^{BR(k)},\, x_{kn}^{R(k)} \in \{0, 1\}, \quad \forall k, n, \qquad (4.2d)$$

$$C4: p_{kn}^B,\, p_{kn}^{BR(k)},\, p_{kn}^{R(k)} \geq 0, \quad \forall k, n. \qquad (4.2e)$$

Problem (4.2) is a mixed-integer nonlinear programming problem, which requires an exhaustive search to find the optimal solution. Note that it is feasible because the system total power is non-negative. In order to make the problem tractable, we relax the subchannel assignment variables $x_{kn}^B$, $x_{kn}^{BR(k)}$, $x_{kn}^{R(k)}$ to be real values between zero and one instead of Booleans, i.e., $0 \leq x_{kn}^B, x_{kn}^{BR(k)}, x_{kn}^{R(k)} \leq 1$, which is known as time or tone sharing [68]. Furthermore, we consider the buffers of users $k = K_1 + 1, \ldots, K$ in their relays as virtual users that are directly connected to the BS. In other words, we interpret the links between the BS and the relays as direct links between the BS and some virtual users. As shown in Fig. 4.2, this perspective helps us to divide the serving area (a single cell) into smaller areas (multiple cells) served by $M + 1$ nodes, where node 0 is the BS with $K$ users, has the complicated Radio Resource Management (RRM) capability and acts as a central controller, while nodes $m$, $m = 1, \ldots, M$, are the relays with their own users, in total $K - K_1$ users, and act as antennas distributed in the serving area and connected wirelessly to the controller. We denote the set of users of node $m$ by $\mathcal{Q}_m$; in particular, $\mathcal{Q}_0 = \{1, \ldots, K\}$. Each
Figure 4.2: Similarity of the model to a multicell network

node has the buffers of its own users and transmits data independently; however, at the beginning of each time slot, all nodes communicate with the central controller at node 0 to decide about their shares of the power and subchannels and to prevent interference to the other nodes. We use the following notation for each user:

$$e_{kn}^m = \begin{cases} e_{kn}^B, & m = 0,\; k = 1, \ldots, K_1 \\ e_n^{BR(k)}, & m = 0,\; k = K_1+1, \ldots, K \\ e_{kn}^{R(k)}, & m = 1, \ldots, M,\; k \in \mathcal{Q}_m \end{cases}$$

$x_{kn}^m$ and $p_{kn}^m$ can be defined in a similar way. We define $\mathcal{D} = \{(\mathbf{p}, \mathbf{x}) \mid 0 \leq p_{kn}^m \leq P_t,\; x_{kn}^m \in [0, 1]\}$ as the domain of the problem. Due to tone sharing, the SNR is equal to $\frac{p_{kn}^m e_{kn}^m}{x_{kn}^m}$; this form of the SNR comes from viewing $p_{kn}^m$ as the energy per time slot that node $m$ uses for user $k$ on subchannel $n$ [68]. As a result, the rates are computed by $r_{kn}^m = x_{kn}^m \log_2\!\left(1 + \frac{p_{kn}^m e_{kn}^m}{x_{kn}^m \Gamma_k}\right)$.

Assuming that the system is stabilizable, using MW [54, 55], queue stability can be provided by defining the weights of the users as follows:

$$w_k^m = \begin{cases} Q_k^B, & m = 0,\; k = 1, \ldots, K_1 \\ Q_k^B - Q_k^{R(k)}, & m = 0,\; k = K_1+1, \ldots, K \\ Q_k^{R(k)}, & m = 1, \ldots, M,\; k \in \mathcal{Q}_m \end{cases} \qquad (4.3)$$

Using the framework mentioned above, the resource allocation problem is represented as:

$$\max_{\mathbf{p}, \mathbf{x} \in \mathcal{D}} \sum_{m=0}^{M} \sum_{k \in \mathcal{Q}_m} \sum_{n=1}^{N} w_k^m x_{kn}^m \log_2\!\left(1 + \frac{p_{kn}^m e_{kn}^m}{x_{kn}^m \Gamma_k}\right), \qquad (4.4a)$$

$$\text{s.t. } C1: \sum_{m=0}^{M} \sum_{k \in \mathcal{Q}_m} \sum_{n=1}^{N} p_{kn}^m \leq P_t, \qquad (4.4b)$$

$$C2: \sum_{m=0}^{M} \sum_{k \in \mathcal{Q}_m} x_{kn}^m \leq 1, \quad \forall n. \qquad (4.4c)$$

Note that ordinary OFDMA networks can be considered as a special case of this formulation where $M = 0$; in that case, the virtual users are the real users directly connected to the BS.

Problem (4.4) is convex and strong duality holds [68, 69] (this can be verified by defining $\tilde{p}_{kn}^m = \frac{p_{kn}^m}{x_{kn}^m}$ and substituting it in the objective and constraints).
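The weight definition in (4.3), with the differential backlog on the BS-to-relay links, can be sketched as follows; the data structures and names (`q_bs`, `q_relay`, `serving_relay`) are invented for illustration:

```python
def mw_weights(q_bs, q_relay, serving_relay, k1):
    """Weights of (4.3) under the MW policy: plain backlog Q_k^B for
    direct BS->user links, differential backlog Q_k^B - Q_k^R(k) for
    BS->relay (virtual-user) links, and Q_k^R(k) for relay->user links.
    q_bs[k]: queue of user k at the BS; q_relay[k]: queue at its relay."""
    w = {}
    for k, q in q_bs.items():
        if k <= k1:                       # direct BS -> user link
            w[('B', k)] = q
        else:                             # BS -> relay (virtual user) link
            w[('B', k)] = q - q_relay[k]
    for k, q in q_relay.items():          # relay -> user links
        w[(serving_relay[k], k)] = q
    return w

# user 1 served directly; user 3 served via relay 'R1'
w = mw_weights({1: 500, 3: 800}, {3: 300}, {3: 'R1'}, k1=2)
# BS->relay weight is the backlog difference: 800 - 300 = 500
```

The differential term discourages pushing data toward a relay whose buffer is already full, which is what stabilizes the two-hop queues.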
Therefore, using dual decomposition, an iterative algorithm can be designed to solve the problem.

4.3.2 Dual Problem Formulation

In this subsection, we formulate the dual problem for the resource allocation optimization problem. For this, we first obtain the Lagrangian function of the primal problem. After rearranging the terms, the Lagrangian can be written as:

$$L(\mathbf{p}, \mathbf{x}, \mu, \boldsymbol{\delta}) = \sum_{m=0}^{M} \sum_{k \in \mathcal{Q}_m} \sum_{n=1}^{N} w_k^m x_{kn}^m \log_2\!\left(1 + \frac{p_{kn}^m e_{kn}^m}{x_{kn}^m \Gamma_k}\right) - \sum_{m=0}^{M} \sum_{k \in \mathcal{Q}_m} \sum_{n=1}^{N} \mu\, p_{kn}^m - \sum_{m=0}^{M} \sum_{k \in \mathcal{Q}_m} \sum_{n=1}^{N} \delta_n x_{kn}^m + \mu P_t + \sum_{n=1}^{N} \delta_n, \qquad (4.5)$$

where $\mu \geq 0$ is the Lagrange multiplier associated with the total power constraint, and $\boldsymbol{\delta} \geq \mathbf{0}$ is the vector of Lagrange multipliers for the subchannel allocation constraints. The dual problem is given by:

$$\min_{\mu, \boldsymbol{\delta} \geq 0}\; \max_{\mathbf{p}, \mathbf{x} \in \mathcal{D}} L(\mathbf{p}, \mathbf{x}, \mu, \boldsymbol{\delta}). \qquad (4.6)$$

Similar to the method in [68], the dual problem can be solved by a centralized iterative algorithm at the BS. In this case, since the BS has the information about the previous transmissions, it has the Queue State Information (QSI) of all the relays and, therefore, no QSI signaling is needed. On the other hand, all the users need to send feedback to their serving nodes about the CSI of their links. The relays should then inform the BS about the CSI of their local links, i.e., the links between themselves and their users. Moreover, they should send feedback to the BS about the CSI of the links between the BS and themselves.

Alternatively, using dual decomposition and the concept of pricing, we propose an iterative distributed algorithm where, in each iteration, the BS and the relays solve their own problems based on the global variables and the weights and channel conditions of their local links. In this case, the relays should notify the BS about their queue sizes, to be used in the weights of the links between the BS and the relays. For that, it suffices to report the queue sizes modified since the previous report, which leads to QSI signaling at most in the order of $O(\min(K - K_1, N))$.
This is because there are in total $K - K_1$ queues in the relays, and in each time slot at most $N$ different queues can be served by the system subchannels. On the other hand, CSI reporting is similar to the case of centralized resource allocation, except that the relays do not need to inform the BS about the CSI of their local links; this reduces the CSI signaling overhead by the order of $O((K - K_1)N)$.

In the following subsection, we study the distributed scheme and solve the dual problem in (4.6) by decomposing it into two parts: the first part is the local subproblem to be solved by each of the serving nodes, the BS and the relays, and the second part is the main dual problem to be solved by the BS.

4.3.3 Dynamic Distributed Resource Allocation

By dual decomposition, the dual problem is decomposed into a main global problem and $M + 1$ local subproblems, which can be solved iteratively. In each iteration, using the dual variables, which are global for all the nodes, the BS and the relays solve their local subproblems based on their QCSI. Then the relays report their results to the BS, and the BS updates the dual variables and broadcasts them to the relays. In this way, the dual variables act as prices that the BS adjusts to control the demands. The local subproblem at each node is given by $\max_{\mathbf{p}, \mathbf{x} \in \mathcal{D}} L_m(\mathbf{p}, \mathbf{x}, \mu, \boldsymbol{\delta})$, where

$$L_m(\mathbf{p}, \mathbf{x}, \mu, \boldsymbol{\delta}) = \sum_{k \in \mathcal{Q}_m} \sum_{n=1}^{N} w_k^m x_{kn}^m \log_2\!\left(1 + \frac{p_{kn}^m e_{kn}^m}{x_{kn}^m \Gamma_k}\right) - \sum_{k \in \mathcal{Q}_m} \sum_{n=1}^{N} \mu\, p_{kn}^m - \sum_{k \in \mathcal{Q}_m} \sum_{n=1}^{N} \delta_n x_{kn}^m, \qquad (4.7)$$

and the Lagrange multipliers $\mu$ and $\boldsymbol{\delta}$ are provided by the BS. Using the Karush-Kuhn-Tucker conditions, we obtain:

$$\frac{\partial L_m}{\partial p_{kn}^m} = \frac{w_k^m x_{kn}^m e_{kn}^m}{\ln 2 \, \left(x_{kn}^m \Gamma_k + p_{kn}^m e_{kn}^m\right)} - \mu = 0. \qquad (4.8)$$

As a result, the power allocation for subchannel $n$ is obtained by:

$$p_{kn}^{m*}(\mathbf{x}, \mu, \boldsymbol{\delta}) = x_{kn}^m \tilde{p}_{kn}^m(\mu), \qquad \tilde{p}_{kn}^m(\mu) = \min\!\left(P_t,\; \left( \frac{w_k^m}{\mu \ln 2} + \frac{\ln(5\,\mathrm{BER}_k)}{1.5\, e_{kn}^m} \right)^{\!+} \right), \qquad (4.9)$$

where $(a)^+ = \max(a, 0)$.
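The closed-form allocation (4.9) is straightforward to implement; the sketch below uses invented parameter names and illustrative values:

```python
import math

def power_alloc(w, e, mu, ber, p_total):
    """Closed-form per-subchannel power of (4.9): a water-filling-like
    level w/(mu*ln 2), shifted down by the SNR-gap term, clipped to
    the interval [0, p_total]."""
    level = w / (mu * math.log(2)) + math.log(5 * ber) / (1.5 * e)
    return min(p_total, max(level, 0.0))

# a weaker channel (smaller e) makes the negative gap term dominate,
# so the allocated power shrinks and eventually hits zero
p_strong = power_alloc(w=10.0, e=5.0, mu=1.0, ber=1e-6, p_total=100.0)
p_weak = power_alloc(w=10.0, e=0.1, mu=1.0, ber=1e-6, p_total=100.0)
# p_strong > 0, while p_weak is clipped to 0
```

Note how a larger weight (i.e., a larger queue backlog) or a smaller price $\mu$ raises the water level, so backlogged users on good channels receive more power.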
After substituting $p_{k,n}^{m*}$ into (4.7), we have:

\[
L_m(\mathbf{x}, \mu, \boldsymbol{\delta}) = \sum_{k \in \mathcal{Q}_m} \sum_{n=1}^{N} x_{k,n}^m V_{k,n}^m, \qquad (4.10)
\]
\[
V_{k,n}^m = w_k^m \log_2\!\left(1 + \frac{\tilde{p}_{k,n}^m e_{k,n}^m}{\Gamma_k}\right) - \left(\mu\, \tilde{p}_{k,n}^m + \delta_n\right)
\]

Define $V_n^{m*} = \max_{k \in \mathcal{Q}_m} \{V_{k,n}^m\}$; then (4.10) is maximized if the subchannel assignment variables are computed as follows:

\[
x_{k,n}^{m*}(\mu, \boldsymbol{\delta}) =
\begin{cases}
1, & V_{k,n}^m = (V_n^{m*})^+,\\
0, & V_{k,n}^m < (V_n^{m*})^+.
\end{cases} \qquad (4.11)
\]

In some time slots, more than one user might have $V_{k,n}^m = (V_n^{m*})^+$. This happens mostly for the virtual users of the BS that represent the links belonging to the same group, i.e., between the BS and a particular relay, as these links have the same channel condition over a subchannel. In such cases, the tie can be broken arbitrarily. According to (4.3), (4.9) and (4.11), the queue sizes of the users, their channel conditions and their required BER affect their share of the power and subchannels.

4.3.4 Solution of Main Dual Problem at the BS

Using the information about the power and channel allocation variables reported by the relays, and based on the subgradient method [70], the BS updates the dual variables through the following iterations and reports them to the relays:

\[
\mu(\nu+1) = \left[\mu(\nu) - \xi_1(\nu)\left(P_t - \sum_{m=0}^{M} \sum_{k \in \mathcal{Q}_m} \sum_{n=1}^{N} p_{k,n}^m\right)\right]^+,
\]
\[
\delta_n(\nu+1) = \left[\delta_n(\nu) - \xi_2(\nu)\left(1 - \sum_{m=0}^{M} \sum_{k \in \mathcal{Q}_m} x_{k,n}^m\right)\right]^+, \quad \forall n \qquad (4.12)
\]

In (4.12), $\nu$ is the iteration index, and $\xi_1(\nu)$ and $\xi_2(\nu)$ are the step sizes at the $\nu$-th iteration for updating the $\mu$ and $\delta_n$ variables, respectively. The number of iterations can be optimized to reach fast convergence by choosing suitable step sizes and initial values [10].
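The per-node closed forms (4.9)-(4.11) and the BS-side subgradient step (4.12) can be sketched in a few lines. This is an illustrative sketch, not the thesis implementation: the function names, the fixed step size, and the vectorized tie-breaking (NumPy's argmax keeps the first maximizer, consistent with breaking ties arbitrarily) are our own assumptions.

```python
import numpy as np

def local_allocation(w, e, gamma, mu, delta, p_t):
    """One node's solve of (4.9)-(4.11): closed-form power levels and
    per-subchannel winner selection, given the dual prices mu, delta.
    w: (K,) link weights; e: (K, N) gain-to-noise ratios; gamma: (K,) SNR gaps."""
    # Water-filling-style level p_tilde from (4.9); note Gamma_k/e = -ln(5 BER_k)/(1.5 e)
    p_tilde = np.minimum(p_t, np.maximum(w[:, None] / (mu * np.log(2))
                                         - gamma[:, None] / e, 0.0))
    # Per-link, per-subchannel value V from (4.10)
    v = w[:, None] * np.log2(1.0 + p_tilde * e / gamma[:, None]) \
        - (mu * p_tilde + delta[None, :])
    x = np.zeros_like(v)
    winner = v.argmax(axis=0)                      # best link on each subchannel
    keep = v[winner, np.arange(v.shape[1])] > 0.0  # assign only when (V*)^+ > 0
    x[winner[keep], np.where(keep)[0]] = 1.0
    return x, x * p_tilde

def bs_dual_update(mu, delta, p_used, x_used, p_t, step=1e-3):
    """BS-side subgradient iteration (4.12) on the totals reported by all nodes."""
    mu = max(mu - step * (p_t - p_used.sum()), 0.0)
    delta = np.maximum(delta - step * (1.0 - x_used.sum(axis=0)), 0.0)
    return mu, delta
```

In the distributed scheme, each serving node would run `local_allocation` on its own links and report the resulting sums, after which the BS applies `bs_dual_update` and broadcasts the new prices.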
In this algorithm, the overhead of the messages reported by the relays is of the order of $O(NM)$ multiplied by the number of iterations, which is considerably lower than that of a centralized algorithm in networks with a high number of users.

Based on the above discussions, we note that DDRA reduces the computational burden of the BS and at the same time takes into account the QoS requirements of the users. Therefore, it provides a novel framework for distributed resource allocation in buffer-aided relay-assisted OFDMA networks.

4.4 Numerical Results

To evaluate the system performance, we have considered a system with a cell radius of 1000 m, 3 relays and 20 subchannels. The relays are located at a distance of 2/3 of the cell radius from the BS, with uniform angular spacing. The antenna heights for the BS, relays and users are 15 m, 5 m and 1.5 m, respectively, and the path loss model is based on [61]. The noise power spectral density is assumed to be -174 dBm/Hz, and the BER requirement for the users is $10^{-6}$. The carrier frequency is 1900 MHz, and the subchannel bandwidth and time slot duration are 15 kHz and 1 ms, respectively. Data packet arrivals at the BS buffers follow a Poisson process, with packet sizes of 1 kbit and an average packet interarrival time of 30 ms.

For the links from the BS or relays to users, a Rayleigh channel model is used, while the links from the BS to relays are modeled as Rician channels with a $\kappa$ factor of 6 dB [62]. In the following, the results are presented in terms of system throughput as well as the average queue sizes in the system. As a baseline, we have used the Partial Proportional Fair (PPF) method proposed in [71], in which power is equally allocated over subchannels and the relays are prompt, i.e., they transmit in a time subslot immediately after the reception subslot.
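The traffic model above (Poisson packet arrivals, 1-kbit packets, 30 ms mean interarrival time, 1 ms slots) can be reproduced with a short sketch; the generator below and its parameter names are our own, given only for illustration.

```python
import numpy as np

SLOT_MS = 1.0              # time-slot duration: 1 ms
PACKET_BITS = 1_000        # packet size: 1 kbit
MEAN_INTERARRIVAL_MS = 30.0

def poisson_arrivals(num_slots, rng):
    """Bits arriving at one user's BS buffer per slot: the per-slot packet
    count is Poisson with rate SLOT_MS / MEAN_INTERARRIVAL_MS packets/slot."""
    lam = SLOT_MS / MEAN_INTERARRIVAL_MS
    return rng.poisson(lam, size=num_slots) * PACKET_BITS

bits = poisson_arrivals(60_000, np.random.default_rng(0))
# The sample mean should be close to 1000/30 ~ 33.3 bits/slot.
```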
We have adjusted PPF for our scenario by considering the availability of data in the users' queues at the BS; we call it Queue-Aware PPF (QAPPF), as it computes the achievable rates of the users based on their queue sizes and channel conditions.

Fig. 4.3 displays the CDF of the system average throughput in each time slot, for the case of 10 users in the system. It is observed that DDRA provides higher throughput with higher probability compared with QAPPF. This is due to the fact that the use of buffers at the relays brings more flexibility to resource allocation. Therefore, DDRA is able to exploit time diversity and allocate the system power and subchannels efficiently. The jumps in the graph of DDRA arise because it is able to empty the buffers in some time slots. Then, when a new packet arrives at a buffer, it is transmitted completely; therefore, many transmissions occur in units of one packet size, which are reflected as the jumps in the CDF graph. We also observe that this happens more often in the case of higher system power, as it is possible to transmit at higher rates and the probability of empty buffers is higher.

Figure 4.3: CDF of system average throughput in each time slot, K=10

To have a clearer picture, Fig. 4.4 shows the average queue size over time in this scenario. While the queue sizes grow unbounded with QAPPF, DDRA is able to keep the system queues stable. This is due to the fact that, according to (4.3), DDRA gives higher weights to the users with larger queue sizes, and using (4.9) and (4.11), it allocates resources adaptively based on the queue sizes and channel conditions. As a result of queue stability, the average data departure rates of the queues are equal to their average arrival rates and therefore, as displayed in Fig.
4.3, DDRA is able to provide higher throughput.

Figure 4.4: System average queue size over time, K=10

Next, we investigate the effect of an increase in the number of users on the system throughput. Fig. 4.5 shows the average system throughput in each time slot. It is observed that as the number of users increases, DDRA leads to higher throughput. This is due to the fact that the total average data arrival rate in the system increases, and DDRA is able to stabilize the system and deliver the arrived data to its destination. However, when the system power is lower, the system capacity is smaller; therefore, the capacity starts to saturate when the number of users exceeds 16. In this case, the system cannot support the data arrival rates at the BS buffers, which leads to lower throughput.

Figure 4.5: Effect of increase in the number of users on the system average throughput

4.5 Summary

In this chapter, we have provided a novel framework for distributed resource allocation in a relay-assisted OFDMA network, under the assumption that the relays are able to buffer data and transmit it at a later time. By defining the links between the BS and relays as virtual users, we have presented a new perspective and shown the similarity of the system to a multicell network.
We have formulated the resource allocation problem as a convex optimization problem and, using dual decomposition, we have proposed the iterative DDRA algorithm, in which each of the BS and relays solves its own problem based on some global variables and on the information about the queue and channel states of its links. The closed-form equations derived for power and subchannel allocation reveal the adaptivity of our resource allocation algorithm to the queue sizes and channel conditions of the users. Numerical results confirm that DDRA leads to a significant improvement in the system performance in terms of average throughput and queue stability.

Chapter 5
Utility-Based Efficient Dynamic Distributed Resource Allocation

5.1 Introduction

In the previous chapter, we proposed a framework for distributed resource allocation in buffer-aided relay-assisted OFDMA networks, which provided insights for reducing the computational burden on the BS and the overhead of CSI signaling in the system. In this chapter, we study distributed resource allocation for BE users while taking into account more constraints that usually arise in practical systems.

In most of the existing studies on OFDMA relay networks, resource allocation for BE users is formulated under the assumption that the buffers in the BS are infinitely backlogged, i.e., there are always data in the BS buffers. However, in practice, the data arrivals at the BS buffers are random and, depending on the system objective and constraints, a data admission policy is needed to control the traffic flow into the network. In this regard, network operators take into account average performance metrics, such as average data admission and throughput, as well as average power constraints, in formulating the resource allocation problem.
On the other hand, system hardware and technology standard specifications usually impose other constraints that need to be satisfied instantaneously in every time slot to ensure proper operation of the system.

To incorporate the above considerations into the design of resource allocation algorithms, stochastic network optimization is required. One of the useful and effective tools for addressing this class of problems is the well-known Lyapunov drift-plus-penalty policy, which provides an efficient framework for transforming the stochastic problem into an instantaneous one to be solved in each time slot [51, 55]7. However, using this policy without considering the constraints and challenges that arise in relay-assisted cellular networks can lead to unsatisfactory results. Specifically, in the drift-plus-penalty policy, the averages of the variables are defined over an infinite time horizon, which requires some care in achieving the desired objectives and satisfying the constraints in practical systems. Also, the lower power of the relays and the two-hop connection of some users necessitate careful attention to providing fair data admission for all the users. Moreover, it is of great interest to design low-complexity and efficient algorithms for deciding on the allocation of the system resources without incurring significant degradation in the system performance. To the best of our knowledge, none of the existing works on OFDMA relay networks has studied resource allocation with all of the above constraints together. This chapter aims at addressing these issues and filling the gaps.

In summary, we study low-complexity utility-based resource allocation for BE services in buffer-aided relay-assisted OFDMA networks, based on the stochastic optimization framework presented in [55].
We consider the network utility as a function of the average data admission of the users and aim at maximizing it subject to long-term and instantaneous constraints. In addition to considering several practical constraints together, the contributions of our work fall into two categories: one is the identification of important parameters that should be taken into account in the instantaneous problem formulation; the other is the design of low-complexity algorithms for solving the resource allocation subproblem. Specifically, the main contributions are as follows:

7 In this chapter, when we refer to [55], we mean Chapter 5 of that reference, unless otherwise specified.

• We identify the factors that need to be taken into account when adapting the Lyapunov drift-plus-penalty policy to relay-based cellular networks. In particular, we propose to consider an importance parameter for the average power constraint, so as to satisfy that constraint within a reasonable time period in practical scenarios. Also, we propose to add an extra weight for the BS-to-relay and relay-to-user links in the cases where fairness is also an objective in the utility-based data admission control.

• We aim at low-complexity algorithms for time slot, subchannel and power allocation, and highlight the challenges that arise even in such algorithms due to the lack of a priori knowledge about the subchannel sets and total power usage of the BS and relays in each time slot. We then propose a low-complexity strategy for breaking the ties and making the interdependence tractable, which can be used in both centralized and distributed resource allocation implementations.

• We focus on a distributed mechanism for resource allocation, and propose efficient and low-complexity algorithms for deciding on the type of time slot, the subchannel sets of the nodes, and the subchannel and power allocations to the links of the nodes.
Based on that, we also present a low-complexity centralized mechanism, which needs more signaling overhead and can be used as a benchmark.

• We take into account practical constraints such as average power and peak power, as well as finite data availability and limited buffer space.

• Using extensive simulations, we demonstrate the effectiveness of the introduced parameters and verify the performance of the proposed algorithms. We observe that the distributed algorithm performs very close to the centralized one and outperforms an existing centralized scheme proposed in [48].

The rest of this chapter is organized as follows. Section 5.2 describes the system model and the stochastic problem formulation. In Section 5.3, we state the subproblems and challenges as well as the proposed parameters and algorithms. Numerical results are provided in Section 5.4, and a conclusion is finally presented in Section 5.5.

5.2 Preliminaries

In this section, we present the system model and the stochastic problem formulation. Then, we present the transformed version of the problem and introduce the virtual queues, which make it possible to exploit the Lyapunov drift-plus-penalty policy in the next section. Hereafter, for brevity, we will use the term "drift-plus-penalty" instead of "Lyapunov drift-plus-penalty".

5.2.1 System Model

We consider the downlink of a single-cell relay-assisted OFDMA network, as shown in Fig. 5.1. It is assumed that the users have BE services and therefore do not have specific service requirements. In this regard, the system resources remaining after serving the users with specific QoS requirements are used to serve the BE users. We assume that each user is connected to either the BS or one of the relays, meaning that it receives service from only one of them.
This is decided at the beginning of the users' connection to the network, through handshaking procedures between the BS, relays and users regarding the signal strengths that the users can receive from the BS and relays. Users, relays and available subchannels are indexed by $k \in \mathcal{K} = \{1, \ldots, K\}$, $m \in \mathcal{M} = \{1, \ldots, M\}$ and $n \in \mathcal{N} = \{1, \ldots, N\}$, respectively. We use the term "serving node" or simply "node" to refer to either the BS or a relay, and we denote the set of all nodes by $\mathcal{B} = \{0, 1, \ldots, M\}$, where $m = 0$ indicates the BS. Also, we use $\mathcal{K}_m$ to denote the set of users that have a direct link to node $m \in \mathcal{B}$. On the other hand, $m(k)$ is used to refer to the node directly connected to user $k$.

Figure 5.1: System model (the BS, $m = 0$, holds $|\mathcal{Q}_0| = K$ buffers for the users in $\mathcal{K}_0$ and beyond; each relay, e.g. $m = 1$, holds $|\mathcal{Q}_1| = |\mathcal{K}_1|$ buffers for the users in $\mathcal{K}_1$)

We assume that time is divided into units of slots, where a time slot can be of type A or type B. In type A slots, the BS transmits to the users directly connected to it, or to the relays; in type B slots, the BS and relays can transmit only to the users connected to them, and therefore there is no transmission from the BS to the relays. This transmission format is based on LTE-A with type 1 relays, where BS-to-relay transmissions and relay-to-user transmissions use the same bandwidth but different time slots, in order to prevent interference between the transmit and receive antennas.

We assume that the MAC layers of the BS and relays are equipped with buffers, where the BS has one for each user, while every relay has one for each of the users connected to it. We denote the set of users that have a buffer in node $m \in \mathcal{B}$ by $\mathcal{Q}_m$; therefore, we have $\mathcal{Q}_0 = \mathcal{K}$ and $\mathcal{Q}_m = \mathcal{K}_m, \ \forall m \in \mathcal{M}$. These notations are defined to make the formulations and algorithms more compact. The data admitted into a BS buffer are queued until transmission to the corresponding direct user, or to the corresponding buffer in the relay connected to the user.
Similarly, data arriving in the relays' buffers are queued until transmission to their users. Note that in the following, when we use the term "the link of user $k$ from node $m$", we mean "the link that serves the queue of user $k$ in node $m$", which might be a direct link between the BS and a user, a feeder link between the BS and a relay, or an access link between a relay and a user connected to it. We use $e_{k,n}^m(t)$ to denote the channel gain-to-noise ratio at the receiver side on subchannel $n$ for the link of user $k$ from node $m$. It is assumed that the channel conditions of the links vary over time and frequency, but remain constant during one time slot and over one subchannel. For simplicity, we will drop the $(t)$ argument from the variables and consider a general transmission incident. We assume that the BS and relays use M-ary QAM modulation for their transmissions, and therefore the achievable transmission rate on the link of user $k$ from node $m$ on subchannel $n$ can be computed as follows [63]:

\[
r_{k,n}^m = B \log_2\!\left(1 + \frac{p_n^m e_{k,n}^m}{\Gamma_k}\right), \qquad (5.1)
\]

where $B$ is the bandwidth of a subchannel. $\Gamma_k$ is the SNR gap due to the limited number of coding and modulation schemes and is related to the bit error rate of user $k$ ($\mathrm{BER}_k$) through $\Gamma_k = -\frac{\ln(5\,\mathrm{BER}_k)}{1.5}$ [63]. $p_n^m$ denotes the power allocated by node $m$ on subchannel $n$. We denote the total power used by node $m$ as $P_m = \sum_{n=1}^{N} p_n^m$. Using (5.1), the total transmission rate on the link of user $k$ from node $m$ can be written as $r_k^m = \sum_{n=1}^{N} x_{k,n}^m r_{k,n}^m$, where $x_{k,n}^m \in \{0, 1\}$ denotes the subchannel allocation indicator, which is one if subchannel $n$ is used for transmission on the link of user $k$ from node $m$, and zero otherwise.
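Equation (5.1) and the SNR-gap relation translate directly into code; the sketch below is our own, with hypothetical function names.

```python
import math

def snr_gap(ber):
    """SNR gap of M-QAM for a target bit error rate: Gamma = -ln(5*BER)/1.5."""
    return -math.log(5.0 * ber) / 1.5

def link_rate(bandwidth_hz, power, gain_to_noise, ber):
    """Achievable rate on one subchannel, following (5.1)."""
    return bandwidth_hz * math.log2(1.0 + power * gain_to_noise / snr_gap(ber))
```

For the BER of $10^{-6}$ used in the simulations, `snr_gap` gives $\Gamma \approx 8.14$, i.e., roughly 9 dB of backoff relative to capacity.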
Note that for any $n$, in a type A time slot, $x_{k,n}^m$ must be set to zero for $m \in \mathcal{M}, k \in \mathcal{K}_m$, and in a type B time slot, $x_{k,n}^0$ must be set to zero for $k \in \mathcal{K} - \mathcal{K}_0$.

In each time slot, a resource allocation policy determines the type of time slot, and the subchannel and power allocations for the different links of the system. Based on that, the BS and relays transmit data from their queues and, at the end, the queue sizes are updated as follows:

\[
Q_k^m(t+1) = \min\!\big[L_k^m,\ \max[Q_k^m(t) - T r_k^m(t),\ 0] + a_k^m(t)\big], \quad \forall m \in \mathcal{B},\ k \in \mathcal{Q}_m \qquad (5.2)
\]

where $T$ is the time slot duration, and $L_k^m$ and $Q_k^m(t)$ respectively denote the capacity of the buffer of user $k$ in node $m$ and the size of the data queued in it in time slot $t$. The data arrival processes into the relay buffers are in fact the departure processes from the BS and therefore, we have $a_k^m(t) = \min[Q_k^0(t),\ T r_k^0(t)], \ \forall m \in \mathcal{M}, k \in \mathcal{K}_m$. On the other hand, the arrival processes at the BS buffers are managed through a data admission control policy in the MAC layer, which decides to admit or reject data from the queues of the top-layer buffers in the BS. The sizes of the top-layer queues are updated as follows:

\[
Y_k(t+1) = \min\!\big[J_k,\ \max[Y_k(t) - a_k^0(t),\ 0] + A_k(t)\big], \quad \forall k \in \mathcal{K} \qquad (5.3)
\]

where $J_k$ and $Y_k(t)$ respectively denote the capacity of the top-layer buffer of user $k$ in the BS and the size of the data queued in it in time slot $t$. $A_k(t)$ is the amount of data arriving in time slot $t$ for user $k$, according to an exogenous stochastic process. We assume that, due to processing limitations, an upper bound $\hat{a}$ is imposed on the amount of data admitted into the MAC layer buffers, so that $a_k^0(t) \leq \hat{a}, \ \forall k, t$.

5.2.2 Stochastic Problem Formulation

We note that in a realistic scenario, the BS buffers are not infinitely backlogged and are fed by stochastic data arrivals.
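Before moving to the stochastic formulation, the buffer recursions (5.2) and (5.3) just introduced amount to a serve-then-arrive update truncated at the buffer capacity; a minimal sketch, with our own helper names:

```python
def update_mac_queue(q, cap, served_bits, arrival_bits):
    """MAC-layer buffer recursion (5.2): serve up to the queued data,
    then add the slot's arrivals, truncated at the buffer capacity."""
    return min(cap, max(q - served_bits, 0) + arrival_bits)

def update_top_queue(y, cap, admitted_bits, exogenous_bits):
    """Top-layer BS buffer recursion (5.3): same shape, with admissions
    as the service process and exogenous traffic as the arrival process."""
    return min(cap, max(y - admitted_bits, 0) + exogenous_bits)
```

Here `served_bits` stands for $T r_k^m(t)$; the `min` with the capacity makes any excess arrivals overflow, which is exactly what constraints C3 and C6 later try to prevent.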
This makes it necessary to take the queue dynamics into account in the design of resource allocation algorithms, in addition to the randomness caused by the wireless channels. Therefore, the average performance metrics become important for network operators and, in particular, the average throughput and average power constraints need to be managed. In the following, we explain these in more detail.

Following [55, Chapter 4], we define the time-average expectation of a stochastic variable $v(t)$ as:

\[
\bar{v} = \lim_{\tau \to \infty} \frac{1}{\tau} \sum_{t=0}^{\tau-1} \mathbb{E}[v(t)] \qquad (5.4)
\]

Considering the above, we aim at controlling the data traffic and resource allocation by addressing the following stochastic optimization problem:

\[
\max_{a^0, x, p}\ \sum_{k=1}^{K} U(\bar{a}_k^0), \qquad (5.5a)
\]
s.t.
C1: $\bar{P}_m \leq P_m^{av}, \ \forall m \in \mathcal{B}$, (5.5b)
C2: $r_k^m(t) T \leq Q_k^m(t), \ \forall m \in \mathcal{B}, k \in \mathcal{Q}_m$, (5.5c)
C3: $r_k^0(t) T \leq L_k^m - Q_k^m(t), \ \forall m \in \mathcal{M}, k \in \mathcal{K}_m$, (5.5d)
C4: $r_{k,n}^m(t) \leq B\hat{s}, \ \forall m \in \mathcal{B}, k \in \mathcal{Q}_m, \forall n$, (5.5e)
C5: $a_k^0(t) \leq Y_k(t), \ \forall k \in \mathcal{K}$, (5.5f)
C6: $a_k^0(t) \leq \min[\hat{a},\ L_k^0 - Q_k^0(t)], \ \forall k \in \mathcal{K}$, (5.5g)
C7: $P_m(t) \leq \hat{P}_m, \ \forall m \in \mathcal{B}$, (5.5h)
C8: $\sum_{m \in \mathcal{B}} \sum_{k \in \mathcal{Q}_m} x_{k,n}^m(t) \leq 1, \ \forall n \in \mathcal{N}$, (5.5i)
C9: $\{x_{k,n}^m(t)\}$ comply with the transmission rules of either a type A or a type B slot (5.5j)

where $U$ is the utility function and C1 limits the average power consumption of each node. C2 states that only a finite amount of data can be transmitted from each queue, and C3 prevents transmitting more data to the relay buffers than they can accommodate. C4 indicates the limit on the availability of modulation schemes, where $\hat{s}$ is the spectral efficiency of the highest-order modulation in the system; considering it helps in controlling the power allocation and preventing overflows from the relays' buffers (this will be explained in detail in Section 5.3).
C5 and C6 respectively express the limit on the availability of data in the top-layer buffers of the BS and the limit on the data admissions. C7 indicates the maximum instantaneous power $\hat{P}_m$ that node $m$ can use for transmissions, C8 states that each subchannel can be allocated to only one link, and C9 enforces feasible values for the subchannel allocation variables $\{x_{k,n}^m\}$.

The utility function in (5.5a) makes it possible to control the data admission of the BE users based on the objective of the network operator. For example, for maximizing the total throughput, $U(z) = z$ can be used, while for providing proportional fairness, $U(z) = \log(z)$ can be considered. We assume that $U(z)$ is a concave and continuous function of $z$.

We note that problem (5.5) has two types of constraints. While C1 needs to be satisfied over the long term, C2-C9 state constraints that must be met in each time slot. In particular, $P_m^{av}$ is different from $\hat{P}_m$: the former can be set to limit the power consumption costs or circuit heating, whereas the latter is imposed by the system hardware (such as the power amplifiers' linear operation range or the maximum available instantaneous power) and is larger than $P_m^{av}$.

5.2.3 Transformed Problem and Virtual Queues

We note that the objective in problem (5.5) is a function of the time averages of the users' data admission rates. Similar to [55], let us define auxiliary variables $0 \leq \gamma_k(t) \leq \hat{a}, \ k = 1, \ldots, K$, corresponding to each $a_k^0(t), \ k = 1, \ldots, K$. Then, problem (5.5) can be transformed into the following problem, in which the objective is the time average of a function:

\[
\max_{a^0, x, p, \gamma}\ \sum_{k=1}^{K} \overline{U(\gamma_k)}, \qquad (5.6a)
\]
s.t.
C1-C9, (5.6b)
C10: $\bar{\gamma}_k \leq \bar{a}_k^0, \ \forall k \in \mathcal{K}$, (5.6c)
C11: $0 \leq \gamma_k \leq \hat{a}$ (5.6d)

We also define the virtual power queues $Z_m(t)$ and the virtual auxiliary queues $G_k(t)$, corresponding to the constraints C1 and C10 respectively, with the following update equations:

\[
Z_m(t+1) = \max[Z_m(t) + P_m(t) - P_m^{av},\ 0], \quad \forall m \in \mathcal{B} \qquad (5.7a)
\]
\[
G_k(t+1) = \max[G_k(t) + \gamma_k(t) - a_k^0(t),\ 0], \quad \forall k \in \mathcal{K} \qquad (5.7b)
\]

$Z_m(t)$ is in fact like a queue with the stochastic arrival process $P_m(t)$ and the deterministic constant service process $P_m^{av}$. Similarly, $G_k(t)$ acts like a queue with the stochastic arrival process $\gamma_k(t)$ and the stochastic service process $a_k^0(t)$. Note that over time, these virtual queues track the difference between the instantaneous values of the variables on the two sides of the constraints C1 and C10. The idea behind using these queues is the fact that satisfying the constraints C1 and C10 is equivalent to stabilizing these virtual queues8, which can be performed by using the drift-plus-penalty policy (assuming, of course, that the original problem is feasible) [55]. Based on the above, we are now able to study the instantaneous problem and the algorithms for solving it, which are presented in the next section.

8 Recall that when a queue is stable, its average service rate is larger than its average arrival rate.

5.3 Cross Layer Traffic Control and Resource Allocation

In this section, we first describe the instantaneous problem to be addressed in each time slot and propose some parameters that need to be included in it to make it suitable for relay-assisted cellular networks. Then, we present the data admission subproblem and discuss the factors influencing it. After that, we highlight the issues in solving the resource allocation subproblem and propose a low-complexity strategy to address them.
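Returning briefly to the virtual queues, the recursions (5.7a)-(5.7b) are one-line updates; sketched below with our own helper names:

```python
def update_virtual_power_queue(z_m, p_used, p_avg):
    """Virtual power queue (5.7a): grows when the slot's transmit power
    exceeds the average budget P_av, drains otherwise."""
    return max(z_m + p_used - p_avg, 0.0)

def update_auxiliary_queue(g_k, gamma_k, admitted):
    """Virtual auxiliary queue (5.7b), tracking the running violation of C10."""
    return max(g_k + gamma_k - admitted, 0.0)
```

Keeping both backlogs bounded over time is what enforces C1 and C10 in the time-average sense.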
Then, we present a low-complexity distributed algorithm which uses the proposed strategy through several steps for deciding on the allocation of time slots, power and subchannels. Finally, we present a low-complexity centralized algorithm based on the distributed one and describe the modifications needed.

5.3.1 Instantaneous Problem

To address problem (5.6), we define the "instantaneous" problem in time slot $t$, based on the drift-plus-penalty policy [55], as follows:

\[
\max_{a^0, x, p, \gamma}\ V \sum_{k=1}^{K} U(\gamma_k(t)) + \sum_{k=1}^{K} G_k(t)\big[a_k^0(t) - \gamma_k(t)\big] + I \sum_{m=0}^{M} Z_m(t)\big[P_m^{av} - P_m(t)\big] + \sum_{m=0}^{M} \sum_{k \in \mathcal{Q}_m} \big(Q_k^m(t) + \rho_k^m W_e\big)\big[r_k^m(t) T - a_k^m(t)\big], \qquad (5.8a)
\]
s.t. C2-C9, C11 (5.8b)

where $V$ is the value given to the objective (5.6a), through which we can trade off a higher objective against larger queue sizes [55]. $I$, $\rho_k^m$ and $W_e$ are the parameters we propose in order to adapt the drift-plus-penalty policy to relay-assisted cellular networks. $I$ is the importance factor given to the average power constraint, through which we can prevent the continuous growth of the virtual power queues and consequently meet the average power constraints in a shorter time. $W_e$ is an extra positive weight that can be given to the feeder links from the BS to relays and the access links from relays to users, in the cases where fair admission of the users' data is of concern. $\rho_k^m \in \{0, 1\}$ is the indicator specifying the conditions and the queues that can exploit $W_e$; in particular, it is zero unless fair data admission is desired and the corresponding queue (either in the BS or in a relay) belongs to an indirect user, i.e., $k \in \mathcal{K} - \mathcal{K}_0, \ m \in \mathcal{B}$.
The proposition of $I$, $\rho_k^m$ and $W_e$ is one of the main contributions of this chapter and will be discussed later.

It is observed from (5.8a) that the instantaneous objective in each time slot includes four terms: the first term corresponds to the long-term objective (5.6a), and the rest correspond to serving the actual queues and stabilizing the virtual queues (to meet the constraints C1 and C10). We note that, due to the limited buffer capacities, the actual queues of the system are always stable. Nevertheless, the drift-plus-penalty policy provides a useful framework for channel- and queue-aware resource allocation, which takes into account both the channel states and the data availability in the system's actual queues. It also makes it possible to stabilize the virtual power queues.

Note that considering $V$, $I$, $\rho_k^m$ and $W_e$ in (5.8a) facilitates reaching our goals for the system utility and constraints in different scenarios; neglecting them is in fact like setting them according to one fixed scenario (i.e., $V = I = 1$ and $W_e = 0$), which would reduce the usefulness of the drift-plus-penalty policy. Note that $V$ and $I$ can be tuned easily by considering the range of values of the weighted rates in (5.8a), which is affected by the packet sizes and transmission rates. On the other hand, $\rho_k^m$ can easily be set as stated above when fairness is an objective; $W_e$ can then be tuned by increasing its value from zero towards values in the range of the queue sizes in the relays, depending on how much we trade off between data admission for the direct and the indirect users.
These effects will become clearer in the following, when we discuss them in detail.

Similar to [55], by rearranging the terms in the instantaneous problem, it can be divided into three instantaneous subproblems (SPs), which are presented in the next subsections.

5.3.2 Traffic Control and Data Admission

The first subproblem of (5.8) is related to the auxiliary variables:

SP1 (Auxiliary Variables Subproblem):
\[
\max_{\gamma}\ \sum_{k=1}^{K} \big(V U(\gamma_k(t)) - G_k(t)\gamma_k(t)\big), \qquad (5.9a)
\]
\[
\text{s.t.}\ \ 0 \leq \gamma_k(t) \leq \hat{a}, \ \forall k \in \mathcal{K} \qquad (5.9b)
\]

SP1, together with (5.2), (5.3), (5.7b) and the following flow control subproblem, determines the admission of data into the users' buffers in the BS:

SP2 (Flow Control Subproblem):
\[
\max_{a^0}\ \sum_{k=1}^{K} \big(G_k(t) - Q_k^0(t)\big) a_k^0(t), \qquad (5.10a)
\]
\[
\text{s.t.}\ \ 0 \leq a_k^0(t) \leq \hat{a}, \ \forall k \in \mathcal{K}, \qquad (5.10b)
\]
\[
a_k^0(t) \leq \min[Y_k(t),\ L_k^0 - Q_k^0(t)], \ \forall k \in \mathcal{K} \qquad (5.10c)
\]

It is observed that SP1 and SP2 are simple convex problems; all of their variables are accessible to the BS and therefore the BS can solve them easily. As an example, for proportional fairness, i.e., when $U(\gamma_k) = \log(\gamma_k)$, after taking the derivatives, the BS can determine the optimal auxiliary variables in time slot $t$ through $\frac{1}{\gamma_k(t)} = \frac{G_k(t)}{V} \Rightarrow \gamma_k(t) = \min\!\big(\hat{a}, \frac{V}{G_k(t)}\big), \ \forall k$.

On the other hand, by solving the flow control subproblem, the BS can determine the optimal data admissions in time slot $t$ as:

\[
a_k^0(t) =
\begin{cases}
\min\!\big[Y_k(t),\ \min[\hat{a},\ L_k^0 - Q_k^0(t)]\big], & Q_k^0(t) \leq G_k(t),\\
0, & \text{otherwise}
\end{cases} \qquad (5.11)
\]

Note that, based on the above, whenever the size of an actual queue in the BS, $Q_k^0(t)$, is larger than the virtual queue $G_k(t)$, no data is admitted into the corresponding BS buffer. In buffer-aided relay-assisted cellular networks, this can persist for several time slots, due to the admission of a large packet and the low service rate of the queue of an indirect user in the BS (which is caused by the differential backlog terms described in the next subsection).
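The closed forms above, the log-utility $\gamma_k$ and the bang-bang admission rule (5.11), can be sketched as follows; the helper names and the handling of the $G_k = 0$ corner (where $V/G_k$ is unbounded and $\gamma_k$ is simply capped at $\hat{a}$) are our own assumptions.

```python
def optimal_gamma(g_k, v, a_hat):
    """SP1 closed form for U = log: gamma_k = min(a_hat, V / G_k).
    When G_k = 0 the ratio is unbounded, so gamma_k is capped at a_hat."""
    return a_hat if g_k <= 0 else min(a_hat, v / g_k)

def optimal_admission(y_k, q0_k, g_k, l0_k, a_hat):
    """Admission rule (5.11): admit as much as the top-layer data, the
    admission bound and the free MAC buffer space allow, but only while
    the actual BS queue does not exceed the virtual queue G_k."""
    if q0_k <= g_k:
        return min(y_k, min(a_hat, l0_k - q0_k))
    return 0
```

The all-or-nothing structure of `optimal_admission` is what makes the starvation effect described above possible: one oversized $Q_k^0$ can shut off admissions for that user until the queue drains.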
Consequently, even using a utility function with the fairness property and a large value of the parameter $V$ does not necessarily lead to fair data admission, and therefore more care is needed in resource allocation for serving the queues. This is the motivation for proposing $\rho_k^m$ and the extra weight $W_e$ (explained in detail later, in Remark 1), which help to improve fair data admission for the indirect users.

5.3.3 Resource Allocation Challenges

By substituting (5.1) into (5.8a) and removing the constant terms, the last and most important subproblem, which decides the time slot, subchannel and power allocations, can be stated as:

SP3 (Resource Allocation Subproblem):
\[
\max_{p, x}\ \sum_{n=1}^{N} \sum_{m=0}^{M} \left( \sum_{k \in \mathcal{Q}_m} B T w_k^m(t) x_{k,n}^m(t) \log_2\!\left(1 + \frac{p_n^m(t) e_{k,n}^m(t)}{\Gamma_k}\right) - I Z_m(t) p_n^m(t) \right), \qquad (5.12a)
\]
\[
\text{s.t.}\ \ \text{C2-C4, C7-C9} \qquad (5.12b)
\]

where $w_k^0(t) = Q_k^0(t), \ k \in \mathcal{K}_0$ (weights for the direct links from the BS to its users; recall that $\rho_k^0$ is always equal to 0 for these links), $w_k^m(t) = Q_k^m(t) + \rho_k^m W_e, \ m \in \mathcal{M}, k \in \mathcal{K}_m$ (weights for the access links from relays to their users), and $w_k^0(t) = Q_k^0(t) - Q_k^{m(k)}(t) + \rho_k^0 W_e, \ \forall k \in \mathcal{Q}_0 - \mathcal{K}_0$ (weights for the feeder links of indirect users from the BS to relays). The differential backlog term $Q_k^0(t) - Q_k^{m(k)}(t)$ in the weight of a feeder link results from switching the sums and from the fact that, for the buffers of the relays, the arrivals are upper bounded by the transmission rates from the BS to the relays, i.e., $a_k^m(t) \leq r_k^0(t), \ \forall m \in \mathcal{M}, k \in \mathcal{Q}_m$9. As explained in the previous subsection, and later in Remark 1, these differential backlog terms lead to unfair data admission for the indirect users, and their effect can be reduced by using the $\rho_k^0 W_e$ terms.

We note that SP3 is a mixed-integer nonlinear program and requires an exhaustive search to find its optimal solution.
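Before discussing solution approaches, the three weight cases entering SP3 can be made concrete; the helper names are our own, and a direct BS-to-user link is simply weighted by its BS queue $Q_k^0$.

```python
def feeder_weight(q_bs, q_relay, rho, w_e):
    """Weight of a BS-to-relay feeder link for an indirect user:
    differential backlog Q0_k - Qm_k plus the optional fairness boost rho*W_e."""
    return q_bs - q_relay + rho * w_e

def access_weight(q_relay, rho, w_e):
    """Weight of a relay-to-user access link: Qm_k + rho*W_e."""
    return q_relay + rho * w_e
```

Note how a large relay backlog can drive a feeder weight toward zero even when the BS queue is long, which is exactly the starvation mechanism that the $\rho_k^0 W_e$ boost is meant to counteract.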
One common approach for this type of problem is to relax the subchannel allocation variables x^m_{kn} whenever the relaxation converts the problem into a convex one. Then, using dual decomposition, the optimal solution can be found if the duality gap is zero. This approach was used in the previous chapter, based on which we proposed a dynamic distributed resource allocation. However, in this chapter, due to the finite data and limited buffer capacity constraints, i.e., C2 and C3, the problem obtained after relaxing the x^m_{kn} variables is non-convex. In addition, we note that in the previous chapter there was only one power constraint for the whole system, which made a high convergence speed possible for the proposed algorithm. In a more realistic system like the one considered in the current chapter, even if we remove the constraints C2 and C3 and relax the x^m_{kn} variables to make the problem convex, a dual-based iterative algorithm would need many iterations and a long time to converge, due to the separate power constraints of the nodes. This is not suitable for practical scenarios, where each time slot is on the order of a millisecond and the resource allocation decision needs to be made in a small fraction of that time.

⁹ Note that the objective function in the MW policy, utilized in the previous chapters, is in fact the special case of that in SP3 in which the virtual power queues are not considered and W_e = 0.

For these reasons, we aim at low-complexity suboptimal algorithms which can easily be implemented in practical systems.
For this purpose, we consider equal power allocation on subchannels and allocate them in a greedy way, based on the queue sizes and the achievable transmission rates of the links.

However, even under equal power distribution on subchannels, computing the achievable transmission rates of the links is not trivial here, due to the following two issues:

a) Unknown number of subchannels for each node. To decide about the allocation of subchannels, we need to know the achievable transmission rates of the links on the subchannels, and for that we need to know the power allocations on the subchannels. However, before subchannel allocation, it is not clear how many subchannels will be allocated to the BS and relays, and consequently it is not known over how many subchannels their total powers will be distributed equally.

b) Unknown total power to be used by each node. The total power used by each of the BS and relays needs to satisfy the average and peak power constraints. This is controlled in SP3 through the objective function, which is a sum of increasing and decreasing functions of power, and through constraint C7. Based on that, the total power used by each node can vary in each time slot between zero and its peak power, depending on the subchannel allocations and the sizes of the corresponding virtual queues. Therefore, even if we make an assumption on the number of subchannels to be used by each node, it is not clear how much total power will be distributed equally over them.

To address these issues, we propose a low-complexity suboptimal strategy for breaking the interdependence of power allocations and subchannel assignments, as shown in the Subchannel and Power Allocation Strategy (SPAS).

Algorithm 5.1 Subchannel and Power Allocation Strategy (SPAS)
• Assume the number of subchannels to be assigned to node m, Ñ_m, is proportional to the number of its queues.
• Assume the power each node will use on the subchannels is based on its peak power, and equal to P̂_m/Ñ_m.
• Compute the achievable transmission rates of the links based on the channel conditions and the assumed powers.
• Determine the type of time slot and allocate the subchannels, based on the actual queue sizes and the achievable transmission rates.
• Considering the sizes of the actual and virtual queues, adjust the total power each node can use and distribute it equally over the subchannels assigned to it in the previous steps.

Note that with SPAS, the total transmission power of each node is initially assumed equal to its peak value. Based on that, subchannels are allocated, and at the end the total power is adjusted. The reason for this is clarified later in Remark 2.

SPAS can be utilized in a centralized or a distributed way, with some modifications. In the following we present the distributed implementation, as it is of more interest to research and industrial bodies; later, based on that, we describe the centralized resource allocation, which can be used as a benchmark for the proposed distributed one.
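The first two SPAS assumptions can be sketched in a few lines of Python; this is our own illustration (the function name and the rounding of Ñ_m to an integer are our assumptions, since (5.14) leaves N|K_m|/K fractional):

```python
def spas_assumptions(N, queue_counts, peak_powers):
    # Total number of user queues that can be served in a type B slot: K = sum_m |K_m|.
    K = sum(queue_counts.values())
    # (5.14): node m is assumed to get N*|K_m|/K subchannels; we round and
    # floor at 1 (our assumption) so the per-subchannel power is well defined.
    n_tilde = {m: max(1, round(N * c / K)) for m, c in queue_counts.items()}
    # Peak power spread equally over the assumed subchannels: P_hat_m / N_tilde_m.
    p_per_sub = {m: peak_powers[m] / n_tilde[m] for m in queue_counts}
    return n_tilde, p_per_sub
```

These assumed per-subchannel powers feed the rate estimates used by the STD and SSD steps below; the actual totals are corrected only in the final power adjustment step.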
5.3.4 Efficient Dynamic Distributed Resource Allocation

In this subsection, we propose the EDDRA method, which performs resource allocation in each time slot through four steps. In the first step, every node reports estimates of its subchannel demands to the BS, and based on them the BS decides about the type of time slot. In the second step, the BS determines and reports the subchannel sets that each of the BS and relays can use. Then, in the third and fourth steps, in a distributed way, each node first assigns the subchannels to its users and then adjusts the total power it can distribute over its subchannels.

Step 1) Slot Type Determination (STD). At the end of each time slot, the BS first needs to specify the type of the next slot. For this, the relays report an estimate of their average demand for each subchannel to the BS. These demands are computed based on assumptions about the number of subchannels they can get and the total power they can use. Then, the BS uses the reported demand estimates from the relays, as well as its own demands, to estimate the system's total demands for type A and type B slots, and based on that decides about the type of the next time slot. This is outlined in the STD algorithm and the details are described in the following.

Based on SP3, we define the estimated average demand of node m ∈ M on subchannel n as

    D^m_n = (1/|K_m|) ∑_{k∈K_m} w^m_k r̃^m_{kn},  ∀m ∈ M, ∀n ∈ N,      (5.13)

where r̃^m_{kn} = log₂( 1 + P̂_m e^m_{kn} / (Ñ_m Γ_k) ) is the estimated transmission rate of the link of user k from node m on subchannel n. It is computed in node m, m ∈ M, assuming that the number of subchannels it will get, Ñ_m, is proportional to the ratio of its number of queues (|K_m|) to the total number of queues that can be considered for service in a type B slot (∑_{m∈B} |K_m| = K); i.e.,

    Ñ_m = N |K_m| / K,  ∀m ∈ B.      (5.14)

Algorithm 5.2 Slot Type Determination (STD)
1: Each relay m ∈ M reports to the BS its estimated average demands on all subchannels, i.e., D^m_n, ∀n ∈ N.
2: BS estimates the total demand of each relay m as D_m = |K_m| ∑_{n=1}^{N} D^m_n, m ∈ M
3: BS estimates its own demand for a type A slot as D_{0a} = K ∑_{n=1}^{N} D^{0a}_n
4: BS estimates its own demand for a type B slot as D_{0b} = |K_0| ∑_{n=1}^{N} D^{0b}_n
5: BS estimates the total demand for a type A slot as D_A = D_{0a}, and for a type B slot as D_B = D_{0b} + ∑_{m=1}^{M} D_m
6: if D_A > D_B
7:   BS sets the type of slot to A
8: else
9:   BS sets the type of slot to B
10: end if

Since the BS knows the number of queues in each relay, it can easily estimate their total demands as in line 2 of the STD algorithm. For itself, the BS needs to compute separate demands for type A and type B slots. Noting that in type B slots it can only transmit to its direct users while sharing subchannels with the relays, its average demands are computed similarly to those of the relays, based on the weights and rates of the links of the direct users (assuming the transmission power on each subchannel equal to P̂_0/Ñ_0, with Ñ_0 = N|K_0|/K), i.e., D^{0b}_n = (1/|K_0|) ∑_{k∈K_0} w^0_k log₂( 1 + P̂_0 e^0_{kn} / (Ñ_0 Γ_k) ).

On the other hand, since in a type A slot only the BS can transmit and all the queues in the BS (including those of the indirect users) can be served using all the subchannels, its total demand is computed based on the weights and rates of all of its links, assuming P̂_0/N power on each subchannel, i.e., D^{0a}_n = (1/K) ∑_{k∈K} w^0_k log₂( 1 + P̂_0 e^0_{kn} / (N Γ_k) ).

Note that to compute the demands, the BS also needs to know the queue sizes of the relays (to be used in the weights of the feeder links from the BS to relays). For this purpose, the relays also report information about their modified queue sizes to the BS.
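The slot-type decision of the STD algorithm can be sketched as follows; a minimal Python sketch with our own names, taking the per-subchannel average demands as precomputed inputs:

```python
def slot_type(D0a, D0b, relay_demands, relay_queue_counts, K):
    # Line 3: BS demand for a type A slot, D_0a = K * sum_n D0a_n.
    DA = K * sum(D0a)
    # Line 4: BS demand for a type B slot uses only the |K_0| direct users.
    K0 = K - sum(relay_queue_counts.values())
    DB = K0 * sum(D0b)
    # Lines 2 and 5: add each relay's total demand D_m = |K_m| * sum_n D^m_n.
    for m, d in relay_demands.items():
        DB += relay_queue_counts[m] * sum(d)
    # Lines 6-10: pick the slot type with the larger estimated total demand.
    return "A" if DA > DB else "B"
```

For example, when the relays report zero demand the comparison reduces to the BS's own type A versus type B demand, and a type A slot is selected.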
Considering the fact that in each time slot at most N different queues can be served, the maximum number of modified queue sizes in the relays is min(K − |K_0|, N). Therefore, in EDDRA, the total signaling overhead for the demands and the modified queue sizes is of O(min(K − |K_0|, N) + MN).

Remark 1: Here we explain the reason for using ρ^m_k W_e in the weights of the links of indirect users from the BS and relays. Without it, due to the low powers of the relays and the resulting low transmission rates, their demands would not be comparable to the demands of the BS for direct users unless the queues in the relays grew large. On the other hand, for the links serving the queues of indirect users in the BS, we would have w^0_k = Q^0_k − Q^{m(k)}_k, ∀k ∈ K − K_0. As a result, these links would not have enough impact on computing the average demands for indirect users and providing service for them (in the cases where the queues of an indirect user in the BS and in the relay have the same size, Q^0_k = Q^{m(k)}_k, the impact would be zero). Consequently, the queues of indirect users in the BS would usually grow larger than the queues of direct users and, therefore, less data would be admitted for them. This would degrade the usefulness of drift-plus-penalty for cellular networks, because fairness is usually one of the main concerns in these networks. To prevent this, W_e should be applied to compensate for the effect of the low power of relays on the demands of the access links, and for the effect of the differential-backlog-based weights on the demands of the feeder links. A similar effect also holds in the subchannel sets determination and subchannel allocation steps, described later.

Step 2) Subchannel Sets Determination (SSD). We note that due to the sharing of subchannels among all the links in a type B time slot, the resource allocations for the links of different nodes are tied together, which is reflected in (5.12). In this step, the goal is to break this tie and specify the subchannel sets to be used for transmissions from the BS and relays. This allows the resource allocation to be performed in a distributed manner at each node.

For this purpose, when the slot is decided to be type A, the BS notifies the relays and they know they have no transmissions. In the case of a type B slot, the BS determines the subchannel sets of the relays and notifies them to transmit on those sets. The SSD algorithm shows the whole procedure in detail. Since in type B slots the BS can only transmit to the direct users, line 5 of the algorithm defines its demands based on the estimates for a type B slot, explained before. Line 6 sets N̂_m as an upper bound on the number of subchannels that each node can get, and the next lines assign the subchannels to the nodes that have not reached their subchannel limits and have the highest average demands on the subchannels.

Algorithm 5.3 Subchannel Sets Determination (SSD)
1: if the slot type is A
2:   BS determines the subchannel sets as N_0 = N and N_m = ∅, m ∈ M
3: else
4:   BS specifies the subchannel sets, based on the relays' demands as well as its own, as follows:
5:   Set D^0_n = D^{0b}_n
6:   Set N̂_m = ⌈ N |K_m| ∑_{n=1}^{N} D^m_n / ( ∑_{m′=0}^{M} |K_{m′}| ∑_{n=1}^{N} D^{m′}_n ) ⌉, ∀m ∈ B
7:   Initialize N′ = N, B′ = B, N_m = ∅, ∀m ∈ B′
8:   while N′ ≠ ∅ and B′ ≠ ∅
9:     find (m*, n*) = arg max_{m∈B′, n∈N′} D^m_n
10:    N_{m*} = N_{m*} ∪ {n*}
11:    N′ = N′ − {n*}
12:    if |N_{m*}| = N̂_{m*}
13:      B′ = B′ − {m*}
14:    end if
15:  end while
16: end if
17: BS notifies the relays about their subchannel sets

Note that setting the limit N̂_m on the size of the subchannel set of node m prevents allocating far more subchannels to it than it needs. For example, a relay might have only one user with high average demands on the subchannels, while another relay with several users might have slightly lower average demands on the subchannels.
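The type B branch of SSD (lines 5–15) can be sketched as the following greedy loop; a Python illustration under our own naming, where demands[m][n] stands for D^m_n and m = 0 is the BS:

```python
import math

def ssd_type_b(demands, queue_counts, N):
    # Line 6: cap each node's set size at ceil(N*|K_m|*sum_n D^m_n / total).
    total = sum(queue_counts[m] * sum(d) for m, d in demands.items())
    cap = {m: math.ceil(N * queue_counts[m] * sum(d) / total)
           for m, d in demands.items()}
    sets = {m: [] for m in demands}
    free, active = set(range(N)), set(demands)
    # Lines 8-15: repeatedly give the free subchannel with the highest average
    # demand to the node demanding it, until caps or subchannels run out.
    while free and active:
        m_s, n_s = max(((m, n) for m in active for n in free),
                       key=lambda mn: demands[mn[0]][mn[1]])
        sets[m_s].append(n_s)
        free.remove(n_s)
        if len(sets[m_s]) >= cap[m_s]:
            active.remove(m_s)
    return sets
```

With two nodes of one queue each and demands {0: [3, 1], 1: [2, 4]} over N = 2 subchannels, the caps evaluate to 1 and 2, and the loop gives subchannel 1 to node 1 and subchannel 0 to the BS.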
In such a case, without considering the total numbers of users and the limits on the subchannel set sizes, the relay with the single user would overshadow the other relay in all the iterations of subchannel assignment through line 9.

The computational complexity of the SSD algorithm is of O((M + 1)N²), which is obtained by ignoring the insignificant computations and considering the number of iterations needed to perform line 9.

Step 3) Subchannel Allocation (SA). After the determination of the subchannel sets, the resource allocation subproblem (5.12) can be further decomposed into separate subsubproblems, as follows:

SSP (Resource Allocation Subsubproblems):

    max_{p,x}  ∑_{n∈N_m} ∑_{k∈Q_m} B T x^m_{kn}(t) w^m_k(t) log₂( 1 + p^m_n(t) e^m_{kn}(t) / Γ_k ) − ∑_{n∈N_m} I Z_m(t) p^m_n(t),  ∀m ∈ B,   (5.15a)
    s.t.   C2–C4, C7–C8,                                                                                                              (5.15b)

where each node knows its set of subchannels and can decide individually about allocating them to its own links, considering the related subset of the constraints C2–C4, C7–C8. For this purpose, following the SPAS strategy, we propose that each node performs subchannel allocation using P̂_m/|N_m| (i.e., assuming Z_m(t) = 0) to compute the achievable transmission rates. Then, in the power adjustment step, considering the real value of Z_m(t), each node decides about the total power it should use and distributes it over its subchannels.

Noting that the BS has more constraints than the other nodes (constraint C3 is enforced only on the feeder links from the BS, to prevent transmitting more data to the relays than their empty buffer spaces), we first present the subchannel allocation by the BS and then explain its use by the relays. The SA algorithm shows the details of the subchannel allocation performed by the BS. The procedure is done in |N_0| steps. In each step, the weights of the links and the resulting demands are computed, and the pair of subchannel and queue with the highest corresponding demand is determined.
This is done in an iterative way: in each iteration, one subchannel is allocated and the affected queue sizes are updated virtually. Since the actual queue sizes, Q^m_k, are only updated at the end of the transmission intervals, we use the variables q^m_k to prevent ambiguity about the updates during the algorithm iterations. Note that these updates are done to meet constraints C2 and C3. Line 6 applies the extra weight W_e, described before. However, before adding it, by comparing the queue size with B T ŝ, we make sure that there are enough data to utilize the channel completely in case the subchannel is assigned. Line 7 meets constraint C3 and prevents overflow, by giving a negative weight in case the remaining empty space in a relay buffer is less than the maximum possible transmission size on a subchannel. If a link gets a negative weight, it is not considered for subchannel allocation, which prevents transmitting data to the corresponding relay buffer.

Algorithm 5.4 Subchannel Allocation (SA) in the BS
1: if slot type is A, set Q′ = Q_0; otherwise, set Q′ = K_0
2: Initialize q^0_k = Q^0_k, r^0_{kn} = B log₂( 1 + P̂_0 e^0_{kn} / (|N_0| Γ_k) ), k ∈ Q′, n ∈ N_0
3: while N_0 ≠ ∅ and ∑_{k∈Q′} q^0_k > 0
4:   Compute w^0_k = q^0_k, k ∈ K_0
5:   Compute w^0_k = q^0_k − Q^{m(k)}_k, k ∈ Q′ − K_0
6:   if Q^0_k > B T ŝ, w^0_k = w^0_k + ρ^0_k W_e, k ∈ Q′
7:   if L^{m(k)}_k − q^{m(k)}_k < B T ŝ, w^0_k = −1, k ∈ Q′ − K_0
8:   Compute D^0_{kn} = w^0_k r^0_{kn}, k ∈ Q′, n ∈ N_0
9:   Find (k*, n*) = arg max_{k∈Q′, n∈N_0} D^0_{kn}
10:  x^0_{k*n*} = 1
11:  N_0 = N_0 − {n*}
12:  q^0_{k*} = max(q^0_{k*} − T r^0_{k*n*}, 0)
13:  if k* ∈ Q′ − K_0, then q^{m(k*)}_{k*} = q^{m(k*)}_{k*} + min(q^0_{k*}, T r^0_{k*n*})
14: end while

Remark 2: Note that the rate computations in SA are based on the assumption of equal distribution of the peak powers over the subchannel sets. This way, we can be sure that when the total power is adjusted in Step 4 (the adjusted power is certainly equal to or less than the peak power), the transmission rates of each link will be no higher than the amounts considered in the SA algorithm, and therefore the constraints C2 and C3 will not be violated.

In type B time slots, in parallel to the BS, each relay also uses the SA algorithm, with the difference that all the superscripts/subscripts 0 are replaced by the corresponding m. Note that in this case we have Q′ = K_m and Q′ − K_m = ∅; therefore, lines 5, 7 and 13 are not executed when the SA algorithm is used by the relays. Based on the above, the subchannel allocation task in EDDRA is split among the serving nodes, where the computational complexity of the SA algorithm in any node m ∈ B is of O(|N_m|²|Q_m|).

Step 4) Total Power Adjustment. After assigning the subchannels to the links, the BS and relays decide about the total powers that they can distribute over their subchannels, to meet the constraints C4 and C7. For this, based on SSP in (5.15), each node m solves the following problem, which we refer to as Total Power Adjustment (TPA), to find the total power, P_m, that it can use.

TPA (Total Power Adjustment Problems):

    max_{P_m}  ∑_{n∈N_m} B T w^m_{k(n)} log₂( 1 + P_m e^m_{k(n)n} / (|N_m| Γ_{k(n)}) ) − I Z_m P_m,  ∀m ∈ B,   (5.16a)
    s.t.   0 ≤ P_m ≤ P̂_m,                                                                                     (5.16b)

where k(n) denotes the index of the user to whose queue subchannel n has been allocated.
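The greedy core of SA (lines 8–12) can be sketched as follows; a deliberately simplified Python sketch of our own covering direct users only, so the weights are just the virtual queue sizes and the W_e and overflow checks of lines 5–7 and 13 are omitted:

```python
def sa_greedy(q, r, subchannels, T):
    alloc, free = {}, set(subchannels)
    # Lines 8-12 of SA: in each iteration pick the (queue, subchannel) pair
    # with the highest demand D = w*r, assign it, and drain the queue virtually.
    while free and any(v > 0 for v in q.values()):
        k_s, n_s = max(((k, n) for k in q if q[k] > 0 for n in free),
                       key=lambda kn: q[kn[0]] * r[kn[0]][kn[1]])
        alloc[n_s] = k_s
        free.remove(n_s)
        q[k_s] = max(q[k_s] - T * r[k_s][n_s], 0)
    return alloc, q
```

Because the virtual backlog q shrinks after each assignment, a queue that wins one good subchannel competes with a smaller weight in later iterations, which is what spreads the subchannels over several queues.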
The TPA problem is a convex problem with a single variable; therefore, the optimal value, P*_m, can be found easily using an iterative one-dimensional search such as the Golden Section method, described in [70, Appendix C.3], which has a computational complexity of O(log(1/ε)), where ε is the desired relative error bound.

Remark 3: As explained before, the constraint C1 is enforced over time through the virtual queues {Z_m} defined for that purpose. In fact, based on (5.7a), a nonzero Z_m means that in past time slots there have been events of transmission with a total power P_m larger than the average power limit P^av_m. Therefore, in the TPA problem, Z_m applies a kind of negative feedback, pushing the node to use less power than P̂_m. The proposed importance factor I amplifies this negative feedback, so that the total power use is adjusted within a short period of time. Without it, the second term in the objective (5.16a) would not be comparable to the first one over a large number of time slots, until Z_m became big enough to impact the objective value. This is due to the fact that the values of the power variables are very small (on the order of 1–10 Watts) compared with the values of the queue sizes multiplied by the transmission rates (on the order of tens of Megabits).

After solving the TPA problem, each node computes the power on its subchannels as follows, considering equal power distribution and noting that the rate on each subchannel cannot be larger than B ŝ (due to the limited spectral efficiency of modulation schemes in practice):

    p^m_n = min( P*_m / |N_m| , (2^ŝ − 1) Γ_{k(n)} / e^m_{k(n)n} ).   (5.17)

The reason for the term with ŝ in (5.17) is to avoid using more power than needed to maintain the desired bit error rate; it is obtained based on (5.1) and C4.

After the above steps, based on the variables x^m_{kn} and p^m_n, each node notifies its users about the subchannel allocations and the assigned transmission rates. Then, it transmits to them and updates its actual and virtual queues.

5.3.5 Efficient Dynamic Centralized Resource Allocation

In this subsection, we briefly describe the Efficient Dynamic Centralized Resource Allocation (EDCRA) method, in which the BS performs all the resource allocation procedures. In a centralized scheme, the BS needs to be notified about the channel states of all the links in the system, over all the subchannels.¹⁰ For this purpose, since the indirect users do not have connections to the BS, the relays report to the BS the channel conditions of the access links (which the indirect users have already reported to their serving relays). This imposes a signaling overhead of O((K − |K_0|)N) from the relays to the BS. Considering the fact that in practice the number of users is remarkably larger than the number of relays, the signaling overhead in EDCRA is much higher than in EDDRA¹¹ (which is of O(min(K − |K_0|, N) + MN)).

Having all the information about channel states and queue sizes, the BS performs the STD procedure and, if the slot type is set to A, it uses the SA algorithm as in EDDRA. However, if the slot type is set to B, the BS does not need to run the SSD algorithm. Instead, it uses the SA algorithm, considering all the subchannels and all the links, as follows. The queues in the relays are assumed to be located in the BS, and their corresponding access links are treated as direct links from the BS to the indirect users; however, the weights and channel rates are considered the same as those of the actual access links. Then, the SA algorithm is exploited to decide about the subchannel allocation to the different links in the system, which imposes a computational complexity of O(N²(∑_{m∈B} |K_m|)) = O(KN²).
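The TPA step used by both EDDRA and EDCRA can be sketched as a golden-section search over a single power variable; a minimal Python sketch under our own naming, where each term of (5.16a) is passed as a pair (w, g) with B·T absorbed into w and g = e^m_{k(n)n}/(|N_m| Γ_{k(n)}):

```python
import math

def tpa(terms, IZ, P_hat, tol=1e-6):
    # Objective of (5.16a): f(P) = sum_n w_n*log2(1 + g_n*P) - I*Z_m*P,
    # concave and therefore unimodal on [0, P_hat].
    def f(P):
        return sum(w * math.log2(1 + g * P) for w, g in terms) - IZ * P
    phi = (math.sqrt(5) - 1) / 2
    a, b = 0.0, P_hat
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) > f(d):      # maximum lies in [a, d]
            b, d = d, c
            c = b - phi * (b - a)
        else:                # maximum lies in [c, b]
            a, c = c, d
            d = a + phi * (b - a)
    return 0.5 * (a + b)
```

For a single term with w = g = 1 and IZ = 1, setting the derivative of log₂(1 + P) − P to zero gives P* = 1/ln 2 − 1 ≈ 0.443, which the search recovers; with IZ = 0 the objective is increasing and the search converges to the peak power bound, matching Remark 3's point that only the virtual power queue holds the node below P̂_m.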
After10In a centralized implementation, the BS has information about all the queue sizes, due to the fact thatit has the history of the transmissions from all the queues.11In the above discussions, we excluded the signaling overhead of channel state feedbacks from thereceiver side of any link to the transmitter side, due to the fact that it is the same in EDDRA andEDCRA.119Chapter 5. Utility-Based Efficient Dynamic Distributed Resource Allocationthat, based on the corresponding subchannel allocations for all the nodes, i.e., {xmkn}, theBS specifies the powers to be used by each node by performing the total power adjustmentsfor each node. Finally, the BS informs all the relays about the subchannels and powersthey can use.5.4 Performance Evaluation and DiscussionTo evaluate the performance of the proposed algorithms, we have conducted extensiveMatlab simulations for a system with 6 relays, which are located at the distance of 2/3cell radius from the BS and in an equal angular distance from each other. The cell radiusis 1000 m and the BS, relay and user antenna heights are considered 15 m, 10 m and 1.5m respectively. Path loss attenuation is computed based on [61], the noise power spectraldensity in the receivers is -174 dBm/Hz and the users’ BER requirements are 10−6. Thesystem bandwidth is divided into subchannels with the bandwidth of 180 kHz and thetransmissions are done over the time slots of 1 ms. Data packets of 5 kbits arrive in thetop layer buffers of the BS according to Poisson processes and if the corresponding buffer isnot full, they are queued until getting admitted into the corresponding MAC layer buffers.For the links between the BS/relay and users, Rayleigh channel model is used and forthe links from the BS to relays, Rician channel model with κ factor equal to 6 dB [62].Utility function is considered as U(a) = log(a) for providing proportional fairness. 
Due to the large packet sizes, which result in large queue sizes, and based on observations from the simulation results, we have chosen V to be 10⁷. This gives the utility function in (5.8a) a high value, comparable to the terms related to the weighted transmission rates. The buffer capacities at the BS and relays are equal to 100 and 10 packet sizes, respectively. The highest-order modulation is considered to be 64-QAM, which has a spectral efficiency of 6 bits/sec/Hz.

In the following, we first consider a special scenario to show the effect of parameter I. In this simulation, the number of subchannels is N = 7, the BS peak power is P̂_0 = 34 dBm, and the peak power of the relays is P̂_m = 25 dBm, m ∈ M. The average power constraints of the nodes are half of their peak power constraints, i.e., 31 dBm = 1259 mW for the BS and 22 dBm = 158 mW for the relays. There are 12 users in the system, 6 of them connected directly to the BS and the rest connected to the relays, one user per relay. The distance of the direct users from the BS, and of the indirect users from their corresponding relays, is 300 m. The data arrival rate of each user is 100 packets/second, or equivalently 500 kbps.

Fig. 5.2(a) shows the average power used by each node over 20000 time slots, with different values of the importance factor I, and Fig. 5.2(b) depicts the virtual queue size corresponding to the average power constraint of the BS during the same period. For choosing a suitable value for I, we have tuned it as stated in Section 5.3.1, i.e., in the range of the values of the weighted rates in (5.8a), which is around 10⁶ in our simulation settings. In Figs. 5.2(a) and 5.2(b), we have also shown the case I = 1, which is the case considered in the literature and is equivalent to ignoring the weight of the virtual power queues.
Moreover, as a case between I = 1 and I = 10⁶, we have shown their geometric average, I = 10³, instead of the arithmetic average, because the difference in performance is clearer with steps in the orders of powers of 10.

It is observed that without a suitable I, the size of the virtual queue in the BS grows constantly, and the average power used over this period is about 2000 mW, far beyond the constraint of 1259 mW. This happens because in equation (5.16a) the value of the first term is very large compared with the value of the second one; as a result, the second term does not affect the optimization objective much, and the only things that limit the total power used are the peak power and the maximum spectral efficiency. The consequence of this is the steady use of the BS peak power in each time slot, and the steady growth of its virtual power queue size according to equation (5.7).

Figure 5.2: Effect of parameter I on (a) average power consumption of BS and relays, (b) virtual power queue size of BS over time; N = 7, |K_0| = 6, |K_m| = 1, m ∈ M.

Without a suitable I, this would continue for a long time, until the size of the virtual queue has grown so large that the first term in (5.16a) is comparable to the second one.
However, by using a large I, this is prevented: the BS virtual power queue becomes bounded after about 300 time slots, and the average power used over the whole simulation period is about the defined constraint. Note that due to fewer transmissions from the relays compared with the BS, their virtual power queues did not grow large and remained stable for all the above values of I, with graphs similar to that of the BS in the case I = 10⁶. Due to this similarity, their graphs are omitted.

To investigate the overall performance of the proposed algorithms in general scenarios, we consider a system with 25 users, uniformly distributed in the cell area and connected to the node from which they receive the highest signal strength. The simulations are conducted for 100 runs, each over 10000 time slots, to generate different realizations of the users' locations. All the users have a data arrival rate of 20 packets/second or, equivalently, 100 kbps, and the buffer capacities in the BS and relays are, respectively, 100 and 10 packets per user. There are 14 subchannels in the system, the BS peak power is 37 dBm, the relays' peak power is 28 dBm, and the average power constraints are half of the peak powers. As a benchmark, we have adapted the low-complexity centralized algorithm proposed in [48] to our system model, referred to as Fixed Half-Duplex Relaying (FHDR) in the figures. With FHDR, the odd-numbered time slots are used for transmissions from the BS, and the even-numbered slots for transmissions only from the relays. The subchannel allocations in even-numbered slots are based on considering a minimum of ⌊N/M⌋ subchannels for each relay and assigning them based on the Hungarian algorithm. For FHDR, the average power limit of each node is equally distributed over all subchannels, considering the maximum spectral efficiency constraint.
Also, we have implemented the data traffic control procedure in FHDR, in order to compare the utility functions.

We note that due to the limited buffer capacities, all the queues are stable and their sizes are less than the buffer capacities. Therefore, in the following we do not present any results about them; instead, we study the overflow performance. Fig. 5.3 displays the CDF of the system utility and of the total overflow from the buffers of the relays. It is observed that even though all the algorithms have the same utility for data admissions, FHDR has incidents of overflow. The result is lower throughput with FHDR, as shown in Fig. 5.4. On the other hand, the proposed centralized and distributed algorithms outperform FHDR, resulting in zero overflow and higher throughput. There are several reasons for this. EDCRA and EDDRA estimate the power usage on subchannels and adjust it after the subchannel allocations. In subchannel allocation, they do not consider a minimum number for the nodes; instead, they allocate subchannels based on the higher demands and take into account the limited buffer capacities of the relays. All of the above lead to efficient use of the system resources and result in zero overflow in the buffers of the relays and higher throughput for the users.

Next, in the same system, we investigate the effect of an increase in the data arrival rate on the performance of the proposed algorithms. Fig. 5.5 shows the system average utility and the average total overflows. It is observed that as the data arrival rates increase, the utilities of EDDRA and EDCRA also increase, but beyond the arrival rate of 60 packets/second this increase is small. This is firstly due to the fact that the utility function is concave and has diminishing returns as the arrival rates increase. The other reason is that the system capacity saturates at high packet arrival rates, so the queues cannot be served much, which prevents further admission of data into the system.
A similar effect is observed for FHDR; however, since it does not determine the type of time slot based on the demands and does not use the subchannel and power resources efficiently, it leads to higher queue sizes in the BS and, consequently, lower admissions at high packet arrival rates. Moreover, it does not take into account the buffer capacities of the relays and leads to data overflow. As a result, it yields lower throughput compared with EDDRA and EDCRA, as shown in Fig. 5.6.

Figure 5.3: CDF of (a) system utility, (b) total overflow from the buffers of relays; at the data arrival rate of 20 packets/second.

Figure 5.4: CDF of the system throughput at the data arrival rate of 20 packets/second.

It is also worth noting that the performance of EDCRA is a little better than that of EDDRA. This is due to the fact that in EDCRA the BS has information about the channel states of all the links and performs subchannel allocation based on the individual demands of the relays' and its own queues, whereas in EDDRA the BS determines the subchannel sets of the nodes based on their average demands, and then each set of subchannels is only used to serve the set of queues of the corresponding node. However, the degradation in the performance of EDDRA is not significant. The reason is that there are usually several users for each node, with different channel conditions, which makes it possible for each node to utilize its set of subchannels efficiently. Also, considering an upper bound on the number of subchannels that each node can get prevents the wasting of resources. These observations show that, using EDDRA, the system can allocate resources efficiently with fewer computations at the BS, low signaling overhead, and no remarkable reduction in the system performance.

Figure 5.5: Effect of increase in the packet arrival rate at the BS on (a) the system average utility, (b) average overflow from the buffers of relays.

Figure 5.6: Effect of increase in the packet arrival rate at the BS on the system average throughput.

Next, in order to show the effectiveness of the proposed parameters ρ^m_k and W_e, we show the data admission for direct and indirect users. It is observed in Fig. 5.7 that as the packet arrival rate of the users increases, the data admission for direct users increases while less data are admitted for the indirect users. As explained before, this is due to the fact that the queues of indirect users in the BS get low weights and grow large, which results in lower data admissions for them. To prevent that, ρ^m_k can be set to one and an extra weight W_e can be used in the weights of the links from the BS to relays and the links from the relays
Utility-Based Efficient Dynamic Distributed Resource Allocation20 40 60 80 100 120 140100150200250300350400450500Packet Arrival Rate [Packets/Second]Users’ Average Data Admission [Bits/Slot]  EDCRA−direct usersEDCRA−indirect usersEDDRA−direct usersEDDRA−indirect usersFigure 5.7: Effect of increase in the packet arrival rate on the amount of data admitted fordirect and indirect usersto users; this will increase their chances for getting more service for the correspondingqueues and consequently more data admissions. Fig. 5.8 shows this in the case of arrivalrate equal to 140 packets/second for every user. It is observed that giving higher weightsto the links from the BS to relays and the links from the relays to users can increase thedata admissions for indirect users; this comes at the cost of large reductions in the dataadmissions of direct users. It is because adding extra weights results in the increase ofsubchannel allocations to the queues of indirect users in the BS and relays. Since the BShas higher power than the relays, the less subchannel allocation to the direct users meansthat their queues lose higher transmission rates than those for the queues in relays. As aresult, the queues of direct users in the BS grow quicker and limit their data admissions.129Chapter 5. Utility-Based Efficient Dynamic Distributed Resource Allocation0 0.5 1 1.5 2 2.5 3x 104200250300350400450500Parameter WeUsers’ Average Data Admission [Bits/Slot]  EDCRA−direct usersEDCRA−indirect usersEDDRA−direct usersEDDRA−indirect usersFigure 5.8: Effect of parameter We on providing fair data admission for direct and indirectusers, at the arrival rate of 140 packets/second for every user5.5 SummaryIn this chapter, we have studied data admission control and resource allocation in buffer-aided relay-assisted OFDMA networks. We have formulated time slot, subchannel andpower allocation as a utility-based stochastic optimization problem, taking into accountseveral practical constraints. 
Using the Lyapunov drift-plus-penalty policy, we have transformed the problem into instantaneous subproblems. We have introduced important parameters that should be considered when using the drift-plus-penalty policy in cellular networks. For practical considerations, we have proposed a low-complexity strategy for the allocation of power and subchannels and used it in distributed and centralized schemes. In particular, the proposed EDDRA policy is attractive for use in practice, due to its low complexity as well as its low signaling overhead. Numerical results confirm the effectiveness of the proposed parameters and also show that the proposed algorithms lead to significant improvements in the data admission and throughput of the system.

Chapter 6

Efficient and Fair Throughput-Optimal Scheduling

6.1 Introduction

Wireless RSs are promising solutions for expanding cellular networks due to their low-complexity and cost-effective deployment [5, 72]. As a consequence of the advantages resulting from the exploitation of buffers in the RSs, it is expected that buffer-aided relay-based cellular networks will attract a lot of attention in the coming years.

In the previous chapters, we addressed QoS provisioning and low-complexity issues in resource allocation in different scenarios of buffer-aided relay-assisted cellular networks. The studied instantaneous resource allocation problems were based on the MW policy to provide queue stability. In particular, the objective functions in those problems consisted of product terms of the transmission rates of the links and the differential backlogs of the corresponding queues at the two ends of the links. MW is well known for being throughput-optimal in wireless networks with a fixed number of queues and stationary ergodic data arrival processes [60, 73, 74].
MW is also called Maximum Differential Backlog (MDB); it leads to the backpressure routing method, which is able to route packets to their destination without knowing the path in advance, based only on the differential backlogs. Because of this, MDB is of great interest in systems with multiple paths from the source to the destination, as in the system considered in Chapter 3, and in systems without infrastructure, such as ad hoc networks.

However, in a topology with fixed routes and only one path for packet transmission from source to destination, as in the systems considered in Chapters 4 and 5, MDB leads to discrimination against the relayed users in terms of queueing delay. This happens because MDB gives lower weights to the links between the BS and RSs, which causes the relayed users' data to experience higher delay until reaching the user. Recently, [46] has proposed another throughput-optimal policy, called MSB, which tries to provide fairness in terms of packet delay among the relayed and direct users. In MSB, the weight given to a transmission link from the BS is equal to the size of the corresponding queue in the BS, while the weight considered for a link from the RS is equal to the sum of the corresponding queue sizes in the BS and RS. However, MSB can lead to inefficient utilization of resources by granting the channel to a queue in the RS which is almost empty; this can cause instability in some scenarios, because it leads to backlogs at the BS while allocating the channel to an RS which does not have much data to transmit.
Another drawback of MSB and MDB is that, since the BS and RS need information about each other's queue states, even a distributed system with independent channels for the BS and RS incurs the overhead of signaling the queue states of the relayed users.

Motivated by [46, 60, 73, 74], in this chapter we aim at designing a new throughput-optimal scheduling rule for single-path relay networks, which is fair to the users served by the RSs and at the same time promises low signaling overhead. We propose the MMW policy, which has the following properties:

• The weights of the links are based on the local queue sizes, either in the BS or the RS, almost all the time. This makes them suitable for use in both centralized and completely distributed implementations, in scenarios with shared or independent channels for the BS and RS.

• Based on the above property, it is also possible to provide fairness in terms of queueing delay between direct and relayed users.

• It is possible to extend the algorithm to multihop cellular networks, without much extra signaling about the relayed users' channel and queue states, and, therefore, to have completely distributed resource allocation.

The intuition behind MMW is based on the similarity between the topology of single-path relay networks and that of a parallel queueing system, which allows us to adapt the MW algorithm to such networks and make it efficient in scheduling and resource allocation. In this chapter, we discuss MMW for systems with single channels for transmissions from the BS and RS; however, the arguments presented can also be applied to multichannel systems.

The rest of this chapter is organized as follows. Section 6.2 describes the system model, provides background on throughput-optimal scheduling and discusses the problem. In Section 6.3, we present the proposed method in detail and discuss its stability.
Section 6.4 describes decentralized implementations, followed by simulation results in Section 6.5. Conclusions are stated in Section 6.6.

6.2 Preliminaries

6.2.1 System Model

As shown in Fig. 6.1, we study the downlink of a time-slotted wireless network with a total of K wireless users in the system. Users are indexed by k ∈ K, where K indicates the set of all the users. We consider a two-hop network with a single RS, as a basis for relay-based cellular networks. However, as will be discussed in Section 6.4, it is straightforward to generalize the algorithms and analysis to a system with several RSs and/or more than two hops.

Figure 6.1: System model

We assume that the set of users K_d = {1, 2, ..., K_1} are served by the BS ("direct" users), and we use l_k^B to represent the link from the BS to user k. Users in the set K_r = K − K_d = {K_1+1, ..., K} have no direct link to the BS and are served through the RS ("indirect" users). For these users, l_k^R represents their link from the RS to user k and l_k^B represents their link from the BS to the RS. Note that the links are specified by their beginning point and the user for which they carry data. Based on this, when k ∈ K_r, l_k^B represents the link starting at the BS for transmission of data destined for user k. Therefore, this link ends at the RS, as there is no direct link between the BS and user k ∈ K_r. This notation is chosen to remove the need for another superscript/subscript to specify the ending points of the links.

We consider two system scenarios. In the first, a single wireless channel is shared by the BS and RS, and only one of the BS or RS can use the channel for transmission. Therefore, in each time slot, based on the scheduling algorithm, only one of the links, i.e., the links starting at the BS, L_B = {l_k^B}, k ∈ K, or the links starting at the RS, L_R = {l_k^R}, k ∈ K_r, can get the channel.
We denote the set of all the links by L = L_B ∪ L_R, and the link selected for transmission in time slot t by l^{sh*}(t), in which the superscript sh means "shared". In the second scenario, the BS and RS have independent, orthogonal channels and can therefore use them simultaneously for transmissions on one of their own links each. In this case, there are two selected links, l^{B*}(t) ∈ L_B and l^{R*}(t) ∈ L_R. Independent channels might be realized over frequency (e.g., separate frequency bands) or over time (e.g., using odd-numbered time slots for transmissions from the BS and even-numbered slots for transmissions from the RS). In the case of independent channels over frequency, it is assumed that the RS is capable of receiving (through the BS channel) and transmitting (through the RS channel) at the same time.

Note that in the case of a shared channel, a centralized or semi-distributed scheduling strategy can be implemented to prevent simultaneous transmissions and interference; in the case of independent channels, there is no need for centralized scheduling and distributed scheduling is more reasonable. In Section 6.4, we will discuss these in more detail.

Note also that the two scenarios discussed here can serve as the basis for resource allocation in multichannel systems, in the cases where all the subchannels are shared or partitioned between the BS and RS. For example, in the LTE-Advanced standards, Type 1 relays (non-transparent relays which are able to appear like an independent BS to their users) might have independent and separate bandwidth for the BS-RS and RS-user transmissions (Type 1a relays) or use shared bandwidth for them (Type 1b relays) [75].
Furthermore, in either of these types, the BS-user transmissions might be done in the same or a different bandwidth than that used for the BS-RS transmissions.

We assume that the channel conditions of the links are stationary and ergodic and remain constant during a transmission period. The instantaneous capacity of a link is a function of the channel state and the power allocated to it; e.g., for the link l_k^B, we have

c_k^B(t) = cf(p^B, s_k^B(t))    (6.1)

where s_k^B(t) is the channel state of the link of user k from the BS in time slot t, p^B is the BS transmission power, cf(·) is the capacity function, and c_k^B(t) indicates the number of bits that can be transmitted on the link l_k^B in time slot t. Based on the scheduling algorithm, the number of transmitted bits per slot on the link of user k from the BS can be expressed as

r_k^B(t) = c_k^B(t) i_k^B(t),    (6.2)

where i_k^B(t) is an indicator variable which is 1 if the corresponding link, l_k^B, is selected for transmission in time slot t, and zero otherwise. The instantaneous capacity and transmission rate of the link of an indirect user k from the RS, c_k^R(t) and r_k^R(t), can be computed similarly based on s_k^R, p^R and i_k^R. We use i = [i_1^B, ..., i_K^B, i_{K_1+1}^R, ..., i_K^R] and Π to denote the scheduling vector and the set of all feasible scheduling vectors, respectively. Based on the system settings, different constraints might apply to the scheduling vector, and i ∈ Π is used to express this. In the case of a shared channel, Π is the set of all vectors of length 2K − K_1 which have one entry equal to 1 and all the others equal to zero.
In the case of independent channels, Π is the set of all vectors of length 2K − K_1 which have one of the first K entries and one of the last K − K_1 entries equal to 1 and all the others equal to zero.

We note that the channel states of the links of the indirect users starting at the BS are all the same and equal to s^{BR}, the channel state between the BS and RS. In other words, s_k^B = s^{BR} for k ∈ K_r. Because of this, we use s = [s_1^B, ..., s_{K_1}^B, s^{BR}, s_{K_1+1}^R, ..., s_K^R] and r = [r_1^B, ..., r_{K_1}^B, r^{BR}, r_{K_1+1}^R, ..., r_K^R], which have K + 1 entries, to denote the channel state vector and rate vector, respectively. It is assumed that the BS and RS have separate power sources and use fixed power for their transmissions. Therefore, based on (6.1) and (6.2), we express the transmission rates of the links as functions of their channel states and the indicator variables, and use r(i, s) to indicate the specific transmission rate vector achievable in the system channel state s by the scheduling vector i.

It is assumed that the users' data traffic streams are inelastic, i.e., the destination does not send feedback to the source for flow control. Therefore, an admission control mechanism is used in the BS to ensure that the arrival rates can be supported by the network capacity. We assume that the BS has a buffer for each of the users, where data packets arrive according to an exogenous process and are queued until transmission to a direct user or to the RS. Similarly, the RS has a buffer for each of the indirect users, which receives data packets from the BS and keeps them until transmission to an indirect user. Each queue in the BS (RS) corresponds to a unique link starting at the BS (RS).
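To make the two feasibility sets concrete, the following is a minimal Python sketch (the function name is hypothetical; the index layout follows the scheduling vector i defined above) that enumerates Π for a small system:

```python
def feasible_schedules(K, K1, shared):
    """Enumerate the feasible scheduling vectors in Pi for a system with
    K users, of which the first K1 are direct.  A scheduling vector has
    length 2K - K1: the first K entries are the BS links l_k^B (k = 1..K),
    the last K - K1 entries are the RS links l_k^R (k = K1+1..K)."""
    n = 2 * K - K1
    vectors = []
    if shared:
        # Shared channel: exactly one link (BS or RS) transmits per slot.
        for j in range(n):
            v = [0] * n
            v[j] = 1
            vectors.append(tuple(v))
    else:
        # Independent channels: one BS link and one RS link transmit
        # simultaneously.
        for jb in range(K):          # a link starting at the BS
            for jr in range(K, n):   # a link starting at the RS
                v = [0] * n
                v[jb] = 1
                v[jr] = 1
                vectors.append(tuple(v))
    return vectors

# Example: K = 3 users, K1 = 2 direct users -> vectors of length 4.
print(len(feasible_schedules(3, 2, shared=True)))    # 4 feasible vectors
print(len(feasible_schedules(3, 2, shared=False)))   # 3 feasible vectors
```

The example confirms the sizes stated above: with a shared channel, |Π| = 2K − K_1; with independent channels, |Π| = K(K − K_1).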
We assume that the buffer capacity for user k in the BS/RS is infinite, and denote the size of the data queued in the buffer in time slot t by Q_k^B(t)/Q_k^R(t), which is updated according to the following equations:

Q_k^B(t+1) = max(Q_k^B(t) − r_k^B(t), 0) + a_k^B(t),  k ∈ K
Q_k^R(t+1) = max(Q_k^R(t) − r_k^R(t), 0) + a_k^R(t),  k ∈ K_r    (6.3)

where a_k^B(t) is the number of bits arriving at the BS for user k in time slot t, according to an exogenous arrival process, and a_k^R(t) is the number of bits arriving at the RS, which equals a_k^R(t) = min(Q_k^B(t), r_k^B(t)). As (6.3) shows, it is assumed that arrivals occur after transmissions in each time slot. In other words, up to r_k^B(t) or r_k^R(t) bits from the corresponding queue in the BS or RS are transmitted first, and then the new arrivals are accumulated. We assume that the packet arrival processes at the BS are stationary and ergodic and have finite mean and variance. In the following, we will use the capacity of a link interchangeably with that of its corresponding user, e.g., c_{l_k^B}(t) and c_k^B(t) for the link l_k^B. Also, in order to uniquely specify the links, we will denote by b_l and k_l the beginning point and the user corresponding to the link l ∈ L, respectively.

Note that the system model and the notation defined in this section can be used, with slight modification, for two-hop networks with more than one relay and a similar architecture (where each user is assigned to either the BS or a single relay and there is therefore only one path for packets to travel from the BS to each user). In such networks, all the users that are connected to relays can be considered as indirect users connected to a single RS, and the queues in the relays can be thought of as queues located in this one RS.
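The one-slot update in (6.3) can be sketched as follows (a hypothetical Python helper, assuming rates and arrivals are given in bits):

```python
def update_queues(QB, QR, rB, rR, aB, direct):
    """One-slot update of the BS and RS queues of user k, following (6.3):
    transmissions are served first, then new arrivals are accumulated.
    QB, QR : current queue sizes [bits] at the BS and RS
    rB, rR : scheduled transmission rates [bits/slot] on l_k^B and l_k^R
    aB     : exogenous arrivals [bits] at the BS for user k
    direct : True if user k is served directly by the BS."""
    # Data leaving the BS queue; for an indirect user it arrives at the RS.
    served_B = min(QB, rB)
    aR = 0 if direct else served_B      # a_k^R(t) = min(Q_k^B(t), r_k^B(t))
    QB_next = max(QB - rB, 0) + aB
    QR_next = max(QR - rR, 0) + aR
    return QB_next, QR_next

# Indirect user: 500 bits at the BS, 200 at the RS; the BS link carries 300
# bits this slot, the RS link 250; 100 new bits arrive at the BS.
print(update_queues(500, 200, 300, 250, 100, direct=False))  # (300, 300)
```

Note how the RS queue can grow even while being served, because the BS forwards data into it within the same slot.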
The link definitions and their corresponding variables would then be the same as above; the only difference is that the channel states of the indirect users' links between the BS and RS are no longer all the same, so the vectors of channel states and rates should be defined as s = [s_1^B, ..., s_{K_1}^B, s_{K_1+1}^B, ..., s_K^B, s_{K_1+1}^R, ..., s_K^R] and r = [r_1^B, ..., r_{K_1}^B, r_{K_1+1}^B, ..., r_K^B, r_{K_1+1}^R, ..., r_K^R].

6.2.2 Background and Problem Statement

One of the important goals in queueing systems is the stability of the queues, meaning that the queue sizes should remain bounded. Any scheduling algorithm that is able to achieve this goal in a stabilizable system is called throughput-optimal. This means that such an algorithm maximizes the throughput, as long as the exogenous packet arrival rates at the queues of the source nodes of the system are supportable by some algorithm. Note that this approach is different from capacity maximization, which is discussed in the literature under the assumption of infinitely backlogged buffers at the source nodes. Under that assumption, capacity maximization is equivalent to throughput maximization; however, when the queue dynamics are taken into account, capacity maximization does not necessarily lead to throughput maximization, due to the bursty nature of data traffic. Instead, queue stability is studied, because if an algorithm is able to make the system stable, it in fact achieves the maximum possible throughput, which is equal to the sum of the average rates of the data arriving in the system.

In [73], the authors addressed queue stability for the first time in a wireless multihop network. They proposed the MW algorithm, which is throughput-optimal and is also known as MDB.
MDB aims at maximizing the sum of the weighted rates of the links, Σ_{l∈L} β_{k_l} w_l(t) r_l(t), where β_{k_l} is a constant related to the service class and priority of user k_l, and w_l(t) is the difference between the queue sizes of the nodes at the beginning and ending points of link l in time slot t. Therefore, MDB is able to utilize multiuser diversity and at the same time provides stability of the queues, as long as all the packet arrival rates at the BS are strictly within the capacity region.

According to the MDB policy, in the case of a shared channel for the BS and RS, the channel is allocated to the "best" link based on the following criterion:

l^{sh*}(t) = argmax_{l∈L} β_{k_l} w_l^{MDB}(t) c_l(t)    (6.4)

where

w_l^{MDB}(t) =
  Q_k^B(t),               l ∈ {l_k^B}, k ∈ K_d
  Q_k^B(t) − Q_k^R(t),    l ∈ {l_k^B}, k ∈ K_r    (6.5)
  Q_k^R(t),               l ∈ {l_k^R}, k ∈ K_r

In the case of independent channels for the BS and RS, (6.4) is applied separately to each of the BS and RS channels, to allocate them to one of the links in L_B and L_R, respectively.

MDB has mostly been used in multihop routing networks with a mesh structure, where a packet can be forwarded along several paths to reach its destination. Using the difference of queue backlogs as the link weights leads to the well-known "backpressure routing" method, which aims at equalizing the differential backlogs and routes packets through nodes with small queue sizes (a sign of good channel conditions on their forward transmission links). In a single-path relay network, if all the users' queue sizes in the BS are equal, MDB gives lower priority to the links {l_k^B}, k ∈ K_r, whenever the indirect users have non-empty queues in the RS.
Therefore, on average, the total queue size of an indirect user (the sum of its queue sizes in the BS and RS) will differ from that of a direct user with the same β_k, average data arrival rate and average channel conditions. As a result, MDB discriminates between the direct and indirect users by causing different queueing delays for them.

In order to provide a fairer solution in terms of delay, [46] proposed the MSB algorithm for two-hop systems with independent transmission channels for the BS and RS. It uses a scheduling method similar to (6.4) for each of the BS and RS channels, but with link weights w_l^{MSB}(t) different from those of MDB. MSB treats all the links starting at the BS, L_B, alike, and sets their weights equal to the corresponding queue sizes in the BS. For the transmissions from the RS to its users, the weights equal the sum of the corresponding queue sizes in the BS and RS; i.e.,

w_l^{MSB}(t) =
  Q_k^B(t),              l ∈ {l_k^B}, k ∈ K    (6.6)
  Q_k^B(t) + Q_k^R(t),   l ∈ {l_k^R}, k ∈ K_r

Although MSB aims to compensate for the relaying effect on indirect users through these weights, it might lead to inefficient utilization of the system resources and consequently to instability in systems with a shared channel for the BS and RS. This happens because an indirect user's link starting at the RS might be selected when the user's queue in the BS holds a large number of packets but its queue in the RS is almost empty.

Note that in the context of throughput-optimal algorithms and networks with inelastic data traffic, throughput fairness is not discussed. This is because, as a result of the stability of the system, each destination receives an average throughput equal to the average data arrival rate at its source and cannot get more or less than what has arrived for it.
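To illustrate how the two policies can disagree, the following sketch (hypothetical Python helpers; equal-capacity links and β_k = 1 for simplicity) computes the weights of (6.5) and (6.6) and applies the shared-channel rule (6.4):

```python
def mdb_weight(link, QB, QR):
    """Link weight under MDB, per (6.5).  link = (origin, k, direct)."""
    origin, k, direct = link
    if origin == 'B':
        return QB[k] if direct else QB[k] - QR[k]
    return QR[k]                      # origin == 'R', indirect user only

def msb_weight(link, QB, QR):
    """Link weight under MSB, per (6.6)."""
    origin, k, direct = link
    if origin == 'B':
        return QB[k]
    return QB[k] + QR[k]

def schedule_shared(links, QB, QR, c, beta, weight_fn):
    """Shared-channel selection, per (6.4): pick the link maximising
    beta_k * w_l(t) * c_l(t)."""
    return max(links, key=lambda l: beta[l[1]] * weight_fn(l, QB, QR) * c[l])

# Two users: user 0 direct, user 1 indirect (served via the RS).
QB = {0: 400, 1: 500}          # queue sizes at the BS [bits]
QR = {1: 300}                  # queue size at the RS [bits]
beta = {0: 1.0, 1: 1.0}
links = [('B', 0, True), ('B', 1, False), ('R', 1, False)]
c = {l: 1.0 for l in links}    # identical channel rates, to isolate the weights

print(schedule_shared(links, QB, QR, c, beta, mdb_weight))  # ('B', 0, True)
print(schedule_shared(links, QB, QR, c, beta, msb_weight))  # ('R', 1, False)
```

With identical channels, MDB deprioritises the indirect user's BS link (weight 500 − 300 = 200) and serves the direct user, while MSB gives the RS link the combined weight 500 + 300 = 800, illustrating the different treatment of the relayed user discussed above.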
On the other hand, although throughput-optimal algorithms lead to bounded average queue sizes and average delays, they do not necessarily result in similar delay performance among the users. This needs to be addressed in cellular networks, because providing similar QoS, independent of the location of the users, is one of the objectives of service providers. In order to discuss this rigorously, in this chapter we consider scenarios where the users have equal data arrival rates, so the only difference between the users is their location.

In the next section, we propose a new throughput-optimal scheduling method for single-path relay networks, based on the MW algorithm and a new perspective.

6.3 MMW Policy

In this section, we propose the MMW scheduling policy, which can be exploited both in systems with a shared channel for the BS and RS and in systems with independent channels.

6.3.1 Motivation and The Main Idea

MMW aims at providing similar QoS for the direct and indirect users, efficient use of the system resources and lower signaling overhead in single-path relay networks. It tries to maximize the sum of the weighted rates of the links, with weights different from those of MDB and MSB.

Fig. 6.2 depicts the main idea of MMW.^12

^12 Here, a single server indicates a shared channel. For independent channels, a similar figure can be considered, with one independent server for queues 1 and 2 and another independent server for queue 3.

Figure 6.2: (a) Parallel queues (b) combination of parallel and tandem queues.

In the system of parallel queues in Fig. 6.2(a), all the queues have exogenous packet arrival processes. If such a system is stabilizable, the MW algorithm is able to provide stability by setting the weight of each outgoing link equal to its queue size and maximizing the sum of the weighted transmission rates of the links. Now consider Fig.
6.2(b), which is a combination of parallel and tandem queues, with the same channel capacities for the outgoing links, and the same arrival processes at queues 1 and 2, as in Fig. 6.2(a). Since the arrivals at queue 3 in Fig. 6.2(b) are delayed and processed versions of its arrivals in Fig. 6.2(a), i.e., with different packet sizes and arrival times, the total data entering queue 3 will not exceed the data entering it in Fig. 6.2(a). Based on this, we propose to treat the system in Fig. 6.2(b) as parallel queues and, therefore, set the weight of each link proportional to the queue size at its beginning point. However, to be able to prove stability, we attach a condition to this, which will be stated later.

Note that a single-path relay network is a system similar to Fig. 6.2(b), and we can therefore exploit the perspective mentioned above. Specifically, we set the weight of the link of an indirect user from the BS to the RS proportional to the corresponding queue size in the BS, unless the corresponding queue size in the RS exceeds a large threshold Q̂; when this happens, the difference of the corresponding queue sizes in the BS and RS is used in its weight (similar to MDB). Also, to compensate for the effect of the relay's queueing delay on the indirect users, we can use a coefficient based on the number of hops a packet experiences before arriving at the destination. For example, in a system with a shared channel and two hops for indirect users, when all the links have similar channel conditions, this coefficient can be chosen equal to 2, to give twice as much priority to both links (starting at the BS and RS) of indirect users compared with the links of direct users.
Furthermore, to improve the quality of the link selection, we use the actual channel rates of the links instead of their capacities.

Based on the above discussion, the objective in MMW is as follows:

Maximize_{i∈Π} Σ_{l∈L} α_{k_l} β_{k_l} w_l^{MMW}(t) min{Q_{k_l}^{b_l}(t), r_l(i(t), s(t))}    (6.7)

where β_{k_l} is the constant related to the service class and priority of user k_l, as before, and w_l^{MMW}(t) is the weight of link l in MMW, computed as

w_l^{MMW}(t) =
  Q_k^B(t),                               l ∈ {l_k^B}, k ∈ K_d
  Q_k^B(t) − I[Q_k^R(t) > Q̂] Q_k^R(t),   l ∈ {l_k^B}, k ∈ K_r    (6.8)
  Q_k^R(t),                               l ∈ {l_k^R}, k ∈ K_r

and α_{k_l} is defined as

α_{k_l} =
  1/c_{l,avg},           l ∈ {l_k^B}, k ∈ K_d
  z/c_{l,avg}, z ≥ 1,    l ∈ {l_k^B} ∪ {l_k^R}, k ∈ K_r

In (6.8), I[·] is the indicator function, which is one if its argument is true and zero otherwise. c_{l,avg} is the estimated average transmission rate of link l, the use of which is clarified later. Based on (6.8), as long as a queue size in the RS remains below the threshold Q̂, the weight of the corresponding link from the BS to the RS is proportional to just the queue size in the BS. On the other hand, when the queue size in the RS grows larger than Q̂, the weight of the corresponding link from the BS to the RS becomes proportional to the difference of the queue sizes in the BS and RS. As will be shown later through simulations, by defining a suitably large threshold, the queue sizes in the RS remain below the threshold almost all the time (more than 99.99 percent of it); therefore, practically, the weights of all the links are proportional to just the queue sizes at their beginning points (as in the system of parallel queues), i.e., w_l^{MMW}(t) = Q_{k_l}^{b_l}(t).

In the definition of α, the parameter z is the coefficient mentioned before: if we want to compensate for the effect of relaying delay in a single-path multihop network, it can be set to a value larger than one, based on the number of hops, channels and other network parameters.
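A minimal sketch of (6.7)–(6.8) in hypothetical Python (β_k = 1, z = 2, and the numeric values are illustrative assumptions) shows how the local-queue weights, α and the min operator combine:

```python
def mmw_weight(link, QB, QR, Q_hat):
    """Link weight under MMW, per (6.8): the local queue size, unless the
    RS queue of an indirect user exceeds the threshold Q_hat."""
    origin, k, direct = link
    if origin == 'B':
        if direct or QR[k] <= Q_hat:
            return QB[k]
        return QB[k] - QR[k]          # rare case: RS queue above threshold
    return QR[k]

def mmw_metric(link, QB, QR, Q_hat, c, c_avg, beta, z):
    """Per-link scheduling metric of (6.7): alpha * beta * w * min(Q, c)."""
    origin, k, direct = link
    alpha = (1.0 if direct else z) / c_avg[link]
    Q_local = QB[k] if origin == 'B' else QR[k]   # queue at beginning point
    return alpha * beta[k] * mmw_weight(link, QB, QR, Q_hat) * min(Q_local, c[link])

# User 0 direct, user 1 indirect; z = 2 compensates for the extra hop.
QB, QR, Q_hat = {0: 400, 1: 400}, {1: 300}, 10_000
beta = {0: 1.0, 1: 1.0}
c = {('B', 0, True): 500, ('B', 1, False): 500, ('R', 1, False): 500}
c_avg = {l: 500.0 for l in c}
best = max(c, key=lambda l: mmw_metric(l, QB, QR, Q_hat, c, c_avg, beta, 2))
print(best)   # ('B', 1, False): the indirect user's BS link gets priority
```

With equal BS queues and equal channels, the factor z = 2 makes the indirect user's BS link win (metric 640 versus 320 for the direct user), in contrast to MDB, which would have penalised that link.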
The reason for using c_{l,avg}, on the other hand, is independent of the different hop counts of the users and is similar to that of PF scheduling [76]. Its inclusion in α_{k_l} compensates for the effect of the different path losses of the links on the queue sizes. For example, consider two direct users with the same packet arrival rate and different distances from the BS. Although a throughput-optimal algorithm will keep their queues stable and provide the same long-term throughput for both, without the factor 1/c_{l,avg} the user far from the BS would have a larger queue size on average over time and would therefore receive its data with more delay. The min operator used in (6.7) makes sure that the actual channel rates of the links are considered. This has a similar effect as in single-hop networks [77], and prevents inefficient link selection (which might happen when a queue holds little data but its corresponding link has a high channel capacity) and the consequent large queue sizes in the BS and RS.

Note that in single-path relay networks, where the routes are fixed, there is no need for backpressure routing. Hence, defining a suitable threshold, with a value significantly larger than the RS queue sizes (as will be discussed in Section 6.5), makes it possible to use the local queue sizes in the weights of the links. This way, MMW applies a kind of forepressure to the next hop and forces the RS queues to remain below the threshold.
Moreover, defining α_{k_l} as stated above helps to provide more service to the indirect users' queues in the BS and RS, and improves fairness in terms of queueing delay between the direct and indirect users.

Based on the above, in the case of a shared channel, MMW allocates the channel to the link l^{sh*} as follows:

l^{sh*}(t) = argmax_{l∈L} V_l(t),  where V_l(t) = α_{k_l} β_{k_l} w_l^{MMW}(t) min{Q_{k_l}^{b_l}(t), c_l(t)}    (6.9)

In the case of independent channels for the BS and RS, (6.9) can be used separately for selecting l^{B*} and l^{R*} from L_B and L_R, respectively. We will show later that MMW leads to an interesting result in this case: by selecting a suitably large Q̂, there is no need for any information exchange about the channel states and queue sizes of the indirect users, almost all the time. Hence, a completely distributed stabilizing resource allocation can be implemented, which is highly scalable for multihop networks with more than two hops.

6.3.2 Stability Analysis

We note that, following [60], the capacity region of our system is defined as follows:

Definition 6.1 The network capacity region Λ is the closure of the set of all rate vectors λ^B = [λ_1^B ... λ_K^B] that can be stably supported over the network, considering all possible algorithms (including those with full knowledge of future events).

It is shown in [60] that the capacity region is determined by the channel states, power allocations and achievable transmission rates of the links of the network. The sufficient condition for the stability of the network is that λ^B be interior to Λ, i.e., there should exist an ε > 0 such that λ^B + ε ∈ Λ [60]. This ensures that it is feasible to provide the system queues with service rates strictly larger than their data arrival rates.
Considering the above, in the following theorem we show that MMW is throughput-optimal.

Theorem 6.1 If there exists an ε > 0 such that λ^B + ε ∈ Λ, then MMW is able to stabilize the system.

The proof of Theorem 6.1 is given in Appendix C.

6.4 Distributed and Semi-Distributed Implementations

The MMW scheduling algorithm can be easily implemented in decentralized ways. According to (6.5) and (6.6), in MDB (MSB), the BS (RS) needs information about the queue sizes in the RS (BS) to compute its weights. On the other hand, according to (6.8), by defining the queue threshold large enough in MMW, the BS and RS can decide their weights independently, based only on their local queue information, almost all the time. In the following, we assume that a suitably large Q̂ has been selected (this is clarified in Section 6.5) such that the RS queue sizes remain below it, and based on that, we discuss the decentralized resource allocation. In the rare events that any RS queue size exceeds the threshold, the RS will need to notify the BS about that queue size, in addition to any other information.

6.4.1 Case 1: Shared Channel

In the case of a shared channel, the BS and RS can initially exchange messages about the network parameters to decide the value of z. After that, using MMW in each time slot and based on their local QCSI, they can compute V_l in (6.9) for their own links, and the RS can inform the BS about its maximum V_l. Then, the BS can compare it with the V_l of its own links and find l^{sh*}. If l^{sh*} ∈ L^B, the BS will inform the ending point of l^{sh*} (which is either one of the direct users or the RS) to prepare for data reception; then, the BS will use the channel to serve the corresponding queue and transmit data to the link's ending point. If l^{sh*} ∈ L^R, the RS can perform similar procedures to transmit data, after being notified by the BS that the winner link belongs to it.^13
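The shared-channel exchange just described can be sketched as follows (a minimal sketch; message formats and tie-breaking in favor of the BS are our assumptions):

```python
def rs_report(rs_links):
    """RS side: compute V_l locally for its own links and report only
    the best (link id, V_l) pair to the BS."""
    best = max(rs_links, key=rs_links.get)
    return best, rs_links[best]

def bs_decide(bs_links, rs_best):
    """BS side: compare its own best V_l with the RS report and decide
    which node transmits on the shared channel in this slot."""
    bs_id, bs_v = max(bs_links.items(), key=lambda kv: kv[1])
    rs_id, rs_v = rs_best
    return ("BS", bs_id) if bs_v >= rs_v else ("RS", rs_id)
```

With more than one RS, each RS would send its own report and the BS would take the overall argmax, mirroring footnote 13.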
Note that in the semi-distributed resource allocation here, the RS just needs to inform the BS about the highest value of its V_l for the transmission channel; but in the case of using MDB or MSB, it will also be necessary to exchange QSI for the queue sizes modified in the previous time slot. This is especially important in OFDMA networks, where there are usually a large number of subcarriers and users in the system and, therefore, in each time slot it is possible to transmit to many users and to have modified information about many queues. Specifically, considering a system with N^sc subchannels, MDB can impose an overhead of up to min(N^sc, K − K_1) signaling messages from the RS to the BS, as it is possible to transmit to this number of users from the RS; with MSB, an overhead of up to K − K_1 signaling messages from the BS to the RS will be imposed, due to the possibility of packet arrivals in any of the BS queues that belong to indirect users.

^13 In systems with more than one RS, similar procedures can be performed: all the RSs inform the BS about their maximum V_l, and then the BS finds l^{sh*} and notifies the RS to which l^{sh*} belongs.

6.4.2 Case 2: Independent Channels and a Highly Scalable Framework

In the case of independent transmission channels for the BS and RS, distributed resource allocation becomes much easier and leads to a more interesting result. After the initial system settings and adjusting z > 1, each of the BS and RS can use (6.9) for allocating its channel and power to a link in L^B and L^R, respectively. The main interesting feature here is that, in terms of signaling, the RS acts like a direct user and feeds back just its own receiving channel condition to the BS; there is no need for any other signaling between the BS and RS, neither about V_l nor about the QCSI of indirect users.^14
Note that in this case, there is no point in using centralized resource allocation, as the BS and RS have separate power resources and can operate independently. Accordingly, in OFDMA systems with independent frequency bands for the BS and RS, each of the serving nodes can independently use the algorithms proposed in the literature for traditional OFDMA systems (without relays) [68, 78–86] to allocate its resources.

Based on the above, MMW can be easily extended to single-path relay networks with more than two hops, as in Fig. 6.3, and independent channels for the BS and RSs, where the BS and RSs use (6.9) for allocating their channels to their own links. This way, the aforementioned feature still holds, and each RS acts as a direct user for the previous hop and as a BS for the next hop. Consequently, MMW provides a framework for completely distributed resource allocation in relay-based cellular networks with independent channels, which is highly scalable without increasing the signaling overhead between the serving nodes (i.e., the BS and RSs); as a result, it can be of great advantage in building relay-based cellular networks.

^14 With MDB and MSB, again an overhead will be imposed in the orders mentioned in the previous subsection.

Figure 6.3: Multihop relay-based cellular network. Square, circle and triangle represent the BS, RS and user, respectively.

6.5 Performance Evaluation

To evaluate the system performance, we have conducted extensive Matlab simulations over 10000 time slots, in a system with cell radius equal to 1000 m, where the RS is located at a distance of 2/3 of the cell radius from the BS; the BS, RS and user antenna heights are assumed to be 15 m, 5 m and 1.5 m, respectively, and the transmission powers of the BS and RS are 31 dBm and 22 dBm, respectively. The path loss of each link is calculated based on [61], and the power spectral density of the noise in the receivers is assumed equal to -174 dBm/Hz.
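For reference, the dBm figures above convert to linear watts as p = 10^((P_dBm − 30)/10); a one-line helper (ours, for use when plugging these parameters into the capacity formulas below):

```python
def dbm_to_watts(p_dbm):
    """Convert a power level in dBm to watts.
    30 dBm is exactly 1 W; 31 dBm is about 1.26 W."""
    return 10.0 ** ((p_dbm - 30.0) / 10.0)
```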
The bandwidth of the shared channel used for transmissions from the BS or RS is 1 MHz, and the time slot duration is set to 1 ms. Users' data packets arrive at the BS according to a Bernoulli distribution, with packet sizes equal to 1 kbit.

For the channels from the BS or RS to the users, a Rayleigh fading model is used, while the channel from the BS to the RS is modeled as Rician with a κ factor equal to 6 dB [62]. The channel fading over the system bandwidth is assumed to be flat, varying independently from one time slot to another.

For computing the channel capacity of the links, we have used c_l(t) = W T log2(1 + p_l s_l(t)), where W is the channel bandwidth, T is the time slot duration and p_l is the power used for transmission on the channel for link l. s_l(t) is the instantaneous channel gain-to-noise ratio at the receiver side of link l, computed as s_l(t) = |h_l(t)|^2 G_l / σ_n^2, where h_l(t) and G_l are, respectively, the small-scale fading coefficient in time slot t and the path loss attenuation of link l, and σ_n^2 is the variance of the Gaussian noise at the receiver side of link l. Based on these, as an estimate of the average channel capacities, we have used c_{l,avg} = W T log2(1 + p_l G_l / σ_n^2), i.e., the capacity without considering the small-scale fading. In order to investigate just the effect of distance and relaying on user delays, we have assumed that all the users have the same service class and, therefore, have set β_k = 1 for all of them.

First, in order to decide on the queue threshold, we consider a system with a shared channel for the BS and RS, in a scenario where 8 direct and 4 indirect users are located at the middle point between the BS and RS, and at the middle point between the RS and the cell edge, respectively.
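The per-link capacity computation above can be written directly (a sketch using the stated formulas; parameter names are ours):

```python
import math

def gain_to_noise(h_mag_sq, path_gain, noise_var):
    # s_l(t) = |h_l(t)|^2 * G_l / sigma_n^2
    return h_mag_sq * path_gain / noise_var

def link_capacity_bits(bw_hz, slot_s, power, s):
    # c_l(t) = W * T * log2(1 + p_l * s_l(t)), in bits per slot
    return bw_hz * slot_s * math.log2(1.0 + power * s)

def avg_capacity_bits(bw_hz, slot_s, power, path_gain, noise_var):
    # c_{l,avg}: the same formula with small-scale fading removed (|h|^2 = 1)
    return link_capacity_bits(bw_hz, slot_s, power,
                              gain_to_noise(1.0, path_gain, noise_var))
```

For Rayleigh fading, |h_l(t)|^2 would be drawn as a unit-mean exponential random variable in each slot.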
The reason for selecting these distances is to approximate the average case of random distances of direct and indirect users from the BS and RS, respectively. The packet arrival probability in each slot is 0.28 for every user. Therefore, the average bit arrival rates are equal to 280 kbps, which corresponds to a medium load in this scenario. Fig. 6.4(a) displays the percentage of time slots in MMW^15 with z = 1, 2 in which, on average, each queue in the RS exceeds the shown preset queue threshold (which makes it necessary to set the weight of the corresponding links from the BS to the RS based on the differential queue sizes). It is observed that as the considered threshold increases, the percentage of time slots decreases, and beyond a threshold of 12 kbits this percentage reaches zero. It is worth mentioning that in this setting, due to the forepressure effect stated before, queue sizes remain under the threshold. This can be observed in Fig. 6.4(b), which depicts the maximum queue sizes in the RS when Q̂ = 14 kbits. These results show that if we preset the threshold to a large value, MMW will set the weights of the links proportional to just the queue sizes at the links' beginning points, almost all the time. This leads to less overhead, which facilitates the semi-distributed and completely distributed resource allocation discussed in Section 6.4. Also, it improves the delay fairness in the system, as shown in the figures later. In the following, we have considered a queue threshold equal to 50 kbits.

^15 In the figures, for brevity, the coefficients are displayed immediately after the term MMW.
Figure 6.4: (a) Percentage of time slots each queue in the RS exceeds the queue threshold; (b) maximum queue sizes in the RS, over time, when Q̂ = 14 kbits.

With this threshold, we observed that almost all the time (more than 99.99 percent of the time slots), the RS queues remain under the threshold and, therefore, local queue sizes are used in the weights of the links.

Now, we investigate the delay fairness in the above system, i.e., with a shared channel and a packet arrival probability of 0.28, where the 8 direct users between the BS and RS and the 4 indirect users between the RS and the cell edge are randomly located (with uniform distribution). We have run simulations for more than 100 realizations of user locations, each over 10000 time slots. Fig. 6.5 depicts the CDF of the direct and indirect users' experienced queue sizes and their average bit delays. Here, the queue size for relayed users is the sum of their corresponding queue sizes in the BS and RS, and the average bit delay is defined, based on Little's law, as the ratio of the average queue size to the average data arrival rate [87]. A fairer policy will have closer values of queue sizes and average bit delays and, as a result, closer graphs for both groups of users. It is observed that in a shared channel scenario, on average, MDB has good performance for direct users while discriminating against indirect users. This is due to the differential-backlog-based weights for the links of indirect users from the BS to the RS, according to (6.5), which result in lower priorities and service rates for the queues of indirect users in the BS.
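The average bit delay metric used here is just Little's law; for example, a 14 kbit average queue fed at 280 kbps corresponds to a 50 ms average delay:

```python
def average_bit_delay_s(avg_queue_bits, avg_arrival_bps):
    """Little's law: mean delay = mean queue size / mean arrival rate.
    Returns the average bit delay in seconds."""
    return avg_queue_bits / avg_arrival_bps
```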
Moreover, the queueing at the RS adds to the delay of the relayed users' data before reaching the users. Therefore, on average, the indirect users receive their data with more delay than the direct users. MSB has better performance than MDB, as it assigns the weights based on (6.6), which results in higher priorities for the links of indirect users starting at the BS and RS, compared with MDB, and more service for their corresponding queues.

MMW1 and MMW2 behave better than MDB and MSB in providing good delay performance for both the direct and indirect users. In particular, using min{Q_{b_l k_l}(t), r_l(i_l(t), s_l(t))} in (6.7) leads to efficient link selection and channel utilization, and using 1/c_{l,avg} helps in providing similar delay for the users in each group. Moreover, by using w_l^{MMW}(t) = Q_{b_l k_l}(t) almost all the time, MMW1 and MMW2 provide more service for the queues of indirect users at the BS, compared with MDB. This reduces the delay for the data of indirect users. Furthermore, MMW2 sets z = 2, which doubles the weights of the indirect users' links starting at the BS and RS, and serves their corresponding queues with even higher rates.

Figure 6.5: (a) CDF of the average queue size for direct and indirect users; (b) average bit delay of all the users; the case of shared channel.

Figure 6.6: CDF of Jain's fairness index for average bit delays of the users; the case of shared channel.
This way, MMW2 decreases the delay of the indirect users and increases that of the direct users and, therefore, provides similar performance for all the users. These results are also reflected in Fig. 6.6, which shows the CDF of Jain's fairness index [88] for users' average bit delays in each realization of user locations. It is observed that Jain's fairness index for average bit delays in MSB is larger than in MDB. MMW1 leads to even larger values of Jain's fairness index, and MMW2 has the largest. Note that Jain's fairness index does not provide any insight about efficiency; it is through the figures of queue size and average bit delay that we get a comprehensive idea about the efficiency and fairness of the algorithms.

To have a clearer picture of the effectiveness of the proposed algorithms, we also present the performance under a higher load, with a data arrival rate of 350 kbps for each user, and a special instance of user locations: all the direct users are located at the closest distance to the BS (50 m) and all the indirect users are on the cell edge, i.e., at about 333 m from the RS. As Figs. 6.7 and 6.8 show, in this scenario all the algorithms lead to almost similar queue sizes and average bit delays for the direct users, but MDB results in high values for the indirect users. MSB is better than MDB, but the performance of MMW is remarkably better. Based on the above, we note that MMW is very suitable for giving priority to indirect users and has a fair performance.
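Jain's fairness index over the per-user average bit delays is computed as J = (Σx)² / (n Σx²), equal to 1 when all users experience the same delay:

```python
def jains_index(values):
    """Jain's fairness index: 1.0 for perfectly equal values,
    approaching 1/n when a single user dominates."""
    total = sum(values)
    return total * total / (len(values) * sum(v * v for v in values))
```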
Specifically, MMW1 has better fairness compared with MSB and MDB, and MMW2 is the fairest.

Figure 6.7: CDF of the average queue size for direct and indirect users; the case of shared channel, where direct users are located close to the BS and indirect users on the cell edge.

Figure 6.8: Average bit delay for direct and indirect users; the case of shared channel, where direct users are located close to the BS and indirect users on the cell edge.

As explained in Subsection 6.2.2, it should be noted that MSB was proposed in [46] for the case of independent channels for the BS and RS, and it might lead to instability in some scenarios in the case of a shared channel. To see this clearly, consider a scenario in which the relayed users are located close to the RS, at a distance of 50 m from it. On the other hand, the direct users are located at the middle point between the BS and RS. The packet arrival probability in each slot is 0.45 for every user, leading to average bit arrival rates of 450 kbps. Figure 6.9 shows the average queue size and average throughput of the system. It is observed that the queue sizes grow unbounded with MSB, whereas MMW and MDB are able to keep the queues stable. As a result, while with MMW and MDB the system average throughput in each time slot is equal to the total average data arrival rate in the system (i.e., 12 × 0.45 = 5.4 kbits per time slot), MSB is not able to support the arrival rates.

As explained previously, this happens because an indirect user's link from the RS with
few bits in its corresponding queue in the RS will get selected when there are many bits in its corresponding queue in the BS. This happens especially in the cases where the average channel gain of the indirect users' links starting at the RS is higher than that of those starting at the BS. In this situation, MSB serves the indirect users' queues in the BS at a lower rate and, due to this inefficiency, its stability region is smaller.

Figure 6.9: (a) System average queue size over time; (b) system average throughput in each time slot; the case of shared channel, where indirect users are located close to the RS.

For the case with independent channels, we have assumed a separate channel at the RS with a bandwidth of 500 kHz, over a different frequency band. Thus, the BS and RS can transmit at the same time without interfering with each other. Fig. 6.10 investigates the CDF of the direct and indirect users' experienced queue sizes and the average bit delays. The users' location settings are similar to those of Fig. 6.5, and the packet arrival probability is equal to 0.42, which is a medium load in the case of independent channels. This is due to the fact that with separate resources for serving the indirect users' queues at the RS, the system capacity is higher than in the system with a shared channel. It is observed that while MSB has better performance than MDB in some realizations, MMW behaves better than MDB and MSB in all the realizations of user locations and results in lower queue sizes and average delays. In particular, MMW with z = 1.5 provides similar queue sizes and average delays for direct and indirect users and, therefore, as shown in Fig. 6.11, has higher values of Jain's fairness index.
The coefficient z = 1.5 has been obtained through simulations. Note that since the indirect users have the benefit of using two channels, one for their BS-to-RS links and one for their links starting at the RS, z = 2 is not a suitable coefficient if we want high fairness, as it can lead to discrimination against the direct users.

Similar to the case of the shared channel, we also note that in the case of independent channels, MMW is more efficient, as it assigns the links' weights, almost all the time, proportional to just the queue sizes at the beginning points of the links and, therefore, does not need BS/RS signaling about the QCSI of indirect users.

Figure 6.10: (a) CDF of the average queue size for direct and indirect users; (b) average bit delay of all the users; the case of independent channels.

Figure 6.11: CDF of Jain's fairness index for average bit delays of the users; the case of independent channels.

6.6 Summary

In this chapter, we have proposed a variation of throughput-optimal resource allocation algorithms, namely MMW, for buffer-aided relay-based cellular networks with one path for packet transmissions from the BS to each user. MMW uses just the corresponding local queue size for assigning the weight of a link, except in the rare events that the queue size in the RS exceeds a predefined large threshold, in which case the weight is defined according to conventional MW.
Moreover, MMW can adjust a coefficient and prioritize relayed users in order to improve fairness, in terms of average delay, between the direct and relayed users. MMW can be employed both in centralized and decentralized network implementations, as well as in scenarios with shared or independent channels for the BS and RS. In particular, in the case of independent channels for the BS and RS, by defining a suitably large threshold, a completely distributed resource allocation is possible most of the time, without any signaling between the BS and RS about the QCSI of the relayed users. Numerical results confirm this, as well as the fact that MMW is able to improve the delay fairness in the system and provides similar performance for direct and relayed users.

Chapter 7

Conclusions and Future Work

In this chapter, we conclude the works presented in this thesis and also suggest several topics for future work. Note that the conclusions provided in the following address the questions and objectives stated in Chapter 1. In particular, the conclusion for Chapter 2 addresses the question about the effect of buffer-aided relaying on the end-to-end delay. The service-oriented concern for QoS provisioning in a multiuser system is addressed in the conclusion related to Chapter 3. Finally, the conclusions of Chapters 4, 5 and 6 address the implementation-oriented concerns about the low complexity and efficiency of the resource allocation algorithms.

7.1 Conclusions

• In Chapter 2, we have provided insights into the effect of buffer-aided relaying on the end-to-end delay, i.e., the delay that data packets experience from their arrival at the BS buffer until delivery to the destination. We have shown that using a buffer at the relay helps in utilizing the BS channel more opportunistically and results in the fast transfer of the data packets from the BS buffer to the relay buffer.
Then, the queued data in the relay buffer are served whenever the relay channel is in good condition. This way, the BS and relay channels are used more efficiently and the data packets are delivered to the destination in less time, compared with conventional relaying. Also, we have analyzed the average packet delays in relay networks with Bernoulli packet arrivals and channel conditions, and we have shown mathematically that the average delay is lower in the case of buffer-aided relaying. Furthermore, through intuitive generalizations, we have clarified that the throughput improvement in buffer-aided relaying in fact leads to an improvement in the average end-to-end delay performance as well. We have verified our analysis and discussions through extensive computer simulations. Numerical results confirm that even though buffer-aided relaying leads to queueing delays at the relay, it significantly reduces the queueing delay at the BS and, therefore, on the whole, it reduces the average end-to-end packet delay. These results help in deciding about the use of buffering relays in cellular networks, as they dispel the concern about the delay performance of the system.

• In Chapter 3, we have presented a novel framework for formulating the QoS-aware resource allocation problem in buffer-aided relay-enhanced OFDMA networks. We have shown that the IRAP formulation is itself a challenge when both services with average throughput requirements and services with packet delay thresholds are present in the network. To address that, we have proposed novel CQDA policies. Based on the approaches these policies take to problem formulation, we have called them SUMR and JUMR. SUMR defines the utility function based on only the delay-tolerant users and imposes minimum rate constraints whenever there are delay-sensitive packets in the queues of the BS and relays.
On the other hand, JUMR defines the utility function based on all the users and imposes minimum rate constraints only when getting close to the packet deadlines of delay-sensitive users. We have also proposed enhanced versions of these policies, ESUMR and EJUMR, which use more information or computations in formulating or solving the IRAP. Through extensive computer simulations, we have evaluated the performance of the proposed policies in extreme scenarios as well as general ones. The results show significant improvements in provisioning average throughput and packet delay guarantees, compared with systems without relays, relay-enhanced systems without buffers at the relays, as well as systems with buffer-aided relays that are QoS-unaware. Also, we have observed that when the system load increases, SUMR and ESUMR work in favor of the delay-sensitive users and keep their packet drop ratios equal to zero, at the cost of lowering the throughput of the delay-tolerant users. On the other hand, JUMR and EJUMR are able to jointly serve the delay-sensitive and delay-tolerant users, and penalize both of them whenever the system load increases.

• In Chapter 4, we have presented a novel framework for distributed resource allocation in a buffer-aided relay-assisted OFDMA network. We have provided a new perspective, which considers the buffers at the relays as virtual users and models the whole network as small cells served by the BS and relays. Based on that, and using the concept of time sharing, we have formulated the resource allocation problem as a convex optimization problem. Using dual decomposition, we have proposed an iterative algorithm called DDRA, which provides insights on reducing the computational burden on the BS and the CSI reporting overhead of the system.
In DDRA, the BS and relays pass messages among themselves to solve their own problems, using some global variables and the information about their queues and the channel states of their links. The closed-form equations for power and subchannel allocation in DDRA reveal that it adaptively allocates the system resources based on the queue sizes, channel conditions and required BER of the users. Numerical results confirm that DDRA is able to utilize the potential of buffer-aided relaying and results in significant improvements in terms of average throughput and queue stability.

• In Chapter 5, we have introduced important parameters that adapt the Lyapunov drift-plus-penalty policy to cellular networks with buffering relays. In particular, the importance parameter for average power constraints amplifies the effect of the virtual power queues of the BS and relays in the instantaneous problem. This way, it prevents continuous use of the peak powers at the BS and relays, and facilitates satisfying the average power constraints in the presence of large actual data queue sizes. The other parameter is the extra weight that is given to the links of relayed users, which increases their priority to enable more service for the queues of relayed users. As a consequence, it reduces the queue sizes of relayed users at the BS and helps in providing fair data admission for direct and relayed users. We have also proposed a low-complexity subchannel and power allocation strategy, and based on that, we have designed distributed and centralized resource allocation schemes, respectively called EDDRA and EDCRA, which take into account several practical constraints such as limited buffer capacities and half-duplex relaying. EDDRA splits the resource allocation tasks between the BS and relays, and leads to a lower computational burden on the BS and lower CSI overhead compared with EDCRA.
In particular, in EDDRA, the BS decides about the type of time slot and the sets of subchannels for the relays and itself. Then, each of them allocates its set of subchannels to its links in a distributed way and adjusts its total power to be used on the subchannels. Numerical results show that the proposed parameters lead to fair data admission for the users and help in satisfying the average power constraints. Also, they confirm that EDDRA has close performance to EDCRA and outperforms an existing centralized algorithm in terms of system utility, overflow and throughput.

• In Chapter 6, we have proposed the MMW policy, which assigns the weight of a link based on the queue size at its starting point, except in the rare events that the queue size at the ending point of a feeder link from the BS to a relay exceeds a predefined large threshold, in which case its weight is defined according to conventional MW. MMW reduces the average delay for the relayed users. Moreover, by adjusting a coefficient, it is able to further prioritize the relayed users' links and improve the delay fairness between the direct and relayed users. MMW can be exploited both in centralized and decentralized network implementations, as well as in scenarios with shared or independent channels for the BS and relays. In particular, when the BS and relays have dedicated channels, by defining a suitably large threshold, MMW leads to a completely distributed resource allocation without any signaling between the BS and relays about the local channel and queue states of the relayed users. Numerical results confirm that MMW causes lower overhead and improves the delay fairness between direct and relayed users.

Note that the numerical results in this thesis are obtained from Monte Carlo simulations based on well-established models for random data arrivals and wireless channels, which are widely used in the literature.
Therefore, even though these models may not match practical scenarios exactly, the performance improvements of our proposed schemes over the baseline methods existing in the literature are expected to hold, as the same models have been used in simulating the performance of both the proposed algorithms and the baseline methods.

7.2 Suggestions for Future Work

In the following, we consider several interesting possibilities for extending the current work.

1. Effect of Buffer-Aided Relaying on the Delay Variance: Research on buffer-aided relaying is new, and more investigations are needed to analyze different performance metrics in the presence of buffering relays. Several works in the literature [32, 35, 38, 56], as well as the investigations in this thesis, have already shown that using buffers at relays leads to throughput improvements. Moreover, we have shown that buffer-aided relaying also reduces average end-to-end packet delays. However, the effect of buffer-aided relaying on the delay variance is an open research problem. The different contributing factors that can lead to large or small variances of end-to-end packet delays need to be identified.

2. Energy Efficient Resource Allocation: The growing demand for wireless access to the Internet entails a great amount of energy consumption in cellular networks, which inevitably leads to a bigger carbon footprint and greatly contributes to environmental pollution. Therefore, considering Energy Efficiency (EE) in the system design is becoming an urgent trend, and recently remarkable efforts have been invested in this area [89–95]. In energy-efficient design, the goal is to optimize the amount of data transmitted per unit energy, which requires tradeoffs with Spectral Efficiency (SE) and system capacity. Considering the fact that the buffering capability in relays improves the system capacity, it can result in better tradeoffs between EE and SE.
This needs to be investigated further, and the relevant parameters need to be identified. Moreover, since EE can lead to larger queue sizes in the system, it is necessary to study the instantaneous problem formulation such that the EE is optimized over time while keeping the queues stable.

3. Mobility Awareness: One of the challenges in cellular networks is maintaining wireless connectivity for users that move in the network area, leaving the coverage of one serving node and entering a new one. This is usually addressed through link quality measurements and handover mechanisms. If the serving nodes are buffer-aided fixed relays, the data buffered in the previous relay needs to be retransmitted from the BS to the new relay. In scenarios where the users have high mobility, such as driving on a road, this can affect the throughput performance of the system, as the volume of retransmissions is large. Therefore, suitable predictive resource allocation algorithms are needed to prevent the BS from transmitting to the old relay, early enough before the handover.

On the other hand, for scenarios where many users move together, such as in a bus or train, mobile relays have been considered as a potential solution to reduce the overhead of handover signaling [96–100]. Using buffering capability in mobile relays can improve the reliability and quality of service in such scenarios. However, suitable predictive resource allocation methods are needed in this case to feed the buffers of users in the mobile relay, early enough, to ensure satisfactory service during the handover procedure.

4. Multicell Scenarios: One of the important research topics is to consider the challenges encountered in multicell systems. Noting that the next generation of cellular networks will be aggressive in frequency reuse, the interference from adjacent cells can degrade the performance of the systems. There has been continuous work going on in this area [101–105].
However, these works either consider networks without relays or relay networks without buffering capability at the relays. Since buffering makes it possible to postpone data transmissions from the relays, it can facilitate interference management mechanisms. However, coordination between the adjacent cells and the involved entities is challenging, as the queue states at the relays should also be taken into account, in addition to the channel conditions. Therefore, resource allocation in buffer-aided relay-assisted multicell scenarios needs to be investigated, and efficient algorithms need to be designed.

Bibliography

[1] D. Pareit, B. Lannoo, I. Moerman, and P. Demeester, "The history of WiMAX: A complete survey of the evolution in certification and standardization for IEEE 802.16 and WiMAX," IEEE Commun. Surveys and Tutorials, vol. 14, no. 4, pp. 1183–1211, Oct. 2012.

[2] "IEEE standard for air interface for broadband wireless access systems," IEEE Std. 802.16-2012, 2012.

[3] S. Parkvall, A. Furuskar, and E. Dahlman, "Evolution of LTE toward IMT-advanced," IEEE Commun. Magazine, vol. 49, no. 2, pp. 84–91, Feb. 2011.

[4] "Evolved universal terrestrial radio access (E-UTRA) and evolved universal terrestrial radio access network (E-UTRAN); overall description: Stage 2," 3GPP, TS 36.300, Rel. 12, Mar. 2015.

[5] J. Son, "New feature for IMT-advanced: Relay," Samsung Electronics Training Workshop on 4G Mobile (IMT Advanced) System and Applications, Nov. 2009. [Online]. Available: https://www.itu.int/ITU-D/asp/CMS/Events/2009/CoE/4Gmobile/Session6 SON.pdf

[6] C. Hoymann, W. Chen, J. Montojo, A. Golitschek, C. Koutsimanis, and X. Shen, "Relaying operation in 3GPP LTE: challenges and solutions," IEEE Commun. Magazine, vol. 50, no. 2, pp. 156–162, Feb. 2012.

[7] M. Salem, A. Adinoyi, M. Rahman, H. Yanikomeroglu, D. Falconer, Y. Kim, E. Kim, and Y. Cheong, "An overview of radio resource management in relay-enhanced OFDMA-based networks," IEEE Commun. Surveys and Tutorials, vol. 12, no. 3, pp. 422–438, Third quarter 2010.

[8] M. Salem, A. Adinoyi, H. Yanikomeroglu, and D. Falconer, "Opportunities and challenges in OFDMA-based cellular relay networks: A radio resource management perspective," IEEE Trans. Veh. Technol., vol. 59, no. 5, pp. 2496–2510, June 2010.

[9] K. Sundaresan and S. Rangarajan, "Adaptive resource scheduling in wireless OFDMA relay networks," in Proc. IEEE Conf. on Computer Commun., Mar. 2012, pp. 1080–1088.

[10] D. Ng and R. Schober, "Cross-layer scheduling for OFDMA amplify-and-forward relay networks," IEEE Trans. Veh. Technol., vol. 59, no. 3, pp. 1443–1458, Mar. 2010.

[11] D. Ng, E. Lo, and R. Schober, "Dynamic resource allocation in MIMO-OFDMA systems with full-duplex and hybrid relaying," IEEE Trans. Commun., vol. 60, no. 5, pp. 1291–1304, May 2012.

[12] Y. Pan, A. Nix, and M. Beach, "Distributed resource allocation for OFDMA-based relay networks," IEEE Trans. Veh. Technol., vol. 60, no. 3, pp. 919–931, Mar. 2011.

[13] S. Zhang and X. Xia, "A high-efficiency semi-distributed resource allocation in OFDMA-based wireless relay networks," in Proc. IEEE Wireless Commun. and Networking Conf., Apr. 2013, pp. 3277–3281.

[14] Z. Chang, T. Ristaniemi, and Z. Niu, "Radio resource allocation for collaborative OFDMA relay networks with imperfect channel state information," IEEE Trans. Wireless Commun., vol. 13, no. 5, pp. 2824–2835, May 2014.

[15] O. Oyman, "Opportunistic scheduling and spectrum reuse in relay-based cellular networks," IEEE Trans. Wireless Commun., vol. 9, no. 3, pp. 1074–1085, Mar. 2010.

[16] J. Liang, H. Yin, H. Chen, Z. Li, and S. Liu, "A novel dynamic full frequency reuse scheme in OFDMA cellular relay networks," in Proc. IEEE Veh. Technol. Conf., Sep. 2011, pp. 1–5.

[17] H. Mei, J. Bigham, P. Jiang, and E. Bodanese, "Distributed dynamic frequency allocation in fractional frequency reused relay based cellular networks," IEEE Trans. Commun., vol. 61, no. 4, pp. 1327–1336, Apr. 2013.

[18] W. Jeon, J. Han, and D. Jeong, "Distributed resource allocation for multi-cell relay-aided OFDMA systems," IEEE Trans. Mobile Computing, vol. 13, pp. 2003–2015, Sep. 2014.

[19] L. Jiang, J. Pang, G. Shen, and D. Wang, "A game theoretic channel allocation scheme for multi-user OFDMA relay system," in Proc. IEEE Wireless Commun. and Networking Conf., Mar. 2011, pp. 298–303.

[20] L. Liang and G. Feng, "A game-theoretic framework for interference coordination in OFDMA relay networks," IEEE Trans. Veh. Technol., vol. 61, no. 1, pp. 321–332, Jan. 2012.

[21] I. Chaabane, S. Hamouda, S. Tabbane, and J. Vicario, "A new PRB sharing scheme in dual-hop LTE-advanced system using game theory," in Proc. IEEE Personal, Indoor and Mobile Radio Commun. Sympos., Sep. 2012, pp. 375–379.

[22] Y. Farazmand and A. Alfa, "A game theoretic power allocation and relay load balancing in OFDMA-based DF cellular relay networks," in Proc. IEEE Veh. Technol. Conf., Sep. 2013, pp. 1–6.

[23] D. Zhang, Y. Wang, and J. Lu, "QoS-aware relay selection and subcarrier allocation in cooperative OFDMA systems," IEEE Commun. Lett., vol. 14, no. 4, pp. 294–296, Apr. 2010.

[24] D. Zhang, X. Tao, J. Lu, and M. Wang, "Dynamic resource allocation for real-time services in cooperative OFDMA systems," IEEE Commun. Lett., vol. 15, no. 5, pp. 497–499, May 2011.

[25] X. Zhang, X. Tao, Y. Li, and J. Lu, "QoS provisioning scheduling with joint optimization of base station and relay power allocation in cooperative OFDMA systems," in Proc. IEEE Intern. Commun. Conf., June 2013, pp. 5453–5457.

[26] A. Marques, C. Figuera, C. Rey-Moreno, and J. Simo-Reigadas, "Asymptotically optimal cross-layer schemes for relay networks with short-term and long-term constraints," IEEE Trans. Wireless Commun., vol. 12, pp. 333–345, Jan. 2013.

[27] O. Elgendy, M. Ismail, and K. Elsayed, "Max-min fair resource allocation for LTE-advanced relay-enhanced cells," in Proc. IEEE Wireless Commun. and Networking Conf., Apr. 2014, pp. 1432–1437.

[28] Y. Farazmand and A. Alfa, "Power allocation framework for OFDMA-based decode-and-forward cellular relay networks," IEEE J. Commun. and Networks, vol. 16, pp. 559–567, Oct. 2014.

[29] Y. Zhao, X. Fang, R. Huang, and Y. Fang, "Joint interference coordination and load balancing for OFDMA multihop cellular networks," IEEE Trans. Mobile Computing, vol. 13, pp. 89–101, Jan. 2014.

[30] D. Incebacak, H. Yanikomeroglu, and B. Tavli, "Trade-offs in sum-rate maximization and fairness in relay-enhanced OFDMA-based cellular networks," in Proc. IEEE Global Telecommun. Conf., Dec. 2014, pp. 4770–4775.

[31] C. Yangyang, P. Martins, L. Decreusefond, F. Yan, and X. Lagrange, "Stochastic analysis of a cellular network with mobile relays," in Proc. IEEE Global Telecommun. Conf., Dec. 2014, pp. 4758–4763.

[32] N. Mehta, V. Sharma, and G. Bansal, "Performance analysis of a cooperative system with rateless codes and buffered relays," IEEE Trans. Wireless Commun., vol. 10, no. 4, pp. 1069–1081, Apr. 2011.

[33] N. Zlatanov, R. Schober, and P. Popovski, "Throughput and diversity gain of buffer-aided relaying," in Proc. IEEE Global Telecommun. Conf., Dec. 2011, pp. 1–6.

[34] C. Dong, L. Yang, and L. Hanzo, "Performance analysis of multihop-diversity-aided multihop links," IEEE Trans. Veh. Technol., vol. 61, no. 6, pp. 2504–2516, July 2012.

[35] N. Zlatanov and R. Schober, "Buffer-aided relaying with adaptive link selection - fixed and mixed rate transmission," IEEE Trans. Inform. Theory, vol. 59, no. 5, pp. 2816–2840, May 2013.

[36] N. Zlatanov, R. Schober, and P. Popovski, "Buffer-aided relaying with adaptive link selection," IEEE J. Select. Areas Commun., vol. 31, no. 8, pp. 1530–1542, Aug. 2013.

[37] V. Jamali, N. Zlatanov, A. Ikhlef, and R. Schober, "Achievable rate region of the bidirectional buffer-aided relay channel with block fading," IEEE Trans. Inform. Theory, vol. 60, no. 11, pp. 7090–7111, Nov. 2014.

[38] N. Zlatanov, A. Ikhlef, T. Islam, and R. Schober, "Buffer-aided cooperative communications: opportunities and challenges," IEEE Commun. Magazine, vol. 52, no. 4, pp. 146–153, Apr. 2014.

[39] I. Krikidis, T. Charalambous, and J. Thompson, "Buffer-aided relay selection for cooperative diversity systems without delay constraints," IEEE Trans. Wireless Commun., vol. 11, no. 5, pp. 1957–1967, May 2012.

[40] T. Islam, A. Ikhlef, R. Schober, and V. Bhargava, "Multisource buffer-aided relay networks: Adaptive rate transmission," in Proc. IEEE Global Telecommun. Conf., Dec. 2013, pp. 3577–3582.

[41] H. Liu, P. Popovski, E. de Carvalho, and Y. Zhao, "Sum-rate optimization in a two-way relay network with buffering," IEEE Commun. Lett., vol. 17, no. 1, pp. 95–98, Jan. 2013.

[42] I. Ahmed, A. Ikhlef, R. Schober, and R. Mallik, "Power allocation for conventional and buffer-aided link adaptive relaying systems with energy harvesting nodes," IEEE Trans. Wireless Commun., vol. 13, no. 3, pp. 1182–1195, Mar. 2014.

[43] J. Huang and A. Swindlehurst, "Wireless physical layer security enhancement with buffer-aided relaying," in Asilomar Conf. on Signals, Systems and Computers, Nov. 2013, pp. 1560–1564.

[44] ——, "Buffer-aided relaying for two-hop secure communication," IEEE Trans. Wireless Commun., vol. 14, no. 1, pp. 152–164, Jan. 2015.

[45] M. Darabi, V. Jamali, B. Maham, and R. Schober, "Adaptive link selection for cognitive buffer-aided relay networks," IEEE Commun. Lett., vol. 19, no. 4, pp. 693–696, Apr. 2015.

[46] D. Park, "A throughput-optimal scheduling policy for wireless relay networks," in Proc. IEEE Wireless Commun. and Networking Conf., Apr. 2010, pp. 1–5.

[47] M. Salem, A. Adinoyi, M. Rahman, H. Yanikomeroglu, D. Falconer, and Y. Kim, "Fairness-aware radio resource management in downlink OFDMA cellular relay networks," IEEE Trans. Wireless Commun., vol. 9, no. 5, pp. 1628–1639, May 2010.

[48] M. Salem, A. Adinoyi, H. Yanikomeroglu, and D. Falconer, "Fair resource allocation toward ubiquitous coverage in OFDMA-based cellular relay networks with asymmetric traffic," IEEE Trans. Veh. Technol., vol. 60, no. 5, pp. 2280–2292, June 2011.

[49] I. Bastuerk, B. Oezbek, and D. Ruyet, "Queue-aware resource allocation for OFDMA-based mobile relay enhanced networks," in Proc. Intern. Sympos. on Wireless Commun. Systems, Aug. 2013, pp. 1–5.

[50] R. Wang and V. Lau, "Delay-aware two-hop cooperative relay communications via approximate MDP and stochastic learning," IEEE Trans. Inform. Theory, vol. 59, no. 11, pp. 7645–7670, Nov. 2013.

[51] H. Ju, B. Liang, J. Li, and X. Yang, "Dynamic joint resource optimization for LTE-Advanced relay networks," IEEE Trans. Wireless Commun., vol. 12, no. 11, pp. 5668–5678, Nov. 2013.

[52] L. Wang, Q. Du, P. Ren, L. Sun, and Y. Wang, "Buffering-aided resource allocation for type I relay in LTE-advanced cellular networks," in Proc. IEEE Global Telecommun. Conf., Dec. 2014, pp. 4484–4489.

[53] B. Zhou, Y. Cui, and M. Tao, "Stochastic throughput optimization for two-hop systems with finite relay buffers," in Proc. IEEE Global Telecommun. Conf., Dec. 2014, pp. 1728–1733.

[54] L. Georgiadis, M. Neely, and L. Tassiulas, "Resource allocation and cross-layer control in wireless networks," Foundations and Trends in Networking, vol. 1, no. 1, pp. 1–144, Apr. 2006.

[55] M. Neely, Stochastic Network Optimization with Application to Communication and Queueing Systems. Morgan & Claypool, 2010.

[56] B. Xia, Y. Fan, J. Thompson, and H. Poor, "Buffering in a three-node relay network," IEEE Trans. Wireless Commun., vol. 7, no. 11, pp. 4492–4496, Nov. 2008.

[57] F. Gebali, Analysis of Computer and Communication Networks. Springer, 2008.

[58] I. Adan and J. Resing, Queueing Systems. Eindhoven University, 2015. [Online]. Available: http://www.win.tue.nl/ iadan/queueing.pdf

[59] M. Andrews, K. Kumaran, K. Ramanan, A. Stolyar, P. Whiting, and R. Vijayakumar, "Providing quality of service over a shared wireless link," IEEE Commun. Magazine, vol. 39, pp. 150–153, 2001.

[60] M. Neely, E. Modiano, and C. Rohrs, "Dynamic power allocation and routing for time varying wireless networks," IEEE J. Select. Areas Commun., vol. 23, no. 1, pp. 89–103, Jan. 2005.

[61] "Spatial channel model for multiple input multiple output (MIMO) simulations," 3GPP TR 25.996 V7.0.0, June 2007.

[62] M. Jeruchim, P. Balaban, and K. Shanmugan, Simulation of Communication Systems: Modeling, Methodology and Techniques, 2nd ed. Kluwer Academic, 2000.

[63] X. Qiu and K. Chawla, "On the performance of adaptive modulation in cellular systems," IEEE Trans. Commun., vol. 47, no. 6, pp. 884–895, June 1999.

[64] Y. Kim, K. Son, and S. Chong, "QoS scheduling for heterogeneous traffic in OFDMA-based wireless systems," in Proc. IEEE Global Telecommun. Conf., Nov. 2009, pp. 1–6.

[65] C. Wanshi, M. Neely, and U. Mitra, "Energy efficient scheduling with individual packet delay constraints: Offline and online results," in Proc. IEEE Conf. on Computer Commun., May 2007, pp. 1136–1144.

[66] J. Gan, Z. Guo, F. Rui, L. Weihong, W. Hai, K. Sandlund, L. Jianjun, S. Xiaodong, and L. Guangyi, "LTE in-band relay prototype and field measurement," in Proc. IEEE Veh. Technol. Conf., May 2012, pp. 1–5.

[67] L. Wang, Y. Ji, and F. Liu, "A semi-distributed resource allocation scheme for OFDMA relay-enhanced downlink systems," in Proc. IEEE Global Telecommun. Conf., Dec. 2008, pp. 1–6.

[68] J. Huang, V. Subramanian, R. Agrawal, and R. Berry, "Downlink scheduling and resource allocation for OFDM systems," IEEE Trans. Wireless Commun., vol. 8, no. 1, pp. 288–296, Jan. 2009.

[69] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.

[70] D. Bertsekas, Nonlinear Programming, 2nd ed. Athena Scientific, 1999.

[71] L. Huang, M. Rong, L. Wang, Y. Xue, and E. Schulz, "Resource scheduling for OFDMA/TDD based relay enhanced cellular networks," in Proc. IEEE Wireless Commun. and Networking Conf., Mar. 2007, pp. 1544–1548.

[72] B. Bangerter, S. Talwar, R. Arefi, and K. Stewart, "Networks and devices for the 5G era," IEEE Commun. Magazine, vol. 52, pp. 90–96, Feb. 2014.

[73] L. Tassiulas and A. Ephremides, "Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks," IEEE Trans. Automatic Control, vol. 37, no. 12, pp. 1936–1948, 1992.

[74] ——, "Dynamic server allocation to parallel queues with randomly varying connectivity," IEEE Trans. Inform. Theory, vol. 39, pp. 466–478, 1993.

[75] "Evolved universal terrestrial radio access (E-UTRA); further advancements for E-UTRA physical layer aspects (release 9)," 3GPP TR 36.814 V9.0.0, Mar. 2010.

[76] P. Bender, P. Black, M. Grob, R. Padovani, N. Sindhushyana, and S. Viterbi, "CDMA/HDR: a bandwidth efficient high speed wireless data service for nomadic users," IEEE Commun. Magazine, vol. 38, pp. 70–77, July 2000.

[77] M. Andrews and L. Zhang, "Scheduling algorithms for multi-carrier wireless data systems," in ACM Intern. Conf. on Mobile Computing and Networking, Sep. 2007.

[78] G. Song and Y. Li, "Cross-layer optimization for OFDM wireless networks - part II: Algorithm development," IEEE Trans. Wireless Commun., vol. 4, no. 2, pp. 625–634, Mar. 2005.

[79] ——, "Utility-based resource allocation and scheduling in OFDM-based wireless broadband networks," IEEE Commun. Magazine, vol. 43, no. 12, pp. 127–134, Dec. 2005.

[80] Y. Ma, "Rate-maximization scheduling for downlink OFDMA with long term rate proportional fairness," in Proc. IEEE Intern. Conf. on Commun., May 2008, pp. 3480–3484.

[81] ——, "Rate maximization for downlink OFDMA with proportional fairness," IEEE Trans. Veh. Technol., vol. 57, no. 5, pp. 3267–3274, Sep. 2008.

[82] C. Mohanram and S. Bhashyam, "Joint subcarrier and power allocation in channel-aware queue-aware scheduling for multiuser OFDM," IEEE Trans. Wireless Commun., vol. 6, pp. 3208–3213, Sep. 2007.

[83] G. Song, Y. Li, and L. Cimini, "Joint channel- and queue-aware scheduling for multiuser diversity in wireless OFDMA networks," IEEE Trans. Commun., vol. 57, no. 7, pp. 2109–2121, July 2009.

[84] H. Zhu and J. Wang, "Chunk-based resource allocation in OFDMA systems - part I: Chunk allocation," IEEE Trans. Commun., vol. 57, no. 9, pp. 2734–2744, Sep. 2009.

[85] ——, "Chunk-based resource allocation in OFDMA systems - part II: Joint chunk, power and bit allocation," IEEE Trans. Commun., vol. 60, no. 2, pp. 499–509, Feb. 2012.

[86] E. Larsson, "Optimal OFDMA downlink scheduling under a control signaling cost constraint," IEEE Trans. Commun., vol. 58, pp. 2776–2781, Sep. 2010.

[87] J. Little, "A proof for the queuing formula: L = λW," Operations Research, vol. 9, no. 3, pp. 383–387, June 1961.

[88] R. Jain, D. Chiu, and W. Hawe, "A quantitative measure of fairness and discrimination for resource allocation in shared computer systems," DEC Research Report TR-301, Sep. 1984.

[89] C. Xiong, G. Li, S. Zhang, Y. Chen, and S. Xu, "Energy-efficient resource allocation in OFDMA networks," in Proc. IEEE Global Telecommun. Conf., Dec. 2011, pp. 1–5.

[90] C. Ho and C. Huang, "Energy-efficient 2-D resource allocation with fairness constraints for OFDMA networks," in Proc. IEEE Veh. Technol. Conf., Sep. 2011, pp. 1–5.

[91] S. Huang, H. Chen, J. Cai, and F. Zhao, "Energy efficiency and spectral-efficiency tradeoff in amplify-and-forward relay networks," IEEE Trans. Veh. Technol., vol. 62, no. 9, pp. 4366–4378, Nov. 2013.

[92] I. Ku, C. Wang, and J. Thompson, "Spectral-energy efficiency tradeoff in relay-aided cellular networks," IEEE Trans. Wireless Commun., vol. 12, no. 10, pp. 4970–4982, Oct. 2013.

[93] F. Parzysz, M. Vu, and F. Gagnon, "Trade-offs on energy-efficient relay deployment in cellular networks," in Proc. IEEE Veh. Technol. Conf., Sep. 2014, pp. 1–6.

[94] Y. Chen, X. Fang, and B. Huang, "Energy-efficient relay selection and resource allocation in nonregenerative relay OFDMA systems," IEEE Trans. Veh. Technol., vol. 63, no. 8, pp. 3689–3699, Oct. 2014.

[95] L. Venturino, A. Zappone, C. Risi, and S. Buzzi, "Energy-efficient scheduling and power allocation in downlink OFDMA networks with base station coordination," IEEE Trans. Wireless Commun., vol. 14, no. 1, pp. 1–14, Jan. 2015.

[96] R. Balakrishnan, X. Yang, M. Venkatachalam, and I. Akyildiz, "Mobile relay and group mobility for 4G WiMAX networks," in Proc. IEEE Wireless Commun. and Networking Conf., Mar. 2011, pp. 1224–1229.

[97] L. Chen, Y. Huang, F. Xie, Y. Gao, L. Chu, H. He, Y. Li, F. Liang, and Y. Yuan, "Mobile relay in LTE-advanced systems," IEEE Commun. Magazine, vol. 51, no. 11, pp. 144–151, Nov. 2013.

[98] H. Zhao, R. Huang, J. Zhang, and Y. Fang, "Handoff for wireless networks with mobile relay stations," in Proc. IEEE Wireless Commun. and Networking Conf., Mar. 2011, pp. 826–831.

[99] H. Chang, M. Ku, K. Singh, and J. Lin, "Low-complexity amplify-and-forward mobile relay networks without source-to-relay CSI," in Proc. IEEE Veh. Technol. Conf., Sep. 2013, pp. 1–5.

[100] Z. Liao, X. Zhang, and C. Feng, "Mobile relay deployment based on Markov chains in WiMAX networks," in Proc. IEEE Global Telecommun. Conf., Dec. 2014, pp. 4508–4513.

[101] J. Eun, H. Shin, and J. Lee, "Inter-cell interference coordination for a downlink OFDMA relay network with multicells," in Proc. IEEE Veh. Technol. Conf., May 2012, pp. 1–5.

[102] A. Argyriou, "Interference decoding in cellular wireless relay networks with space-time coding," in Proc. IEEE Wireless Commun. and Networking Conf., Apr. 2014, pp. 1160–1165.

[103] A. Omri and M. Hasna, "Performance analysis of OFDMA based wireless cooperative networks with interference management," in Proc. IEEE Veh. Technol. Conf., June 2013, pp. 1–6.

[104] Y. Zhao, X. Fang, R. Huang, and Y. Fang, "Joint interference coordination and load balancing for OFDMA multihop cellular networks," IEEE Trans. Mobile Computing, vol. 13, no. 1, pp. 89–101, Jan. 2014.

[105] M. Fallgren, "An optimization approach to joint cell, channel and power allocation in multicell relay networks," IEEE Trans. Wireless Commun., vol. 11, no. 8, pp. 2868–2875, Aug. 2012.

[106] "Evolved universal terrestrial radio access (E-UTRA); physical channels and modulation," 3GPP, TS 36.211, Rel. 12, Mar. 2015.

Appendix A
Assumptions for Channel Models

In cellular networks, data transmissions between different nodes go through wireless channels. A wireless channel affects a transmitted signal in several ways, and therefore, when a signal arrives at the receiver, its power differs from the power at the transmitter. This is firstly due to the path loss, which is the reduction in the signal's power as it travels through the air. The second factor affecting the signal's power is the propagation of the signal through multiple paths before reaching the receiver. This leads to the arrival of several copies of the signal at the receiver, with different attenuations and delays, which may add either constructively or destructively at the receiving antenna. This effect is called multipath fading [62, Chapter 9].

To simulate the behavior of wireless channels and their effect on the transmission rates of the links in the performance evaluations of this thesis, we have assumed the following:

• The network environment is an urban macrocell [61].
The reason for this assumption is that using relays in cellular networks makes it possible to have a large coverage area for a single cell served through the BS and relays. Under this assumption, we have used the COST231 Hata urban propagation model as in [61] and computed the path loss of a link based on the following equation:

\[
PL = (44.9 - 6.55\log_{10}(h_{tx}))\log_{10}\!\left(\frac{d}{1000}\right) + 45.5 + (35.46 - 1.1 h_{rx})\log_{10}(f_c) - 13.82\log_{10}(h_{tx}) + 0.7 h_{rx} + 3, \tag{A.1}
\]
where $PL$ is the path loss in dB, $h_{tx}$ is the transmitter antenna height in meters, $h_{rx}$ the receiver antenna height in meters, $f_c$ the carrier frequency in MHz, and $d$ the distance between the transmitter and receiver in meters.

• The BS and relays are located in a way that there is Line of Sight (LOS) between them.
This assumption usually holds when the BS and relay antennas are installed in high locations, as in our simulations. In this case, there is a dominant component among the copies of a signal that travel multiple paths from the BS to a relay. Under this assumption, for the links between the BS and relays, we have used the Rician channel model [61, Chapter 9].

• The links between the BS and users, as well as the links between the relays and users, are in Non-Line of Sight (NLOS) condition.
This assumption usually holds because there are many obstacles between the antennas of the BS/relays and the user equipment's antenna. Therefore, no single dominant component contributes to the power of the signal arriving at the user equipment antenna. Under this assumption, for the links between the BS/relays and users, we have used the Rayleigh channel model [61, Chapter 9].

• The channel conditions of all the links remain constant over a time slot but vary from one slot to another.
In our system models, we consider scenarios in which the users are either fixed or have very limited mobility and the network environment changes little over time. In this case, channel variations happen over time intervals larger than the symbol length of the transmitted signal, which is referred to as slow fading [61, Chapter 9]. Since the time slots considered in the simulations have long durations (1 millisecond) compared to the symbol length of OFDM systems (usually less than 100 microseconds [2, 106]), this assumption matches the slow fading present in the scenarios considered.

• The channel conditions of all the links are constant over a subchannel but differ from one subchannel to another.
We have assumed that the network environments do not have a diverse distribution of reflecting obstacles, and multiple copies of a signal arrive at the receiver with a small delay spread [61, Chapter 9]; therefore, the variations over frequency happen over bandwidths larger than the OFDM subcarrier spacing. In our simulations, we have considered subchannels composed of one or more OFDM subcarriers, which matches this behavior.

We acknowledge that, in reality, channel behavior in cellular networks may differ from the models considered in our simulations. However, since the same models have been used to simulate the behavior of the baseline algorithms existing in the literature, the relative performance improvements of our proposed algorithms are expected to hold.

Appendix B
Proof of Theorem 2.1

In order to prove that the buffer-aided relaying system incurs equal or lower delay compared with the conventional one, it is required to prove $E(D_{nb}) - E(D_b) \geq 0$. To show this, note that

\[
E(D_{nb}) - E(D_b) = 1 + \frac{1-a}{s_1 s_2 - a} - \frac{1-a}{s_1 - a} - \frac{1-a}{s_2 - a}. \tag{B.1}
\]

By adding and subtracting the term $\frac{1-a}{s_1 - a}\cdot\frac{1-a}{s_2 - a}$ and rearranging the equations, we have

\[
E(D_{nb}) - E(D_b) = \left(\frac{1-a}{s_1 - a} - 1\right)\left(\frac{1-a}{s_2 - a} - 1\right) + \frac{1-a}{s_1 s_2 - a} - \frac{1-a}{s_1 - a}\cdot\frac{1-a}{s_2 - a}. \tag{B.2}
\]

Since $0 < a < s_i$ and $s_i \leq 1$ for $i = 1, 2$, we have $\frac{1-a}{s_i - a} \geq 1$, $i = 1, 2$. Therefore, the first term on the right-hand side of Equation (B.2) is non-negative. Hence, it suffices to show

\[
\frac{1-a}{s_1 s_2 - a} \geq \frac{1-a}{s_1 - a}\cdot\frac{1-a}{s_2 - a}. \tag{B.3}
\]
By canceling $1-a$ and cross-multiplying in Equation (B.3), we obtain

\[
(s_1 - a)(s_2 - a) \geq (1-a)(s_1 s_2 - a). \tag{B.4}
\]

After multiplying out both sides and canceling the common terms of Equation (B.4), we have

\[
a(1-s_1)(1-s_2) \geq 0, \tag{B.5}
\]

which is always true since $s_1 \leq 1$ and $s_2 \leq 1$.

Appendix C
Proof of Theorem 6.1

Define $\rho_{k_l} = \alpha_{k_l}\beta_{k_l}$, $\rho_{\max} = \sup_l \rho_{k_l}$, $a_{\max} = \sup_{l,t} a_{k_l}^{b_l}(t)$ and $c_{\max} = \sup_{l,t} c_l(t)$. For a given number of $M$ time slots (the use of which will be clarified later), consider a time slot $t_0$ at which the system queue sizes have grown large enough that $Q_{k_l}^{b_l}(\tau) \geq c_{\max}$, $l \in \mathcal{L}$, $t_0 \leq \tau \leq t_0 + M - 1$. Then we have $\min(Q_{k_l}^{b_l}(\tau), c_l(\tau)) = c_l(\tau)$, $l \in \mathcal{L}$, $t_0 \leq \tau \leq t_0 + M - 1$. Considering the definition (6.2), in these time slots the MMW objective (6.7) is equivalent to

\[
\text{Maximize} \quad \sum_{l\in\mathcal{L}} \rho_{k_l} w_l^{MMW}(\tau) r_l(\tau). \tag{C.1}
\]

In this case, if all of the RS queues are larger than the threshold, the MMW weights are similar to those of MDB and therefore it will stabilize the system [60]. On the other hand, if any RS queue is less than $\hat{Q}$ (in which case the weights of the corresponding links from the BS are taken proportional to the queue sizes at the BS, and not the difference of the queue sizes at the BS and RS), stability of the whole system can be proved by using an M-slot Lyapunov drift, similar to the stability proof of MDB in [60]. The proof exploits Lyapunov drift theory and has the following structure: based on the queueing system state $Q(t_0)$, a Lyapunov function $L(Q(t_0))$ is defined as a measure of the queue congestion in the system, and an M-slot conditional Lyapunov drift $\Delta_M(Q(t_0))$ is defined as $\Delta_M(Q(t_0)) = E\{L(Q(t_0+M)) - L(Q(t_0)) \mid Q(t_0)\}$. It is shown in [60] that if the sufficient condition for stability holds, there exist a finite $M$ and a stationary randomized algorithm, referred to as STAT, such that the M-slot drift for it satisfies $\Delta_M^{STAT}(Q(t_0)) \leq B - g(Q(t_0))$, where $B$ is a finite value and $g(Q)$ is a non-negative increasing function of the queue sizes in the system. This means that, using STAT, the system's overall queue congestion tends to decrease if it becomes sufficiently large. The stability of MMW can be proved by comparing it with STAT and showing that MMW provides a similar bound on the drift as well. This is shown in detail in the following.

Based on [60], we define the Lyapunov function as

\[
L(Q(\tau)) = \sum_{l\in\mathcal{L}} \rho_{k_l} \left(Q_{k_l}^{b_l}(\tau)\right)^2. \tag{C.2}
\]

Based on (6.3), the following holds:

\[
Q_{k_l}^{b_l}(t_0+M) \leq \max\!\left[Q_{k_l}^{b_l}(t_0) - \sum_{\tau=t_0}^{t_0+M-1} r_{k_l}^{b_l}(\tau),\ 0\right] + \sum_{\tau=t_0}^{t_0+M-1} a_{k_l}^{b_l}(\tau). \tag{C.3}
\]

The inequality is due to the fact that some of the arrived data may depart in the interval from $t_0$ to $t_0 + M - 1$. Define the average transmission rate and average arrival rate over the M-slot interval as

\[
\tilde{r}_{k_l}^{b_l}(t_0) = \frac{1}{M}\sum_{\tau=t_0}^{t_0+M-1} r_{k_l}^{b_l}(\tau), \qquad \tilde{a}_{k_l}^{b_l}(t_0) = \frac{1}{M}\sum_{\tau=t_0}^{t_0+M-1} a_{k_l}^{b_l}(\tau). \tag{C.4}
\]

Considering (C.3) and (C.4), we have:

\[
Q_{k_l}^{b_l}(t_0+M) \leq \max\!\left[Q_{k_l}^{b_l}(t_0) - M\tilde{r}_{k_l}^{b_l}(t_0),\ 0\right] + M\tilde{a}_{k_l}^{b_l}(t_0). \tag{C.5}
\]

By squaring both sides and noting that $(\max\{x, 0\})^2 \leq x^2$, we get:

\[
\left(Q_{k_l}^{b_l}(t_0+M)\right)^2 - \left(Q_{k_l}^{b_l}(t_0)\right)^2 \leq M^2\!\left[\left(\tilde{r}_{k_l}^{b_l}(t_0)\right)^2 + \left(\tilde{a}_{k_l}^{b_l}(t_0)\right)^2\right] - 2M Q_{k_l}^{b_l}(t_0)\!\left[\tilde{r}_{k_l}^{b_l}(t_0) - \tilde{a}_{k_l}^{b_l}(t_0)\right]. \tag{C.6}
\]

Note that the data arrivals at the RS queues are always less than or equal to the transmission rates from the corresponding queues in the BS, i.e., $a_k^R(t) \leq r_k^B(t)$.
Considering this,multiplying both sides by ρkl, summing over all the links and taking conditional expecta-tion, results in the following inequality for the M-slot Lyapunov drift by any algorithm X:∆XM =∑l∈LρklE{(Qblkl(t0 +M))2−(Qblkl(t0))2|Q(t0)} ≤ B−2M [ΦX(Q(t0))−Θ(Q(t0))] (C.7)where ΦX(Q(t0)) and Θ(Q(t0)) are defined as:Θ(Q(t0)) =E{∑l∈LBρklQBkl(t0)a˜Bkl(t0)|Q(t0)} = E{∑k∈KρkQBk (t0)a˜Bk (t0)|Q(t0)}ΦX(Q(t0)) =E{∑l∈LBρklQBkl(t0)r˜B(X)kl (t0) +∑l∈LRρklQRkl(t0)[r˜R(X)kl (t0)− r˜B(X)kl (t0)]|Q(t0)}(C.8)and B is a constant such that M2∑l∈LρklE{(r˜blkl(t0))2 + (a˜blkl(t0))2|Q(t0)} ≤ B, which existsdue to the fact that channel capacities and the packet arrivals have finite mean and variance.Based on [60], for an  > 0, there exists a finite M and a stationary randomizedscheduling algorithm, STAT, which satisfies the following:E{r˜B(STAT )k (t0)|Q(t0)} − E{a˜Bk (t0)|Q(t0)} ≥2 , ∀k ∈ KE{r˜R(STAT )k (t0)|Q(t0)} −E{r˜B(STAT )k (t0)|Q(t0)} ≥2 , ∀k ∈ Kr(C.9)185Appendix C. Proof of Theorem 6.1and therefore, considering (C.7) and the fact that E{xy|y} = yE{x|y}, we have:∆STATM ≤ B −M∑l∈LρklQblkl(t0) (C.10)For MMW, we can show that similar bound exists with a different constant than B. Tosee this, first note that by changing the order of summations and stating their bounds basedon user sets, ΦX(Q(t0)) can be written as ΦX(Q(t0)) = 1M∑t0+M−1τ=t0 E{ΥX(Q(t0))|Q(t0)}whereΥX(Q(t0))=∑k∈KdρkQBk (t0)rB(X)k (τ) +∑k∈Krρk[QBk (t0)−QRk (t0)]rB(X)k (τ) +∑k∈KrρkQRk (t0)rR(X)k (τ)Now, consider a scheduling algorithm called FRM, which maximizes ΥX(Q(t0)) every timeslot, i.e., it maximizes a weighted sum of rates for a frame of M-slots, where the weightsare based on the queue sizes in the beginning of the frame, i.e., t0. 
It is easy to see that $\Phi^{STAT}(\mathbf{Q}(t_0)) \le \Phi^{FRM}(\mathbf{Q}(t_0))$ by using the definition of $\Phi^{X}(\mathbf{Q}(t_0))$ as follows:

$$\Phi^{STAT}(\mathbf{Q}(t_0)) = \frac{1}{M}\sum_{\tau=t_0}^{t_0+M-1} E\{\Upsilon^{STAT}(\mathbf{Q}(t_0)) \mid \mathbf{Q}(t_0)\} \le \frac{1}{M}\sum_{\tau=t_0}^{t_0+M-1} E\{\max_{X}\left[\Upsilon^{X}(\mathbf{Q}(t_0))\right] \mid \mathbf{Q}(t_0)\} \tag{C.11}$$

$$= \frac{1}{M}\sum_{\tau=t_0}^{t_0+M-1} E\{\Upsilon^{FRM}(\mathbf{Q}(t_0)) \mid \mathbf{Q}(t_0)\} = \Phi^{FRM}(\mathbf{Q}(t_0))$$

where the inequality uses the fact that $g \le \max(g)$ for all $g$, and the first equality in the last line follows from the definition of the algorithm FRM.

To prove the stability of MMW, we show that $\Phi^{FRM}(\mathbf{Q}(t_0)) \le \Phi^{MMW}(\mathbf{Q}(t_0)) + D$, where $D$ is a constant. Let us define $\check{\mathcal{K}}_r$ as the set of indirect users for which $Q_k^{R}(t_0) \le \hat{Q}$. For simplicity of the equations, we assume that $\check{\mathcal{K}}_r$ remains unchanged during the $M$ slots (based on the discussion below, it is easy to see that relaxing this assumption only affects the constant $D$ and has no impact on the fact that the drift bound tends to decrease when the queue sizes get large). Then, according to the definition of the MMW weights, and considering the fact that MMW maximizes (C.1), for any $\tau$ with $t_0 \le \tau \le t_0+M-1$ we have:

$$\sum_{l\in\mathcal{L}} \rho_{k_l} w_l^{MMW}(\tau)\, r_l^{MMW}(\tau) = \sum_{k\in\mathcal{K}_d} \rho_k Q_k^{B}(\tau)\, r_k^{B(MMW)}(\tau) + \sum_{k\in\check{\mathcal{K}}_r} \rho_k Q_k^{B}(\tau)\, r_k^{B(MMW)}(\tau) + \sum_{k\in\mathcal{K}_r - \check{\mathcal{K}}_r} \rho_k \left(Q_k^{B}(\tau) - Q_k^{R}(\tau)\right) r_k^{B(MMW)}(\tau) + \sum_{k\in\mathcal{K}_r} \rho_k Q_k^{R}(\tau)\, r_k^{R(MMW)}(\tau) \tag{C.12}$$

$$\overset{n_1}{\ge} \sum_{k\in\mathcal{K}_d} \rho_k Q_k^{B}(\tau)\, r_k^{B(MMW)}(\tau) + \sum_{k\in\check{\mathcal{K}}_r} \rho_k \left(Q_k^{B}(\tau) - Q_k^{R}(t_0)\right) r_k^{B(MMW)}(\tau) + \sum_{k\in\mathcal{K}_r - \check{\mathcal{K}}_r} \rho_k \left(Q_k^{B}(\tau) - Q_k^{R}(\tau)\right) r_k^{B(MMW)}(\tau) + \sum_{k\in\mathcal{K}_r} \rho_k Q_k^{R}(\tau)\, r_k^{R(MMW)}(\tau) \tag{C.13}$$

$$\overset{n_2}{\ge} \sum_{k\in\mathcal{K}_d} \rho_k Q_k^{B}(\tau)\, r_k^{B(MMW)}(\tau) + \sum_{k\in\check{\mathcal{K}}_r} \rho_k \left(Q_k^{B}(\tau) - \hat{Q}\right) r_k^{B(MMW)}(\tau) + \sum_{k\in\mathcal{K}_r - \check{\mathcal{K}}_r} \rho_k \left(Q_k^{B}(\tau) - Q_k^{R}(\tau)\right) r_k^{B(MMW)}(\tau) + \sum_{k\in\mathcal{K}_r} \rho_k Q_k^{R}(\tau)\, r_k^{R(MMW)}(\tau) \tag{C.14}$$

$$\overset{n_3}{\ge} \sum_{k\in\mathcal{K}_d} \rho_k Q_k^{B}(\tau)\, r_k^{B(MMW)}(\tau) + \sum_{k\in\check{\mathcal{K}}_r} \rho_k Q_k^{B}(\tau)\, r_k^{B(MMW)}(\tau) - (K-K_1)\hat{Q}\rho_{max}c_{max} + \sum_{k\in\mathcal{K}_r - \check{\mathcal{K}}_r} \rho_k \left(Q_k^{B}(\tau) - Q_k^{R}(\tau)\right) r_k^{B(MMW)}(\tau) + \sum_{k\in\mathcal{K}_r} \rho_k Q_k^{R}(\tau)\, r_k^{R(MMW)}(\tau) \tag{C.15}$$

$$\overset{n_4}{\ge} \sum_{k\in\mathcal{K}_d} \rho_k Q_k^{B}(\tau)\, r_k^{B(FRM)}(\tau) + \sum_{k\in\check{\mathcal{K}}_r} \rho_k Q_k^{B}(\tau)\, r_k^{B(FRM)}(\tau) - (K-K_1)\hat{Q}\rho_{max}c_{max} + \sum_{k\in\mathcal{K}_r - \check{\mathcal{K}}_r} \rho_k \left(Q_k^{B}(\tau) - Q_k^{R}(\tau)\right) r_k^{B(FRM)}(\tau) + \sum_{k\in\mathcal{K}_r} \rho_k Q_k^{R}(\tau)\, r_k^{R(FRM)}(\tau) \tag{C.16}$$

$$\overset{n_5}{\ge} \sum_{k\in\mathcal{K}_d} \rho_k Q_k^{B}(\tau)\, r_k^{B(FRM)}(\tau) + \sum_{k\in\check{\mathcal{K}}_r} \rho_k \left(Q_k^{B}(\tau) - Q_k^{R}(t_0)\right) r_k^{B(FRM)}(\tau) - (K-K_1)\hat{Q}\rho_{max}c_{max} + \sum_{k\in\mathcal{K}_r - \check{\mathcal{K}}_r} \rho_k \left(Q_k^{B}(\tau) - Q_k^{R}(\tau)\right) r_k^{B(FRM)}(\tau) + \sum_{k\in\mathcal{K}_r} \rho_k Q_k^{R}(\tau)\, r_k^{R(FRM)}(\tau) \tag{C.17}$$

where inequalities $n_1$ and $n_5$ follow from the fact that for any numbers $y \ge 0$, $x \ge 0$, and $z \ge 0$ we have $yz \ge (y-x)z$. Inequality $n_2$ follows by noting that the RS queue sizes of the users in $\check{\mathcal{K}}_r$ are at most $\hat{Q}$, and inequality $n_3$ is based on the definitions of $\rho_{max}$ and $c_{max}$. Finally, inequality $n_4$ is based on the definition of MMW: since MMW maximizes (C.1) over all possible scheduling methods, $\sum_{l\in\mathcal{L}} \rho_{k_l} w_l^{MMW}(t)\, r_l^{MMW}(t) \ge \sum_{l\in\mathcal{L}} \rho_{k_l} w_l^{MMW}(t)\, r_l^{FRM}(t)$. Considering equations (C.13) and (C.17) and regrouping the terms yields:

$$\sum_{k\in\mathcal{K}} \rho_k Q_k^{B}(\tau)\, r_k^{B(MMW)}(\tau) + \sum_{k\in\check{\mathcal{K}}_r} \rho_k \left[Q_k^{R}(\tau)\, r_k^{R(MMW)}(\tau) - Q_k^{R}(t_0)\, r_k^{B(MMW)}(\tau)\right] + \sum_{k\in\mathcal{K}_r - \check{\mathcal{K}}_r} \rho_k Q_k^{R}(\tau)\left(r_k^{R(MMW)}(\tau) - r_k^{B(MMW)}(\tau)\right)$$
$$\ge \sum_{k\in\mathcal{K}} \rho_k Q_k^{B}(\tau)\, r_k^{B(FRM)}(\tau) + \sum_{k\in\check{\mathcal{K}}_r} \rho_k \left[Q_k^{R}(\tau)\, r_k^{R(FRM)}(\tau) - Q_k^{R}(t_0)\, r_k^{B(FRM)}(\tau)\right] + \sum_{k\in\mathcal{K}_r - \check{\mathcal{K}}_r} \rho_k Q_k^{R}(\tau)\left(r_k^{R(FRM)}(\tau) - r_k^{B(FRM)}(\tau)\right) - D_1 \tag{C.18}$$

where $D_1 = (K-K_1)\hat{Q}\rho_{max}c_{max}$. Now, for any queue size $Q$, define the maximum change from time slot $t_0$ to $\tau$ as $\delta(\tau) = \max|Q(\tau) - Q(t_0)| = \max(a_{max}, c_{max})(\tau - t_0)$. Then we have:

$$\sum_{k\in\mathcal{K}} \rho_k Q_k^{B}(t_0)\, r_k^{B(MMW)}(\tau) + \sum_{k\in\check{\mathcal{K}}_r} \rho_k Q_k^{R}(t_0)\left[r_k^{R(MMW)}(\tau) - r_k^{B(MMW)}(\tau)\right] + \sum_{k\in\mathcal{K}_r - \check{\mathcal{K}}_r} \rho_k Q_k^{R}(t_0)\left(r_k^{R(MMW)}(\tau) - r_k^{B(MMW)}(\tau)\right) + (2K-K_1)\rho_{max}c_{max}\,\delta(\tau)$$
$$\ge \sum_{k\in\mathcal{K}} \rho_k Q_k^{B}(t_0)\, r_k^{B(FRM)}(\tau) + \sum_{k\in\check{\mathcal{K}}_r} \rho_k Q_k^{R}(t_0)\left[r_k^{R(FRM)}(\tau) - r_k^{B(FRM)}(\tau)\right] + \sum_{k\in\mathcal{K}_r - \check{\mathcal{K}}_r} \rho_k Q_k^{R}(t_0)\left(r_k^{R(FRM)}(\tau) - r_k^{B(FRM)}(\tau)\right) - D_1 - (2K-K_1)\rho_{max}c_{max}\,\delta(\tau)$$

where we have used the fact that there are in total $2K-K_1$ queue terms $Q_k^{B}(\tau)$ or $Q_k^{R}(\tau)$ on each side of the inequality, and have bounded the change in each of them from $t_0$ to $\tau$ by $\delta(\tau)$.
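The weight structure driving the chain (C.12)–(C.17) can be made concrete with a small sketch. The weights below are read off from (C.12): $Q_k^B$ on the BS link for direct users and for indirect users in $\check{\mathcal{K}}_r$, $Q_k^B - Q_k^R$ for the remaining indirect users, and $Q_k^R$ on the relay links. The single-link-per-slot feasible set, the function names, and the numeric values are simplifying assumptions for illustration, not the scheduler or constraint set of Chapter 6.

```python
def mmw_weights(QB, QR, direct, Q_hat):
    """Toy link weights read off from (C.12). Keys are (hop, user):
    hop 'B' is the BS link of user k, hop 'R' its relay link."""
    w = {}
    for k, qb in QB.items():
        if k in direct or QR.get(k, 0) <= Q_hat:
            w[('B', k)] = qb              # users in K_d or in K_r-check
        else:
            w[('B', k)] = qb - QR[k]      # indirect users with QR > Q_hat
        if k not in direct:
            w[('R', k)] = QR[k]           # relay link of indirect users
    return w

def mmw_schedule(weights, rates):
    """Max-weight choice: pick the link maximizing w_l * r_l."""
    return max(weights, key=lambda l: weights[l] * rates[l])

# Toy instance: user 1 is direct; users 2 and 3 are indirect; Q_hat = 5.
QB, QR = {1: 4, 2: 10, 3: 2}, {2: 8, 3: 3}
w = mmw_weights(QB, QR, direct={1}, Q_hat=5)
rates = {l: 1.0 for l in w}
best = mmw_schedule(w, rates)
# The essence of inequality n4: because MMW picks the maximizer of the
# weighted rate, no alternative choice (FRM included) can exceed it
# under the MMW weights.
assert all(w[best] * rates[best] >= w[l] * rates[l] for l in w)
```

Here user 2's relay queue exceeds $\hat{Q}$, so its BS-link weight is discounted to $Q_2^B - Q_2^R = 2$, steering service toward its heavily backlogged relay link instead.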
Summing over $\tau = t_0, \ldots, t_0+M-1$, taking the conditional expectation, and noting that $\delta(t_0) = 0$ leads to:

$$\Phi^{MMW}(\mathbf{Q}(t_0)) \ge \Phi^{FRM}(\mathbf{Q}(t_0)) - D_1 - \frac{2\rho_{max}c_{max}(2K-K_1)\max(a_{max}, c_{max})}{M}\sum_{\tau=t_0+1}^{t_0+M-1}(\tau - t_0)$$
$$\Rightarrow\ \Phi^{MMW}(\mathbf{Q}(t_0)) \ge \Phi^{FRM}(\mathbf{Q}(t_0)) - D \tag{C.19}$$

where $D = D_1 + (M-1)(2K-K_1)\rho_{max}c_{max}\max(a_{max}, c_{max})$.

Based on (C.11) and (C.19), we have $\Phi^{MMW}(\mathbf{Q}(t_0)) \ge \Phi^{STAT}(\mathbf{Q}(t_0)) - D$; as a result, based on (C.7) and (C.10), we have:

$$\Delta_M^{MMW} \le B + 2DM - \epsilon M \sum_{l\in\mathcal{L}} \rho_{k_l} Q_{k_l}^{b_l}(t_0) \tag{C.20}$$

This bound shows that if the queue sizes tend to grow large, the drift becomes negative, which is sufficient to prove the stability of MMW [60].
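As a quick sanity check on the constant in (C.19): summing the per-slot perturbation $2\rho_{max}c_{max}(2K-K_1)\max(a_{max},c_{max})(\tau-t_0)/M$ over $\tau = t_0+1, \ldots, t_0+M-1$ uses $\sum_{d=1}^{M-1} d = M(M-1)/2$, which cancels the $2/M$ factor and gives the stated closed form of $D$. A few lines of Python (with arbitrary illustrative parameter values) confirm the algebra:

```python
def D_by_summation(M, K, K1, rho_max, c_max, a_max, D1):
    """Accumulate the per-slot perturbation terms of (C.19) directly
    over tau = t0+1, ..., t0+M-1 (with d = tau - t0)."""
    coef = 2 * rho_max * c_max * (2 * K - K1) * max(a_max, c_max) / M
    return D1 + sum(coef * d for d in range(1, M))

def D_closed_form(M, K, K1, rho_max, c_max, a_max, D1):
    """Closed form stated after (C.19)."""
    return D1 + (M - 1) * (2 * K - K1) * rho_max * c_max * max(a_max, c_max)

# The two agree for any parameter choice, e.g.:
args = (4, 4, 2, 2, 1, 3, 7)
assert abs(D_by_summation(*args) - D_closed_form(*args)) < 1e-9
```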
