Algorithmic aspects of constrained unit disk graphs Breu, Heinz 1996

ALGORITHMIC ASPECTS OF CONSTRAINED UNIT DISK GRAPHS

By Heinz Breu
B.Sc., University of British Columbia, 1978
M.Sc., University of British Columbia, 1980

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, COMPUTER SCIENCE

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April 1996
© Heinz Breu, 1996

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Computer Science
The University of British Columbia
2366 Main Mall
Vancouver, Canada V6T 1Z4

Abstract

Computational problems on graphs often arise in two- or three-dimensional geometric contexts. Such problems include assigning channels to radio transmitters (graph colouring), physically routing traces on a printed circuit board (graph drawing), and modelling molecules. It is reasonable to expect that natural graph problems have more efficient solutions when restricted to such geometric graphs. Unfortunately, many familiar NP-complete problems remain NP-complete on geometric graphs.

Indifference graphs arise in a one-dimensional geometric context; they are the intersection graphs of unit intervals on the line. Many NP-complete problems on arbitrary graphs do have efficient solutions on indifference graphs. Yet these same problems remain NP-complete for the intersection graphs of unit disks in the plane (unit disk graphs), a natural two-dimensional generalization of indifference graphs. What accounts for this situation, and how can algorithms be designed to deal with it?

To study these issues, this thesis identifies a range of subclasses of unit disk graphs in which the second spatial dimension is gradually introduced. More specifically, τ-strip graphs "interpolate" between unit disk graphs and indifference graphs; they are the intersection graphs of unit-diameter disks whose centres are constrained to lie in a strip of thickness τ. This thesis studies algorithmic and structural aspects of varying the value τ for τ-strip graphs.

The thesis takes significant steps towards characterizing, recognizing, and laying out strip graphs. We will also see how to develop algorithms for several problems on strip graphs, and how to exploit their geometric representation. In particular, we will see that problems become especially tractable when the strips are "thin" (τ is small) or "discrete" (the number of possible y-coordinates for the disks is small). Note again that indifference graphs are the thinnest (τ = 0) and most discrete (one y-coordinate) of the nontrivial τ-strip graphs.

The immediate results of this research concern algorithms for a specific class of graphs. The real contribution of this research is the elucidation of when and where geometry can be exploited in the development of efficient graph theoretic algorithms.
in Table of Contents Abstract ii List of Tables ix List of Figures xi Acknowledgement xvi 1 Introduction 1 1.1 Prologue 1 1.2 Geometric Constraints in Graph Theory 3 1.3 Conventions, Background, and Notation 5 1.3.1 Sets 6 1.3.2 Graphs 10 1.3.3 Geometry 17 1.3.4 Algorithms 19 1.3.5 Unit Disk, Strip, and 2-Level Graphs 24 1.4 Overview: A Reader's Guide 26 2 Related Research 29 2.1 Unit Disk Graphs 29 2.1.1 Cluster Analysis 30 2.1.2 Random Test Case 30 2.1.3 Molecular Graphics and Decoding Noisy Data 31 iv 2.1.4 Radio Frequency Assignment 31 2.2 Geometric Intersection Graphs 32 2.3 Perfect Graphs 34 2.3.1 Cocomparability Graphs 36 2.3.2 Indifference Graphs 40 2.3.3 Grid Graphs 44 2.4 Proximity Graphs 46 2.4.1 The Delaunay Triangulation Hierarchy 47 2.4.2 Sphere of Influence Graphs 48 3 Unit Disk Graphs 50 3.1 Basic Observations 51 3.1.1 Connections with Planarity 52 3.1.2 Every r-Strip Graph is a Cocomparability Graph 54 3.2 Exploiting a Geometric Model 57 3.2.1 Least Vertex-Weight Paths in Unit Disk Graphs 57 3.2.2 Semi-Dynamic Single-Point Circular-Range Queries 67 3.2.3 Building an Adjacency List from a Model 72 3.3 NP-Complete Problems Restricted to Unit Disk Graphs 83 3.3.1 Independent Set 85 3.3.2 Maximum Clique 90 3.3.3 Chromatic Number 94 3.4 Unit Disk Graph Recognition is NP-Hard 104 3.4.1 Coin Graphs 105 3.4.2 A Graph That Simulates SATISFIABILITY 108 3.4.3 Drawing the Graph on the Grid 109 v 3.4.4 Simulating the Drawing with Coins 114 3.4.5 Fabricating Flippers 118 3.4.6 Capacity Arguments 120 3.4.7 The Skeleton of the Graph Gc 123 3.4.8 The Components of the Graph Gc 124 3.4.9 Building and Realizing Gc 135 3.4.10 Extending Coin Graph Recognition 137 3.4.11 Miscellaneous Properties 143 4 Cocomparabil i ty Graphs 146 4.1 Introduction 146 4.2 Algorithms for Dominating and Steiner Set Problems . 151 4.2.1 Connected Dominating Sets 153 4.2.2 Minimum Cardinality Dominating Sets 160 4.2.3 Total Dominating Sets 170 4.2.4 Weighted Independent Dominating Sets 177 4.2.5 Weighted Steiner Sets 182 4.2.6 Applications 188 4.3 Transitive Orientation, Implication Classes, and Dominating Paths . . . . 190 4.3.1 Definitions and Basic Properties 190 4.3.2 Transitive Orientation and Dominating Paths 193 4.3.3 Implications for Cocomparability Graphs 202 5 Strip Graphs 203 5.1 Exploiting a Geometric Model 205 5.1.1 Review of Cocomparability Results 206 5.1.2 Application to Strip Graphs 206 vi 5.2 Characterization Via Cycles in Weighted Digraphs 209 5.2.1 Solving Systems of Difference Constraints 212 5.2.2 Constraint Digraphs Corresponding to r-Strip Graphs 216 5.2.3 An 0(V3) Algorithm for Laying Out Levelled, Complement Ori-ented, Strip Graphs 218 5.3 Stars in Strip-Graphs . 
220 5.4 Distinguishing Strip Graphs and Indifference Graphs 227 5.4.1 Dangerous 4-Cycles 228 5.4.2 Connection with Indifference Graph Characterization 234 5.5 A Characterization of Trees in Strip Graphs 235 5.5.1 A Universal Embedding for Caterpillars 236 6 Two-Level Graphs 242 6.1 Definitions 244 6.2 Examples 246 6i3 Exploiting a Geometric Realization 250 6.3.1 &:-Level Graphs and their Oriented Complements 250 6.3.2 An Implicit Representation of the Oriented Complement 251 6.3.3 An Algorithm for Constructing the Representation 254 6.3.4 An Implicit Representation of the Transitive Reduction 257 6.3.5 Weighted Cliques and Independent Dominating Sets 259 6.4 Relation to Indifference Graphs 261 6.4.1 Trapezoid Graphs and PI* Graphs 262 6.4.2 Cross Edges Induce a Bipartite Permutation Graph 267 6.4.3 Short Edges Induce an Indifference Graph 273 vii 6.4.4 Every Two-Level Graph is the Intersection of Two Indifference Graphs 275 6.5 Recognizing Two-Level Graphs 281 6.5.1 Striated Two-Level Graphs 281 6.5.2 Orienting the Complement of a Two-Level Graph 284 6.5.3 There is No Forbidden-Ordered-Triple Characterization of Striated Two-Level Graphs 298 6.5.4 Incomparability Between Different Thicknesses 300 6.6 Determining the Thickness of the Strip 305 7 Conclusion 310 7.1 Epilogue 312 Bibliography 314 Appendix 325 A A Data Structure for Maintaining "Test Tube" M a x i m a 325 A . l Definitions 325 A.2 Test Tubes, Maxima, and Their Properties 326 A.3 The Data Structure 332 A.4 Building the Data Structure 334 A.5 Deletions 340 Index 346 vm List of Tables 3.1 Algorithm: DIJKSTRA(G,S) 59 3.2 Algorithm: V E R T E X - D I J K S T R A ( G , S ) 61 3.3 Algorithm: MIN-PATH(G,S) 63 3.4 Algorithm: L IST-DELETE(G, u) 65 3.5 Algorithm: A D J - P L A N E - S W E E P ( V ) 77 3.6 Algorithm: A D J - D E L A U N A Y ( V , S) 79 3.7 Approximation algorithms 83 3.8 Algorithm: G R E E D Y ( G ) 99 4.1 Algorithm: OCC(G) 149 4.2 Complexity of Domination Problems 152 4.3 Algorithm: MCCDS-OCC(G) 158 4.4 Algorithm: MCCDS-CC(G) 159 4.5 Algorithm: AO-MCDS-OCC(G) 166 4.6 Algorithm: A l - M C D S - O C C ( G ) 167 4.7 Algorithm: A3-MCDS-0CC(G) 167 4.8 Algorithm: A4-MCDS-0CC(G) 168 4.9 Algorithm: A5-MCDS-0CC(G) 168 4.10 Algorithm: Aux-MCDS-OCC(G') 169 4.11 Algorithm: MCDS-OCC(G') 170 4.12 Algorithm: MCTDS-OCC(G) 176 4.13 Algorithm: M W M C - C ( G ) 181 ix 4.14 Algorithm: MWIDS-CC(G) 182 4.15 Algorithm: MWSS-OCC(G, R) 184 4.16 Algorithm: MWSS-CC(G*, R) 187 5.1 Algorithm: MWSS-OCC(G', R) 207 5.2 Algorithm: STRIP-LAYOUT(G) 219 6.1 Algorithm: I N I T I A L I Z E - O R I E N T E D - C O M P L E M E N T ( V , / ) 252 6.2 Algorithm: T W O - L E V E L - F ( ? , j , p, n, / ) 255 6.3 Algorithm: G E N E R A T E - F ( V , / ) 256 6.4 Algorithm: k T R A N S ( V , / ) 258 6.5 Algorithm: M W M C - k ( V , / ) 260 6.6 Algorithm: 2STRIAE(G) 308 A . l Algorithm: CHILDREN(u, d(L U R)) 334 A.2 Algorithm: F I L L - I N T E R N A L ^ ) 335 A.3 Algorithm: M E R G E R , dL, I, dR, r) 337 A.4 Algorithm: BRIDGE(3L , /, dR, r) 338 A.5 Algorithm: DELETE-SITE(p, d(LuR),v) 343 x List of Figures 2.1 A permutation diagram (a) and its corresponding permutation graph (b). 
37 2.2 Functions for intervals [/u,ru] and [/„,r„] 40 2.3 Forbidden indifference graphs 42 2.4 A grid subgraph that is not a grid graph 44 2.5 Simulating edges in grid subgraphs with grid graphs 45 2.6 The grid graph G{ corresponding to the grid subgraph Gs 46 3.1 Crossing segments in a disk graph realization 52 3.2 A K^-free disk graph homeomorphic to K§ 54 3.3 A set of five points that generate an induced cycle 56 3.4 Adversary argument for shortest path 57 3.5 Any unit-radius query disk covers at most 21 cells 69 3.6 Algorithm A D J - P L A N E - S W E E P examines site p 77 3.7 The disk through p and q contains Voronoi center O; 80 3.8 Embedding the mth vertex [after [Val81]] 86 3.9 Components for simulating grid embeddings 87 3.10 Simulation of the grid embedding of a small graph 88 3.11 A saturated edge and a selected node disk 88 3.12 Saturating an unsaturated edge 90 3.13 The lune through a pair of sites 91 3.14 The unit-lune denned by a pair of sites 93 3.15 G(N(p) n N(q)) is not always cobipartite 95 xi 3.16 A planar graph with degree at most 4 96 3.17 The graph from Figure 3.16 embedded in the grid . 97 3.18 Simulations for edges and pseudo-edges 97 3.19 An instance of unit disk graph 3-colourability . 98 3.20 A SATISFIABILITY graph drawn on the grid 110 3.21 An oriented grid drawing 112 3.22 A skewed oriented grid drawing 114 3.23 Schematic drawings for cages and flippers 115 3.24 The hexagonal realization of a cage 116 3.25 Joining cages 117 3.26 A hexagonal packing 117 3.27 Fabricating full flippers 118 3.28 Fabricating a half flipper 119 3.29 Dividing a hexagon into a quarter and three-quarters 120 3.30 Dividing a hexagon into two quarters and a half . . 121 3.31 The horizontal wire component 125 3.32 The vertical wire component 127 3.33 The corner components 128 3.34 The crossover component 129 3.35 The positive literal 131 3.36 The negative literal 132 3.37 Clause component 133 3.38 Capping a terminal 134 3.39 How to connect components 136 3.40 Expanding flippers 138 3.41 Joining square cages 139 xii 4.1 A cocomparability graph on nine vertices 147 4.2 Every 5-cycle in a cocomparability graph has a chord 150 4.3 Lemma 4.8, Case 2 155 4.4 Lemma 4.8, Case 3 156 4.5 Lemma 4.8, Case 4 156 4.6 Lemma 4.8.(ii) A graph with \P\ = 3 and \M\ = 2 157 4.7 Reducing total domination to domination 171 4.8 The effect of function / on a minimum cardinality dominating set 173 4.9 Lemma 4.26 Case 1 174 4.10 Lemma 4.26 Case 2.1 174 4.11 Lemma 4.26 Case 2.2 174 4.12 (ak-i,bk-i) e E if and only if (ak,bk) £ E 192 4.13 An induced cycle on n > 5 vertices 193 4.14 The endpoints of a chordless path force the chords in the complement. . . 
194 4.15 (s, t) captures (u, v) 195 4.16 Definition of u' and v' 196 5.1 7^1,4 is a 0.8-strip graph 204 5.2 The weighted digraph corresponding to 217 5.3 Point a is not on the boundary 222 5.4 Point b is not on the boundary 223 5.5 Point a is not on the boundary 225 5.6 Point b is not on the boundary 226 5.7 A claw in a thin strip 227 5.8 Case 1: (a, b) is positive, and (a,d) is positive 230 5.9 Case 2: (a, b) is positive, and (a,d) or (d, a) is negative 231 xiii 5.10 Case 3: (a, b) is negative, and (d,a) is positive 231 5.11 Case 4: (a,b) is negative, and (a,d) or (d,a) is negative 232 5.12 (—,+,—, -f) in D corresponds to a square in G 233 5.13 ( —, —, +, +) in D corresponds to a claw in G 233 5.14 The leaves of this tree form an asteroidal triple 235 5.15 The infinite degree-4 caterpillar 236 5.16 The iterative conditions 237 5.17 Embedding the first four vertices 238 5.18 Placing a new leaf 238 5.19 Circle C(d) intersects the lower level 239 5.20 Circle C(c) intersects the lower level 239 5.21 Circle C(b') intersects arc A 240 5.22 Point c' is on C(c) and inside C(b') 240 5.23 The strip graph generated by {a, c, d, a', c', d'} 241 6.1 An induced square 247 6.2 An induced claw 248 6.3 A uniquely levelled, uniquely complement oriented, two-level graph. . . . 249 6.4 Definition of array F 253 6.5 Arc (u,w) is transitively implied by (u,v) and (v,w) 258 6.6 A dense oriented complement 261 6.7 The convention used to specify a trapezoid 263 6.8 Definition of triangle graphs 265 6.9 Kit4 is a permutation graph and therefore also a PI* graph 267 6.10 An indifference graph that is not a permutation graph 269 6.11 Constructing permutations P and Q from a two-level graph 272 xiv 6.12 Three L\ disks (rotated squares) • . 278 6.13 Two indifference graphs whose intersection is 281 6.14 Forcing striae 283 6.15 The arc (a, b) is unextendable 287 6.16 The unit-radius disks about a and b 288 6.17 The neighbourhoods of a and b 290 6.18 The rotation operation preserves arcs with v on the lower level 293 6.19 The rotation operation preserves arcs with v on the upper level 293 6.20 The rotation operation preserves edges with v on the lower level 295 6.21 The rotation operation preserves edges with v on the upper level 296 6.22 Two-level graphs that are not forced 297 6.23 A n unforbidden order that is not a two-level order 299 6.24 The only unforbidden order for this non two-level graph 300 6.25 A 1/2-TWO graph that is not a n a - T W O graph for any a > 5/8 302 6.26 A 3/4-TWO graph that is not an a-TWO graph for any a < 5/8 304 7.1 A Hierarchy of Graphs 313 A . l A test tube rotating clockwise about site p 327 A.2 Raising a test tube 328 A.3 Two neighbouring maxima 329 A.4 Both Tw a n d Trir intersect the separator 339 A.5 Site q is maximal in S \ {p} but not in S 342 A.6 Deleting the left bridge site, / 343 xv Acknowledgement This thesis would not have been possible without my mentor and supervisor, Professor David G. Kirkpatrick 1. It is not possible to say that his influence is stronger in any one chapter, since every part of this thesis has felt his touch. My thesis committee comprised Professors Richard Anstee, Alain Fournier, Maria Klawe, Nick Pippenger, and Alan Wagner. Thanks for many comments, especially in setting early directions. I am grateful to Professors Y . Daniel Liang, Indiana Purdue University at Fort Wayne; Lorna Stewart, University of Alberta; and C. 
Pandu Rangan, Indian Institute of Technology, Madras, for helpful discussions and references related to domination algorithms on cocomparability graphs (§4.2). I am also grateful to Professor Jan Kratochvil, Charles University, Prague, for helpful discussions on the NP-hardness proof for unit disk graphs (§3.4), and Dr. Madhav Marathe, Los Alamos National Labs, for extended discussions on approximation algorithms for unit disk graphs (§3.3.3).

Thanks to Hewlett-Packard Company, and especially Hewlett-Packard Laboratories, for keeping me on leave-of-absence for so many years.

I also must thank two long-time friends for greatly enriching my years at U.B.C.: Professor David Lowe, for welcoming us to Vancouver, for always showing a keen interest in my work, for always being encouraging, and for teaching me how to teach; and Professor Trevor Hurwitz, for many lunches and stimulating discussions. Many new friends, far too many to acknowledge individually, have also influenced this thesis. Still, two have made unusual impacts: Professor Yossi Gil, the Technion, Haifa; and Dr. Will Evans.

(Footnote 1: Individuals whose affiliations are not explicitly stated are with the University of British Columbia.)

My parents have been an enormous influence. I would not be here without their early encouragement of my scientific career. They have also contributed greatly to the resolution of those daily tasks that face us all, yet take so much of our time.

Finally, thank you Wendy, my wife and partner for 16 years. You put your own interests on hold for over six long years so that I might pursue this research. In particular, you supported us financially when you would much rather have participated full time in the life of our son Lorenz, now four years old. To you I dedicate this thesis.

Chapter 1

Introduction

1.1 Prologue

"It just can't be this hard," mumbled Alice to herself. "I wouldn't feel so bad if my graph theoretic model didn't fit the problem so well, but it seems so natural. It really simplifies the statement of my problem; it just doesn't seem to lead to an efficient algorithm." She found herself ruminating over the events of the last few months.

(Footnote: Alice and her company are fictional. This prologue, and its epilogue in the Conclusion chapter, is intended only to motivate the theoretical problems addressed in this thesis, and to strengthen your intuition.)

It all began with a big project meeting at Blue Sky Airlines ("The airline where the rubber meets the sky"), where Alice designs algorithms for a living. Blue Sky had equipped each airplane in its fleet with a radio beacon. Every beacon has the same range, which is uniform in all directions. Over the next few weeks, Alice discovered that one of her tasks is to solve the "midnight bell problem", as she called it. Every day at midnight (Vancouver time), every plane in a certain trans-Pacific corridor must either send a beacon signal, or receive one from at least one other plane. Blue Sky will know the location (including the altitude) of every plane in the corridor each midnight. Alice's job is to minimize the number of beacons that must send a signal.

Naturally, Alice does not plan to show up every midnight to tell Blue Sky which beacons to buzz. Her problem is really to design an algorithm to solve this problem for her, given the number and locations of the planes as input.
Furthermore, Blue Sky expects to have more than 50 airplanes in the corridor every night by late next decade ("The sky's the limit, Alice!"), so it had better be a good algorithm.

Alice thought about this problem for some time, formulating suitable abstractions. Eventually she constructed a graph theoretic model of her problem. The airplanes are vertices in the graph, and two vertices are adjacent in the graph if they are within beacon range. Now, she only needs to find a minimum cardinality subset of vertices (the "transmitters") such that every other vertex ("receiver") is adjacent to a transmitter.

Alice was pleased with her formulation; she likes using graph theory and knows that there are many algorithms available for many different problems. But this graph theoretical problem looks like it might be hard. She consulted her favorite reference book on these matters, Computers and Intractability: a Guide to the Theory of NP-Completeness, by Garey and Johnson [GJ79]. After a few hours—Alice is easily distracted by the many interesting problems to be found in Garey and Johnson—she discovered that the corresponding decision problem is called the DOMINATING SET problem, and that it is indeed NP-complete.

What could she do now? Fifty airplanes are not that many, but still too many to consider every subset of airplanes, in exponential time. She could develop a heuristic solution, but this approach always leaves her feeling unsatisfied. Perhaps graph theory is a little too general. After all, her original problem had a very geometric flavour to it, involving airplanes in a shallow corridor and beacons with circular ranges. What happened to all the geometry? And how could it be used to solve her problem, anyway? She could think of no other option but to start the whole abstraction process again. Alice slumped heavily over her desk, with the dreadful feeling that she had run out of alternatives.
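To make Alice's formulation concrete, here is a minimal sketch in Python (an illustration, not code from the thesis; the function names and the sample data are made up). It builds the beacon graph from plane positions and a range, and then finds a minimum cardinality dominating set by the brute-force subset enumeration that Alice wants to avoid.

from itertools import combinations
from math import dist

def beacon_graph(positions, beacon_range):
    # Planes are vertices; two planes are adjacent iff they are within beacon range.
    n = len(positions)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if dist(positions[i], positions[j]) <= beacon_range:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def minimum_dominating_set(adj):
    # Brute force over subsets in increasing size: exponential time in general.
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for subset in combinations(vertices, k):
            dominated = set(subset)
            for v in subset:
                dominated |= adj[v]
            if len(dominated) == len(vertices):
                return set(subset)
    return set()

# Three planes in a row, beacon range 1.0: the middle plane alone dominates all three.
print(minimum_dominating_set(beacon_graph([(0.0, 0.0), (0.9, 0.0), (1.8, 0.0)], 1.0)))  # {1}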
1.2 Geometric Constraints in Graph Theory

Let us propose an alternative to Alice: exploit geometric constraints in graph theoretic problems. We will study this advice by deriving a class of graphs from geometric situations. In particular, this thesis studies unit disk graphs, which are the intersection graphs of closed unit-diameter disks in the plane. Although this thesis may on occasion draw your attention to related problems set in higher dimensional space, its primary concern is with the plane, that is, with two-dimensional space. Clearly, unit disk graphs capture some two-dimensional constraints.

Some subclasses of unit disk graphs capture even more geometric constraints. In particular, if we constrain the centres of the circles to be collinear, then we call the graphs indifference graphs or unit interval graphs. Clearly, indifference graphs capture some one-dimensional constraints. With some care, we can move "gradually" from unit interval graphs to unit disk graphs, as outlined below. This thesis explores algorithmic implications of this gradual movement.

As hinted in the prologue, this thesis studies the complexity of some NP-complete graph theoretical problems restricted to these classes of unit disk graphs. Graph theoretic problems on these classes exhibit diverse behaviour. Some problems remain NP-complete for some of these classes, but some problems admit polynomial time solutions, and some problems may be efficiently approximated. Sometimes there are efficient algorithms that require a model for the graph, that is, a set of disks that realize the graph. We are therefore also interested in the complexity of recognizing unit disk graphs, or of constructing a realization. Unfortunately, we will see that the decision problem corresponding to both problems is NP-hard. On the other hand, there are algorithms for some problems that do not require models.

A graph is a unit disk graph if each vertex can be mapped to a closed, unit diameter disk in the plane such that two vertices are adjacent (in the graph) if and only if their corresponding disks intersect (in the plane). Since there are other definitions for unit disk graphs, let us refer to this mapping of vertices to disks as a unit disk model. Another definition is that a graph is a unit disk graph if each vertex can be mapped to a point in the plane such that two vertices are adjacent (in the graph) if and only if their corresponding points are within unit distance of one another. See Section 1.3.5 for a more careful definition of these terms. Such a mapping of vertices to points is called a proximity model of the graph. Clearly, there is a one-to-one correspondence between these families of models: the centre of a disk is the mapped-to point. Clearly also, the unit of distance is of no great consequence, since the models of graphs under one unit can be transformed into another by scaling. This thesis uses both models for the analysis of algorithms, and sometimes also for their implementation.

Alternatively, we can think of the model as generating the unit disk graph. Construct a unit disk graph from a set of points (or unit disks) in the plane by identifying vertices with points and by putting an edge between two vertices if the distance between them is at most unit distance (or if the disks intersect). For example, Alice's Blue Sky problem can be modelled in two dimensions (i.e., assume that the airplanes lie in a common vertical plane) by letting the unit distance be the beacon range, and by centering unit-diameter disks on the airplanes.

Alice's problem is even more constrained. Let us imagine that a "trans-Pacific corridor" is an (effectively) infinite length strip (rectangle) with thickness τ, which is determined by the least and greatest allowed altitudes. We are particularly interested in the case where τ is small compared to the (unit) range of the radio beacons, say 0 < τ < √3/2. Such unit disk graphs, where the disks lie in a thin strip, are called τ-strip graphs, or just strip graphs when τ = √3/2. The reason for the number √3/2 is that every strip graph is also a cocomparability graph (Theorem 3.7). Such graphs are perfect, which is often a hint that many graph theoretic problems involving cliques, colours, and independent sets can be solved in polynomial time. In fact, we will solve Alice's problem for cocomparability graphs in Chapter 4. Note that the class of 0-strip graphs is identical to the class of indifference graphs. Note also that every unit disk graph is a τ-strip graph for some (not necessarily small) value τ.

Cocomparability graphs properly include the class of strip graphs. Is it possible to exploit the additional constraints from strips? It is, as Chapter 5 demonstrates by solving a problem related to Alice's, namely the minimum weight Steiner set problem, even more efficiently for strip graphs than for cocomparability graphs.

We also could imagine that certain "rules of the road" apply to Blue Sky's airplanes in the corridor.
For example, perhaps they restrict east bound traffic to the upper boundary of the corridor (let us hope that they also restrict west bound traffic to the lower boundary). Such a unit disk graph is called a two-level T-strip graph. Chapter 6 solves another problem related to Alice's, namely the minimum weight independent dominating set problem for two-level graphs. 1.3 Conventions, Background, and Notation The following definitions for sets (§1.3.1), geometry (§1.3.3), algorithms (§1.3.4), and graphs (§1.3.2) are standard, or as nearly so as the literature allows. This thesis assumes that you are already familiar with these basic concepts, but presents their definitions to avoid ambiguity. Furthermore, the body of the thesis repeats some of these definitions as they are required. You may therefore want to skim lightly over this section on a first reading, to ensure that these definitions coincide with your expectations. The section on unit disk graphs (§1.3.5) is more esoteric, so you may want to examine it more closely. Chapter 1. Introduction 6 1.3.1 Sets These definitions are common to many standard textbooks on discrete mathematics or algorithms, for example"[Gol80], [CLR90], [Epp95]. The terms set and element are the undefined primitives of axiomatic set theory, but the intended meaning is that a set is a collection of elements. Some sets have special symbols: 0 is the empty set, which contains no elements, Z is the set of integers, Z + is the set of positive integers, and R is the set of real numbers. The following table summarizes basic relations on sets. Write: and say: if, for all elements x in some universal set: x e S x is in S x is an element of set S x(£S x is not in S x is not an element of set S ACB A is a subset of B x £ A implies x £ B A = B A equals B ACB and B C A AcB A is a proper subset of B ACB but A^ B AD B A is a superset of B BCA AD B A is a proper superset of B BC A For example, \/2 £ R but y/2 £ Z, and Z C R. Usually, sets will be uppercase symbols, and elements will be lowercase. This thesis is concerned primarily with finite sets, for which the elements can often be listed explicitly or implicitly inside curly braces. For example, {1 ,2 ,3 , . . . , n} denotes the set of integers from 1 to n inclusive. We can also specify sets by rule, for example {1,2, 3 , . . . , n} = {i : i £ Z and 1 < i < n). For a finite set S, let l ^ l denote its cardinality, in this case the number of elements in S. For example, |{1,2, 3 , . . . , n}| = n. A multiset is similar to a set, except that it may have two or more elements with the same name. For example, set {1,2,3} and set {1,2,2,3} are the same and have three elements, but multiset {1,2,2,3} has four elements, including Chapter 1. Introduction 7 two occurrences of element 2. The basic set operations in the following table are especially useful. Here, A and B are sets, and U is some "universe" set that will be clear from context. operator: written: equals: union AUB {x : x G A or x G B} intersection An B {x : x G A and x G B} difference A\B {x : x G A and x £ B} complement A {x : x G U and x £ A} Cartesian Product Ax B {(a,b) : a G A and b G B} Two sets A and B are disjoint if A fl Z? = 0. If the operands A and B of the union operator are disjoint, we may write the disjoint union A-\-B instead of AUB to emphasize this fact. The Cartesian product operation is also defined on more than two sets: Si X £ 2 X • • • X Sk = {(si, S2, • • • , sk) •• «i G Si, 5 2 G S2, . . . , and sfc G S^}. 
The elements of the Cartesian product of k sets are called (ordered) k-tuples. When all k sets are the same, it is traditional to write SxSx---xS = Sk. For example, R 2 denotes the real plane as a set of coordinates (x,y) G R 2 . A k-ary relation on a Cartesian product of sets Si x S2 x • • • x Sk is just a subset of this product. This thesis is mainly concerned with binary relations, for which k = 2. More specifically, it is primarily concerned with binary relations on a Cartesian product of the same set. Therefore, say that a (binary) relation on a set S is a pair (S, R), where R C S2. Sometimes we will write aRb to mean (a, b) G R. There are a few standard properties on binary relations, as summarized by the following table. Chapter 1. Introduction 8 A binary relation (S, R) is said to be: if, for all a, b, c G S: reflexive (a, a) G R irreflexive (a, a) £ R symmetric (a,b) G R implies (6, a) G i? asymmetric (a, 6) G i? implies (6, a) ^ i? antisymmetric (a, 6) G R and (6, a) G R implies a = b transitive (a, 6) G i? and (6, c) G -R implies (a, c) G i? complete a ^ b implies (a, 6) G -R or (6, a) G strongly complete (a, 6) G # or (6, a) G i? It is sometimes advantageous to force a relation to have one or more of these proper-ties. The following table defines a set of operations on relations designed for this purpose. This operation on relation R yields the relation: inverse reflexive closure symmetric closure transitive closure transitive reduction R~l = {(a,b) : (6, a) G R} RU{(a,a) : (a,a) G R] R U {(b, a) : (a, b) G R} = R U R~l the smallest transitive superset of R the smallest relation having the same transitive closure as R Some relations are given special names, depending on the standard properties they satisfy. The following table summarizes the relations used in this thesis. Chapter 1. Introduction 9 A Binary Relation if it is: (S, R) is said to be a: partial order reflexive, antisymmetric, and transitive strict partial order irreflexive, asymmetric, and transitive linear order a strongly complete, partial order strict linear order a complete, strict partial order equivalence relation reflexive, symmetric, and transitive (If (S, =) is an equivalence relation, and a 6 5, then the equivalence class of a is the set [a] = {x : x = a}.) Note that the literature also refers to linear orders as total orders, complete orders, and simple orders. A strict linear order can also be specified by listing its elements (ai ,a2,.. . ,an) with the understanding that a,- < aj if and only if i < j. This thesis is mainly concerned with strict partial orders (and strict linear orders), which are sometimes written as pairs (S, <). Furthermore, to distinguish the relation from the set on which it is defined, we will sometimes want to refer to the relation R as a partial order, and the pair (S, R) as a partially ordered set, or more economically, as a poset. A n element a in a poset (S, <) is maximal (respectively minimal) if a < x (respectively x < a) does not hold for any element x £ S. In particular, a set 5" in a family of sets J- is maximal if it is a maximal element in the poset (T, C) . The restriction (S', R') of a relation (S, R) to a subset S' C S is defined by the equation R' = {(a,b) : a,b £ S' and (a,b) £ R}. Note that the restriction of a poset is also a poset. A linear extension of a strict partial order R is a complete strict partial order R', where R C R'. 
It is perhaps not obvious that such a linear extension exists; you can always generate one by listing (topologically sorting, [CLR90] pages 485-488) its elements as follows. Remove a maximal element from S, Chapter 1. Introduction 10 recursively list the partial order restricted to S \ {m}, and append m to the end of the list. The operators union, intersection, and difference are defined also for binary relations on sets, by operating on the set and the relation. For example, the intersection (Si, Ri)f) (S2, R2) of two binary relations is the relation (S\ D S2, RiC\ R2). In nearly all cases, this thesis applies set operators only to binary relations on the same set. The dimension of a partial order P is the minimum number of linear orders whose intersection is P. Note that, if P is the intersection of k linear orders, then each linear order is a linear extension of P. Note also, that P is the intersection of all of its linear extensions, so the notion of dimension is well-defined. An interval is a contiguous subset of a linearly ordered set. There are open, closed, and partially closed intervals defined by the following table. open (a,b)--= {x : a < x < b} closed [a,b} = --{x: a < x < b} partially closed (a,b} =  {x a < x < b} partially closed [a,b) = = {x a < x < b} A poset (S, R) is called an interval order if S is a set of intervals, and R = {(( a, 6), (c, d)) : (a, b) £ S, (c, d) £ S, and b < c}. The interval order dimension of a poset P is the smallest number of interval orders whose intersection is P. 1.3.2 Graphs For the most part, the graph theoretic definitions in this thesis conform to those in Golumbic's book [G0I8O] on algorithmic graph theory. A graph G — (V, E) is a set of Chapter 1. Introduction 11 vertices V and an irreflexive binary relation E, called edges, on the vertices. We also write V(G) — V and E(G) = E. Unless mentioned otherwise, both sets are finite; the order of a graph G is the value |V(Cr)|. A multigraph is similar to a graph, except that E is a multiset. If, in addition, E is not irreflexive, then G is called a pseudograph. If E is symmetric, the graph is said to be undirected. In this case, it is sometimes convenient to assume that edges (a, b) and (b,a) are the same edge, which is also said to be undirected. Since graphs are irreflexive, the complement G of a graph G is defined differently than the complement of a relation, it is the graph G = (V, E) where E = {(u,v) : (u,v) £ E and u ^ v}. If E is not symmetric, then G is called a directed graph and has directed edges or arcs. We will often write A instead of E for arcs. An oriented graph is a directed graph G = (V, A) where A is asymmetric. An orientation of an undirected graph G = (V, E) is an oriented subgraph H = (V, A) where E = A-\- A~x (the arc set A is also called an orientation of the edge set E). The adjectives dense and sparse are used informally in this thesis. Roughly, a graph G is dense if | J E ( G ) | = Q(V2) and sparse otherwise. Two vertices u and v in V are said to be adjacent if (u,v) £ E; vertex u is said to be adjacent from vertex v, and vertex v is said to be adjacent to vertex u. Two adjacent vertices are said to be connected by an edge. If (u, v) is an edge, then vertices u and v are said to be its endpoints. An edge is said to be incident to its endpoints, and two edges are adjacent if they have a common endpoint. Graph vertices will usually be drawn as small circles. 
A n undirected edge will be drawn as a solid, usually straight, line segment between the circles corresponding to its endpoints. A directed edge (u,v) will be drawn as an arrow from circle u to circle v. When an edge in the complement of a graph needs to be emphasized, it will be drawn as a dotted line segment. The following table summarizes some familiar parameters of vertices in graphs. Chapter 1. Introduction 12 Name Symbol Meaning adjacency list AdJH {u : (v,u) G £ } out-neighbourhood N + (u) Adj(i;)U{i;} in-neighbourhood N"(u) {u : (u,u) G £ } U {v} neighbourhood (undirected) N(u) N + (u) indegree indegree(u) |{(u,i;):( M , i ; )e£} | outdegree outdegree(u) | { ( t ; , u ) : ( t ; , u ) e £ } | degree (for undirected graphs) deg(u) outdegree(u) Note that the adjacency list of v is just the set of vertices adjacent to v, and that deg(u) = indegree(u) = outdegree(u) = |Adj(u)| for undirected graphs. The neighbour-hood N(S) of a subset S C V(G) is the union of the neighbourhoods2 of its elements. That is, N(S') = {u : u G N(v) for some v G S}. A source in a directed graph G is a minimal vertex in G (that is, one with indegree 0). Similarly, a sink in a directed graph G is a maximal vertex in G (that is, one with outdegree 0). A graph Gs = (Va, Es) is a subgraph of a graph G = (V, E), written Gs C G, if Vs C V and E s C Subgraph G s is spanning if V s = V . The subgraph of G = (V, i?) induced by (induced on, generated by) a subset of vertices U C V is the graph G(£/) = (U,Eu) where .Ey is the set of all edges in E that have both endpoints in U. A class of graphs is hereditary or closed under taking induced subgraphs if every induced subgraph of every graph in the class is also a graph in the class. When this is the case (and only when this is case), it makes sense to consider a forbidden subgraph characterization of the class. A graph F is a forbidden subgraph of the class if it is not in the class, but every induced subgraph of F is in the class. The forbidden subgraph characterization of a class of hereditary graphs is the set of all forbidden subgraphs. 2 T h e neighbourhood of a vertex is also sometimes called the closed neighbourhood of the vertex. This is because some authors use "neighbourhood" to mean the adjacency list. Chapter 1. Introduction 13 The complement G of a graph G = (V, E) is the graph G = (V,E). A subset of vertices U C V is independent if no pair of vertices in U is connected by an edge, that is, if E(G(U)) = 0. A subgraph K of a graph G is a clique (or completely connected) if every pair of vertices in A' is connected by an edge, that is, if E(K) = 0. A matching in a graph G is a subset of edges M C Z£(G) such that no two edges are adjacent. Let D be a subset of the vertices V of a graph G = (V,E). Subset D is said to dominate a vertex v G V if u G Z) or if v is adjacent to some vertex in Z). Subset D is a dominating set, and said to dominate the graph G (or the vertices V), if Z? dominates every vertex in V . Subset D is said to be a connected dominating set if it is dominating and the subgraph it induces is connected. Subset D is an independent dominating set if it is dominating and the subgraph it induces has no edges. Subset D is a total dominating set if every vertex in V (including those in D) is adjacent to some vertex in D. Given a graph and a required set R C V, a subset S C V \ R is a Steiner set if the subgraph induced by R U S is connected. In general, a maximum subgraph satisfying some specified property is one that attains the greatest cardinality. 
For example, a maximum clique is a clique that has at least as many vertices as any other clique, and a maximum matching is one that has at least as many edges as any other matching. A graph G = (V, E, <) is partially ordered if (V, <) is a partial order on the vertex set. Often this partial order will be linear. If G\ and G2 are subgraphs in a linearly-ordered graph, then G\ < G2 if u < v for all u G V{G\) and v G V^G^)- The subgraphs are said to overlap in the order if neither G\ < G2 nor G2 < G\. The vertices V of a graph G are weighted by a function w : V —> R. Similarly, the edges £ of a graph G are weighted by a function w : E —> R. A weighted graph G = (V,E,w) may be either a vertex weighted graph or an edge weighted graph, depending on context. The weight of a graph is typically the sum of the weights of its vertices (or Chapter 1. Introduction 14 edges), but may be redefined to be some other function of its constituent weights. A chain of length k (v0, vi, v 2 , . . . , vk) in a graph G is a sequence of vertices from V(G) such that («,•_!, ut-) G E or (u;,u4_i) G E for i = 1, 2,. . . , k. The vertices ?Jo and vk are called the endpoints of the chain, and the chain is between its endpoints. A graph is connected (and a digraph is weakly connected) if there is a chain between every pair of vertices. A path of length k (VQ, VI, V 2 , . . . , vk) in a graph G is a sequence of V(G) such that (v{-i, G E for i = 1, 2 , . . . , k. A directed graph is strongly connected if there is a path between every pair of vertices. A cycle of length A; is a path (uo, vi,...,vk = vo). Note that all cycles are paths, and all paths are chains. A chain or path of length k is called simple if i ^ j implies u,- / Vj for all i,j < k. A cycle of length k is called simple if z 7^  j implies Uj ^ U j for all i , j < k. It is sometimes convenient to treat a chain simply as a set of vertices or edges by context. Let P = (uo, Vi, • • •, vk) be a chain. An edge (a, 6) G £ is an edge o / P if (a,b) = ( u 8 _ i , u 8 ) or (a,b) = (vi,Vi-\) for some i G An edge (a,b) (E. E is a chord of P if a G P and 6 G P , but (a, 6) ^ P and (6, a) ^ P . A chain (or path or cycle) is chordless if it is simple and has no chords. Some graphs prove to be so useful (and ubiquitous) that they have been given their own names, as the following table illustrates. Chapter 1. Introduction 15 Name Symbol Meaning complete graph on ra vertices Kn G = (V, E) where E = {(u,v) : u,v £ V} complete bipartite graph on m + ra vertices G = (U,V,E) where E = {(u,v) :ueU and v £ V} chordless cycle on ra vertices Cn G = ({1,2, . . . , «} ,£? ) where E = {(i, (i mod ra) + 1) : i <G [1, ra]} star on + 1 vertices square c4 claw Kl,3 triangle K3 = C3 Two graphs Gi = (V\,Ei) and G2 = ( V ^ , ^ ) are said to be isomorphic if there is a bijection (an isomorphism) f : v\ —> V2 that preserves adjacency, that is, (u,v) 6 Ei if and only if (f(u),f(v)) £ E2 for all u,v £ V i . Clearly, the relation of being isomorphic, also called isomorphism, is an equivalence relation, that is, it is relexive, symmetric, and transitive. The following table defines some well-known graph parameters studied in this thesis. Note that the value of any one of these parameters would be the same for all graphs in an isomorphism equivalence class. Therefore these parameters are also called graph invariants (they are invariant up to isomorphism). Chapter 1. 
Introduction 16 Name Symbol Meaning (maximum) graph degree A(G) max{deg(u) : v £ V(G)} minimum graph degree 6(G) min{deg(u) : v £ V(G)} (maximum) graph indegree Ain(G) max{indegree(u) : v £ V(G)} minimum graph indegree min{indegree(u) : v £ V(G)} independence (or stability) number a(G) cardinality of a maximum independent set in G clique number u(G) cardinality of a maximum clique in G clique cover number KG) least number of cliques for which every v £ V is in some clique. chromatic number X(G) least number of colours to colour G. (A graph G = (V, E) is (correctly) coloured by a function c : V —, Z + if (u,v) £ E implies c(u) 7^  c(v).) A graph is said to be k-partite if its vertex set can be partitioned into k non-empty independent sets, that is, if its chromatic number is at most k. In this thesis, the most important value for k is 2; such graphs are called bipartite. To emphasize a bipartition of a graph's vertex set, it is traditional to write G = (U, V, E), where vertex sets U and V are both independent sets. A graph is said to be cobipartite if its complement is bipartite. A planar graph is one that can be drawn on the plane such that edges may share a vertex, but do not otherwise cross. Such a drawing is called a plane graph. Kura-towski characterized planar graphs in 1930. We will need the following definitions to appreciate his theorem. To subdivide an edge (u,v) in a graph G, replace it with a path ( u , u 1 , « 2 , • •. ,Uk = v) of length k > 1. A graph H is a subdivision of a graph G if H Chapter 1. Introduction 17 can be constructed from G by subdividing some of its edges. Finally, two graphs are homeomorphic if they are subdivisions of the same graph. Theorem 1.1 (Kuratowski [Kur30]) Every non-planar graph has a subgraph homeo-morphic to either K$ or Kz,z. The binary set (and binary relation) operators are defined for graphs also, by operat-ing on the edge and vertex sets. For example, the intersection of two graphs G\ = (Vi, E\) and G 2 = (V2,E2) is G\ fl G2 — (Vi Pi V2,Ei fl E2). In nearly all cases, this thesis will only apply set operators to graphs that have the same vertex set. The intersection graph 0 (0) generated by a family (a multiset) O of sets, also called the intersection graph ofO, is the graph G where V(G) = 0 and G(E) = {(u,v) : u,v G O and uC\v ^ 0}. The intersection graph class of a set S is the set of all intersection graphs generated by finite (multi) subsets O of S (cf. [Sch85]). A graph G is an S-intersection graph if it is isomorphic to 0,(0) for some subset O C S. Extending the codomain of the isomorphism / : V(G) —>• O to the entire set S leads to the following equivalent definition. A graph G is an S-intersection graph if there is a mapping / : V(G) —> S such that (u,v) G E(G) if and only if f(u) n f(v) ^ 0. The function / : V(G) -> S is called an S-realization of G, or a realization of G in terms of 5". 1.3.3 Geometry Points in space R d (or Ed to emphasize Euclidean metric space) are denoted with lower case letters (for example, p and q) or, when there is a one-to-one correspondence between points and the vertices of a graph, just with the names of the vertices. The coordinates of a point are denoted p = (pi,j»2? • • • -,Pd)- We need only a few standard operations on points (as vectors): p + q = (px + qx,p2 + q2, • • •,Pd + qd), cp = (cpi,cp2,..., cpn), where c G R, and p — q = p H—!(<?)• This thesis focuses on two-dimensional space, where the Chapter 1. Introduction 18 coordinates of a point are denoted p = (xp,yp). 
This thesis uses three metrics in the plane: the L\ metric, also called the city block (or taxi cab or Manhattan) metric; the L2 metric, also called the Euclidean metric; and the Leo metric, also called the maximum coordinate (or chess board or queen's move) metric. These metrics are defined by their associated distance functions, Lk being defined by the distance function || • \\k : R d —> R, where IblU = ( £ \IH \t=i for all p G R d . The distance function for is given by: llplloo = l im \\p\\k = max{|pi| : 1 < i < d}. The distance between two points p and q is given by the expression lb - g|U-This thesis concerns itself primarily with the Euclidean metric in two dimensions. The distance function therefore will usually denote Wvh = \Jxl + y2P-If p and q are points in space, then the (open) line segment between them is the set (p, q) = {r : (1 — X)p + Xq where A G (0,1)}. The diameter of a set of points P in space is max{ | |p - q\\ :p,q£ P}. Two points in a set are said to be diametral if the distance between them is the diameter of the set. The sphere of radius r about a point c in <i-dimensional space is the set {p : \\p — c\\=r and p G Ed}. Chapter 1. Introduction 19 The two-dimensional sphere is called a circle. The (closed) ball of radius r about a point c in d-dimensional space is the set {p : \\p — c|| < r and p £ Ed}. The two-dimensional ball is called a disk. A unit sphere, circle, ball, or disk is a sphere, circle, ball, or disk respectively with radius 1/2, that is, with unit diameter. The lune through a pair of points p and q is the set {s : \\s — p\\ < ||p — q\\ and \\s — q\\ < ||p — q\\}. 1.3.4 Algorithms Most of the algorithms described in this thesis manipulate graphs. A traditional model of computation for such work is the Random Access Machine [AHU74], or R A M for short. Briefly, a R A M has three devices (an input, an output, and a memory), as well as a program, of course. A R A M can read from input, write to output, or store in memory, an arbitrary integer, all at unit cost in space and time. It also can execute the usual [AHU74] arithmetic and comparison operations between two integers, and program control instructions, in unit time each. Finally, this thesis uses a R A M augmented with a unit-time floor function. Geometric computations on a R A M must restrict any input points to integer coor-dinates (or rational coordinates represented as pairs of integers). We can deal with the possibly irrational Euclidean distance between points by computing only the squared Eu-clidean distance, which is an integer (or a rational). This model, or one easily transformed to it, also will be assumed for NP-hardness proofs. A disadvantage of using an integer R A M is the temptation to exploit the algorithmic properties of small integers. For exam-ple, we may be tempted to sort coordinates in linear time, using counting sort [CLR90], Chapter 1. Introduction 20 for example. While such an exploitation can have practical benefits, particularly when the application domain is naturally discrete, it is peripheral to this investigation. The purpose of this thesis is to investigate the role of geometric constraints in graph theory. That is, we wish to exploit geometric constraints (e.g., position, proximity, and distance) in dealing with combinatorial problems. In any representation of the underlying entities (space, points, and graphs) there are issues that are peripheral to this cause. For example, (physical) digital computers represent numbers to some finite precision. 
Therefore, programs written for such machines must deal with issues such as inconsistent roundoff and the need for higher precision during intermediate calculations. Where such issues are secondary to a more primary interest, as they are in this thesis, it has become customary to adopt a model of computation called a real RAM [PS85]. A real R A M can read, write, and store a real number, to infinite precision, in unit time and space. It also can execute the usual arithmetic and comparison operations between two real numbers in unit time each. Again, this thesis augments the real R A M with a unit-time floor function. A disadvantage of using a real R A M is the temptation to exploit the unit memory and operation times for infinite precision numbers. Again, such exploitation is not the intention of this thesis. We will use the real R A M model to describe geometric algorithms when it simplifies the presentation. Although the R A M or real R A M is the underlying computational model, this thesis does not actually describe algorithms in terms of these primitive models. Rather, it describes algorithms with a much higher-level pseudocode. This pseudocode follows the conventions laid out by Cormen, Leiserson, and Rivest ([CLR90] pp. 4-5). You should have no trouble reading it even without consulting [CLR90]. The only exception is that this thesis uses the following notation for comments: /* This is a comment. */ Chapter 1. Introduction 21 The thesis assumes that you are familiar with the elementary algorithms and data structures usually considered in any introductory textbook [Sed83, CLR90] on that sub-ject. In particular, you should understand linked lists, balanced trees, and binary search. You should be familiar with the representation of a graph G = (V, E) as an adjacency list, for example as an array of linked lists each representing Adj(u) for all v G V, and as an adjacency matrix, for example as a | V | x \V\ array M , where M U )„ = 1 if and only if (u,v) G E. Also, you should be aware of fundamental algorithms on these graph representations, such as depth-first and breadth-first search. As a matter of routine "programming" style, this thesis will use a sentinel [Sed83], in place of an explicit test for boundary conditions, wherever possible. A sentinel is just a "dummy" element in a data structure that holds the values for the boundary condition. For example, to search a list for a value without having to test for an end-of-list condition, first append a sentinel holding the desired value to the end of the list. In analyzing algorithms in this thesis, we are interested primarily in their consumption of time and space resources. The run-time of an algorithm on an input is the number of primitive (unit-time) operations executed by the algorithm. We are interested in how the run-time changes as a function of the input size, which depends on the problem, but will always be explicit in this thesis. In nearly all cases, we will be interested in an algorithm's worst case behaviour (taken over all possible inputs) in the limit, that is, as the input size goes to infinity. This is normally expressed in asymptotic notation, as summarized by the following equations. 0(g(n)) — {f(n) : there exist positive constants c and N such that | /(n) | < c\g(n)\ for all n > N} ti(g(n)) = {/(ra) : there exist positive constants c and N such that |/(ra)| > c\g(n)\ for all ra > N} Chapter 1. 
In analyzing algorithms in this thesis, we are interested primarily in their consumption of time and space resources. The run-time of an algorithm on an input is the number of primitive (unit-time) operations executed by the algorithm. We are interested in how the run-time changes as a function of the input size, which depends on the problem, but will always be explicit in this thesis. In nearly all cases, we will be interested in an algorithm's worst case behaviour (taken over all possible inputs) in the limit, that is, as the input size goes to infinity. This is normally expressed in asymptotic notation, as summarized by the following equations.

O(g(n)) = {f(n) : there exist positive constants c and N such that |f(n)| ≤ c|g(n)| for all n ≥ N}
Ω(g(n)) = {f(n) : there exist positive constants c and N such that |f(n)| ≥ c|g(n)| for all n ≥ N}
Θ(g(n)) = {f(n) : f(n) ∈ O(g(n)) and g(n) ∈ O(f(n))}
o(g(n)) = {f(n) : for all positive constants c, there exists a positive constant N such that |f(n)| ≤ c|g(n)| for all n ≥ N}

Following the lead of Cormen, Leiserson, and Rivest [CLR90], this thesis drops the set-cardinality symbols from asymptotic notation. For example, interpret O(V log V + E) as O(|V| log |V| + |E|). By convention, we write f(n) = O(g(n)) instead of f(n) ∈ O(g(n)), and say that f(n) is order (at most) g(n). The computational complexity of an algorithm refers to its worst case run-time (and memory usage). Say that an algorithm has polynomial time (or space) complexity if its worst case run-time (or memory usage) satisfies f(n) = O(n^k) for some constant k. Several algorithms in this thesis use matrix multiplication. The best upper bound known for the time to multiply two n × n matrices is n^2.376 [CW87]. Most often, we will be multiplying two |V| × |V| adjacency matrices, so let us abbreviate the upper time bound on this operation by O(M(V)) = O(V^2.376).

This thesis addresses several combinatorial optimization problems [GJ79, PS82]. Essentially, an instance of a combinatorial optimization problem is a pair (S, c), where S is a finite (hence combinatorial) set of candidate (or feasible) solutions, and c : S → R is a cost function, telling us how expensive a feasible solution is. The goal is to find an optimal solution for (S, c), that is, a candidate solution s* of minimum (or possibly maximum) cost: c(s*) = min{c(s) : s ∈ S}. A combinatorial optimization problem is a set of instances of a combinatorial optimization problem. For example, the chromatic number (combinatorial optimization) problem is to use the minimum number of positive integers to colour the vertices of a graph such that adjacent vertices do not get the same colour. An instance (S, c) of this problem comprises all (correct) ways S of assigning colours {1, 2, ..., |V(G)|} to the vertices of a given graph G, and the number c of colours used for each way.

Let Π be an optimization problem. An approximation algorithm for Π is an algorithm that takes instances of Π as input and returns candidate solutions as output. An approximation algorithm A achieves a performance ratio R if c(A(I)) ≤ R · c(OPT(I)) for every instance I ∈ Π, where OPT(I) is an optimal solution for instance I.

Other problems in this thesis are decision problems: they have a yes or no answer. A decision problem corresponds to "recognizing" the yes instances of the problem. By convention, a problem name typeset in all capital letters is a decision problem, sometimes even the decision version of an optimization problem. For example, K-COLOURABILITY is the decision version, "Can the vertex set of the input graph be correctly coloured with k colours?", of the chromatic number problem.

Although researchers have identified many complexity classes, this thesis is concerned with only some of these: P, NP, NP-complete, and NP-hard. The class P consists of those decision problems that can be recognized (answered) in polynomial time. The class NP comprises those decision problems that can be "certified" in polynomial time. That is, a (decision) problem is in NP if there is a (necessarily polynomial size) "certificate" that the answer is yes, and if the certificate can be checked in polynomial time. For example, K-COLOURABILITY is in NP since a graph coloured with k colours can be checked in polynomial time by examining every adjacent pair of vertices in the graph.

(Footnote: A problem is in co-NP if there is a polynomial-time-checkable "certificate" that the answer is no.)
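To illustrate such a certificate check (a sketch, not code from the thesis; the function name is made up), the following verifies a proposed k-colouring in time proportional to the size of the graph:

def is_valid_k_colouring(edges, colouring, k):
    # Check a k-colouring certificate: every colour lies in {1, ..., k} and
    # no edge joins two vertices of the same colour. Runs in O(V + E) time.
    if any(not 1 <= c <= k for c in colouring.values()):
        return False
    return all(colouring[u] != colouring[v] for u, v in edges)

# A triangle needs 3 colours; this certificate shows it is 3-colourable.
triangle = [(0, 1), (1, 2), (0, 2)]
print(is_valid_k_colouring(triangle, {0: 1, 1: 2, 2: 3}, 3))  # True
print(is_valid_k_colouring(triangle, {0: 1, 1: 2, 2: 1}, 3))  # False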
For example, K - C O L O U R A B I L I T Y is in NP since a graph coloured with k colours can be checked in polynomial time by examining every adjacent pair of vertices in the graph. Clearly, P C NP. If you could solve every NP-problem in polynomial time by solving problem A in polynomial time, then problem A is said to be NP-hard. 3 A problem is in co-NP if there is a polynomial-time-checkable "certificate" that the answer is no. Chapter 1. Introduction 24 A decision problem is NP-complete if it is both in N P and NP-hard. The NP-hardness of a problem A is typically demonstrated by providing a polynomial-time "re-duction" of some known NP-complete problem to problem A. These complexity classes are normally defined more rigorously using notions of language recognition on determin-istic and nondeterministic Turing machines. See the book by Garey and Johnson [GJ79] for these definitions. The significance of these definitions rests in the fact that there are no known polynomial-time algorithms for any NP-complete problems. That is, no one has shown that P = NP. Therefore, proving a problem to be NP-hard gives an effective lower bound on the complexity of any algorithm for solving the problem. On the other hand, no one has shown that P ^ NP. 1.3.5 Unit Disk, Strip, and 2-Level Graphs The background of the last few sections allows us to define unit disk graphs more carefully. A unit disk graph is, an S-intersection graph where S is the set of unit diameter disks in the plane. That is, a graph is a unit disk graph if each vertex can be mapped to a closed, unit diameter disk in the plane such that two vertices are adjacent (in the graph) if and only if their corresponding disks intersect (on the plane). This thesis briefly (§3.4) studies disk graphs also; these are S-intersection graphs where S is the set of disks with arbitrary (not just unit) diameter in the plane. As mentioned in Section 1.2, an alternative to the unit disk model is the proximity model. That is, a graph is a unit disk graph if each vertex can be mapped to a point in the plane such that two vertices are adjacent (in the graph) if and only if their corresponding points are within unit distance of one another. When a set of points P is given and generates a unit disk graph, then clearly the unit of distance does matter, since different graphs will be generated by different "threshold" values. When this is important, we will write G$(P) = (P, E) to signify the unit disk Chapter 1. Introduction 25 graph generated by points P , where adjacent endpoints are at most distance S apart. The following, more formal, definition summarizes this discussion. Definition 1.2 The unit disk graph G$(P) generated by a set of points P and a real threshold 5 is the graph (P, E) where E = {(p,q) : \\p — q\\ < S}. Clearly Gs(P) is isomorphic to G i ( | P ) for every set of points P . A graph G is a unit disk graph if it is isomorphic to Gi(P) for some P C R 2 . Extending the codomain of the isomorphism / : V(G) —> P to the plane leads to the following equivalent definition. A graph G is a unit disk graph if there is a mapping / : V(G) R 2 such that (u, v) £ E(G) if and only if || f(u) — f(v)\\ < 1. The function / is called a realization of the unit disk graph. We write f(v) = (xf(v),yj(v)), where v £ V and f(S) = {f(v) : v G S}. Sometimes, when it is clear what the mapping is, we will drop the subscripts and write x(v) and y(v). | A unit disk graph is a strip graph if there is a realization that maps all vertices onto a thin strip. 
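Before the strip definitions are made precise, it may help to read Definition 1.2 operationally: the graph G_δ(P) is nothing more than a pairwise distance test over the point set. The sketch below builds its edge list by brute force in O(|P|^2) time (the names and the point representation are illustrative assumptions); Section 3.2.3 constructs the same adjacency structure far more efficiently.

    from math import hypot

    def unit_disk_graph(points, delta=1.0):
        """Brute-force edge list of the unit disk graph G_delta(P).

        points: list of (x, y) pairs; vertices are the list indices.
        Two vertices are adjacent iff their points lie within distance
        delta of one another (closed disks).
        """
        edges = []
        for i in range(len(points)):
            xi, yi = points[i]
            for j in range(i + 1, len(points)):
                xj, yj = points[j]
                if hypot(xi - xj, yi - yj) <= delta:
                    edges.append((i, j))
        return edges

Scaling the points by 1/δ and then calling the routine with delta = 1 produces an isomorphic graph, which is exactly the observation that G_δ(P) is isomorphic to G_1((1/δ)P).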
For technical "reasons that will be clear later (e.g., Theorem 3.7), we are interested only in sufficiently thin (at most \/3/2 units for the Euclidean metric) strips. The following, more formal, definition summarizes. Definition 1.3 A graph G = (V, E) is a T-strip graph if there is a mapping / : V —> R x [0,r] such that (u,v) £ E if and only if ||/(u) — /(t>)|| < 1- The function / is called a T-strip realization of the graph. Normally, we are concerned with the L2 metric, where ||(x,y)|| = ||(a;,y)||2 = \/x2 + y2. For these usual L2 T-strip graphs, restrict r to the interval [0,\/3/2]. Occasionally, we will be interested in the L\ metric also, where ||(#,2/)||i = \x\ + |y|. For L\ T-strip graphs, restrict T to the interval [0,1/2]. | Say that a strip graph is a two-level graph if it has a realization that maps every vertex onto the boundary of the strip. The following definition states this more formally. Chapter 1. Introduction 26 Definition 1.4 A graph G = (V, E) is a two-level r-strip graph if there is a mapping / : V -» R x {0,T} such that (u,v) £ E if and only if ||/(u) - f(v)\\ < 1. The function / is called a two-level r-strip realization of the graph. | 1.4 Overview: A Reader ' s Guide The following chapters present and derive the results discussed in the introduction. Chap-ter 2, Related Research, sets the stage by reviewing the context of the research reported in this thesis. In particular it defines several related classes of graphs, some of which appear in this introduction without definition. One class is particularly relevant—indifference graphs are precisely 0-strip graphs—so Chapter 2 gives it extra attention. The remaining chapters study the "gradual introduction" of the second dimension in reverse order. They begin with Chapter 3, Unit Disk Graphs, in which the y-coordinate of the disk centres is unrestricted, as it is for the x-dimension. Chapter 3 shows how to exploit an explicit realization of a unit disk graph to find least vertex-weight paths. To do so, it makes use of a data structure whose details have been relegated to Appendix A. It also shows how to use a realization to report an adjacent vertex, and to delete a vertex, in O(log V) amortized time. This result is used by Chapters 5 and 6 to design efficient domination algorithms for strip graphs and two-level graphs. Chapter 3 continues by showing how to build the entire adjacency structure of a unit disk graph in 0(V log V+E) time, given only the | V | points that form the image of a realization / : V —, R 2 . Chapter 3 also shows how to prove that some problems are NP-complete for unit disk graphs. In particular, it conjectures that any problem that is NP-complete for degree 4 planar graphs is also NP-complete for unit disk graphs. In particular, it applies a generic transformation schema to show the NP-completeness of both I N D E P E N D E N T SET and C H R O M A T I C N U M B E R . On the other hand, it shows how to find a maximum clique in Chapter 1. Introduction 27 polynomial time. The main result of this chapter is that unit disk graph recognition is NP-hard. In proving this result, we will see that penny4 graph, bounded-diameter-ratio coin graph, and bounded-diameter-ratio disk graph recognition are all NP-hard. The next three chapters limit the second dimension to a strip of thickness r. Since r-strip graphs are cocomparability graphs for all r £ [0, \/3/2], Chapter 4 studies cocom-parability graphs. 
In particular, it develops efficient algorithms for several dominating set problems on cocomparability graphs, which are then automatically applicable to strip graphs. In the process, the chapter develops a novel, efficient algorithm for finding a min-imum weight maximal clique in a comparability graph, and therefore also a minimum weight maximal independent set in a cocomparability graph. Chapter 4 also discusses properties of the "forcing" relation on cocomparability graphs, which are used in the next chapter. Chapter 5 studies strip graphs. It begins by demonstrating how to combine an al-gorithm for finding least-weight paths in unit disk graphs and an algorithm for finding least-weight Steiner sets in cocomparability graphs. The resulting algorithm for finding a minimum weight Steiner set in a strip graph is more efficient than any known for either unit disk graphs or cocomparability graphs. Chapter 5 then characterizes strip graphs with systems of difference constraints. It uses this characterization to find efficiently a realization for a strip graph that has been "levelled" and whose complement graph has been oriented to meet certain constraints. Chapter 5 continues to explore this charac-terization by determining which stars are r-strip graphs for different values of r. It also shows that the characterization implies that a strip graph is an indifference graph if it is free of induced squares and claws. The chapter concludes by characterizing those strip graphs that are trees. 4 A graph is a coin graph if it is the intersection graph of a set of interior-disjoint disks. A coin graph is a penny graph if it is the intersection graph of a set of unit-diameter interior-disjoint disks. Chapter 1. Introduction 28 Chapter 6, Two-Level Graphs, is the final technical chapter. It begins by examining some elementary properties of two-level graphs. For example, it exhibits small examples (the square and the claw) that highlight the difference between two-level graphs and indifference graphs. It continues by showing again how the realization can be exploited, this time by improving on the minimum weight independent set algorithm for cocompa-rability graphs. Chapter 6 also examines how intimately two-levels graphs are related to other classes of graphs, such as bipartite permutation graphs and trapezoid graphs. The chapter concludes by studying the recognition problem for two-level graphs. Finally, Chapter 7, the Conclusion, summarizes the thesis. Two Notes About the Bibliography This thesis mentions many related results. For the most part, the bibliography cites the originators, unless a result is very well known. The rule of thumb is that a result is very well known if it is discussed in a standard textbook, such as Introduction to Algorithms by Cormen, Leiserson, and Rivest [CLR90]. In such a case, the bibliography normally cites the standard text, rather than the original. Also, the thesis mentions some results in passing, for completeness, and I may not have been able to obtain the paper in question. Also, the bibliography may cite a result used by another researcher, but not directly by this thesis. In such cases, the bibliography cites the paper, but includes an annotation of the form "cited by [abc]", where "abc" is another paper that discusses and cites the result. Chapter 2 Related Research This chapter describes some of the context for the research described in this thesis, including other classes of graphs that are related to unit disk graphs. 
It begins (§2.1) by describing how and where unit disk graphs have arisen previously. This chapter also describes other kinds of intersection graphs (§2.2), perfect graphs (§2.3), and other kinds of proximity graphs (§2.4). 2.1 Unit Disk Graphs Unit disk graphs are the subject of several algorithmic investigations. In particular, Clark, Colbourn, and Johnson [CCJ90] show that several well-known problems remain NP-complete for unit disk graphs (see Section 3.3 for details). Marathe, Breu, Hunt, Ravi, and Rosenkrantz [MHR92, MBH + 95] develop several approximation algorithms for unit disk graphs (again, see Section 3.3 for details). Unit disk graphs are also simplified models for several applications. In general, they model the interaction of objects in a physical situation where the interaction (forces, visibility, or whatever) is "cut off" after a fixed distance. Sections 2.1.1 to 2.1.4 present some examples. 29 Chapter 2. Related Research 30 2.1.1 C luster Analys is Unit disk graphs arise as a conceptual mechanism in cluster analysis [Zah71, DH73, God88] when "dissimilarity" is measured by Euclidean distance. More precisely, the sim-ilarity of a set of objects 0 = {Oi, . . . , On} is typically described by an n x n dissimilarity matrix D that is not necessarily Euclidean nor even metric. Godehardt [God88] defines a "similarity" graph T(8) = (0,E) where [O^Oj) G E if Dl3 < 5. Note that G5{0) is a unit disk graph if the 0, are points in the plane and Dij =f | |0; — Oj\\. Godehardt mentions that the clusters produced by two well known techniques, single-linkage clusters and complete linkage clusters, correspond to the components and the cliques of the graph T, respectively. Efficient algorithms for constructing the components and a maximum clique of a unit disk graph are given in Sections 3.2.3 and 3.3.2 of this thesis. 2.1.2 Random Test Case Johnson, Aragon, McGeogh, and Schevon [JAMS91] use a class of random graphs to test heuristics for the travelling salesman problem in arbitrary graphs. Their random geometric graph Un,r is, in the terminology of this thesis, the unit disk graph Gr(P) where \P\ = n, and where P is chosen randomly from a unit square1. Efficient algorithms for unit disk graphs could therefore be used to give exact solutions to various problems on random geometric graphs. These exact solutions could then be compared with those produced by the heuristic under evaluation. For example, Philips [Phi90] uses Unio,5 to evaluate his heuristic for finding the maximum clique, which he does by comparing it with another heuristic. The 0 ( n 3 5 log n) maximum clique algorithm presented in Section 3.3.2 (or the 0(n4-5) algorithm in [CCJ90]) would have allowed him to make a comparison with the exact solution in polynomial time. 1 Johnson et al. generate P by picking 2n independent numbers uniformly from the interval (0,1) and viewing these as the coordinates of n points. Chapter 2. Related Research 31 2.1.3 Molecular Graphics and Decoding Noisy Data Bentley, Stanat, and Williams [BSW77] mention that merely listing the edges of a unit disk graph, a problem that they call "finding fixed-radius near neighbors", also has applications in molecular graphics and decoding noisy data. Their algorithm, along with others, is discussed in Section 3.2.3. 2.1.4 Radio Frequency Assignment Suppose that a radio spectrum manager wishes to assign frequencies to a set of trans-mitters. 
Perhaps the simplest two-dimensional setting is that these transmitters all have the same power, and each has a fixed location on a uniform terrain so that their effective ranges are the same. Suppose further that the available frequencies, called channels, are a discrete set. Two transmitters must not be assigned the same channel if they might interfere, which they would do if they are within a fixed distance of one another. The manager's objective is to use the minimum number of channels. In his review of spectrum management, Hale [Hal80] calls this problem the F*D constrained cochannel assignment problem. He formalizes it as follows. Given a finite set of points V in the plane, and a positive rational number 8, find an assignment A : V —> Z + such that (1) max A(V) is as small as possible and (2) if u, v € V, u ^ v, and —1>|| < 5, then A(u) / A[y). This is just the problem of finding an optimal colouring of a unit disk graph. Hale's survey concludes with some relevant open questions: There is no graph coloring algorithm or heuristic which exploits the special structure of unit disk or disk graphs. There is no known intrinsic characteri-zation of unit disk graphs (a reasonable forbidden subgraph characterization Chapter 2. Related Research 32 seems out of the question, as a large list of infinite families of forbidden sub-graphs continues to grow). What is the complexity of the clique problem . . . for unit disk graphs? These questions are at least partially resolved in this thesis, and elsewhere. In re-verse order, the clique problem is in solvable in 0 ( V 3 ' 5 log V) time (§3.3.2), UNIT DISK G R A P H R E C O G N I T I O N is NP-hard (§3.4), and there are simple polynomial time algo-rithms that colour unit disk graphs with no more than three times the chromatic number (§3.3.3). The most comprehensive study of this frequency assignment (or unit disk graph colouring) problem is due to Graf (see [Gra95] and §3.3.3). In his PhD thesis, Graf also identifies and colours strip graphs (which he calls y/d/2 stripe graphs). 2.2 Geometric Intersection Graphs A graph G is an S-intersection graph if S is a family of sets, and there is a mapping / : V(G) -> S such that (u,v) G E(G) if and only if f(u) f) f(v) ^ 0 for all vertices u and v in V. The function / is called an S-realization (or a representation, or a model) of G in terms of S. Any graph is an intersection graph for some model. For example, model each vertex of a given graph by the set of edges incident with it. Geometric models promise to constrain the available intersection graphs that can be represented in this way. But even greatly constrained geometric models can generate many graphs. Any planar graph is an intersection graph of a set of curves in the plane. In fact, an intersection model made up solely of star-shaped polygons can be constructed from a straight line embedding2 of a planar graph. The stars are concentric with the vertices, and their "arms" reach up 2 Every planar graph has a straight line embedding (c/. [NC88]). Chapter 2. Related Research 33 along each line segment corresponding to an edge. More remarkably, every finite planar graph is the intersection graph of a set of interior-disjoint disks. Since such intersection graphs, also called coin graphs, are themselves clearly planar, this means that the class of finite planar graphs is equivalent to the class of coin graphs. This result was first demonstrated by Koebe in 1935. See Sachs's [Sac94] account for a more detailed history and for references. 
Wegner [Weg67, CM78] proves that any graph is the intersection graph of a set of con-vex objects in R 3 . Unfortunately, Capobianco and Molluzzo [CM78], who cite Wegner's result, do not give the proof. Researchers have studied numerous kinds of geometric intersection models other than disks and unit disks. In particular, graphs with the following one- and two-dimensional intersection models, some with efficient algorithms, can be found in the literature: • unit intervals [Rob68a, L093], • intervals [FG65, GH64, BL76, GLL82, Kei85, Kei86, Ber86, BB86, RR88a, RR88b], • squares [Ima82, BB86], • rectangles [IA83, Kra94], • line segments [IA86, KM94], • strings [Kas80, Kra91a, Kra91b], • circular arcs [Kas80, GLL82], • chords of a circle [Kas80]. Recognition complexity questions have also been studied in the literature. In particu-lar, there are polynomial time algorithms for recognizing intersection graphs on unit inter-vals [L093], intervals [BL76], circular arcs [Tuc80], and chords of a circle [GHS89]. In his Chapter 2. Related Research 34 summary of graph algorithms [van90], van Leeuwen states wistfully that "It should prob-ably be required of any class of graphs that is distinguished that its recognition problem is of polynomial-time bounded complexity." This desire notwithstanding, many interesting classes are difficult to recognize. In particular, it is NP-hard to recognize intersection graphs for strings [Kra91b], isothetic rectangles [Kra94], and segment graphs [KM94]. The complexity of recognizing unit square graphs was still open. However, Theorem 3.46 proves that this problem is NP-hard. 2.3 Perfect Graphs Historically, there were two kinds of perfect graphs, both identified by Berge [Ber61]. A graph G is \-perfect if the chromatic number x °f every induced subgraph G(S) is the same as the size to of its largest clique. More formally, a graph G is x-Perfect if u(G{S)) = x{G(S)) for all S C V. Clearly, X(G(S)) > io{G(S)) for all graphs, perfect or otherwise, since every vertex in a clique must get a different colour. A graph G is a-perfect if the number k of cliques required to cover any induced subgraph is the same as the size a of its maximum independent set. More formally, a graph G is a-perfect if a(G(S)) = k(G(S)) for all S CV. Clearly, k(G(S)) > a(G(S)) for all graphs since every clique can cover at most one vertex in an independent set. Clearly, too, a graph is x - P e r f e c t if and only if its complement is a-perfect, since a clique in a graph is an independent set in its complement. A graph is perfect if it is both ^-perfect and a-perfect. In fact, a graph is %-perfect if and only if it is a-perfect, as first conjectured by Berge [Ber61] and proved ten years later by Lovasz [Lov72]: Theorem 2.1 (The Perfect Graph Theorem [Ber61, Lov72]) The following state-ments are equivalent for all graphs G: • u(G(S)) = X(G(S)) for all SCV, Chapter 2. Related Research 35 • a(G{S)) = k(G(S)) for allS CV, • u(G(S))a(G{S)) > \S\ for all S CV. Grotschel, Lovasz, and Schrijver [GLS84] show that these invariants (a(G), UJ(G), x(G), and k(G)) can be computed in polynomial time for perfect graphs. Their algo-rithms use the ellipsoid method, a general technique that has also been used to solve linear programming in polynomial time (see [GLS84] for details and references). Grotschel et al. do not recommend their algorithms for practical use, due to numerical stability problems with the ellipsoid method. 
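On very small graphs the definitions themselves can be checked directly. The following brute-force sketch (exponential time, purely illustrative, with hypothetical names) tests the first statement of the Perfect Graph Theorem by comparing ω and χ on every induced subgraph.

    from itertools import combinations, product

    def omega(vertices, edge_set):
        """Size of a largest clique (brute force)."""
        for size in range(len(vertices), 0, -1):
            for cand in combinations(vertices, size):
                if all((u, v) in edge_set or (v, u) in edge_set
                       for u, v in combinations(cand, 2)):
                    return size
        return 0

    def chi(vertices, edge_set):
        """Chromatic number (brute force over all colourings)."""
        n = len(vertices)
        for k in range(1, n + 1):
            for colouring in product(range(k), repeat=n):
                colour = dict(zip(vertices, colouring))
                if all(colour[u] != colour[v] for (u, v) in edge_set):
                    return k
        return 0

    def is_chi_perfect(vertices, edge_set):
        """Check omega(G(S)) == chi(G(S)) for every induced subgraph G(S)."""
        for size in range(1, len(vertices) + 1):
            for sub in combinations(vertices, size):
                sub_edges = {(u, v) for (u, v) in edge_set
                             if u in sub and v in sub}
                if omega(sub, sub_edges) != chi(sub, sub_edges):
                    return False
        return True

On C5 the test already fails at the full vertex set, since ω(C5) = 2 but χ(C5) = 3; this is the same example used below to show that unit disk graphs are not perfect.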
Furthermore, the complexity of P E R F E C T G R A P H R E C O G -NITION is unknown, though Grotschel et al. show that it is in co-NP. Consequently, there are many results on subclasses of perfect graphs in the literature (see in particu-lar Golumbic's book [G0I8O]). There are many such subclasses. For example, bipartite graphs are clearly x-perfect; assuming a bipartite graph has at least one edge, both its clique number and chromatic number is 2, and it is trivial to find such a maximum clique or such a colouring. Finding a maximum independent set of a bipartite graph is not that easy, but can be accomplished in polynomial time by matching (c/. §3.3.2). In this thesis, cocomparability graphs (see Section 2.3.1 and Chapter 4), which are perfect, are especially relevant. The family of cocomparability graphs properly includes the family of trapezoid graphs (see Section 6.4.1). In turn, the family of trapezoid graphs properly includes the families of permutation and interval graphs (see below for defini-tions). Finally, the family of interval graphs properly include the family of indifference graphs. On the other hand, unit disk graphs are not perfect. For example, C 5 is a unit disk graph (as is any cycle) but it is not perfect since u>(Cn) = 2 and x{Cn) = 3 for all odd n > 5. However, strip graphs are perfect since they are cocomparability graphs. Chapter 2. Related Research 36 2.3.1 Cocomparab i l i ty Graphs An undirected graph G — (V, E) is a comparability graph if there exists a transitive orientation of its edges. We can also say that a comparability graph is a transitively orientable graph. Recall from Section 1.3.2 that a transitive orientation of an undirected graph G = (V, E) is an oriented subgraph G = (V, A) where E = A + A - 1 and A is tran-sitive. Note that transitive orientations are acyclic since A is asymmetric. Conversely, we can think of comparability graphs as being generated by partially ordered sets. The edges of the comparability graph G = (V, E) corresponding to a strict poset (V, <) are given by E = {(u,v) : u < v or v < u}. A graph is a cocomparability graph if its complement is a comparability graph. Both comparability graphs and cocomparability graphs have been extensively studied (c/. [Gol80, M6h85]). Both classes include permutation graphs as proper subclasses, and cocomparability graphs properly include interval graphs and indifference graphs (c/. [Duc84]). Comparability graphs (and therefore cocomparability graphs also) can be oriented in 0(V2) and recognized in 0(M(V)) time [Spi85, Spi94]. Previous algorithmic work on cocomparability graphs relevant to this thesis includes polynomial time algorithms for several domination problems ([KS93], but see §4.2 for improved time complexities), and a cubic time algorithm for the Hamiltonian cycle problem [DS94]. There are also algorithms for comparability graphs [M6h85], so that the complementary problem can be solved on cocomparability graphs. Once a vertex-weighted comparability graph has been transitively oriented, one can extract a maximum weight clique in 0(V2) time 3, and a maximum weight independent set in 0(V3) time [M6h85]. 3This algorithm for maximum weight clique cannot be used to find a minimum weight maximal clique. See §4.2.4 for an explanation and an algorithm that finds a minimum (and maximum) weight maximal clique in 0(M(V)) time. Chapter 2. 
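As a concrete reading of the comparability-graph definition above, the sketch below checks whether a proposed orientation of a graph's edges is asymmetric and transitive, which is exactly what certifies the underlying undirected graph as a comparability graph. It is a brute-force definition check, not Spinrad's O(V^2) orientation algorithm; the names are illustrative assumptions.

    def is_transitive_orientation(arcs):
        """Check that an arc set is an orientation (asymmetric) and transitive.

        arcs: set of ordered pairs (u, v), one orientation per undirected edge.
        """
        # Asymmetry: an edge must not be oriented both ways.
        if any((v, u) in arcs for (u, v) in arcs):
            return False
        # Transitivity: (a, b) and (b, c) force (a, c).
        for (a, b) in arcs:
            for (b2, c) in arcs:
                if b == b2 and (a, c) not in arcs:
                    return False
        return True

    def comparability_edges(order_pairs):
        """Undirected edge set of the comparability graph of a strict poset.

        order_pairs: the relation '<' given explicitly as ordered pairs.
        """
        return {frozenset((u, v)) for (u, v) in order_pairs}

A graph is then a comparability graph exactly when some orientation of its edges passes this test, and a cocomparability graph when its complement admits such an orientation.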
Related Research 37 Permutation Graphs A graph G = (V, E) is a permutation graph if there is some pair of permutations (ordered lists) P and Q of the vertices such that u and v are adjacent if and only if their order differs in the two permutations. The permutations P and Q are said to realize the permutation graph G, and make up a permutation realization or permutation model. For example, if P = (3,5,1,4,2) and Q = (5,2,3,4,1) model some permutation graph G= (V,E), then (3,5) G E but (3,1) E. We can also think in terms of a permutation diagram; lay out the permutations on two parallel rows and connect corresponding vertices with line segments. Then u and v are adjacent if and only if their corresponding line segments cross; see Figure 2.1, for example. 3 5 1 4 2 2 1 5 2 3 4 1 5 3 4 (a) (b) Figure 2.1: A permutation diagram (a) and its corresponding permutation graph (b). Here, P = (3,5,1,4,2) and Q = (5,2,3,4,1). To simplify the notation, the first permutation P is traditionally (c/. [Gol80]) written as a relabelling of the vertices P = ( 1 , 2 , . . . , | V | ) . The second permutation Q is written as a list Q = (TT(1), TT(2), . . . , 7r(|V|), where n : V —> V is a permutation (that is, a bijection) on V. Then (u,v) G E if and only if (u — V)(TT~X(U) — 7r-1(t>)) < 0 (think of 7 r _ 1 (u ) as the "location" of v in Q). We can therefore think of ir as the defining permutation (corresponding to permutation P) for the permutation graph G. Spinrad Chapter 2. Related Research 38 has shown that a defining permutation for a given permutation graph can be found in 0(V2) time [Spi85]. The complement of a permutation graph is also a permutation graph. It is easy to verify that if n is a defining permutation for G, then the reverse i\r of 7r, where 7rr(z) = 7r(| V | + l — for all i, is a defining permutation for G. Since permutation graphs are comparability graphs, it follows that permutation graphs are also cocomparability graphs. In fact, this latter observation characterizes permutation graphs. That is, a graph G is a permutation graph if and only if G and G are comparability graphs [PLE71]. Recall that the dimension of a partial order P is the minimum number of linear orders whose intersection is P. If G is the comparability graph of a partial order P, then P has dimension at most 2 if and only if G is transitively orientable [DM41] (that is, G is also a comparability graph). It follows that G is a permutation graph if and only if the partial order underlying any transitive orientation of G has dimension at most 2. There has been considerable interest in solving dominating set problems on permu-tation graphs [FK85, BK87, CS90a, TH90, AR92] . These problems are discussed in Sec-tion 4.2, which also summarizes their complexity on permutation graphs. That section presents new polynomial time algorithms for these dominating problems on cocompara-bility graphs, which properly includes permutation graphs, as mentioned above. Permu-tation graphs also play a role in Chapter 6, in which bipartite permutation graphs appear in strip graphs. Function Graphs Cocomparability graphs may also be characterized as intersection graphs. Let F be a family of continuous functions / ; : [0,1] —» R. Say that two functions /t- and fj intersect if there is a value x £ [0,1] such that fi(x) = fj(x), that is, the images of [0,1] under /,• and fj intersect. Function graphs are the intersection graphs of such families F. Golumbic, Chapter 2. 
Related Research 39 Rotem, and Urrutia [GRU83] show that a graph is a function graph if and only if it is a cocomparability graph. Since all partial orders corresponding to a comparability graph have the same dimen-sion [TJS76], this number can equally well serve as the dimension of the comparability graph. Golumbic et al. show that a cocomparability graph is the concatenation of k — 1 permutation diagrams, where k is the dimension of the graph's complement. For example, permutation graphs (and their complements) have dimension 2. This also characterizes permutation graphs; they are exactly the intersection graphs of linear functions. Interval graphs are also the concatenation of permutation diagrams, since they are cocomparability graphs. However, interval graphs cannot be characterized as the concate-nation of some finite number of permutation graphs. For example, C\ is a permutation graph, but it is not an interval graph. However, we can characterize interval graphs with a class of piecewise-linear functions, made up of three linear pieces. Let G = (V, E) be an interval graph where n = \V\, and let I = [^2,^2]? • • •, [^ n,^ n]} be an interval realization of G. We can assume without loss of generality that lv,rv G {1 ,2 , . . . , 2n}. The function fv : [0,1] —> R corresponding to the interval [/„,r„], where /„ < rv, is given by: MX) = lv if 0 < x < Iv 2n+l (2n + l)x i f ^ T < x < ^ T if rv < x 1 1 2n+l — X This process is illustrated in Figure 2.2, which shows the "construction line" y = (2n + l)x for clarity. If two functions intersect, then they do so on the construction line, and mimic the original intervals. Clearly then, two intervals intersect if and only if the corresponding functions intersect. Chapter 2. Related Research 40 2n + 1 2n-\ r.. V r, u V u 0 0 0 Figure 2.2: Functions for intervals [/ u,r„] and [/„,r„]. 2.3.2 Indifference Graphs Roberts [Rob68a] named indifference graphs after the notion of "indifference" from psy-chology and economics. The subsection introduces the idea of indifference graphs, and views them as one-dimensional unit disk graphs. Can an individual's preference within a finite. set of options be assigned a scalar measure? Such a measure / would map the options into real numbers. To a first ap-proximation, the individual would prefer option a to b if and only if /(a) > /(&). This approximation would imply that indifference corresponds to equality and is therefore transitive. Roberts [Rob78] cites several sources that argue against the transitivity of indifference. To model indifference more accurately, he suggests that the preference measure have the property that, if the individual is indifferent between a and 6, then Definition 2.2([Rob68a]) A graph G = (V, E) is an indifference graph if there is a mapping / : V —>• R of its vertices to the reals such that (u, v) £ E if and only if \f(a)-f(b)\<6. \f(u)-f(v)\<l. | Chapter 2. Related Research 41 Although Definition 2.2 is sufficient, indifference graphs can also be defined by any of the characterizations in Theorem 2.3. A l l uncited characterizations below are due to Roberts [Rob68a]. Definitions of the emphasized terms in the characterizations follow the theorem. Theorem 2.3 Let G = (V, E) be a graph. Then the following are equivalent: 1. G is an indifference graph. 2. There is a semiorder (V, <) such that (u,v) 6 E if and only if neither u < v nor v < u, for all u, v in V. 3. G is a unit interval graph 4- G is a K\o,-free interval graph. 5. G is a proper interval graph. 6. 
([Rob78] page 33) The adjacency matrix of G has the consecutive-ones property. 7. G is chordal and none of the graphs in Figure 2.3 are induced subgraphs. 8. [Duc84] The closed neighbourhoods of G constitute the edges of an interval hyper-graph. 9. [Duc84] G admits an orientation G that satisfies both: (a) G has no cycle, (b) G(N+(v)) and G(N~(t?)) are complete graphs. 10. [Jac92] G is an astral triple-/ree graph. 11. [Duc79] There exists a linear ordering of V such that, ifu<v<w and (u, w) £ E, then both (u,v) and (v,w) are in E. Chapter 2. Related Research 42 Figure 2.3: Forbidden indifference graphs. These graphs, together with Cn for n > 4, make up the forbidden subgraph characterization of indifference graphs. astral triple Three vertices in a graph such that, between any two of them, there is path P that avoids the third vertex u, and no two consecutive vertices in P are both adjacent to v. consecutive-ones property A binary matrix is said to have the consecutive-ones prop-erty if it is possible to permute the rows so that the ones in each column are consecutive. interval graph The intersection graph of a set of intervals on a linearly ordered set. interval hypergraph A (not necessarily binary) relation H = (V, E) for which there exists a linear order on the vertices V such that every element (edge) in E is an interval in the order. proper interval graph The intersection graph of a set of intervals on a linearly ordered set, none of which is contained by (is a subset of) any other. semiorder An irreflexive relation (A, <) is a semiorder if for all x,y,z,w 6 A: x < y and z < w implies x < w or z < y; and x < y and y < z implies x < w or w < z. unit interval graph The intersection graph of a set of equal-sized (unit length) inter-vals on the real line. Chapter 2. Related Research 43 Looges and Olariu [L093] have developed several optimal algorithms for indifference graphs. In particular, they show how to recognize an indifference graph G = (V, E) in 0(V + E) time. Their recognition algorithm returns an efficient representation of the ordering from Theorem 2.3.11. Using this representation, Looges and Olariu colour the graph, find a shortest path between two vertices, compute a Hamiltonian path, and compute a maximum matching, each in 0(V) additional time. It is clear from the definition of indifference graphs (Definition 2.2) that the class indifference graphs is the special case of r-strip graphs where r = 0 (cf. Definition 1.3). It would be beneficial for algorithm design if some or all of the properties for indifference graphs in Theorem 2.3 were preserved for r-strip graphs as r increases. However, such preservation does not seem to be the case. For example both the square and the claw are r-strip graphs for all r > 0. In particular, the algorithms by Looges and Olariu do not seem to generalize, since the closest analogue to Theorem 2.3.11 that applies to r-strip graphs is the spanning order (which characterizes the more general class of cocomparability graphs) discussed in Chapter 4. Roberts [Rob68b] generalizes the notion of indifference to higher dimensions by defin-ing the cubicity of a graph to be the smallest dimension k for which there exists a function / : V -> R f c, such that (u,v) G E if and only if \\f(u) — /(u)||oo < 1- The corresponding notion for the L2 metric is called the sphericity [Hav82a, Fis83] of a graph. Clearly, the graphs that have cubicity or sphericity at most one are precisely the indifference graphs. 
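In the one-dimensional case the realization is particularly easy to use. The sketch below generates the indifference graph of a map f : V → R directly from Definition 2.2 by a sorted sweep (the names are assumptions); for a valid realization, reading the vertices off in sorted order also yields an ordering with the property of Theorem 2.3.11, which is what the Looges–Olariu algorithms exploit.

    def indifference_graph(positions):
        """Edge set of the indifference graph of a map f : V -> R.

        positions: dict mapping each vertex to its real position f(v).
        This is the one-dimensional (sphericity at most 1) case of the
        unit disk construction sketched earlier.
        """
        verts = sorted(positions, key=positions.get)
        edges = set()
        for i, u in enumerate(verts):
            # Only vertices within distance 1 to the right need be examined.
            for v in verts[i + 1:]:
                if positions[v] - positions[u] > 1.0:
                    break
                edges.add(frozenset((u, v)))
        return edges
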
The notion of sphericity is well defined, as Maehara's theorem shows. Theorem 2.4 ([Mae84]) Every graph of order n is isomorphic to some space graph [unit sphere graph] in n-space. The sphericity (respectively, cubicity) of a graph is just the smallest dimension in which it is a unit disk graph—generalized with respect to dimension—under the L2 (respectively, Chapter 2. Related Research 44 L Q O ) metric. The (not generalized) unit disk graphs correspond exactly to graphs with sphericity at most 2. How is sphericity related to cubicity? Havel [Hav82a, Fis83] shows that cubicity can exceed sphericity. For example the cubicity and sphericity of A^ 1 ; 5 are 3 and 2 respectively. In fact, Havel shows that there are finite graphs of sphericity 2 that have arbitrarily large cubicity. Fishburn [Fis83] shows that sphericity can exceed cubicity for cubicity 2 and 3 but leaves the question open for larger cubicity. Maehara [Mae86] answers this question by showing that there is a complete bipartite graph with sphericity exceeding cubicity for every value of cubicity at least 6. 2.3.3 Grid Graphs A grid graph is a node-induced finite subgraph of the infinite grid [IPS82]. Equivalently therefore, a grid graph is a unit disk graph that has a realization with integer coordinates. More carefully, a graph G is a grid graph if there is a function / : G(V) —>• Z 2 such that, (u, v ) € E i f and only if ||/(u) - /(u)| | < 1 for all u, v e V. Some authors [Had77, BC87] prefer to define grid graphs as (not necessarily induced) subgraphs of the infinite grid (let us call these grid subgraphs). For example, the tree in Figure 2.4 is a grid subgraph, but not a grid graph, since every grid realization of this graph must put two leaves within unit distance of one another. 9 9 o -o 6 9 Figure 2.4: A grid subgraph that is not a grid graph. Grid graphs arise in several applications, including integrated circuit design (cf. [Had77]). Chapter 2. Related Research 45 The infinite grid represents the possible conducting material. A chip isolates a portion of the grid, effectively inducing a rectangular subgraph on the grid. The graph is further complicated by electronic components on the chip, which remove any covered vertices and edges from the graph. A typical problem in this context is to lay out (embed) an interconnection pattern (a graph) on the chip (in the corresponding grid graph), possibly by dilating (subdividing) edges in the pattern, so as to minimize the size of its layout. Since grid graphs (and grid subgraphs) are bipartite, and bipartite graphs are perfect, grid graphs are also perfect and subject to the same efficient algorithms (see Section 2.3). On the other hand, several familiar problems that are NP-complete for arbitrary graphs remain NP-complete for grid graphs, including finding a Hamiltonian path [IPS82] and finding an optimal Steiner tree [GJ77]. GRID S U B G R A P H R E C O G N I T I O N is NP-complete [BC87], even for grid subgraphs that are binary trees [Gre89]. The following simple reduction shows that GRID G R A P H R E C O G N I T I O N is also NP-complete. Let G = (V, E) be an instance of GRID SUB-G R A P H R E C O G N I T I O N . Create a graph G' = (V, E') from G by replacing each edge in E with the edge simulator graph shown in Figure 2.5. Since each edge simulator has Figure 2.5: Simulating edges in grid subgraphs with grid graphs. a unique induced embedding in the grid (up to rotation, reflection, and translation), it follows that G' is a grid graph if and only if G is a grid subgraph. 
Figure 2.6 shows an embedded grid subgraph and the embedded grid graph that results from this reduction. In his PhD thesis [Gra95], Graf generalizes the notion of grid graphs by allowing disks of larger than unit diameter. More formally, he defines the class of UD^ graphs Chapter 2. Related Research 46 Figure 2.6: The grid graph Gi corresponding to the grid subgraph Gs. to be the intersection graphs of disks with diameter d £ Z + and centres in Z 2 . Clearly, all UDd graphs are unit disk graphs, and U D i is equivalent to the class of grid graphs. Graf shows that each UD^ graph G = (V, E) has a realization that can be encoded with 0(V\og(dV)) bits, so that UD r f G R A P H R E C O G N I T I O N is in NP. By comparison, UNIT DISK G R A P H R E C O G N I T I O N is not known to be in NP. He shows that U D 2 graph recognition is NP-complete by reducing it from grid graph recognition. He also conjectures that UD^ recognition is NP-complete for all fixed d. Again by comparison, Section 3.4 shows that UNIT DISK G R A P H R E C O G N I T I O N is NP-hard. Finally, Graf shows that U D 2 G R A P H 3 - C O L O U R A B I L I T Y is NP-complete. Since VBd C VT>kd for all positive integers d and k, it follows that G R A P H 3 - C O L O U R A B I L I T Y is NP-complete for all XJDd where d is even. 2.4 Proximity Graphs Two vertices are adjacent in a unit disk graph if their realizations satisfy a certain proximity condition: they must be within unit distance of one another. Other proximity conditions lead to other kinds of graphs. Chapter 2. Related Research 47 2.4.1 The Delaunay Triangulation Hierarchy The Euclidean minimum spanning tree [Zah71, SH75, Yao82, AESW90] MST(P) of a set of points P in the plane is a minimum spanning tree of the complete graph K(P) with edge weights equal to the Euclidean distance between endpoints. The relative neighbourhood graph [Tou80, Sup83] RNG(P) — (P, EJING) of P has an edge (p, q) between two vertices if and only if the lune through p and q does not contain any other point in P. That is, (p, q) £ ERNG if and only if \\p — q\\ < \\p — r\\ and 11jo — q\\ < \\q — r\\ for all r not equal to p or q. Two vertices p and q of the Gabriel graph [GS69, MS80] GG(P) = (P, EGG) are adjacent if and only if the smallest disk through p and q does not contain any other point in P. That is, (p, q) £ EGG if and only if \\p — q\\ < \\p — r\\ + \\q — r\\ for all r not equal to p or q. Perhaps the best known proximity graph is the Delaunay graph. Three vertices p, q, and r are pairwise adjacent in the Delaunay graph [Del34] DT(P) if and only if the smallest disk through all three does not contain any other point in P. Since the (straight line) plane graph corresponding to DT(P) partitions the plane into triangles and an external face, the Delaunay graph is usually called the Delaunay triangulation. An alternative definition involves the Voronoi diagram [Vor08, PS85]. The Voronoi polygon or Voronoi cell associated with a point p is the set of points in the plane that are closer to p than any other point in P. The Voronoi diagram is the partition of the plane induced by the Voronoi polygons associated with the points in P. Define an edge in EDT to be a pair of points (p, q) if and only if the Voronoi polygons associated with p and q share an edge. That is, the Delaunay triangulation is the straight line geometric dual of the Voronoi diagram. Chapter 2. Related Research 48 These four graphs are related as follows [PS85]: MST(P) C RNG(P) C GG(P) C D T ( P ) . 
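Both the relative neighbourhood graph and the Gabriel graph can be built straight from their definitions by brute force in O(|P|^3) time, which is enough to check the chain above on small examples; the construction algorithms cited below are far faster. The sketch uses the usual empty-lune and empty-diametral-disk formulations (the latter in squared form), with illustrative names.

    from math import dist  # Python 3.8+

    def rng_edges(points):
        """RNG: (p, q) is an edge iff no third point lies inside their lune."""
        edges = set()
        for i, p in enumerate(points):
            for j in range(i + 1, len(points)):
                q = points[j]
                d = dist(p, q)
                if all(max(dist(p, r), dist(q, r)) >= d
                       for k, r in enumerate(points) if k not in (i, j)):
                    edges.add((i, j))
        return edges

    def gabriel_edges(points):
        """GG: (p, q) is an edge iff the disk with diameter pq is empty."""
        edges = set()
        for i, p in enumerate(points):
            for j in range(i + 1, len(points)):
                q = points[j]
                d2 = dist(p, q) ** 2
                if all(dist(p, r) ** 2 + dist(q, r) ** 2 >= d2
                       for k, r in enumerate(points) if k not in (i, j)):
                    edges.add((i, j))
        return edges

On any small point set, rng_edges(points) is a subset of gabriel_edges(points), which gives a quick sanity check of the RNG ⊆ GG containment in the chain above.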
The number of edges in each of these graphs is linear in the number of vertices. His-torically at least, there has been little emphasis on recognizing these graphs. How-ever, Eades and Whitesides [EW94] recently showed that recognizing Euclidean min-imum spanning trees is NP-hard. There has been more emphasis on constructing these graphs from a given point set P; each graph can be constructed in 0(P log P) time ([PS85, Sup83, MS80, PS85] respectively). 2.4.2 Sphere of Influence Graphs Let P be a set of points in the plane and let D = {Dp : p £ P} be a corresponding set of disks, where Dp is the largest disk centered on p whose interior is empty of any point in P other than p. That is, radius(Dp) = min{||p — q\\ '• q £ P and q ^ p} for every p £ P. Avis and Horton [AH85] define the (closed) sphere of influence graph G(P) to be the intersection graph fi(D) of the disks D. Clearly, the sphere of influence graph is a disk graph. They show that the sphere of influence graph is neither a subgraph nor a supergraph of any of the minimum spanning tree, the relative neighbourhood graph, the Gabriel graph, or the Delaunay triangulation. They also show that G(P) has at most 29|P| edges and that the graph can be constructed in 0 ( P l o g P) time, which they show to be optimal. There is a subgraph relationship between sphere of influence graphs and unit disk graphs. In particular, G5l(P)CG(P)CGs2(P), where Si = 2 • mm{radius(Dp) : p £ P} and S2 = 2 • max{radius(Dp) : p £ P}. This also can be stated in terms of intersection graphs. The intersection graph £l(Cm\n), where Chapter 2. Related Research 49 C m i n is a set of equal-diameter disks, is a subgraph of fl(D), where D is a set of disks with varying diameters, concentric with Cm-m, each of which is no smaller than the disks in Gmin- Similarly, the intersection graph Q(D) of a set of disks D with varying diameters, is a subgraph of 0 ( C m a x ) , where C m a x is a set of equal-diameter disks, concentric with D, each of which is no smaller than the largest disk in D. Unit disk graphs share the property with sphere of influence graphs that they do not fall nicely into the Delaunay triangulation hierarchy. Nevertheless, one of the algorithms in Section 3.2.3 efficiently constructs (enumerates the edges of) a unit disk graph by thresholding the Delaunay triangulation. Chapter 3 Unit Disk Graphs How does the geometry of intersecting unit disks affect the associated intersection graphs? To address this question, Section 3.1 makes some basic observations about unit disk graphs that are exploited throughout the thesis. Section 3.2.1 then shows how to exploit a geometric realization of a unit disk graph to find least vertex-weight paths. Section 3.2.2 solves some more fundamental tasks: given a vertex v G V, report a vertex adjacent to v (LISTl(u)), and delete v (MARK(u)) . More specifically, it shows how to execute a sequence of 0(V) calls to LIST1 and M A R K in O ( V l o g V ) time and 0(V) space. Section 3.2.3 shows how to construct a unit disk graph from its geometric realization (that is, from | V | points in the plane) in 0 ( V l o g V + E) time. Recall that efficient construction has historically been of interest for other kinds of proximity graphs (§2.4). Section 3.3.2 shows how to use a realization to find a maximum clique in polynomial time. Unfortunately, the geometry is not always so exploitable. The rest of Section 3.3 shows that several familiar problems remain NP-hard for unit disk graphs. 
In proving some of these ( I N D E P E N D E N T SET and C H R O M A T I C N U M B E R ) , this section draws a connection between unit disk graphs and planar graphs with no degree exceeding 4. Finally, we will see in Section 3.4 that even unit disk graph recognition is NP-hard. Furthermore, penny graph, bounded-diameter-ratio coin graph, and bounded-diameter-ratio disk graph recognition are all NP-hard, even for square disks and coins. That is, both 2-SPHERICITY and 2-CUBICITY are NP-hard. 50 Chapter 3. Unit Disk Graphs 51 3.1 Basic Observations This section makes some elementary observations about disk graphs and unit disk graphs that will prove useful later in the thesis. In particular, this section proves that the star Kit6 is forbidden, and that triangle-free disk graphs are planar. We will also see that all strip graphs are cocomparability graphs. Lemma 3.1 (The Star Lemma) There are no induced stars with degree greater than 5 in any unit disk graph, and no induced stars with degree greater than 4 under the L\ and Loo metrics. Proof: At most five Euclidean disks can pack around a central disk without intersecting one another. More precisely, let / : V —»• R 2 be a realization of a unit disk graph G — (V, E). Let u be any vertex in V , and let I C Adj(u) be a set with more than five vertices, i.e., | / | > 6. Then there must be two vertices v and w in I such that the angle between the segments sv = (f(u), f(v)) and sw = (f(u), f(w)) is at most 60 degrees. But then, since the segments sv and sw have at most unit length, it follows that the segment (f(v),f(w)) also has at most unit length. Therefore (v,w) 6 E and G(I U {v}) is not isomorphic to Ki,\i\-Similarly, disks under the L\ and metrics are closed square boxes. At most four such boxes can pack around a central box without intersecting one another. | Corollary 3.1.1 The star Ki^ is not an induced subgraph of any unit disk graph. Lemma 3.2 In every unit disk graph, there is a vertex v that has degree at most 3 in any induced star. Similarly, in every unit disk graph under the L\ and L^ metrics, there is a vertex v that has degree at most 2 in any induced star. Proof: Let / : V -> R 2 be a realization of a unit disk graph G = (V, E). Let v 6 V be a vertex such that f(v) is an extreme point, a leftmost point, for example. Then the Chapter 3. Unit Disk Graphs 52 neighbourhood of v lies in a half plane through v. The rest of the proof is similar to that of the Star Lemma. | 3.1.1 Connections with Planarity Recall from Section 2.2 that Koebe showed that every finite planar graph is the intersec-tion graph of a set of interior-disjoint disks. So every finite planar graph is a disk graph. However, there are disk graphs (without the interior-disjoint restriction) that are not planar. For example, disk graphs may contain arbitrarily large cliques. This subsection shows (Theorem 3.4) that if a disk graph contains only very small cliques (with at most two vertices), then it is planar. The converse is clearly not true, as demonstrated by a triangular packing of unit disks, which generates a planar disk graph with many triangles (cliques with three vertices). Lemma 3.3 Let f : V —> R 2 (locations) and r : V —> R (disk radii) be a realization of a disk intersection graph G — (V,E). Let (a, b) and (c,d) be edges in E with distinct endpoints. If the line segments (f(a),f(b)) and (/(c), f(d)) cross, then the subgraph induced by {a, b, c, d} contains a triangle. Proof: For readability, let v denote also f(v) for v £ {a, b, c, d}. 
Suppose that (a, b) and (c, d) cross at some point e, as shown in Figure 3.1. Then the sum of each pair of opposite Figure 3.1: If two segments cross in a disk graph realization, then their endpoints induce a triangle. d b c Chapter 3. Unit Disk Graphs 53 sides of the quadrilateral acfed is at most the sum of its diagonals. More precisely, let uv denote the Euclidean distance ||u — v\\ between points u and v. Then ac + bd < (ae + ec) + (fee -f- ed) = (ae + fee) + (ec + ed) = ab + cd. Since (u,v) £ E if and only if itu < r(u) + r(v), it follows that ac + bd < afe + cd < r(a) + r(fe) + r(c) + r(d) = (r(a) + r(c)) + (r(b) + r(d)). Therefore, either ac < r(a) + r(c) or bd < r(b) + r(d). That is, either (a,c) £ E or (b,d) G P . Similarly, ao! + fee < afe + cd, so that either (a,d) £ E or (fc, c) £ Any of these four possibilities implies a triangle. | Theorem 3.4 Every triangle-free disk graph is planar. Proof: Let / : V —> R 2 (locations) and r : V —¥ R (disk radii) be a realization of a disk intersection graph G = (V,E). If, for some pair of edges (a,b) and (c, d) in E with distinct endpoints, the line segments (/(a),/(fe)) and (/(c),/(d)) cross, then the endpoints induce a triangle in G by Lemma 3.3. Otherwise, no such line segments cross, and the graph is planar by definition. | Corollary 3.4.1 No disk graph has an induced subgraph homeomorphic to K^^. Proof: Suppose some disk graph has an induced subgraph homeomorphic to A ^ . Then there is a disk graph homeomorphic to A ^ , since any class of intersection graphs is closed under taking induced subgraphs. This graph is clearly not planar. However, all Chapter 3. Unit Disk Graphs 54 graphs homeomorphic to A ' 3 j 3 are triangle-free. Therefore G is planar by Theorem 3.4, a contradiction. | On the other hand, there are AVfree disk graphs that are not planar. For example, the graph in Figure 3.2 is homeomorphic to K5. Note that although Figure 3.2 is K4-Figure 3.2: A K4-ivee disk graph homeomorphic to K$ free, it is naturally (by Theorem 3.4) not triangle-free . Figure 3.2 is actually a unit disk graph. To see this, notice that in the realization shown, there is exactly one disk that is larger than unit size. Simply shrink it about its centre until it is of unit size. This will make it less visible in the drawing, but will not affect its adjacency. 3.1.2 Eve ry r -S t r ip G r a p h is a Cocomparab i l i ty G r a p h Defini t ion 3.5 A directed orientation G = (V, E) of the complement of a r-strip graph G, and a realization / : V(G) —> R x [0, r] are compatible if they satisfy (u,v) £ E if and only if (u,v) £ E and xj(u) < Xf(y) for all u,v G V. | Note that G really is an orientation since, for every (u,v) € E, either xj(u) < xj(v) or xj(v) < Xf(u), but not both. That is, G is asymmetric, as required. Notice that Chapter 3. Unit Disk Graphs 55 there is precisely one compatible orientation for every realization, but that there might be several realizations that are compatible with any orientation. Lemma 3 .6 For every strip thickness r G [0,\/3/2] ; the compatible orientation associ-ated with any T-strip realization of a graph is a transitive orientation (i.e., a strict partial order). Proof: Let G = (V, E) be a strip graph, / be a strip realization, and G = (V, E) the compatible orientation. Then G is transitive. For if (a, b) £ E and (6, c) G E then xj(a) < xj(b) < Xf(c). Furthermore, since ||/(&) — / ( « ) | | > 1 and 0 < r < y/d/2, we have xf(b)-xf{a) > y/l-(yf{b)-yf(a)y > yjl - T2 > 1/2. Similarly, xj(c) — x/(b) > 1/2. 
Therefore, xf(c) - xf(a) = (x/(c) - xs{b)) + (xf(b) - xf(a)) > 1/2 + 1/2 = 1. It follows that (a,c) G E and (a,c) G E. By a similar proof, compatible orientations for L\ strip graphs (for which 0 < r < 1/2) are also transitive. | Theorem 3 . 7 Strip graphs form a subclass of cocomparability graphs. Proof: Follows immediately from Lemma 3.6. I Chapter 3. Unit Disk Graphs 56 Theorem 3.8 For every strip thickness r > \/3/2, there is a r-strip graph that is not a cocomparability graph. Proof: We will construct C5 in a "thick" strip. Since cocomparability graphs are perfect but C5 is not, it follows that C 5 is not a cocomparability graph. Chapter 4 proves the stronger claim that no induced cycle (odd or even) on five or more vertices is a cocomparability graph (Theorem 4.6). Let y be any value that satisfies < y < min{r, V15/4}. Construct five points P = {Pi5P2,P3,P4,j05} in the strip, where P l = (-3/4,0), p2 = (-1/2, y), Ps = (1/2,y), P4 = (3/4,0), PB = (0,0), as shown in Figure 3.3. Then the unit disk graph G(P) generated by P is isomorphic to P2 P3 Pi Ps P4 Figure 3.3: A set of five points that generate an induced cycle. C 5 (although this is clearly not the only way to realize C 5 in a suitably thick strip). | Chapter 3. Unit Disk Graphs 57 3.2 Exploiting a Geometric Model 3.2.1 Least Vertex-Weight Paths in Unit Disk Graphs Finding short paths is a natural operation on graphs, see ([CLR90] pages 514-578) for example. This section develops an algorithm that finds a path (that minimizes the sum of the weights of its vertices) from a set of source vertices to all other vertices in a unit disk graph. That is, it solves the multiple-source least vertex-weight path problem on unit disk graphs. Any algorithm for finding a least edge-weight path must examine at least Q(E) edges, so that fl(E) is a lower bound for the algorithm's run time. To see this, consider an algorithm on the bipartite graph in Figure 3.4 in which all edges but one, to be determined U V Figure 3.4: Adversary argument for shortest path by an adversary, have weight 1. Suppose the algorithm fails to examine some edge (u, v) from the \U\• \ V\ = Q(E) edges between sets U to V. The adversary then makes (u, v) the 0-weight edge. Therefore, the algorithm clearly fails to find the shortest path (s,u,v,t) from s to t. Chapter 3. Unit Disk Graphs 58 This adversary argument does not hold for least vertex-weight paths. If we could implicitly represent the edges, then we might be able to design algorithms whose com-plexities do not depend on the number of edges. This section does so by exploiting a geometric representation for unit disk graphs. In particular, it solves the multiple-source least vertex-weight path problem for unit disk graphs with nonnegative vertex-weight in O ( V l o g V ) time, where the unit disk graph G = (V, E) is represented by a realization / : V —> R 2 . We will derive the final algorithm by modifying Dijkstra's algorithm in several phases. By way of background, Dijkstra's algorithm solves the single-source shortest path problem on graphs with nonnegatively weighted edges; see ([CLR90] pages 527-532) for example. The version in Table 3.1 solves a variation of this problem; it finds the least edge-weight path from any vertex in S to all vertices in V. A well-known way of solving this multisource variation, without modifying Dijkstra's algorithm, is to modify the input graph by adding a new source vertex that is incident, via zero-weight edges, to all vertices in S. 
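A minimal sketch of that standard transformation follows, assuming an adjacency-dictionary representation and a hypothetical name for the new vertex; it is shown only for contrast with the queue-priming approach taken next.

    def add_super_source(adj, sources, super_source="s*"):
        """Return a copy of an edge-weighted graph with a new vertex joined
        to every source by a zero-weight edge, so that a single-source
        shortest path algorithm run from super_source solves the
        multiple-source problem.

        adj: dict mapping each vertex to a dict {neighbour: edge weight}.
        """
        new_adj = {u: dict(nbrs) for u, nbrs in adj.items()}
        new_adj[super_source] = {s: 0 for s in sources}
        for s in sources:
            new_adj[s][super_source] = 0  # keep the graph undirected
        return new_adj
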
We will usually not have the luxury of effecting such modifications to our unit disk graphs when they are represented by sets of points in the plane. Therefore, Dijkstra's algorithm in Table 3.1 simulates the new source vertex and the l ^ l new edges by "priming" the queue with the source vertices in S. Dijkstra's algorithm uses a priority queue Q, a data structure that supports the following operations. <5.construct(V,c/) builds the priority queue containing the set V with keys d. Q.not-emptyQ is true if Q does not contain any elements. Q.extract-minQ removes and returns the highest priority item, i.e., the one with the lowest key dv, from the queue. Chapter 3. Unit Disk Graphs 59 Table 3.1: Algorithm: DI JKSTRA(G,S) [Dijkstra's Algorithm] Input: A nonnegatively weighted graph G = (V,E,w) and a set of source vertices S C.V. Output: A shortest path forest, given by the parent array TT, and the distance from S to every vertex, given by d. 1 for each vertex u G V 2 do if u G 5 /* Prime the queue with S. */ 3 then du <— 0 /* i.e., simulate w(s,u) = 0 */ 4 else du i— oo 5 TT u «- NIL 6 <5-construct( V, d ) 7 while Q.not-empty() 8 do u <r- Q.extract-min() 9 L <- {v : G £} 10 for each vertex u G L 11 " do if dv > d„ + w(u, v) 12 then d^ d u + w(u,v) 13 Q.decrease-key(v, dv) 14 7T,; U 15 return (7r , d) Chapter 3. Unit Disk Graphs 60 Q.decrease-key(v,dv) increases the priority of v by lowering its key to dv. Dijkstra's algorithm finds least edge-weight paths in 0((V + E) log V) time when its priority queue is implemented as a binary heap [CLR90]. Note that this performance can be improved to 0 ( V l o g V + E) time using Fibonacci heaps [CLR90], but the algorithms presented in this section do not need to do so since they attain their efficiency in other ways. Before modifying the algorithm any further, let us prove1 that the key of any vertex removed from the queue (at Step 8) is no more than that of any subsequently removed vertex. Lemma 3.9 (The Monotone Lemma) The vertex keys form a nondecreasing sequence in the order that Dijkstra's algorithm removes them from the queue. Proof: We need only prove that Dijkstra's algorithm maintains the invariant that no vertex in the queue has a key less than that of an already removed vertex. This invari-ant holds trivially after the algorithm constructs the queue. Thereafter, the algorithm changes the priority of a vertex v only at Step 13, by calling Q.decrease-key(u, dv). At this stage, vertex u has just been removed from the queue, and dv <— du + w(u, v). There-fore dv > du, since all weights are nonnegative. By the invariant, du is no less than the key of any other removed vertex. It follows that dv also is not less than that of any removed vertex. | Vertex Weights We can further modify Dijkstra's algorithm to solve the least vertex-weight path problem. Here, the weight of a path is- the sum of the weights of its vertices. The vertex weights must be nonnegative, as was also true for the edge weights in the previous algorithm. 1This fact is probably well-known. Nevertheless, I was unable to find it stated in a standard reference, so I have chosen to prove it here for completeness. Chapter 3. Unit Disk Graphs 61 Again, the usual way of solving this variation is to modify the graph, not the algorithm. To simulate the weight of a vertex with edge weights, set the weight of every incoming edge to the vertex weight. Since we wish to avoid explicit edge weights, we must change the algorithm by changing every occurrence of w(u,v) to w(v). 
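Before pursuing the vertex-weighted variant, it may help to see the queue-priming idea of Table 3.1 as running code. The following heapq-based sketch, which uses lazy deletion in place of decrease-key and hypothetical names, is illustrative only and is not the implementation used in this thesis.

    import heapq

    def multisource_dijkstra(adj, sources):
        """Least edge-weight distance from any source to every vertex.

        adj: dict mapping each vertex to a dict {neighbour: nonnegative weight}.
        Returns (dist, parent) describing a shortest path forest.
        """
        dist = {u: float("inf") for u in adj}
        parent = {u: None for u in adj}
        heap = []
        for s in sources:              # prime the queue, simulating w(s*, s) = 0
            dist[s] = 0
            heapq.heappush(heap, (0, s))
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:            # stale entry: lazy deletion
                continue
            for v, w in adj[u].items():
                if dist[v] > d + w:
                    dist[v] = d + w
                    parent[v] = u
                    heapq.heappush(heap, (dist[v], v))
        return dist, parent

Replacing the edge weight w(u, v) in the relaxation step by a vertex weight w(v) turns this skeleton into the vertex-weighted variant developed next.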
In particular, we must change the already-simulated w(s, u) in Step 3 from the constant 0 to w(u) for all u £ S. Also, in Steps 11 and 12, we must change the edge-weight term w(u,v) to the vertex-weight term w(v). The modified algorithm V E R T E X - D I J K S T R A is shown in Table 3.2. Table 3.2: Algorithm: V E R T E X - D I J K S T R A ( G , S ) [Dijkstra's algorithm for vertex weights] Input: A nonnegatively vertex-weighted graph G = (V, E,w) and a set of sources S C V. Output: A least-weight path forest, given by the parent array TT, and the weight from S to every vertex, given by d. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 for each vertex u £ V do if u £ S /* Prime the queue with S. */ then du f- to (it) /* i.e., simulate w(s,u) = to (it) */ else du oo TTU <- NIL Q.construct(V, d) while Q.not-empty() ' do u f- Q.extract-minQ L <- {v : (u,v) £ E} for each vertex v £ L then dv •(— du + w(y) Q. deer ease-key (v, d. 'V return (TT, d) The running time has not changed; it is still 0((V + E)logV). We can do better, though, because algorithm V E R T E X - D I J K S T R A changes the priority of each vertex Chapter 3. Unit Disk Graphs 62 exactly once, when the algorithm visits it for the first time. This is shown by the following lemma. Note that the lemma would not be true for Algorithm D I J K S T R A , that is, if the algorithm uses arbitrary edge weights. L e m m a 3 . 1 0 Algorithm VERTEX-DIJKSTRA determines the final priority of a non-source vertex the first time it removes an adjacent vertex from the queue. Proof: Let v be a nonsource vertex, and let u be the first adjacent vertex removed from the queue (at Step 8). Then dv 4— du + w(v) at Step 12, and dv either remains infinite if du is infinite, or attains some finite value if du is finite. Now, let v! be any subsequently removed vertex adjacent to v. By the Monotone Lemma, du < dui when the algorithm removes u' from the queue (at Step 8). Therefore, dv = du + w(v) < du> + w(v), so that the test at Step 11 fails, and Step 13 does not get called to change the priority of v. | This lemma implies that Algorithm V E R T E X - D I J K S T R A , when recalculating prior-ities, need not examine all vertices adjacent to a removed vertex. Rather, it need only examine those nonsource vertices that are not adjacent to any previously removed ver-tices. To facilitate this, we can modify the algorithm to mark source vertices at Step 3.5 and adjacent nonsource vertices at Step 9.5. The new algorithm must also compute the set of adjacent unmarked vertices L accordingly. The complete algorithm M I N - P A T H is shown in Table 3.3. L e m m a 3 . 1 1 Algorithm MIN-PATH finds a lightest path forest in 0(V log V + E) time. Proof: The correctness of the algorithm follows from the correctness of Dijkstra's algo-rithm and the preceding discussion. Using a binary heap priority queue, Q.construct () takes 0(V) time, 0;- e xtract-min() takes 0(log V") time, and Q.decrease-key() takes 0(log V) time (see [CLR90] pages 140-152). Vertex operations therefore take O ( V l o g V ) time Chapter 3. Unit Disk Graphs 63 Table 3.3: Algorithm: MIN-PATH(G ,5) [Least Vertex-Weight Path] Input: A nonnegatively vertex-weighted graph G — (V, E, w) and a set of sources S C V. Output: A lightest path forest, given by the parent array 7r, and the weight of the lightest path from S to every vertex, given by d. 
1 for each vertex it £ V 2 do if u £ S 3 then du «— w(u) 3.5 4 5 6 7 8 9 9.5 10 11 12 13 14 15 M A R K ( u ) else du ^ — co 7T„ <r- NIL <5-construct(V, d) while Q.not-empty() do it <— Q.extract-minQ L <— {i; : v is not marked and (it, u) £ E} M A R K ( L ) for each vertex v £ L do if dv > du + iu(u) then c?„ —^ d u + w(v) Q.decrease-key(i;, dv) TTy f- It return (7r, d) Chapter 3. Unit Disk Graphs 64 since the algorithm removes each vertex from the queue exactly once, marks it exactly once, and therefore decreases its key at most once. The remaining step is Step 9, which examines every edge twice in a straightforward implementation. This takes 0(E) time and the lemma follows. | Exploiting Unit Disk Graphs Now that we have isolated the dependence of Algorithm M I N - P A T H on the edges of the graph, we can eliminate this dependence for unit disk graphs. We will do this by efficiently implementing vertex marking, as well as Step 9 in Algorithm M I N - P A T H , for unit disk graphs represented as a set of points in the plane. We will follow the theoretical framework proposed by Imai and Asano [IA86], who solve depth first search and breadth first search in 0 ( V l o g V) time for another class of geometrically represented graphs. In the process, we will also derive depth first and breadth first search algorithms that run in 0(V\ogV) time on unit disk graphs. We will require two basic operations: LISTl(u) and M A R K ( u ) . The MARK(u) operation marks a vertex v, as in Algorithm M I N - P A T H . The operation LISTl(u) returns an unmarked vertex adjacent to v if one exists, and returns NIL otherwise. We can therefore implement Step 9 (which includes Step 9.5) in the M I N - P A T H algorithm by calling Algorithm L I S T - D E L E T E in Table 3.4. The following property of L I S T - D E L E T E is straightforward. Property 3.12 ([IA86]) Every sequence of 0(V) calls to LIST-DELETE makes 0(V) calls to LIST1 and to MARK. This, together with our previous discussion of Algorithm M I N - P A T H , proves the following lemma. Chapter 3. Unit Disk Graphs 65 Table 3.4: Algorithm: L IST-DELETE(G, u) [Return and mark (delete) all un-marked vertices adjacent to u in G] Input: A graph G = (V, E) and a vertex u £ V Output: L = {v : v is not marked and (u,v) £ E} Side Effect: AU vertices in L are marked. 1 v <- LISTl(u) 2 L 4- 0 3 while v ± NIL 4 d o i f - L + W 5 M A R K ( u ) 6 v <- LISTl(w) 7 return L Lemma 3.13 Suppose any sequence ofO(V) calls to LIST1 and MARK can be executed, on-line, in gt(V, E) (= ffc(V)) time and gs(V,E) (= 0(V)) space. T/ien Algorithm MIN-PATH finds a lightest path forest in 0(gt(V,E) + V l o g V ) time and 0(gs(V,E)) space. Proof: As in the proof of Theorem 3.11, 0;- c o n s truct() takes 0(V) time, Q.extract-min() takes O(log V), time, and Q.decrease-key() takes O(log V) time using a binary heap priority queue. Vertex operations therefore take 0(gt(V,E) + V l o g V ) time since the algorithm removes each vertex from the queue exactly once, marks it exactly once, and therefore decreases its key at most once. The remaining step is Step 9, which the algorithm executes exactly once for each of | V | vertices on the queue. This therefore takes 0(gt(V, E)) time by Property 3.12, and the lemma follows. | Suppose a unit disk graph G = (V,E) is represented by a realization / : V —y R 2 . Then we can implement a call to LISTl(u) with a disk query, that is, by finding a point in / ( V ) that lies within the unit radius disk centered at f(v). We can implement a call to Chapter 3. 
Unit Disk Graphs 66 M A R K ( u ) by simply deleting v from V. We will see how to implement these operations efficiently in Section 3.2.2. In particular, we will prove the following theorem. Theorem 3.17 (Section 3.2.2) Given a finite set of points f(V) in the plane, a sequence of 0{V) disk queries and point deletions execute in 0(V log V) time, given 0(V log V) preprocessing time with 0(V) space. Since LIST1 and M A R K are effectively equivalent to disk query and site deletion respectively, we immediately have the following corollary. Corollary 3.13.1 A sequence of 0(V) calls to LIST1 and MARK on unit disk graphs can be executed in 0(VlogV) time and 0(V) space. Lemma 3.13 and Corollary 3.13.1 therefore prove the main result of this section, as stated by the following theorem. Theorem 3.14 Given a representation of a unit disk graph by its realization (V,f), a least vertex-weight path can be found in 0 ( V l o g V) time and 0(V) space. In addition to finding least weight paths, efficiently, we can also perform an efficient depth first search on a unit disk graph, given a realization. Imai and Asano [IA86] show that depth first search can be executed quickly if LIST1 and M A R K can be executed quickly. The following theorem makes this more precise. Theorem 3.15 ( [ I A 8 6 ] ) Suppose a sequence of 0(V) calls to LIST1 and MARK can be executed, on-line, in gt(V, E) (= O(V)) time and gs(V,E) (= O(V)) space. Depth first search can be executed in 0(gt(V, E)) time and 0(gs(V, E)) space. Theorem 3.16 below applies Theorem 3.15 to unit disk graphs. We will use Theo-rem 3.16 in Section 5.1 to determine if unit disk graphs are connected in O ( V l o g V ) time. Chapter 3. Unit Disk Graphs 67 Theorem 3.16 Depth first search on unit disk graphs can be executed in 0(V log V) time and 0(V) space, given a geometric representation. Proof: The theorem follows from Theorem 3.15 and Corollary 3.13.1. | 3.2.2 Semi -Dynamic Single-Point Ci rcu lar -Range Queries We are now ready to discuss disk queries in more detail. Given a finite set S of sites (points in the plane), a disk query asks for just one site inside a (closed) query disk with unit radius, or a report that no such site exists. Equivalently, it asks for a site within unit distance of a query point, that is, the center of the query disk. In addition, we want to delete sites from the set. This section presents an algorithm that executes a disk query operation in O(log S) time and deletes a site in 0(log S) amortized time, given 0(S log S) preprocessing time with O(S) space. The disk query problem resembles circular range searching—report all sites in a query disk [PS85]. Another related problem is nearest neighbour search—report the site closest to the query—which can be solved with Voronoi diagrams [PS85]. These results are not immediately useful for our needs, since they do not support deletions efficiently. Note, however, that the algorithms developed in this section are not fully dynamic in that they do not support insertion. The D a t a Structure At the coarsest level, our disk-query data structure is a square grid in the plane. The \S\ sites fall into the cells of this grid, and the contents of each cell are organized in a data structure. At a finer level, the data structure explicitly represents only those cells that contain sites, \S\ cells at most. More concretely, construct the data structure by conceptually partitioning the plane into square cells with unit diagonals, that is, with Chapter 3. Unit Disk Graphs 68 \Jl/2 sides. 
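The coarse level of this structure can be sketched as follows (a simplified, illustrative Python version: a hash map of occupied cells and a plain per-cell scan stand in for the balanced tree of cells and the test-tube maxima structure of Appendix A, so it does not attain the O(log S) query time established below; the cell labelling and the 21-cell search are described in the next paragraphs).

import math

CELL = 1 / math.sqrt(2)                     # square cells with unit diagonals

def cell_label(site):
    """The integer label of the cell containing `site`, via the floor function."""
    return (math.floor(site[0] / CELL), math.floor(site[1] / CELL))

class DiskQueryGrid:
    """Occupied grid cells supporting unit-radius disk queries and deletions."""

    def __init__(self, sites):
        # Only cells that contain sites are stored; sites are (x, y) pairs.
        self.cells = {}
        for s in sites:
            self.cells.setdefault(cell_label(s), set()).add(tuple(s))

    def query(self, q):
        """Return some remaining site within unit distance of q, or None."""
        cx, cy = cell_label(q)
        # A unit-radius disk meets only cells within two rows and two
        # columns of the centre cell (at most 21 of the 25 scanned here).
        for dx in range(-2, 3):
            for dy in range(-2, 3):
                for s in self.cells.get((cx + dx, cy + dy), ()):
                    if (s[0] - q[0]) ** 2 + (s[1] - q[1]) ** 2 <= 1.0:
                        return s
        return None

    def delete(self, site):
        """Remove a site so that later queries no longer report it."""
        bucket = self.cells.get(cell_label(site))
        if bucket is not None:
            bucket.discard(tuple(site))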
To avoid ambiguous site placement, assume that each cell is closed on the left and bottom, and open on the right and top. Label a cell with the coordinates of its lower left corner. We can determine the label of the cell that contains a given site in constant time since our model of computation includes the floor function. The data structure is a balanced tree (ordered lexicographically by label) of those cells that contain sites. In this way we can determine in which cell in the tree a site lies, or determine that the site is not in any cell in the tree, in 0(log S) time. We can also insert a new cell in ©(logs') time. We can therefore create the complete tree of cells in 0(S log S) time as follows. For each site v, determine if the cell containing v is already in the tree. If not, create a new cell, insert it into the tree, and associate a linked list (containing just v) with the cell. Otherwise, just append v to the end of the list associated with the cell in constant time. Each cell must now organize its list of sites into its own data structure. Searching Cells Before discussing how the sites within a cell are organized, let us first examine the structure of a query with respect to these cells. Note that the cells covered by any query disk come from a set of 21 cells as shown in Figure 3.5. We can therefore answer the query by looking in at most 21 cells. Consider the cell in which the center of the query disk lies. This cell always lies completely within the query disk. Therefore, if it is in the tree, any site within it satisfies the query. The other covered cells lie either strictly to the left, to the right, above, or below the query point. Suppose first that the cell lies strictly below the query point q = (xq,yq); the other three cases are handled symmetrically. We can therefore assume that our query disk is in fact a query "test tube", that is, the union of the query disk and the infinite Chapter 3. Unit Disk Graphs 69 Figure 3.5: Any unit-radius query disk covers at most 21 cells. rectangle [xq — 1, xq +1] x [yq, oo). The center of the test tube is still defined to be (xq,yq). The following definitions and observations are from Appendix A. A tube T contains a subset P C S of sites if P C T, and it is empty if it does not contain any sites. A tube and a site on its boundary are said to be incident. A tube is said to be supporting (with respect to S) if it is incident to exactly one site and is otherwise empty. A site in S is said to be a test tube maximal site with respect to S, or just a maximal, if it is incident to some supporting tube. By the following lemma, if we wish to report a site in a test tube, it suffices to examine only maximal sites. Lemma A.2 (Appendix A) A test tube contains a site if and only if it contains a maximal site. Chapter 3. Unit Disk Graphs 70 Binary Searching Maxima Order the maxima from left to right. Say that two maxima are neighbouring if they are adjacent in the order. Then binary search can be used to report a maximal in a query test tube. The following lemmas provide the details. The proofs are presented in the appendix. Lemma A.3 (Appendix A) If a site p is incident to a test tube that contains a site to the left of p and a site to the right of p, then p is not maximal. Lemma A . 5 (Appendix A) Let p be a maximal and let Tp be any incident test tube. If Tp does not contain the right neighbour of p, then Tv does not contain any sites to the right of p. 
Similarly, if Tp does not contain the left neighbour of p, then Tv does not contain any sites to the left of p. We are now ready to search for a maximal in a test tube. Let p be an arbitrary maximal and let T be the query test tube. If T contains p, then we are done. If p lies strictly left of T, then any sites in T must lie to the right of p. Similarly, if p lies strictly to the right of T, then any sites in T must lie to the left of p. Otherwise, if p lies neither strictly to the left nor strictly to the right of T, then we can lower T onto p to get test tube T". If T' does not contain any sites, then neither does T (since T is strictly contained by T'). By Lemma A.3, at most one neighbour of p lies in T'. By Lemma A.5, if no neighbour lies in T", no sites at all lie in T' and we are done. Also by Lemma A.5, if a neighbour does lie in T", then any other sites in T' (and therefore any sites in T) must lie on this neighbour's side of p. Any data structure that supports binary search can therefore determine, in O(log S) time, if one of the (at most \S\) maxima lies in a given test tube T. Such a data structure is developed in Appendix A; the test tube data structure represents the test Chapter 3. Unit Disk Graphs 71 tube maxima of the set of sites S. The following two theorems from the appendix describe the performance of this data structure. Theorem A.11 (Appendix A) The test tube data structure (B(S), D(S)) of sites S can be built in 0(S log S) time and O(S) space. Theorem A.17 (Appendix A) O(S) sites can be deleted from the test tube data struc-ture (B(S), D(S)) of sites S in 0(S\ogS) time. Since any cell could be either to the left, right, above, or below a query point, each cell must maintain four sets of maxima. We can derive the other three directions for cells symmetrically, by rotating the coordinate system in 90 degree increments. This completes the disk query data structure. A Summary of Algorithms In summary, the data structure requires the following three algorithms. The preprocessing algorithm builds the data structure by constructing the tree of cells in 0(S log S) time. Then, for each cell 6, it builds four sets of maxima from its n& sites in 0(nf,logn(,) time. Since Yl,bnb = \S\, and log b < log \ S\, the sum of the 0(nt, log rib) costs over all nonempty cells is 0(S log S). To find a site in a query disk, the query algorithm must examine at most 21 cells. The locations of these cells are all closely related to the location of the cell containing the query point, so the query algorithm can find each each such cell in O(log S) time. The query algorithm can then extract a suitable site in constant time from the center cell, or in Oilognb) time from any of the others. To delete a site, the deletion algorithm finds the cell that contains the site in the tree in (9(log S) time. It then deletes the site from all four maxima lists in 0(log rib) = O(log S) amortized time. This proves the following theorem. Chapter 3. Unit Disk Graphs 72 Theorem 3.17 Let S be a set of sites in the plane. A sequence of O(S) disk queries and site deletions execute in 0(S log S) time, given 0(S log S) preprocessing time and O(S) space. 3.2.3 Building an Adjacency List from a Mode l Recall from Definition 1.2 that the unit disk graph G$(P) generated by a set V of points and a real threshold 8 is the graph (V, E) where E = {(p,q) : \\p — q\\ < 8}. How does Gs(P) change as 8 changes? In particular, how does the number of edges change? 
Surprisingly, the number of edges generated by two different distance units are linearly related (Theorem 3.18 below). We saw how a geometric realization of a unit disk graph can be used to implicitly represent the edges, and consequently to design algorithms whose computational com-plexity does not depend on the number of edges. What can we do if we have only the geometric representation of the graph, and we wish to compute its edges? One solution would be to simply look at every pair of vertices, and check if the corresponding points are within unit distance of one another. This method would take Q(V2) time, which is optimal for dense graphs. We can do better if our graphs are sparse. Theorem 3.18 makes it possible to report efficiently the edges of a generated unit disk graph Gs(V) = (V, Es) in 0(V log V + Es) time. This section demonstrates this property by presenting and analyzing two different algorithms. Given a set V of points in the plane, these algorithms report the edges Es in the unit disk graph Gs{V, Es) for all distance thresholds 8. Although these two algorithms are correct for different reasons, we will see that their efficiency is due to Theorem 3.18. This reporting problem is also known in the literature [BSW77, DD90, LS94] as the fixed-radius all-nearest neighbours problem: given a set P G Hk of points and a fixed radius 8, report all pairs of points within 8 of each other. Chapter 3. Unit Disk Graphs 73 Theorem 3.18 Let 8 < A be two nonnegative distance thresholds. If G$(V) = (V, E$) and GA(V) = (V, E&) are unit distance graphs generated by the same set V of sites, then \EA\ = 0(V + Es). Proof: This proof generalizes a demonstration by Dickerson and Drysdale [DD90] show-ing (effectively) that \E2\ = 0(V + E\). Let NA[v) denote the closed neighbourhood of v in GA. That is, NA(v) is the set of sites in P within A units of f(v). To simplify the presentation, let us assume that all graphs have loops, that is, that (v,v) G E(G) for every v £ V(G). Let us also assume that each undirected edge is really two undirected edges. Then EA = \J{{u,v):ue NA{v)} vev and \EA\ = J2\NA(v)\. (3.1) vev The rest of this proof shows that J2vev \NA(V)\ = O(Es) by deriving a lower bound for \E$\. The basic idea is to partition the plane into regions with diameter at most 8. Any pair of sites in such a region will generate an edge in E$. We will then associate, with each site v, a region containing at least a constant fraction of the sites in NA(v). Each such region therefore contributes roughly |A^A("y)|2 edges to E$. Although a region might be associated with more than one site, it will not be associated with more sites than a constant multiple of \NA(v)\. The (disjoint) sum of edges in these associated regions is therefore at least roughly J2(\NA(V)\2/\NA(V)\)=Y,\NA(V)\. vev vev More concretely (and more carefully), overlay the plane with a square grid whose cell diagonals are 8 units long, that is, whose sides are 8/y/2 units long. Let the cells be closed on the left and bottom, and open on the right and top. These cells clearly partition the plane and have diameter at most 8, as outlined in the previous paragraph. Chapter 3. Unit Disk Graphs 74 For any vertex v, the neighbourhood NA(V) comes from at most a constant number k\ of these grid cells. The exact value of this constant is not crucial, but it is nevertheless easy to exhibit a value that will suffice. 
For example, there are at most k = [ s k l = [ ^ 2 1 (3'2) columns strictly between v and the leftmost site in N&(v), since each cell has width 8/\/2. Therefore, counting the leftmost, rightmost, and center columns, the sites in N&(v) come from at most 3 + 2k columns. The same argument holds for rows, so N&(v) touches at most ki = (3 + 2k)2 cells. Let the maximum cell Cv be the cell that contains the most sites from NA(v) (break ties arbitrarily). The Pigeonhole principle implies that the maximum cell Cv contains at least \NA{v)\/kx sites from NA(v). That is, i c i > (3.3) fci where \CV\ is the number of sites in Cv (note that not all of these \CV\ sites need belong to NA(V)). AS promised, every pair of sites in Cv generates an edge in E$ since every such pair is separated by at most 8 units. Does this mean that \E$\ > Z^ev jC^.|2? No, because cell Cv may be counted more than once in this way if it is the maximum cell for more than one site, that is, if Cv — Cu for some site u ^ v. Fortunately, there is a constant k2 such that no cell Cv is counted more than /c2 j C v | times. Again, the exact value of k2 is not crucial, but we can easily derive a suitable one. Suppose Cu = Cv for some site u. Then there are again at most k columns (see Equation 3.2) strictly between u and Cv. By the same argument as above, u must come from one of k2 = (3 + A;)2 cells. But if the cell C* containing u contains more than \CV\ sites, then Cv would could not be the maximal cell for u, since C* contains only sites from NA(U). Therefore u must be one of at most A^ICJ sites. Chapter 3. Unit Disk Graphs 75 A lower bound for the number of edges in Gs is, therefore, ^ number of pairs in Cv 1 sl - h \{u-c* = cv}\ > ^2 \Cy\2 veVk2\Cv\ Y E 1^1 (3-4) Substituting Inequality 3.3 into Inequality 3.4 yields 1 k\ ko ^-tr That is, E I ^ H I ^ ^ I ^ I ( 3- 5) vev Finally, substituting Inequality 3.5 into Inequality 3.1, \EA\ < E 1^4*01 < kx k21£7$1 • Finally, if E's and E'A are versions of and EA without loops and not directed, then \E'A\ = 0(EA) = 0{ES) = 0(V + E's), and the theorem follows. Note that we cannot do without the 0(V) term in general. For example, if V = {(x, 0) : x = 0 ,1 ,2 , . . . , n}, 8 = 0.5, and A = 28, then Gs(V) has no edges at all, while GA(V) has n edges. However, this is only true if Gs(V) is not connected, as shown by the following corollary. | Coro l l a ry 3.18.1 Let 8 < A be two nonnegative distance thresholds. If G$(V) = (V, Es) and GA(V) = (V,EA) are unit distance graphs generated by the same set V of sites, and Gs(V) is connected, then \EA\ = 0(E$). Chapter 3. Unit Disk Graphs 76 Proof: The corollary follows from the theorem since | V | = 0(E$) for connected graphs Gs = (V,Es). I A Plane-Sweep Algorithm To simplify the notation for Algorithm A D J - P L A N E - S W E E P (Table 3.5), assume a unit distance threshold S — 1. Algorithm A D J - P L A N E - S W E E P maintains three indices p, q, and r; and two sets L and Bar of points. The set L is a one-dimensional array of sites, indexed from 1 to | V | , and p, o, and r are indices into L; for convenience, write (xp,yp) = Lp. Array L stores a lexicographically sorted list of the sites in V. That is, p < q implies xp < xq, or xp = xq and yp < yq. The array L supports two operations: L.build(V), which constructs the lexicographically sorted list; and L,-, which returns the site indexed by i. For each p 6 V, define the set Barp = {q : q £ V, q < p, and xp — 1 < xq}. 
A D J - P L A N E - S W E E P reuses storage by dropping the subscript, thereby representing each such set with the single data structure Bar. See Figure 3.6 for a typical event in the plane-sweep. Algorithm A D J - P L A N E - S W E E P examines each site in L from left to right using the index p. It maintains the invariant that indices r and p delimit the contents of the set Bar. To do so, it increases r until xp — 1 < xr. In the process, it removes all sites that have dropped out of the bar. Correctness Let G(V) = (V, E) be the unit disk graph generated by V and suppose that (Lp, Lq) G E. By definition, | | L p — Lq\\ < 1. Suppose, without loss of generality, that q < p. Therefore q will be in the Bar when the algorithm encounters p since \xp — xq\ < 1. The algorithm therefore correctly reports edge (Lp,Lq) at Step 9 when it examines p. Furthermore, the test in Step 8 ensures that it reports only edges in G(V). Chapter 3. Unit Disk Graphs 77 • • • • • • • P m a 1 » • " 1 * • Figure 3.6: Algorithm A D J - P L A N E - S W E E P examines site p Table 3.5: Algorithm: A D J - P L A N E - S W E E P ( V ) [Construct the list of edges for the unit disk graph G(V) generated by V by sweeping the plane.] Input: A set V of sites. Output: The edges E of the unit disk graph G(V) = (V, E) generated by V. 0 L.build(V) 1 2 p <r- r <- 1 3 while p < \V\ /* examine Lp */ 4 do while r < p and xr < xp - 1 /* Bring Bar up to date */ 5 do Bar <— Bar \ {r} 6 r <- r + 1 7 for each site q 6 fiar such that y g £ [yp — 1, yp + 1] 8 do if ||L„-Z,g|| < 1 9 then E ^ E U {(Lp, Lq)} 10 Bar <- 5ar U {p} 11 p - ( - p + l 12 return E Chapter 3. Unit Disk Graphs 78 Complexity This algorithm can be implemented to run in 0(V log V -f E) time. The preprocessing operation L.build(V) just sorts the sites in 0 ( V l o g V) time. Since the algorithm inserts each site into the Bar exactly once and removes each site at most once, it makes 0(V) insertions and deletions. The implementation represents the Bar as a balanced tree, a red-black tree for example [Sed83], sorted by y-coordinate. Inserting (Step 10) or deleting (Step 5) a site therefore takes 0(log V) time for a total of 0(V log V) time. Balanced trees support the extraction of sites in Step 7 in O(log V + Ip) time, where Ip is the number of sites in the unit interval about yp [Sed83]. Examining all V intervals therefore takes o ( v i o g v + £ ; / p ) P=i time. The sites examined by Step 7 are all within y/2 of p (see Figure 3.6 again). Therefore E p = i IP < \E^\ where G^(V) = ( V , E ^ ) . By Theorem 3.18, it follows that: \v\ J2lP = 0(V + E). P=i The entire algorithm therefore takes 0(V log V) + 0(V + E) = 0(V log V + E) time. We have established the following theorem. Theorem 3.19 Algorithm ADJ-PLANE-SWEEP returns the edges of the unit disk graph G(V) = (V, E) generated by a set V of points in 0(VlogV + E) time. A Delaunay Triangulation Algorithm The next algorithm is due to Dickerson and Drysdale [DD90]. It has the advantage that it allows a preprocessing stage, which is time-critical, and which is independent of the distance threshold 5. This preprocessing stage takes 0 ( V l o g V ) time. The algorithm can then report the edges in Gs(V) in 0(V + E) additional time. Briefly, the algorithm Chapter 3. Unit Disk Graphs 79 performs a depth-first search on the thresholded Delaunay triangulation TDT(V)—the Delaunay triangulation of V less any edges that are longer than 8—from every vertex p G V. 
The algorithm stops the current branch whenever it visits a vertex that is more than 8 units from p. The algorithm reports an edge from p at all other visited vertices. Table 3.6 provides more details. Table 3.6: Algorithm: A D J - D E L A U N A Y ( V , 8) [Construct GS(V) by searching a thresholded Delaunay triangulation] Input: A set V of sites and a distance threshold 8. Output: The edges Es of the unit disk graph Gs(V) = (V, Es) generated by V. 0 Construct the Delaunay triangulation DT(V) of V. 1 Tm(V)<-DT(V)\{{p,q):\\p-q\\>6} 2 E§ f- 0 3 for every vertex p G V /* examine p */ 4 do depth first search T D T ( V ) from p, but stop each branch when the current site is farther than 8 from p. 5 for each visited vertex q 6 do if | | p - q\\ < 8 7 then Es^ EsU{{p,q)} 8 return Es Correctness Let (p, q) be an edge of the unit disk graph generated by V. Then there is a path from p to q, in the Delaunay triangulation of V, such that every site on the path is within 8 units of p. To see this, consider the line segment between p and q. This line segment intersects several adjacent cells in the Voronoi diagram of the sites V. The sequence (p = O i , O2, • • •, 0k = q) of sites corresponding to these Voronoi cells is therefore a path in the Delaunay triangulation by the duality of the Delaunay triangulation and the Voronoi diagram (see Section 2.4.1). Now consider any site (Voronoi center) O t in this sequence, and let x be any point on the part of the segment intersecting the Voronoi Chapter 3. Unit Disk Graphs 80 Figure 3.7: The disk through p and q contains Voronoi center Oi cell corresponding to 0, (see Figure 3.7). By the definition of Voronoi diagram, x is closer to Oi than to any other site in V. It follows that the circle about x and through Oi (that is, of radius — x||) does not contain any other site in V. In particular, it does not contain p or q (unless Oi equals p or q respectively). The circle with diametric points p and q therefore contains the circle about x and so contains Oi also. Since in addition \\p — q\\ < 5, the site Oi is within 8 units of p and q and of all other sites Oj in the path. That is, ( 0 1 ; 02, • • •, Ok) is a path from p to q in the thresholded Delaunay triangulation. Consequently, Step 7 will report the edge (p, q) when it examines p. Complexity Let Gp — (VP,EP) be the connected component (of the thresholded De-launay triangulation TDT(V)) that contains p. Suppose that Step 4 visits a vertex q so that q € Vp. Since Step 1 thresholds the Delaunay triangulation, q must be within 8 of its parent in the depth first search. This parent must be within S of p since Step 4 Chapter 3. Unit Disk Graphs 81 cuts off depth first search when the distance from p to the current site exceeds 8. There-fore, q is within 28 of p by the triangle inequality. It follows that (p, q) G E%s where G2siyp) = (VP,E%$). Since Gp is connected, Corollary 3.18.1 applies to show that El& = 0(EP). Since £ p e v \ E P \ = \E\, it follows that £ p £ l / \E% S \ = 0(E) so that Step 6 executes 0(E) times. The Delaunay triangulation can be constructed in 0(V log V) time [PS85], and thresh-olded in 0(V) time (the number of edges of a Delaunay triangulation is linear in the number of sites). However, Volker Turau has discovered a way to implicitly threshold the Delaunay triangulation in the preprocessing step [Tur91], thereby saving the 0(V) thresholding cost and the Q(V) cost of examining each vertex in Step 3. Represent the Delaunay triangulation as an adjacency list. 
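As an aside, the basic Delaunay-based search of Table 3.6 can be sketched in Python as follows, using scipy.spatial.Delaunay to build the triangulation (an assumption of this sketch; the thesis does not rely on that library) and the explicit thresholding of Step 1 rather than Turau's implicit thresholding, whose adjacency-list ordering is described in the remainder of this paragraph.

import numpy as np
from scipy.spatial import Delaunay

def adj_delaunay(points, delta):
    """Report the edges of the unit disk graph G_delta(V).

    `points` is an (n, 2) array-like of sites (at least three, not all
    collinear, so that the Delaunay triangulation exists).  The
    triangulation is thresholded explicitly, and a depth-first search
    from each vertex p is cut off as soon as it would leave the disk of
    radius delta about p.
    """
    pts = np.asarray(points, dtype=float)
    tri = Delaunay(pts)

    # Adjacency lists of the thresholded Delaunay triangulation.
    neighbours = [set() for _ in range(len(pts))]
    for simplex in tri.simplices:
        a, b, c = (int(i) for i in simplex)
        for u, v in ((a, b), (b, c), (a, c)):
            if np.linalg.norm(pts[u] - pts[v]) <= delta:
                neighbours[u].add(v)
                neighbours[v].add(u)

    edges = set()
    for p in range(len(pts)):
        stack, seen = [p], {p}
        while stack:
            u = stack.pop()
            for q in neighbours[u]:
                if q not in seen and np.linalg.norm(pts[q] - pts[p]) <= delta:
                    seen.add(q)
                    stack.append(q)
                    edges.add((min(p, q), max(p, q)))
    return sorted(edges)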
List every vertex p in nondecreasing order of the length of the shortest edge incident with p. List every neighbour q of each vertex p in nondecreasing order of the length of edge (p,q). Then, for any given threshold 8, Step 3 can examine the vertices in order, stopping when it encounters a vertex with every incident edge longer than 8. In this way, it examines only vertices that are not isolated in Gs (and at most one more). Similarly, Step 4 examines edges out of q in order, stopping when it encounters an edge longer than 8 (charge the cost of detecting a long edge to visiting q). The following theorem summarizes. Theorem 3.20 ([DD90, Tur91]) Given a set V of points in the plane, algorithm ADJ-DELAUNAY takes 0(V \ogV) preprocessing time and, given a thresholds, returns the edges of the unit disk graph Gs(V) = (V, E) in 0(E) additional time. The discussion above implies that the connected components of a unit disk graph are equivalently the connected components of the underlying thresholded Delaunay triangu-lation. These connected components can therefore be extracted in linear time by depth Chapter 3. Unit Disk Graphs 82 first searching the thresholded Delaunay triangulation. Theorem 3.21 The connected components of a unit disk graph G$(V) can be found in 0(V log V) time, given a set V of points in the plane. Other Work There are other solutions to the fixed-radius all-nearest neighbour problem in the liter-ature. Bentley, Stanat, and Williams [BSW77] have developed a solution that involves partitioning the plane into a grid of square cells, much like the proof oi Theorem 3.18. They presented their algorithm in terms of the metric in fc-dimensional space, but it is not difficult to modify it to work with the Li metric in the plane. Again, Theorem 3.18 explains the computational complexity of their algorithm. However, the time required to access the cells containing sites in their algorithms is also a factor in the computational complexity. Bentley et al. mention three possible ways to implement this set of cells, leading to three different time bounds for the complete algorithm, as the following table summarizes. Implementation of Occupied Cells Time Complexity hash table balanced tree two-dimensional array 0(V + E) average case 0(V log V + E) worst case 0(V + E) worst case The two-dimensional array solution actually allocates space for all cells, occupied or not, that intersect the bounding rectangle of the sites. Since the 16*1 sites may have an arbitrarily large range of coordinates, the two-dimensional array may require arbitrarily large space. The hash table or the balanced tree solutions, on the other hand, require only a linear amount of space. Subsequent to (and independent of) the development .of Algorithm A D J - P L A N E -SWEEP, Lenhof and Smid [LS92, LS94] described a similar algorithm that runs in Chapter 3. Unit Disk Graphs 83 Problem Unit Disk Performance Ratio Arbitrary Graph Performance Ratio chromatic number 3 0(V(G)^l^f) [Hal90] vertex cover 3/2 2 [GJ79] independent set 3 not constant [ALM+92] domination 5 O(log V) [Joh74] independent domination 5 not constant [Irv91] connected domination 10 ? total domination 10 ? Table 3.7: Performance ratios for approximation agorithms on unit disk graphs 0( V l o g V + E) time. Their approach is in effect a combination of our plane-sweep algorithm and the cell method described in the last paragraph. This combination makes it easier to adapt to higher dimensions than the pure plane-sweep algorithm. 
One at-tractive feature of their presentation is that they have recorded an animation of their algorithm on an easily-obtained videotape [LS94]. 3.3 NP-Complete Problems Restricted to Unit Disk Graphs Many well-known NP-complete problems on arbitrary graphs, in particular chromatic number, vertex cover, independent set, domination, independent domination, and con-nected domination [GJ79] remain NP-complete for unit disk graphs. Clark, Colbourn, and Johnson provide reductions for all of these problems in their recent paper [CCJ90]. Remarkably, all of these problems have efficient approximation algorithms. These algo-rithms, due to Marathe, Breu, Hunt, Ravi, and Rosenkrantz [MHR92, M B H + 9 5 ] , achieve the performance ratios in Table 3.7, which compares them with the best-known perfor-mance ratios for arbitrary graphs. Section 3.3.3 presents the details for the chromatic number problem. Chapter 3. Unit Disk Graphs 84 Clark et al. notice that "For all of the problems2 mentioned here, the complexities for unit disk graphs and for planar graphs agree." We saw in Chapter 1 that there are some tantalizing connections between planar graphs and unit disk graphs. Nevertheless, the observation by Clark et al. may be misleading in its generality. In fact, the complexities of these problems agree for unit disk graphs and for planar graphs with maximum degree 4. Conjecture 3.22 If a problem V is NP-complete for planar graphs with maximum degree 4, then V is NP-complete for unit disk graphs, too. Discussion For example, every grid graph is a planar graph with no degree exceeding 4. If a problem is NP-complete for grid graphs then it is clearly NP-complete for unit disk graphs, which include grid graphs as a special case. For example, the problem D O M I N A T I N G SET has this characteristic [CCJ90]. But there seem to be stronger connections. If a problem is NP-complete for planar graphs with maximum degree 4, then there is a "plan of attack" for proving that the problem is also NP-complete for unit disk graphs. The idea is to embed the planar graph instance in a grid graph, then simulate the embedded edges with a string of disks. The nature of the simulation naturally depends on the problem under consideration. This "plan of attack" is unfortunately not sufficiently well-defined to yield an easily applied theorem. Instead, the next subsection applies the idea to an illustrative example, namely I N D E P E N D E N T SET. The NP-completeness proof for C H R O M A T I C N U M B E R on unit disk graphs (Section 3.3.3) provides another example. Clark et al. [CCJ90], who provide an explicit reduction for V E R T E X C O V E R rather than I N D E P E N D E N T SET, provide even more examples. 2Besides these NP-complete problems, Clark et al. also consider the maximum clique problem, which is polynomial for both planar and unit disk graphs. See Section 3.3.2 for more details. Chapter 3. Unit Disk Graphs 85 3.3.1 Independent Set An independent set, also called a stable set in a graph is a set of vertices no two of which are adjacent. The maximum independent set problem is to find an independent set of greatest cardinality. This fundamental problem is very well known; the correspond-ing decision problem is given below, and is taken verbatim from Garey and Johnson's book [GJ79]. [GT20] .INDEPENDENT SET INSTANCE: Graph G = (V, E), positive integer K < \V\. 
QUESTION: Does G contain an independent set of size K or more, i.e., a subset V C V such that \V'\ > K and such that no two vertices in V are joined by an edge in El Theorem 3.23 ([CCJ90]) INDEPENDENT SET is NP-complete for unit disk graphs under the L\, Li, and L^ metrics. Proof: The problem is in N P since I N D E P E N D E N T SET is NP-complete for arbitrary graphs [GJ79]. Let us reduce I N D E P E N D E N T SET, which remains NP-complete [GJ79] for planar cubic graphs, to I N D E P E N D E N T SET for unit disk graphs. To this end, let Gpc and k be an instance of I N D E P E N D E N T SET on a planar cubic graph. Begin by embedding the instance in a grid graph; this is done only to leave enough room between vertices and edges for the next step. Let Gdu be the grid graph induced by the vertex set {(i,j) : 0 < i < I, 0 < j < J}. Valiant [Val81] shows how to embed, without crossovers, any planar graph of degree 4 (or less) with n vertices into Gd3rit3n. His polynomial time method is simple: given any embedding of the graph in the plane, strip off one vertex at a time, from the "perimeter" of the embedding. Embed these vertices in reverse order into the grid. The rath vertex is typically embedded as shown in Figure 3.8. Chapter 3. Unit Disk Graphs 86 Embedding of first m — 1 vertices Figure 3.8: Embedding the mth vertex [after [Val81]]. Next, simulate the vertices of the planar graph with disks, and simulate the edges of the planar graph with a string of an even number of disks. We can use the grid embedding to make this concrete. A l l disks in the layout have diameter 1/3. Center the disks at the points P determined as follows. We must simulate three kinds of grid graph components: nodes, pseudo-nodes, and segments. Here, nodes are those grid graph vertices in the embedding that correspond to vertices of the planar cubic graph and pseudo-nodes are those that do not. Segments are grid graph edges in the embedding. Figure 3.9 shows how to simulate each of these components with disks. The figure shows two types of pseudo-nodes, those that bend and those that do not, even though they are simulated in the same way. In terms of coordinates, simulate • the node with a disk centered at (i,j), • the segment [(i,j), (i ± 1, j)] with two disks, centered at (i ± |, j) and (z ± §, j ) , • the segment [(i,j), ± 1)] with two disks, centered at (i,j ± | ) and (i,j ± |), Chapter 3. Unit Disk Graphs 87 node Q segment •CO* non-bending psuedo-node # Q Q • bending psuedo-node w Figure 3.9: Components for simulating grid embeddings • the pseudo-node at ( i , j ) by putting a disk at ( z ± | , j ) for each embedding fragment of the form [(i,j), (i ± 1, j)], and a disk at (i,j ± | ) for each embedding fragment of the form [(i,j), (i,j ± 1)]. Figure 3.10 simulates the embedding of a small graph (5 vertices, 6 edges). Note that every edge of the planar cubic graph is simulated with an even number of disks, as required. Let n denote the number of nodes, p the number of pseudo-nodes, and s the number of segments in the embedding. The size of the simulation is polynomial in the size of the planar cubic graph since, by the properties of Valiant's embedding, n + p + s < (3 |V|) 2 + 2(3|V| 2) + 2(3|V| - 1)(3|V|) = 0(V2) The planar cubic graph Gpc has an independent set Ipc of size k or more if and only if the unit disk graph Gud = (V, E) has an independent set IU(i of size k' > k -f p + s. The following definitions and observation aid in seeing this. 
Say that a disk in the realization of Gud is selected if it is in the independent set Iu^. Say that an edge in Gpc is saturated if one half of the disks that simulate it are selected. Equivalently, an edge is saturated if exactly one disk is selected from each segment or pseudo-node simulating the edge. Figure 3.11 shows a saturated edge along with a selected node disk. The following Chapter 3. Unit Disk Graphs 88 Figure 3.10: The embedding has 5 nodes, 3 bending pseudo-nodes, and 3 non-bending pseudo-nodes. For easier identification, the figure outlines in bold the disks that simulate nodes, and shows the underlying grid graph embedding. Figure 3.11: A saturated edge and a selected node disk Chapter 3. Unit Disk Graphs 89 observation follows from the construction of the simulation. Observation 3.24 An edge in the planar cubic graph Gpc can be saturated if and only if at most one of its node disks is selected. Now suppose that Ipc is an independent set of k vertices in Gpc. Select the node disks corresponding to Ipc. By the observation above, we can now saturate every edge of Gpc by selecting exactly one disk from each segment and pseudo-node in such a way that no two disks overlap. This process yields k' non-overlapping disks, which make up Iud as required. Conversely, suppose that Gud(P) has an independent set of k' disks, and let Iud be the maximum independent set that maximizes the number of saturated edges. Then every selected node disk v is adjacent only to saturated edges. For otherwise the number of saturated edges could be increased: remove v and the less than e/2 disks in the adjacent unsaturated edge from Iug, and replace them with the e/2 disks that saturate the edge, as shown in Figure 3.12. By the observation, Ipc, the set of vertices in Gpc corresponding to node disks in Iud, is an independent set. Since the edges can contribute at most p + s disks to the k! disks in Iud, it follows that k' < \Ipc\+ p-\-s. That is, \Ipc\ > k' —p — s = k, as required. Squares rotated by 45 degrees can substitute for disks in the above proof. If the grid graph is first rotated by 45 degrees, then upright squares can substitute for the disks, also. Therefore the proof is valid for disks under the and L\ metrics, as well as the L2 metric. | Corollary 3.24.1 VERTEX COVER is NP-complete for unit disk graphs under the L\, L2, and metrics. Proof: VERTEX COVER has the same complexity as INDEPENDENT SET problem with respect to restrictions on the graph [GJ79]. | Chapter 3. Unit Disk Graphs 90 Figure 3.12: Saturating an unsaturated edge. The bold circles on the left form a max-imal independent set of disks from the edge. However, the edge can be saturated by redistributing one of the node disks into the edge, as shown on the right. 3.3.2 M a x i m u m Clique Finding a maximum clique in an arbitrary graph is a well-known NP-complete prob-lem [GJ79]. This problem is also closely related to finding a maximum independent set, since a clique in a graph is an independent set in its complement. We have already seen that I N D E P E N D E N T SET is NP-complete for unit disk graphs (Section 3.3.1). Nev-ertheless, we will see in this section that a maximum clique in a unit disk graph can be found in polynomial time, given a realization of the graph. Let G = (V, E) be a unit disk graph and let / : V —> R 2 be a realization of G. To simplify the notation, let us assume that vertex names are synonymous with their realizations; that is, v — f(v) for every vertex t i f V . 
Although this practice might lead to ambiguity, the role of a vertex will always be clear from context. First note that if C is a maximum clique with more than one vertex in G, then it has a pair of sites p and q that are farther apart than any other pair in C. The distance A between p and q must Chapter 3. Unit Disk Graphs 91 naturally satisfy A < 1. Furthermore, all sites in C must be within A units of p and q. That is, let the lune through points p and q be the set of points in the region Rp,q, where Rp<q = {x : x £ R 2 and \\x — p\\ < A and ||x — q\\ < A } as shown in Figure 3.13. Then C C RP,q, and in particular, C C LPt9, where LPA = V fi Rp>q is the discrete lune through p and q. Figure 3.13: The lune through a pair of sites. A maximum clique containing p and q is therefore a maximum clique in the induced subgraph G(Lp<q). We can now compute a maximum clique in G(LPtq) by computing a maximum independent set in the graph's complement G(Lp>q), since these sets are identical. Clark, Colbourn, and Johnson [CCJ90] prove that G(LPiq) is cobipartite, that is, that G(LPtq) is bipartite. This is easy to see. The line through p and q partitions the lune through p and q into two halves, say an upper half and a lower half. It is easy to verify that each half has diameter A , which is at most 1. Therefore, any edge in the complement must join a vertex in one half with a vertex in the other. Clark et al. show how to extract Chapter 3. Unit Disk Graphs 92 a maximum independent set from a maximum matching in G. Furthermore, a maximum matching in a bipartite graph can be found in 0 ( V 2 5 ) time ([HK73], see for example [PS82] pages 225-226). One algorithm for finding a maximum clique in a unit disk graph G(V) = (V, E), therefore, is to examine all \E\ adjacent pairs of vertices in V. For each pair {j>, q}, compute G(LPtg) in 0(V2) time. Finally, compute a maximum independent set of the subgraph G(Lp>q) in 0(V2'5) time. The complete algorithm therefore runs in 0(V2'5E) = 0(V4'5) time. This proves the following theorem. Theorem 3.25 ( [CCJ90] ) One can find a maximum clique of a unit disk graph G = (V, E) in 0 ( V 4 ' 5 ) time, given a realization. We can actually do better if we exploit the realization. The following algorithm is similar to one developed by Aggarwal et al. [AIKS89] for finding k points, from n points in the plane, with minimum diameter. Aggarwal et al. proved independently of [CCJ90] that G(LPA) is bipartite. They then use an algorithm due to Imai and Asano [IA86], together with an efficient representation of G(LPtq) due to Hershberger and Suri [HS89], to find a maximum independent set in G(Lp,q) in 0 ( V 1 5 l o g V) time. The algorithm in the previous paragraph, together with the observations in this paragraph, therefore takes a total time of 0(V35 log V). Another result due to Imai and Asano [IA83] is an algorithm that finds a maximum clique of an intersection graph of rectangles in 0 ( V l o g V) time. Since a unit disk graph under Li and is an intersection graph of squares, their algorithm applies here, also. The following theorem summarizes the preceding discussion. Theorem 3.26 Given a realization, one can find a maximum clique in a unit disk graph G = (V,E) in 0 ( V 3 ' 5 l o g V) time under the L2 metric and in 0 ( V l o g V) time under the L\ and metrics. Chapter 3. Unit Disk Graphs 93 Is it possible to find a maximum clique in a unit disk graph without a realization? 
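Before turning to that question, the realization-based algorithm just outlined can be sketched as follows (illustrative Python, not from the thesis). It returns only the clique size, and it uses a simple augmenting-path bipartite matching rather than the Hopcroft-Karp or geometric refinements cited above, so its polynomial running time is worse than the bounds of Theorems 3.25 and 3.26.

from math import dist

def max_clique_size(points):
    """omega(G) for the unit disk graph realized by `points` (threshold 1).

    For every adjacent pair (p, q), collect the discrete lune
    L_pq = {x : |x - p| <= |p - q| and |x - q| <= |p - q|}, split it by
    the line through p and q, and compute a maximum matching in the
    bipartite complement of the induced subgraph.  By Koenig's theorem
    the largest clique with diametral pair (p, q) has
    |L_pq| - |matching| vertices; the answer is the maximum over pairs.
    """
    pts = [tuple(map(float, p)) for p in points]
    best = 1 if pts else 0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            p, q = pts[i], pts[j]
            d = dist(p, q)
            if d > 1.0:                      # p and q are not adjacent
                continue
            lune = [x for x in pts if dist(x, p) <= d and dist(x, q) <= d]
            dx, dy = q[0] - p[0], q[1] - p[1]
            upper = [x for x in lune if dx * (x[1] - p[1]) - dy * (x[0] - p[0]) > 0]
            lower = [x for x in lune if dx * (x[1] - p[1]) - dy * (x[0] - p[0]) <= 0]
            # Each half has diameter at most d <= 1, so the complement's
            # edges (non-adjacent pairs) only join the two halves.
            comp = [[b for b, v in enumerate(lower) if dist(u, v) > 1.0]
                    for u in upper]
            match_of = [-1] * len(lower)     # lower index -> matched upper index

            def augment(a, visited):
                for b in comp[a]:
                    if b not in visited:
                        visited.add(b)
                        if match_of[b] == -1 or augment(match_of[b], visited):
                            match_of[b] = a
                            return True
                return False

            matching = sum(augment(a, set()) for a in range(len(upper)))
            best = max(best, len(lune) - matching)
    return best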
For example, Graf ([Gra,95] pages 35-36) interprets [CCJ90] to conclude that Lp,q = N(p) f l N(q), thereby obviating the need for a realization. Unfortunately, this conclusion is unwarranted if A < 1. The intersection of neighbourhoods N(p) fl N(q) is indeed enclosed by a (geometric) lune, but this lune is the intersection of two unit-radius disks. As shown in Figure 3.14, this unit-radius lune is larger than the lune through p and q, and the graph induced on its sites may not be cobipartite. Figure 3.14: The unit-lune defined by a pair of sites, together with the smaller lune through the pair of sites. Perhaps if p and q have the greatest separation of any pair in some clique, then the subgraph induced by N(p) D N(q), although not a cobipartite lune, is nevertheless still cobipartite (for other reasons). Then the maximum clique algorithm could be easily modified to work without a realization. This conjecture also is not true, as shown by the Chapter 3. Unit Disk Graphs 94 more detailed counterexample in Figure 3.15. The question "Can the maximum clique of a unit disk graph be computed in polynomial time without access to a realization?" therefore remains open. 3.3.3 Chromat i c N u m b e r To colour a graph, assign a positive integer (a colour) to each vertex in the graph, such that no two adjacent vertices receive the same colour. The minimum colour problem is to colour a graph using the least number of colours. This least number of colours is called the chromatic number of the graph. The related decision problem, G R A P H K - C O L O U R A B I L I T Y , is NP-complete for arbitrary graphs [GJ79]. We will see in this section that it remains NP-complete when restricted to unit disk graphs. Section §2.1.4 presented an illustrative application to radio frequency spectrum man-agement due to Hale [Hal80]. Hale states that the colouring problem for unit disk graphs was shown to be NP-complete by J .B. Orlin in Apri l of 1980, but does not provide any further details or references. Since then it has been proved by Burr [Bur82] and by Clark, Colbourn, and Johnson [CCJ90]. A n independent proof due to the author appears below. It follows the general framework outlined earlier, of reducing from the same problem in planar graphs with degree at most four. G R A P H K - C O L O U R A B I L I T Y (Problem GT4 in [GJ79]) INSTANCE: Graph G = (V,E), positive integer K < \V\. . QUESTION: Is G A'-colourable, that is, does there exist a function c : V —> {1 ,2 , . . . , A'} such that c(u) ^ c(v) whenever (u, v) 6 El Theorem 3.27 (Or l in 1980 [Hal80, Bur82 , CCJ90] ) GRAPH K-COLOURABILITY is NJ*-complete for unit disk graphs, even for K = 3. Chapter 3. Unit Disk Graphs 95 Figure 3.15: G(N(p) Pi N(q)) is not always cobipartite. The figure shows unit-radius arcs centered at a, 6, and c. For clarity, the figure does not show an edge between p and q, even though these vertices are adjacent. Site p is exactly unit distance from sites a and c. Similarly, q is exactly unit distance from a and b. Therefore, {a,b, c} C N(p) D N(q). The remaining sites (not shown in the figure) fall into one of two halves. The upper half lies in the lune, just above the unit-radius arcs from both b and c. The lower half also lies in the lune, just below the unit-radius arc from a. These additional sites are placed such that p and q have the greatest separation. By this construction, all points in the lune form a clique. 
In fact, they form a maximum clique; vertex a cannot be part of a maximum clique, since its presence would exclude the entire lower half. Similarly, neither b nor c can be part of a maximum clique, since the presence of either one would exclude the entire upper half. Finally, note that G(N(p) 0 N(q)) is not cobipartite since {a, b, c} induces a triangle in its complement. Chapter 3. Unit Disk Graphs 96 Proof: The problem is in N P since it is in N P for arbitrary graphs. NP-hardness follows by a reduction from G R A P H 3 - C O L O U R A B I L I T Y restricted to planar graphs having no vertex degree exceeding 4 [GJ79]. Let H = (V, E) be an instance of G R A P H 3 - C O L O U R A B I L I T Y . Figure 3.16 shows an example graph H. Begin, as usual, by em-Figure 3.16: A planar graph with degree at most 4. bedding H into a square grid in quadratic time and space using Valiant's method (§3.3.1). This embedding assigns integer coordinates to each vertex in H. Such an embedding is shown in Figure 3.17, where the vertices of H are drawn as concentric circles. The embed-ding introduces new vertices, which we will call pseudo-vertices, and which Figure 3.17 shows as single circles. Valiant's embedding typically dilates an edge in H (i.e., in E) into several unit length edges. For each edge in E, arbitrarily select one of the resulting edges and call it a "real" edge. Figure 3.17 shows these as dotted line segments. Call the remaining unit-length edges, "pseudo-edges"; Figure 3.17 shows these as solid line segments. Next, multiply all coordinates by 12, in effect refining the grid. We are now ready to construct an instance of a unit disk graph G^P) generated by a set of points P. Let P Chapter 3. Unit Disk Graphs 97 Figure 3.17: The graph from Figure 3.16 embedded in the grid. include points with the coordinates of all vertices, both real and pseudo. In addition, P includes points corresponding to the edges. Simulate edges with the points (subgraphs) in Figure 3.18. Note that any 3-colouring of pseudo-edges results in vertices with the Pseudo-edges cx^^^>c<^^^>o Edges Figure 3.18: Simulations for edges and pseudo-edges. Note that, in any 3-colouring, the endpoints of real edges must get different colours, and the endpoints of pseudo-edges must get the same colour. same colour, and any 3-colouring of real edges results in vertices with different colours. Pseudo-edges may therefore be conveniently thought of as extensions of the incident real vertices. Furthermore, the vertices adjacent to real edges can be coloured by any pair of non-identical colours (in the absence of other constraints). Figure 3.19 shows Chapter 3. Unit Disk Graphs 98 Figure 3.19: An instance of unit disk graph 3-colourability. This figure shows "real" vertices as concentric circles. It shows "real" edges as open circles, and "pseudo-edges" as solid disks. a complete transformation of the graph in Figure 3.16. Note that this construction introduces additional graph edges, which Figure 3.19 depicts as dotted line segments. These are artifacts of the "drawing". In fact, the vertical pairs of points in Figure 3.18 can be arbitrarily close together (but then they would would not be easily distinguishable in the drawing), thereby avoiding the connections in Figure 3.19. Nevertheless, it is easy to see that if the graph with these edges removed can be 3-coloured, then so can the entire graph. 
If c : V —> {1,2,3} is a 3-colouring of G, then c restricted to the real vertices is a 3-colouring of H, since the two kinds of edge component enforce different colours for adjacent real vertices. Similarly, if c : V —> {1, 2, 3} is a 3-colouring of H, then c can be extended to the vertices of G. Hence G is 3-colourable if and only if H is 3-colourable. I Approximation Algorithms Even though G R A P H K - C O L O U R A B I L I T Y remains NP-complete for unit disk graphs (Theorem 3.27), we.can use the greedy colouring algorithm to colour a unit disk graph Chapter 3. Unit Disk Graphs 99 with no more than three times the minimum number of colours. If we are required to colour each vertex exactly when it is presented to us "on-line", then the greedy algorithm still manages to use at most five times the minimum. Note that, for arbitrary graphs G, Lund and Yannakakis [LY93] have shown that there is some constant e > 0 such that no polynomial-time graph colouring algorithm has a performance ratio R < \V(G)\e (unless P = NP). It particular, for arbitrary graphs, unlike unit disk graphs, R cannot be a constant. Much of this subsection appears in a joint paper with Marathe, Hunt, Ravi, and Rosenkrantz [MBH+95]. In particular, most unattributed lemmas appear in [MBH + 95] also. The well-known greedy colouring algorithm simply colours each vertex with the first available colour. The algorithm colours each vertex in some order, so assume that the set of vertices V is totally ordered. For clarity of analysis and exposition, assume that each edge of the graph is directed from earlier to later vertices in the order of V. Table 3.8 presents the algorithm in detail. Table 3.8: Algorithm: G R E E D Y ( G ) [Colour G sequentially with the first available colour] Input: A totally ordered, directed unit disk graph G -= (V,A) Output: The number of colours used to colour G. Side Effects: colourfu] is the colour of each vertex. 1 for v <— first vertex in V to last vertex in V 2 do colour^] <— min(Z + \ {colour[u] : (u,v) £ A}) 3 return max {colour^] : v £ V} Lemma 3.28 Algorithm GREEDY colours (arbitrary) graphs with no more than one plus the maximum indegree colours. That is, GREEDY(G) < A J- 7 l(G) + 1. Chapter 3. Unit Disk Graphs 100 Proof: The algorithm maintains the invariant that any assigned colour is at most Aj- n + l . The invariant holds trivially at the beginning. Thereafter, vertex v has indegree at most Ain and outdegree 0, so vertex v too can be coloured with one of the A j n + 1 colours not used by its neighbours. | Coro l l a ry 3.28.1 Algorithm GREEDY colours (arbitrary) graphs with no more than one plus the maximum degree colours. That is, GREEDY(G) < A(G) -f 1. A n On-L ine A l g o r i t h m In order to analyze the behaviour of Algorithm G R E E D Y on unit disk graphs, we need to relate its degree to its chromatic number. One way to do this is by relating the degree of a unit disk graph to its maximum clique size, and relating this to its chromatic number. This results in Theorem 3.30 below. L e m m a 3.29 The maximum clique size of a unit disk graph G is greater than one sixth of its maximum degree. That is, w(G) > LA(G)/6J + 2. Proof: Let v be a vertex with the greatest degree. Its neighbours lie in the unit radius disk centered on the vertex. Therefore more than one sixth, at least [A/6] + 1 of these neighbours, lie in some 60 degree unit sector. Since such a sector has unit diameter, these neighbours together with the vertex v form a completely connected subgraph. 
An On-Line Algorithm

In order to analyze the behaviour of Algorithm GREEDY on unit disk graphs, we need to relate a graph's degree to its chromatic number. One way to do this is by relating the degree of a unit disk graph to its maximum clique size, and relating this to its chromatic number. This results in Theorem 3.30 below.

Lemma 3.29 The maximum clique size of a unit disk graph G is greater than one sixth of its maximum degree. That is, ω(G) ≥ ⌊Δ(G)/6⌋ + 2.

Proof: Let v be a vertex with the greatest degree. Its neighbours lie in the unit-radius disk centered on the vertex. Therefore more than one sixth, that is at least ⌊Δ/6⌋ + 1, of these neighbours lie in some 60-degree unit sector. Since such a sector has unit diameter, these neighbours together with the vertex v form a completely connected subgraph. A maximum clique must be at least as large. ∎

Theorem 3.30 Algorithm GREEDY colours unit disk graphs with less than six times the maximum clique size. That is, GREEDY(G) ≤ 6ω(G) − 6.

Proof: Lemma 3.29 implies that ω ≥ ⌊Δ/6⌋ + 2 > Δ/6 + 1, so that Δ < 6ω − 6. Since both Δ and ω are integers, Δ ≤ 6ω − 7. By Corollary 3.28.1, we have GREEDY(G) ≤ Δ + 1 ≤ 6ω − 7 + 1 = 6ω − 6. ∎

Corollary 3.30.1 Algorithm GREEDY colours unit disk graphs with less than six times the optimal number of colours. That is, GREEDY(G) ≤ 6χ(G) − 6.

Proof: This follows from Theorem 3.30 since χ ≥ ω for all graphs. ∎

However, this corollary is overly pessimistic. Using the following lemma from [MBH+95], Gräf [Gra95] obtains a tighter bound on the chromatic number, thereby showing (Corollary 3.32.1) that Algorithm GREEDY performs even better than stated by Corollary 3.30.1.

Lemma 3.31 Let G be a unit disk graph. Any independent set in the neighbourhood of a vertex in G has at most five vertices.

Proof: If the lemma were false, then some vertex would be the hub of an induced star K_{1,6}, thereby contradicting Lemma 3.1, which says that there can be no such star in a unit disk graph. ∎

Lemma 3.32 ([Gra95]) The chromatic number of a unit disk graph G is greater than one fifth of its maximum degree. That is, χ(G) ≥ ⌈Δ(G)/5⌉ + 1.

Proof: Let v be any vertex in a unit disk graph G = (V, E). By Lemma 3.31, at most five vertices adjacent to v can get the same colour. Therefore χ(G(Adj(v))) ≥ ⌈|Adj(v)|/5⌉. Since v is adjacent to every vertex in its neighbourhood, it must be coloured differently than its neighbours. That is, χ(G(N(v))) = χ(G(Adj(v))) + 1 (recall that N(v) = Adj(v) ∪ {v}). Consequently, χ(G) ≥ χ(G(N(v))) = χ(G(Adj(v))) + 1 ≥ ⌈|Adj(v)|/5⌉ + 1. The lemma follows if we choose v to have the maximum degree. ∎

Corollary 3.32.1 ([Gra95]) Algorithm GREEDY colours a unit disk graph with less than five times its chromatic number. That is, GREEDY(G) ≤ 5χ(G) − 4.

Proof: Lemma 3.32 states that χ(G) ≥ ⌈Δ(G)/5⌉ + 1 ≥ Δ(G)/5 + 1, so that Δ(G) ≤ 5χ(G) − 5. By Corollary 3.28.1, we have GREEDY(G) ≤ Δ + 1 ≤ 5χ − 5 + 1 = 5χ − 4. ∎

An Off-Line Algorithm

In an "off-line" setting, we have the luxury of examining the entire graph before colouring any vertices. We will continue to exploit Algorithm GREEDY, but we will do so by reordering the vertices to our advantage. Essentially, we will create an order for the vertices so that no vertex has indegree greater than three times the graph's chromatic number.

Say that (the vertex set of) a unit disk graph G is lexicographically ordered if x_f(u) < x_f(v) (or x_f(u) = x_f(v) and y_f(u) < y_f(v)) implies u < v, for some realization f : V → R². Gräf attributes the first use of this order for colouring unit disk graphs to Peeters [Pee91].

Lemma 3.33 ([Pee91, MBH+95, Gra95]) The maximum clique size of a lexicographically ordered unit disk graph is greater than one third of the maximum indegree. That is, ω(G) ≥ ⌈Δ_in(G)/3⌉ + 1.

Proof: Let v be a vertex with the greatest indegree. Its (indegree) neighbours lie in the left unit-radius semicircle centered on the vertex. Therefore at least one third, ⌈Δ_in/3⌉, of these neighbours lie in a 60-degree unit sector. Since such a sector has unit diameter, these neighbours, together with the vertex v, form a completely connected subgraph. The maximum clique must be at least as large. ∎
Theorem 3.34 ([Pee91, MBH+95, Gra95]) Algorithm GREEDY colours lexicographically ordered unit disk graphs with less than three times the maximum clique size. That is, GREEDY(G) ≤ 3ω(G) − 2.

Proof: Lemma 3.33 implies that ω ≥ ⌈Δ_in/3⌉ + 1 ≥ Δ_in/3 + 1, so that Δ_in ≤ 3ω − 3. By Lemma 3.28, we have GREEDY(G) ≤ Δ_in + 1 ≤ 3ω − 3 + 1 = 3ω − 2. ∎

Corollary 3.34.1 ([MBH+95]) Algorithm GREEDY colours lexicographically ordered triangle-free unit disk graphs with at most four colours.

Proof: The maximum clique size of a triangle-free graph is at most 2. ∎

Corollary 3.34.2 ([Pee91, MBH+95, Gra95]) Algorithm GREEDY colours lexicographically ordered unit disk graphs with less than three times the optimal number of colours. That is, GREEDY(G) ≤ 3χ(G) − 2.

Proof: This follows from Theorem 3.34 since χ ≥ ω for all graphs. ∎

Note that we required a realization to lexicographically order the vertices. We can dispense with the realization as follows. Gräf points out that Algorithm GREEDY will perform at least as well given any vertex ordering that achieves a maximum indegree no greater than that of the lexicographic order. In particular, he mentions that Matula [MB83] has shown that the smallest-last ordering minimizes the maximum indegree over all vertex orderings. The vertices {v_1, v_2, ..., v_n} of a graph are said to be in smallest-last order if v_i has minimum degree in G({v_1, v_2, ..., v_i}) for all i. There is an easy algorithm for finding a smallest-last ordering: find a minimum-degree vertex v in the graph G = (V, E), recursively smallest-last order G(V \ {v}), and append v to the end of the sequence. Matula [MB83] shows how to implement this algorithm in O(V + E) time with O(V) additional space.

Gräf's PhD thesis [Gra95] also presents a more sophisticated algorithm (for colouring unit disk graphs) that still uses a realization, and still has a theoretical performance ratio of 3, but which performs well in practice. See also [MBH+95] for a different algorithm that achieves the same performance ratio of 3, but does not require a realization.

In closing this section on colouring, recall that all triangle-free disk graphs are planar (Theorem 3.4). Therefore such graphs can be coloured with four colours by the famous four-colour theorem [AH76]. This improves on Lemma 5.2 in [MBH+95], which states that six colours suffice for triangle-free disk graphs.

3.4 Unit Disk Graph Recognition is NP-Hard

This section proves that recognizing unit disk graphs is NP-hard. Equivalently, it shows that determining if a graph has sphericity 2 or less, even if the graph is planar or is known to have sphericity at most 3, is NP-hard. In fact, this section gives a polynomial-time reduction from SATISFIABILITY to a more general problem, that of recognizing ρ-bounded disk graphs, which involve disks of restricted sizes. It begins by describing the reduction for ρ-bounded coin graphs, where the disks have pairwise-disjoint interiors. We will see that this reduction can be extended to three dimensions, thereby showing that unit sphere graph recognition, or determining if a graph has sphericity 3 or less, is also NP-hard.

Section 3.4.1 defines SATISFIABILITY and ρ-bounded disk and coin graphs. Thereafter, Sections 3.4.2 through 3.4.9 show how to reduce SATISFIABILITY to the problem of recognizing ρ-bounded coin graphs.
Finally, Section 3.4.10 shows how to extend the result to other problems, including the already-mentioned problem of recognizing unit disk graphs.

3.4.1 Coin Graphs

A coin is a closed disk in the plane. A graph G is called a coin graph if G is the intersection graph of a set of interior-disjoint coins [Sac94]. Remarkably, finite coin graphs are precisely the finite planar graphs [Sac94], and can therefore be recognized in linear time (see [BL76] for example). In fact, every plane graph G has a coin realization f(G) such that (v_1, v_2, ..., v_k) is a clockwise face in G if and only if (f(v_1), f(v_2), ..., f(v_k)) is a clockwise face in f(G).

In this section, however, we are interested in the complexity of recognizing graphs that can be realized with coins of bounded size. A set of disks is ρ-bounded if every disk in the set has diameter between 1 and ρ inclusive. We will prove that ρ-BOUNDED COIN GRAPH RECOGNITION is NP-hard. A special case of this problem is recognizing penny graphs, where all disks have unit diameter. Penny graphs whose unit disks must all be centered at integer coordinates are equivalently grid graphs (§2.3.3), which are also a subclass of unit disk graphs. Recall from Section 2.3.3 that GRID GRAPH RECOGNITION is NP-complete [BC87], even for grid graphs that are binary trees [Gre89].

We will have occasion to deal with the location and the radius of disks separately. To ease notation, say that f : V → R² (locations) and r : V → R (disk radii) constitute a disk realization (f, r) of a graph G = (V, E) if (u, v) ∈ E if and only if ‖f(u) − f(v)‖ ≤ r(u) + r(v). Then a disk realization (f, r) is a coin realization if ‖f(u) − f(v)‖ ≥ r(u) + r(v) for all u and v in V, adjacent or not, and it is ρ-bounded if 1 ≤ 2r(v) ≤ ρ for all v ∈ V. This section shows that the following parameterized recognition problem is NP-hard for any fixed ρ > 1.

ρ-BOUNDED COIN GRAPH RECOGNITION
INSTANCE: A graph G = (V, E).
QUESTION: Is G the intersection graph of a set of closed, interior-disjoint disks whose diameters fall in the range [1, ρ]; that is, does G have a ρ-bounded coin realization?

Since coin graphs are planar, they do not include K_5. Since any intersection graph class must contain all complete graphs, this means that coin graphs are not the intersection graph class of any family of sets [Sch85], in particular not of any family of disks. On the other hand, this section's reduction of SATISFIABILITY to ρ-BOUNDED COIN GRAPH RECOGNITION extends easily to allow overlapping disks as well. That is, recognizing ρ-bounded disk intersection graphs (equivalently, the intersection graph class of ρ-bounded disks) is also NP-hard for any fixed ρ > 1. The special case ρ = 1 of this latter problem (namely UNIT DISK GRAPH RECOGNITION) has been reported separately [BK93].

ρ-BOUNDED DISK GRAPH RECOGNITION
INSTANCE: A graph G = (V, E).
QUESTION: Is G the intersection graph of a set of closed disks whose diameters fall in the range [1, ρ]; that is, does G have a ρ-bounded disk realization?

The problem CNF SATISFIABILITY defined below is a common basis for NP-completeness proofs [Coo71, GJ79]. Let U = {u_1, u_2, ..., u_m} be a set of Boolean variables. A clause c = {l_1, l_2, ..., l_k} is a set of literals, which are negated (e.g., ū_i) and unnegated (e.g., u_i) variables from U.
A set C of clauses is intended to represent the conjunctive normal form Boolean formula

⋀_{c ∈ C} ⋁_{l ∈ c} l.

A truth assignment is a function t : U → {TRUE, FALSE}. In terms of literals, u_i is TRUE if and only if t(u_i) = TRUE, and ū_i is TRUE if and only if t(u_i) = FALSE. A clause is satisfied by t if at least one l_i ∈ c is TRUE. Finally, a satisfying truth assignment for C is one that simultaneously satisfies all the clauses in the set C.

CNF SATISFIABILITY [LO1, GJ79]
INSTANCE: A set U = {u_1, u_2, ..., u_m} of Boolean variables and a set C = {c_1, c_2, ..., c_n} of clauses over U.
QUESTION: Is there a satisfying truth assignment for C?

Theorem 3.35 ρ-BOUNDED COIN GRAPH RECOGNITION is NP-hard for any fixed ρ > 1.

Proof: Given an instance C of SATISFIABILITY, we will construct a graph G_C = (V_C, E_C) such that G_C has a realization³ if and only if C is satisfiable. Assume without loss of generality (see the comments for SATISFIABILITY in [GJ79]) that each clause in C contains at most three literals (|c_i| ≤ 3) and that each variable appears in at most three clauses. Note that this is not the same as 3SAT, in which every clause has exactly three literals and each variable appears in an unrestricted number of clauses (though 3SAT remains NP-complete if every variable appears in at most five clauses [GJ79]).

We will build G_C in several stages. First, in Section 3.4.2, we will construct a bipartite graph G_SAT^C that corresponds closely to the instance C of SATISFIABILITY. We will define a notion of orientability for this graph, and prove that it is orientable if and only if C is satisfiable (Lemma 3.37). Then, in Section 3.4.3, we will draw this graph on the square grid. This natural drawing on the grid maintains the intuitive structure of the SATISFIABILITY problem. It also prevents edge drawings from "interfering" with each other by keeping them rectilinear and crossing them only at right angles. We will define a notion of orientability for this drawing. This serves to express the graph as a composition of smaller graphs, each of which is a primitive for orientation. The drawing is orientable if and only if the underlying graph is orientable (Lemma 3.39). Next, we will skew the grid so that it is part of the triangular grid. This will allow us to build and connect components by exploiting a hexagonal packing. Finally, we will form G_C in Section 3.4.4 by simulating components of the skewed grid drawing. Lemma 3.42 shows that G_C has a realization if and only if the underlying grid drawing is orientable. We finish by showing that the entire reduction can be executed in polynomial time. ∎

³For brevity, and unless stated otherwise, the remainder of this section abbreviates "ρ-bounded coin graph realization" as "realization".

3.4.2 A Graph That Simulates SATISFIABILITY

Our first step is to construct a bipartite graph G_SAT^C from the instance C of SATISFIABILITY as follows. The vertices of the graph correspond to the clauses, variables, and negated variables of the SATISFIABILITY instance C. There is an edge between a literal vertex and a clause vertex if the literal appears in the clause. More formally, G_SAT^C = (V_SAT^C, E_SAT^C), where:

V_SAT^C = {c : c ∈ C} ∪ {u⁺ : u ∈ U} ∪ {u⁻ : u ∈ U}
E_SAT^C = {(c, u⁺) : c ∈ C, u ∈ c} ∪ {(c, u⁻) : c ∈ C, ū ∈ c}.
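For concreteness, the following Python sketch builds G_SAT^C from an instance (U, C) and, given a truth assignment, checks the orientation used in the proof of Lemma 3.37 against the two conditions of Definition 3.36 below. The data representation (a clause as a set of (variable, sign) pairs) and the function names are illustrative assumptions, not notation used elsewhere in the thesis.

    def build_sat_graph(U, C):
        """Return the vertex set and edge set of G_SAT^C.
        U : iterable of variable names
        C : list of clauses; each clause is a set of (variable, sign) pairs,
            sign = True for the literal u, False for the literal u-bar."""
        vertices = [("clause", i) for i in range(len(C))]
        vertices += [(u, "+") for u in U] + [(u, "-") for u in U]
        edges = [(("clause", i), (u, "+" if sign else "-"))
                 for i, clause in enumerate(C) for (u, sign) in clause]
        return vertices, edges

    def check_orientation(U, C, t):
        """Direct each edge of G_SAT^C towards its literal vertex exactly when the
        literal is TRUE under t, then test the two conditions of Definition 3.36."""
        indeg = {(u, s): 0 for u in U for s in ("+", "-")}
        outdeg = [0] * len(C)
        for i, clause in enumerate(C):
            for (u, sign) in clause:
                literal_true = t[u] if sign else not t[u]
                if literal_true:                      # edge directed clause -> literal vertex
                    outdeg[i] += 1
                    indeg[(u, "+" if sign else "-")] += 1
        cond1 = all(d >= 1 for d in outdeg)                                    # condition (1)
        cond2 = all(indeg[(u, "+")] == 0 or indeg[(u, "-")] == 0 for u in U)   # condition (2)
        return cond1 and cond2

    # Example: C = {u1 or not u2, not u1 or u2} is satisfied by t(u1) = t(u2) = TRUE.
    U = ["u1", "u2"]
    C = [{("u1", True), ("u2", False)}, {("u1", False), ("u2", True)}]
    print(check_orientation(U, C, {"u1": True, "u2": True}))   # True: an orientation exists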
Definition 3.36 The graph G_SAT^C is orientable if its edges can be directed such that (1) for each clause vertex, outdegree(c) ≥ 1, and (2) for each pair of literal vertices, indegree(u⁺) = 0 or indegree(u⁻) = 0. ∎

The graph G_SAT^C models the testing of a truth assignment for SATISFIABILITY by directing its edges. Intuitively, an edge directed from c to u⁺ (resp. u⁻) means that clause c has selected literal u (resp. ū) to satisfy it. That is, clause c requests that t(u) = TRUE (resp. t(u) = FALSE).

Lemma 3.37 The set of clauses C is satisfiable if and only if the bipartite graph G_SAT^C is orientable.

Proof: If t satisfies C, then orient G_SAT^C by directing every edge incident on u⁺ towards u⁺ if t(u) = TRUE, or away from u⁺ otherwise. Similarly, direct every edge incident on u⁻ towards u⁻ if t(u) = FALSE, or away from u⁻ otherwise. Then outdegree(c) ≥ 1 for every clause c, since c must be satisfied. Furthermore, either indegree(u⁺) = 0 or indegree(u⁻) = 0 for every variable u, since no truth assignment can set both a literal and its complement TRUE. Therefore G_SAT^C is orientable if C is satisfiable.

Conversely, if G_SAT^C has been oriented, then set t(u) = TRUE if any edge (c, u⁺) is directed towards u⁺. Similarly, set t(u) = FALSE if any edge (c, u⁻) is directed towards u⁻. Then t sets each variable either TRUE or FALSE, since indegree(u⁺) = 0 or indegree(u⁻) = 0 for all variables u. Furthermore, t must satisfy every clause c, since outdegree(c) ≥ 1 for every vertex c. ∎

3.4.3 Drawing the Graph on the Grid

Our next step is to draw the graph G_SAT^C on the grid as shown in Figure 3.20. Each of the (6|U| + 1) × (3|C| + 2) grid vertices in this drawing is either unused, or is associated with a unique component of the drawing. Each component is enclosed by a unit square centered on its own grid vertex. The drawing is made up of three groups of components: communication components, literals, and clauses. There are, in turn, three groups of communication components: wires, corners, and crossovers. A wire is a unit-length line segment passing through a grid vertex, a corner is two half-length line segments meeting at right angles at a grid vertex, and a crossover is two unit-length line segments crossing at right angles on a grid vertex. There are therefore two types of wire components (horizontal and vertical), four types of corners, and one type of crossover.

Figure 3.20: A SATISFIABILITY graph drawn on the grid. The graph corresponds to the SATISFIABILITY instance (U, C), where U = {u_1, u_2, u_3, u_4}, C = {c_1, c_2, c_3, c_4, c_5}, and c_1 = {u_2, u_4}, c_2 = {u_1, u_2, u_3}, c_3 = {u_1, u_3}, c_4 = {u_1}, and c_5 = {u_2, u_4}. The clauses and the literals are drawn as squares. The variables are embedded as adjacent pairs and so are drawn as rectangles. Each clause and each literal component has three terminals. Note that the area of the grid is (6|U| + 1) × (3|C| + 2).

Each component in the drawing has two to four terminals, which are centered on a side of the unit square enclosing the component. The terminal on the top (or north) side of the unit square is called the T terminal. Similarly, the terminals on the bottom, left, and right are called the B, L, and R terminals respectively. Wires and corners have two terminals each, crossovers and literals have four, and clauses have three.
Two components in the drawing are adjacent if they have coincident terminals. The terminal between two adjacent literals is called the (common) interliteral terminal; the others are called external terminals. Figure 3.20 depicts the set of all terminals as small circles.

An orientation of a terminal is a direction: North, South, East, or West. Say that a terminal T (respectively B, L, R) is directed away from its component if it is oriented North (respectively South, West, East), and is directed towards its component otherwise.

Definition 3.38 A grid drawing is orientable if all terminals can be oriented subject to the following four conditions.

draw1: T and B terminals must be directed North or South.
draw2: L and R terminals must be directed East or West.
draw3: every wire, corner, crossover line segment, and clause must have at least one terminal directed away from it.
draw4: if a literal has its interliteral terminal directed towards it, then all other (external) terminals must be directed away from it. ∎

A corollary of conditions draw4 and draw2 is that if a drawing is orientable, then every variable has a literal component with all external terminals directed away from it. Figure 3.21 shows an orientation of a portion of Figure 3.20.

Figure 3.21: An oriented grid drawing. The directions are drawn as arrows. Note that the path from u_2 to c_1 has a wire component that has both terminals directed away from it.

Lemma 3.39 A grid drawing is orientable if and only if the underlying bipartite graph is orientable.

Proof: Suppose we have an orientation for G_SAT^C. Then orient the terminals along each path⁴ in the grid drawing so that they are consistent with the orientation of the corresponding edge in E_SAT^C. This ensures that every wire, corner, and crossover line segment has precisely one terminal directed away from it. It also ensures that each clause has one terminal directed away from it, since the clause vertices have outdegree at least one. Finally, every variable has a literal component with all external terminals directed away from it, since one of the literal vertices has zero indegree in G_SAT^C; direct the interliteral terminal towards this literal.

Now suppose that the grid drawing has been oriented. Condition draw3 ensures that any path in the drawing corresponding to an edge in E_SAT^C contains at most one wire, corner, or crossover line segment with both terminals directed away from the component. One terminal of such a component can be redirected by reversing the direction of all terminals in the path from it to the literal. Doing so keeps all conditions satisfied, since this operation does not change the orientation of any clause terminals, and it directs any external literal terminals away from the literal, in keeping with condition draw4. Condition draw3 then ensures that all terminals along the path are oriented consistently with some orientation for the corresponding edge. Under this orientation, all clause vertices have outdegree at least one, since the corresponding terminal is directed away from the clause. Furthermore, condition draw4 ensures that indegree(u⁺) = 0 or indegree(u⁻) = 0. ∎

⁴Those terminals not on a path need not concern us; they may be arbitrarily oriented, say away from the component.

Corollary 3.39.1 A grid drawing is orientable if and only if the underlying instance of SATISFIABILITY is satisfiable.
Skewing the Grid

It would be easier to simulate the grid drawing components with coin subgraphs if G_SAT^C had been embedded in a triangular grid instead of a square grid. This is because coins (disks) of the same size naturally pack hexagonally. We will exploit this packing for the two extreme sizes for the disks, that is, for diameters 1 and ρ. Hexagons also have a higher connectivity (six) with their neighbours than do squares (four), allowing us to construct more compact components. Both of these advantages will be evident in the forthcoming constructions. We chose to embed G_SAT^C on a square grid to capture the intuition behind the bipartite SATISFIABILITY graph. Fortunately, it is easy to turn the square grid into a triangular one by "skewing" it as follows. If we imagine that the square grid is generated by the two basis vectors (1, 0) and (0, 1), then we can skew the grid by replacing these with the new basis vectors (1, 0) and (−1/2, √3/2). The result of skewing the drawing from Figure 3.21 is shown in Figure 3.22. To aid visibility, the figure does not show grid lines with positive diagonal slope, but imagine that they are there to complete the triangular grid.

Figure 3.22: A skewed version of the oriented grid drawing from Figure 3.21.

3.4.4 Simulating the Drawing with Coins

We are now ready to construct G_C. To do so, we create a graph component for each skewed grid drawing component: wire, corner, crossover, literal, and clause. Section 3.4.8 provides the details. To specify the graph, we will describe a realization of each of its components, and then describe how the subgraphs are to be connected. Therefore, before giving detailed descriptions of the various graph components, we must examine the building blocks from which they are constructed.

The main building blocks are cycles, called "cages". The construction joins two cages at a shared adjacent triple of vertices to create larger components. The remaining building blocks are clusters of vertices called "flippers". Flippers are associated with the vertex triple shared by two cages. In any realization, all the vertices in a flipper are forced to lie in one of the two incident cages. To ensure this consistent embedding, the vertices of a flipper are connected among themselves. Flippers come in three different sizes, measured as a fraction of a cage's capacity: 1, 1/2, and 1/4. These sizes, and the capacity of a cage, allow us to rule out certain embeddings.

Figure 3.23 shows schematically how cages and flippers are related.

Figure 3.23: Schematic drawings for cages and flippers. This schematic shows two joined cages. The one on the left contains two quarter flippers and a half flipper. The one on the right contains a full flipper, and has another attached full flipper that has been displaced. The labels on the flippers indicate their asymptotic portion of the hexagonal capacity. Unlabelled flippers are quarter flippers. Note the dimpled connecting corners, as well as one dimpled nonconnecting corner.

The next subsection provides detailed instructions for constructing cages and flippers, and proves the required properties. The schematic shows a realization of a subgraph, but we are really constructing the subgraphs themselves. Therefore, let the schematics depict also the underlying coin graph. The realizations therefore serve both to specify the graph, and to show how the graph could be realized.
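As a small aid to intuition, the grid skewing described at the start of this subsection is just a change of basis. The following Python sketch (function name and the printed example are illustrative) maps square-grid coordinates onto the triangular grid generated by the basis vectors (1, 0) and (−1/2, √3/2); note that adjacent grid points remain at unit distance, so unit squares become unit rhombi.

    import math

    def skew(x, y):
        """Map square-grid coordinates (x, y) to the triangular grid
        generated by the basis vectors (1, 0) and (-1/2, sqrt(3)/2)."""
        return (x - y / 2.0, y * math.sqrt(3) / 2.0)

    # The unit square around the origin becomes a 60/120-degree unit rhombus.
    for p in [(0, 0), (1, 0), (0, 1), (1, 1)]:
        print(p, "->", skew(*p))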
A flipper embedded inside a cage diminishes the capacity of the cage for additional flippers. It thereby displaces other flippers, which might otherwise have been embedded inside the cage. The forthcoming construction (§3.4.7) for G_C ensures that a flipper can only be embedded in one of two adjacent cages. A displaced flipper therefore displaces other flippers from whatever cage it is embedded in. This is the basic mechanism for propagating information in a realization of G_C.

Constructing Cages

Cages will always have the same number of vertices, which depends to some extent on the diameter ρ of the largest disk. However, this number will be a multiple of six, so that a cage can be realized as a string of disks whose centres lie on a regular hexagon aligned with the triangular grid, as shown in Figure 3.24. Say that such a realization is hexagonal.

Figure 3.24: The hexagonal realization of a cage.

In the construction that follows, we join cages at the corners of such hexagonal realizations, as shown in Figure 3.25. Clearly, both cages cannot be strictly hexagonal in such a construction. To solve this problem, we choose to dimple a corner by moving the corner disk towards the centre of one of the hexagons. Naturally, the corner may be dimpled towards either of its two hexagons. Clearly, we can dimple a corner even if it is not shared with another cage; we will attach flippers to corners in this way.

Figure 3.25: Joining cages. The corner of the cage on the left is "dimpled". The alternative hexagonal realization, in which the corner is dimpled to the right, is also permitted.

Define the hexagonal capacity of a cage as the maximum number of interior-disjoint disks that can be packed hexagonally into a hexagonal realization of a cage, as shown in Figure 3.26. Note that the capacity depends only on the number of cage vertices.

Figure 3.26: A hexagonal realization of a cage packed hexagonally with unit-diameter disks. The cage disks all have maximum diameter ρ.
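The hexagonal capacity is easy to tabulate. The following Python sketch assumes, as in Figure 3.26, that the packing has s + 1 unit disks along each side of the hexagon, in which case the standard centred-hexagonal count gives 3s² + 3s + 1 disks (the quantity appearing as Equation 3.6 below). For comparison, it also prints the crude circle-area upper bound 36s²/π² used in the Capacity Lemma (Lemma 3.40), whose ratio to the hexagonal count approaches 12/π² ≈ 1.216 as s grows.

    import math

    def hexagonal_capacity(s):
        """Number of unit disks in a hexagonal packing with s + 1 disks per side
        (the centred hexagonal number): 1 + 6(1 + 2 + ... + s) = 3s^2 + 3s + 1."""
        return 3 * s * s + 3 * s + 1

    def circle_area_bound(s):
        """Crude upper bound on the unconstrained capacity: a circle of
        circumference 6s has area 9s^2/pi, i.e. room for at most 36s^2/pi^2 unit disks."""
        return 36 * s * s / math.pi ** 2

    for s in (5, 50, 500):
        ch, cu = hexagonal_capacity(s), circle_area_bound(s)
        print(s, ch, round(cu, 1), round(cu / ch, 3))   # ratio tends to 12/pi^2 ~ 1.216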
3.4.5 Fabricating Flippers

We are now ready to fabricate flippers for cages. We will do so "asymptotically". Perhaps this is somewhat unusual, but the idea is simple: fabricate a flipper for an arbitrary cage size, and then notice that the size of the flipper approaches an ideal value as the cage size increases. Therefore, although our flipper never actually attains this value, we will see that it attains a suitable value.

Full Flippers

In any realization, a flipper must intersect (touch) the cage only at one corner, even if some of the other corners are dimpled. To ensure that this is the case, dimple in all corners of the cage. Fabricate a full flipper as follows. Pack the dimpled cage with the largest hexagon of unit disks in contact with the desired corner. Make sure that the flipper is not in contact with any other parts of the cage, including the dimpled corners, as shown in Figure 3.27.

Figure 3.27: Fabricating full flippers.

The number of disks in a full flipper approaches the hexagonal capacity of the cage as the size of the cage increases. This is because the size of any disk becomes negligible in comparison with the sides of the cage. In particular, the gap between the flipper and the cage becomes negligible. Note that, even though the required cage size may be large, it does not depend on the size of the SATISFIABILITY instance.

Half Flippers

There are two kinds of half flippers. The fabrication of quarter flippers, described in the next paragraph, results in one kind. To fabricate the other kind, notice that the line segment through the hexagon in Figure 3.28.(a) divides the area of the hexagon into two equal areas. Construct a half flipper from this drawing and a full flipper by removing any disks hit by the line segment, as well as the half hexagon not in contact with the cage. Figure 3.28.(b) shows two half flippers constructed in this way. As the size of the cage (and the size of the packing) increases, the number of disks in the half flipper approaches one half of the hexagonal capacity.

Figure 3.28: Fabricating a half flipper.

Quarter Flippers

Similarly, the line segment through the hexagon in Figure 3.29.(a) divides the area of the hexagon into two regions. The smaller region has one quarter of the hexagon's area, and the larger region has three quarters of the area. Fabricate a flipper that approaches 1/4 of the hexagonal capacity from this drawing by removing from a full flipper any disks hit by the line segment, as well as the three-quarters hexagon not in contact with the cage. Similarly, fabricate a flipper that approaches 3/4 of the hexagonal capacity by removing any disks hit by the line segment, as well as the one-quarter hexagon not in contact with the cage. Figure 3.29.(b) shows both such flippers. As the cage size increases, the number of disks in the small section approaches one quarter of the hexagonal capacity, and the number of disks in the other section approaches three quarters of the hexagonal capacity.

Figure 3.29: Dividing a hexagon into a quarter and three-quarters. The length L of the short side of the small trapezoid is (√10 − 2)/4, which ensures that the area of the trapezoid is one-quarter that of the hexagon with unit sides.

In the same way, fabricate another kind of half flipper by adding a second segment to the hexagon, symmetric with the first, as shown in Figure 3.30.(a). Figure 3.30.(b) shows the result.

Figure 3.30: Dividing a hexagon into two quarters and a half.

3.4.6 Capacity Arguments

The flipper fabrications concentrated solely on the hexagonal capacity of a cage. In fact, these capacities are close to optimal. Define the unconstrained capacity of a cage as the maximum number of interior-disjoint disks that can be packed into any realization of a cage. Again, the capacity depends only on the number of cage vertices. We may as well assume that all disks in a maximum packing have unit diameter, since replacing any larger disks with unit disks results in an interior-disjoint packing with the same number of disks (and possibly room for more). With the help of the following lemma, we will be able to prove that some embeddings cannot be realized.

Lemma 3.40 (The Capacity Lemma) The unconstrained capacity of all sufficiently large cages is less than 12/π² (≈ 1.216) times its hexagonal capacity.

Proof: We will construct an asymptotic value for the hexagonal capacity C_h of a cage, and a (rather conservative) upper bound for its unconstrained capacity C_u. We will then see that C_u < (12/π²)C_h.

First, consider a hexagonal realization of a cage, and pack it also hexagonally; see Figure 3.26 again. As the size of the cage increases, its perimeter⁵ P approaches that of its corresponding hexagonal packing. That is, P = 6s in the limit, where s is the length of a hexagon side.
Note that s is one less than the number of unit disks on one of the hexagon's sides. It is easy to verify by induction on s that the total number of disks in the packing, which is the hexagonal capacity C_h of the cage, is given by Equation 3.6.

C_h = 3s² + 3s + 1 > 3s².   (3.6)

⁵The perimeter of a string of disks is just the perimeter of the polygon through its disk centres. The perimeter of a set of disks is the perimeter of the convex hull of its centres.

To construct an upper bound on the unconstrained capacity C_u, notice that the interior of a cage can never be completely covered with interior-disjoint unit disks, since unit disks do not tile the plane. Therefore C_u is less than the maximum internal area, taken over all possible realizations of the cage, divided by the area of a unit disk. As the size of a cage increases, its internal area approaches from below the area of the polygon through its disk centres. The circle has maximum area among all regions with the same perimeter. Therefore, the realization of a cage as a set of maximum-size disks with cocircular centres achieves, in the limit, the maximum internal area. As noted, this area is less than the area of the circle through its disk centres. Calculate this circle's area as follows. First note that the circumference of the circle is the same as the perimeter P = 6s of the hexagonal realization. Therefore the radius r of the circle is r = P/(2π) = 3s/π, so that its area A is A = πr² = π(3s/π)² = 9s²/π. Since the area of a unit-diameter disk is π/4, it follows that

C_u < A/(π/4) = (9s²/π)/(π/4) = 36s²/π².   (3.7)

Combining Inequalities 3.7 and 3.6 yields the following inequality, which proves the lemma.

C_u/C_h < (36s²/π²)/(3s²) = 12/π². ∎

Corollary 3.40.1 In any realization, no sufficiently large cage can contain a subset of flippers whose total nominal size exceeds 12/π².

Note that this corollary does not exploit the fact that a hexagonal disk packing is asymptotically optimal [CS88]. By virtue of this corollary, and our earlier requirement for hexagonal realizations, the main building block for the ensuing construction is a cage whose size (number of vertices) is a sufficiently large multiple of six. To emphasize the asymptotic nature of these cages, the rest of this NP-hardness reduction abbreviates cages and flippers by the schematics shown previously in Figure 3.23.

3.4.7 The Skeleton of the Graph G_C

The following lemma, which applies to arbitrary disk intersection graphs as well as coin graphs, shows that the inside of a realization of a cycle is well defined.

Lemma 3.41 A realization of a vertex-induced cycle is a plane graph.

Proof: If the realization were not a plane graph, then two line segments would cross somewhere. Then, by Lemma 3.3, the endpoints of the segments would induce a triangle in the graph. This contradicts the fact that cages are triangle-free. ∎

The skeleton of the graph under construction will be composed of many cages "hooked together", as presented already in Figure 3.25. Lemma 3.3 and its corollary guarantee that cages do not cross or overlap each other in any realization. In addition, the attached flippers and the capacity of the cages keep cages from containing each other. This connection strategy therefore ensures that, in any realization, the clockwise order of the edges about every cage is determined by the order of the edges about any cage in the connected graph.
It is therefore useful to think of the embedding of the skeleton of the graph, that is, the cages without flipper vertices, as being invariant under all realizations. Different realizations simply allow flippers to "flip" from one cage into another. Flippers must lie in one of their two incident cages, by virtue of their construction and Lemma 3.3. The component definitions which follow will clarify this construction.

The properties described above allow realizations of unit disk graphs to simulate the orientability of the grid drawing, depending on which cage encloses which flippers. In addition, we must ensure that the skeleton of the graph, that is, the graph induced on the cages, is always realizable. In particular, the components must "fit together". For example, they must stretch between two grid vertices (from Figure 3.20). To achieve this requirement, each component joins together several cages. For example, the forthcoming wire component consists of five cages. The number five comes from the realization scheme described in Section 3.4.9.

3.4.8 The Components of the Graph G_C

Each of the graph components has two or more terminals, labelled T, B, L, or R, that correspond to the grid drawing terminals. Each terminal is a three-vertex corner of a cage with an attached full flipper. For example, the three-vertex corners belonging to terminals L and R are circled in Figure 3.31 below. To construct G_C, connect every pair of adjacent⁶ components together by identifying the appropriate terminals. Terminal T (resp. B, L, R) should be identified with an adjacent B (resp. T, R, L) terminal. You will find that the example presented in Section 3.4.9 clarifies this process.

⁶Two graph components are adjacent if the associated grid drawing components are adjacent.

Wires

The graph in Figure 3.31 implements the horizontal wire component as a string of five cages joined by corners (refer back to Figure 3.25).

Figure 3.31: The horizontal wire component, drawn with both terminals oriented East. The terminals are indicated by small circles. Remember that the terminals include the flippers. The grid square enclosing the wire component is drawn as a dotted parallelogram.

Say that the T (resp. B, L, R) terminal is oriented South (resp. North, East, West) if its flipper is embedded inside its cage, and that it is oriented North (resp. South, West, East) otherwise. The orientation of the terminals shown in Figure 3.31, or in any of the following figures, is not the only possible orientation. However, the properties of the cages ensure that, for any realization, if the L terminal is oriented East, then its R terminal is also oriented East. Similarly, if the R terminal is oriented West, then its L terminal is also oriented West. This ensures that at least one terminal is directed away from the wire. Note that it is also possible for the L and R terminals to be simultaneously oriented West and East respectively.

The vertical wire component is shown in Figure 3.32. Note that although the realizations are different, and the terminals are labelled differently, the (unlabelled) horizontal and vertical wire component subgraphs are isomorphic. Again, for any realization, if either terminal is oriented inwards, then the other one must be oriented outwards.

Figure 3.32: The vertical wire component, drawn with both terminals oriented North.

Corners

As can be seen from Figure 3.33, there are four kinds of corners, and their graphs are also strings of five cages.
Again, the properties of cages ensure that at least one terminal is directed away from each corner in any realization.

Figure 3.33: The corner components. They are drawn with the L terminals oriented West, the R terminals oriented East, the T terminals oriented South, and the B terminals oriented North. Note that not all four are isomorphic, since on two of them, some cages are connected but do not share flippers.

Crossovers

Of all components used in this reduction, the crossover component is the most difficult to understand. Its schematic is shown in Figure 3.34. We need to convince ourselves that paired terminals (that is, T and B, or L and R) cannot be simultaneously oriented towards the component, but that all other orientations are possible.

Figure 3.34: The crossover component. The T and B terminals are drawn oriented North and the L and R terminals are drawn oriented West. The ring of cages (a, b, c, d, e, f) is drawn oriented counterclockwise.

The heart of the crossover is the ring of cages labelled (a, b, c, d, e, f) in Figure 3.34. The flippers associated with a set of cages are those that are shared by two cages of the set. In any realization, the flippers associated with the ring can take on one of two orientations: counterclockwise, as shown in Figure 3.34, or clockwise. This is because the number (6) of cages and associated flippers is the same. By Corollary 3.40.1, no two flippers can be in the same cage. Consequently, every cage of the ring must contain exactly one of its shared flippers. Note that this means that the three-quarters flipper in the central cage k must remain in this cage for all realizations. Figure 3.34 emphasizes this fact by showing two connections for this flipper, but of course one will suffice.

The crossover allows two channels to cross without interference. The horizontal channel transmits its information via the horizontal chain of cages, and the diagonal channel transmits its information via the ring.

Let us examine the horizontal channel in action. Since cages b, k, and e must each contain a three-quarters flipper, none of them can contain two additional quarter flippers, by Corollary 3.40.1. This means that the quarter flippers associated with the horizontal cages (h, b, k, e, j) can take on one of two orientations, west or east. The west orientation, shown in Figure 3.34, displaces the full flipper from cage h, and the east orientation displaces the full flipper from cage j. Conversely, if terminal R is oriented West, then it forces the west orientation on (h, b, k, e, j), and if terminal L is oriented East, then it forces the east orientation on (h, b, k, e, j).

Let us now examine the diagonal channel in action. If the ring (a, b, c, d, e, f) is oriented counterclockwise as shown, then the full flipper in cage a displaces the quarter flipper into cage g. If it is oriented clockwise, then the three-quarters flipper displaces the half flipper from cage d. Conversely, if terminal B is oriented North as shown, then it forces the counterclockwise orientation on the ring, and if terminal T is oriented South, then it forces the clockwise orientation on the ring.

Finally, the orientation of the ring, and the orientation of the horizontal channel, are independent.
To see this, create the four possible orientations of the crossover by "swivelling" the flippers on their articulation points in Figure 3.34. Figure 3.39 later in this section shows all four orientations in use.

Literals

The positive literal component is shown in Figure 3.35. Its heart is the central cage, drawn containing two quarter flippers and a half flipper. This component's key property is that, for any realization, if the R terminal is oriented West, so that the terminal's flipper is in the literal's cage, then a full flipper must also be in the central cage, which means that all other terminals must be directed away from the component.

Figure 3.35: The positive literal component. The L and R terminals are drawn oriented East, the T terminal is drawn oriented South, and the B terminal is drawn oriented North.

A symmetric property holds for the negative literal component, shown in Figure 3.36. Note that the (unlabelled) positive and negative literal subgraphs are isomorphic. This is easier to see if you rotate one of the diagrams by 180 degrees.

Figure 3.36: The negative literal component. The L and R terminals are drawn oriented East, the T terminal is drawn oriented North, and the B terminal is drawn oriented South.

Clauses

The clause testing component is shown in Figure 3.37. Its heart is again the central cage, the one containing two half flippers in the figure. Since this cage can contain at most two half flippers by Corollary 3.40.1, it must be that at least one terminal is oriented away from it.

Figure 3.37: The clause component, drawn with the T terminal oriented South, the B terminal oriented North, and the R terminal oriented East.

If a terminal is not used in the grid drawing, that is, if there is no adjacent wire, corner, or crossover, then "cap" the corresponding graph component terminal by adding a small ring of cages with full flippers. The T terminal in Figure 3.38 is capped in this way. This cap ensures that one of the full flippers from the ring occupies the terminal's cage. This full cage acts as if the terminal were oriented towards the component, thereby forcing one of the remaining terminals to be oriented away⁷.

3.4.9 Building and Realizing G_C

Now that all components have been described, we can present an example showing how components are connected. Recall that to connect two components, we simply identify the terminals at the desired connection point. Figure 3.39 shows how components are connected to simulate the skewed grid drawing from Figure 3.22. Coincidentally, the drawing in Figure 3.39 is oriented the same way as Figure 3.22, but remember that the object under construction is a graph, not a realization. To reiterate: we used realizations to specify graph components, but we need not appeal to realizations at all when connecting components.

Lemma 3.42 The graph G_C has a realization if and only if the underlying grid drawing is orientable.

Proof: Given a realization, orient the grid drawing terminals exactly as the realization terminals. The definition of orientation for realization terminals ensures that conditions draw1 and draw2 are met. The nature of the wire, corner, crossover, and clause components ensures that condition draw3 is met. Finally, the nature of the literal components ensures that condition draw4 is met. That is, the grid drawing is orientable if G_C has a realization.
Now assume that the terminals of the grid drawing have been oriented. From this we will construct a realization for G_C. For each component in the grid drawing, construct the corresponding component realization, oriented exactly as the grid drawing component, centered at the origin. The discussion accompanying the description of each graph component above implies that this can always be done. Now magnify the skewed grid such that the distance between grid coordinates is five hexagons. Then translate each oriented component realization to its grid location. This procedure results in duplicated disks (points) corresponding to terminals. Remove one set; they were only duplicated for clarity of exposition of each component. ∎

⁷Note that we could have simply "pinned" the unused flipper into its cage by dimpling in all corners and bending the cage such that all corners contact the flipper. It may not be possible to bend the cage for other potential uses of this reduction, for example, if we further restricted all cage disks to lie on the hexagonal grid.

Figure 3.39: How to connect components. This "landscaped" figure (view it from the right) shows how to simulate the portion of the skewed grid drawing shown in Figure 3.22. There is a wire with both terminals directed outwards in Figure 3.21 and Figure 3.22. Do you see the empty cage corresponding to this wire?

Lemma 3.43 SATISFIABILITY can be reduced to ρ-BOUNDED COIN GRAPH RECOGNITION in polynomial time, for any fixed ρ.

Proof: The grid drawing has area (6|U| + 1) × (3|C| + 2), and therefore at most this many components, since a unit square encloses each component. Each component of G_C has a small constant number of cages and flippers. Each cage and flipper, on the other hand, has some, possibly large, number of vertices and edges. But this number depends only on the ratio of disk diameters ρ, and not on the size of the SATISFIABILITY instance. Since the size of the SATISFIABILITY instance is a polynomial function of |U| and |C|, the entire recognition instance can be built in polynomial time. ∎

3.4.10 Extending Coin Graph Recognition

Disk Intersection Graphs

We can easily modify the reduction for ρ-BOUNDED COIN GRAPH RECOGNITION to ρ-BOUNDED DISK GRAPH RECOGNITION as follows. First note that the cages are already suitable disk intersection graphs, in that their realizations cannot contain more area by virtue of their disks having the freedom to overlap. The flippers pose only slightly more difficulty. Clearly, the realizations of the current flippers may occupy less area by allowing their disks to overlap, thereby possibly leaving room for more flippers in a cage than we would like. The solution is to make the flippers out of independent vertices.
The expanded flippers in the cage on the right have been "bound together" with the addition of many new disks. Although these new disks have been drawn as an independent set, there is no need for them to be independent. Clearly, the number of disks in such an expanded flipper still approaches its asymp-totic portion of the cage's capacity. We still need to hold the flipper together somehow, to ensure that it is completely embedded in some cage. We can easily accomplish this task by adding as many more disks to the flipper as we like, being careful not to intersect the cage nor any other flippers in the construction, as shown in Figure 3.40.(b). This Chapter 3. Unit Disk Graphs 139 modified reduction proves the following theorem. Theorem 3 . 4 4 p-BOUNDED DISK GRAPH RECOGNITION is NP-hard for any fixed value p > 1. Coro l l a ry 3 . 4 4 . 1 UNIT DISK GRAPH RECOGNITION is NP-hard. Manha t t an and Chessboard Me t r i c s Disks are bounded by squares under the Manhattan (L\) and the chessboard (Loo) met-rics. We can easily modify the reduction for p -BOUNDED COIN G R A P H R E C O G N I -TION (and the subsequent relaxation to arbitrary intersection) to p -BOUNDED S Q U A R E G R A P H R E C O G N I T I O N by skewing the grid differently and using cages that are di-amonds rather than hexagons. To illustrate, assume that the squares are aligned with the coordinate axes. The new reduction is identical to p -BOUNDED COIN G R A P H R E C O G N I T I O N up to the point that the reduction draws the bipartite graph on the square grid. Cages are diamonds, and are still joined at the corners by shared vertex triples, as shown in Figure 3.41. Figure 3.41: Joining square cages. Compare with Figure 3.25. Chapter 3. Unit Disk Graphs 140 The unit disks (squares) tile the plane in a square packing; this packing also constructs solid flippers leaving no internal gaps. The components are necessarily con-structed differently, but the same principles apply. The modified reduction establishes the following theorem. Theorem 3.45 p-BOUNDED COIN GRAPH RECOGNITION is NP-hard for any fixed value p > 1 under the L\, L2, and metrics. The extension to disk intersection graphs used for Theorem 3.44 applies here also. Theorem 3.46 p-BOUNDED DISK GRAPH RECOGNITION is NP-hard for any fixed value p > 1 under the L\, L2, and metrics. Coro l l a ry 3.46.1 p-BOUNDED SQUARE GRAPH RECOGNITION is NP-hard for any fixed value p>l. Coro l l a ry 3.46.2 UNIT SQUARE GRAPH RECOGNITION is NP-hard. Therefore, the smallest dimension K, for which a graph G is the intersection graph of /^-dimensional unit hypercubes is called the cubicity of the graph [Rob68b]. This definition leads immediately to the following problem. K - C U B I C I T Y I N S T A N C E : Graph G = (V, E), positive integer K. Q U E S T I O N : Does G have cubicity at most KI The complexity of 2-CUBICITY was a long-standing open question [Rob68b, Coz92]. It follows from Corollary 3.46.2 that A"-CUBICITY is NP-hard, even for K = 2. The Introduction mentions that graphs with cubicity 1 are called indifference graphs and can be recognized in polynomial time. That is, A ' - C U B I C I T Y is solvable in polynomial time for K =1. Also, Yannakakis [Yan82] showed that 3-CUBICITY is NP-hard. Chapter 3. Unit Disk Graphs 141 Corollary 3.46.3 2-CUBICITY is NP-hard. Conjecture 3.47 K-CUBICITY is NP-hard for all fixed K greater than 1. Three Dimensions The reduction can also be modified to three dimensions. 
A k-dimensional ball is the set of points within some radius of a point (the ball's centre) in k-dimensional Euclidean space. Hence disks are two-dimensional balls, and coins are two-dimensional interior-disjoint balls. The basic building blocks for the three-dimensional reduction are again flippers in cages, but this time the cages are three-dimensional regular octahedra. Again, derive an upper bound on the capacity of a cage by a spherical volume argument. Derive a lower bound on its octahedral capacity by packing the interior in a hexagonal lattice, which is a rotated face-centred cubic lattice. Note that it does not matter whether face-centred cubic is "optimal" (though it is the densest lattice packing in three dimensions [CS88]), merely that its capacity forms a small constant ratio with the spherical-volume upper bound.

Embed the SATISFIABILITY graph in the three-dimensional triangular grid as follows. Begin by embedding the clauses and variables in a two-dimensional square grid, as before. Think of this as a horizontal layer, with z = 1. Now, add more horizontal layers to the grid for a total of 3|C| layers, so that each occurrence of a literal in a clause has its own layer. Draw the bipartite graph corresponding to SATISFIABILITY on the three-dimensional grid by routing a literal to a clause. First route from the literal to the layer corresponding to the clause, using the third dimension. Then route over to the clause's (x, y) coordinates on its dedicated grid layer. Conclude by routing to the clause using the third dimension. In this way, no wires interfere, so the construction simulates a natural layout of the bipartite SATISFIABILITY graph without using crossover components. Next, derive a tetrahedral three-dimensional grid by skewing the square grid. Finally, simulate the components with the octahedral cages. Note that these cages, unlike their two-dimensional analogues, have "holes in the mesh", which may allow flippers to "escape". Patch these holes with smaller spheres placed into the holes, possibly leaving some much smaller holes. Simply continue this process until all holes are smaller than the unit spheres.

Theorem 3.48 In two or three dimensions, ρ-BOUNDED BALL GRAPH RECOGNITION and ρ-BOUNDED TOUCHING-BALL GRAPH RECOGNITION are NP-hard for any fixed ρ.

Conjecture 3.49 ρ-BOUNDED BALL GRAPH RECOGNITION and ρ-BOUNDED TOUCHING-BALL GRAPH RECOGNITION are NP-hard for all fixed ρ and for all fixed K greater than 1.

Two unit balls intersect if and only if their boundaries, which are spheres, intersect. Therefore, the smallest dimension K for which a graph G is the intersection graph of K-dimensional unit balls is called the sphericity of the graph. This definition leads immediately to the following problem.

K-SPHERICITY
INSTANCE: Graph G = (V, E), positive integer K.
QUESTION: Does G have sphericity at most K?

The complexity of 3-SPHERICITY was an open question due to Havel [Hav82a, Hav82b, HKC83], arising from studies of molecular conformation. It follows from Theorem 3.48 that K-SPHERICITY is NP-hard, even for K = 2 or K = 3. Again, recall from the Introduction that graphs with sphericity 1 are called indifference graphs and can be recognized in polynomial time. That is, K-SPHERICITY is solvable in polynomial time for K = 1.

Theorem 3.50 K-SPHERICITY is NP-hard, even for K = 2 and K = 3.

Conjecture 3.51 K-SPHERICITY is NP-hard for all fixed K greater than 1.
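Operationally, a graph has sphericity at most K exactly when its vertices can be placed in R^K so that two vertices are adjacent if and only if their points lie within a fixed distance (for unit-diameter balls, distance at most 1). The following Python sketch, whose function and variable names are illustrative assumptions, verifies a proposed placement; it checks a candidate realization rather than finding one, since finding one is the NP-hard part.

    import math
    from itertools import combinations

    def is_unit_ball_realization(points, edges):
        """Check whether placing vertex v at points[v] (a tuple in R^K) realizes the
        graph with the given edge set as an intersection graph of unit-diameter balls:
        u and v are adjacent iff dist(points[u], points[v]) <= 1."""
        E = {frozenset(e) for e in edges}
        for u, v in combinations(points, 2):
            d = math.dist(points[u], points[v])
            if (d <= 1.0) != (frozenset((u, v)) in E):
                return False
        return True

    # Example: a path on three vertices realized on the line (sphericity 1).
    pts = {"a": (0.0,), "b": (0.9,), "c": (1.8,)}
    print(is_unit_ball_realization(pts, [("a", "b"), ("b", "c")]))   # True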
3.4.11 Miscellaneous Properties

The reduction embeds the bipartite graph corresponding to SATISFIABILITY in a very straightforward manner, making it easy for the reader to recall the structure of the drawing. This clarity comes at a small price, however, namely the need for crossover components. We could have constructed a reduction that avoids them. Kratochvíl [Kra94] has recently introduced a special case of SATISFIABILITY, called 4-BOUNDED PLANAR 3-CONNECTED 3-SAT, and shown that it is NP-complete. In this special case, every clause contains exactly three variables, every variable appears in at most four clauses, and the bipartite graph of clauses to variables is planar and 3-connected. We could have reduced this new problem to ρ-BOUNDED COIN GRAPHS, as follows. Since the bipartite graph is planar and has degree at most four, Valiant's procedure (see Section 3.3.1) will embed it in the square grid, with no crossovers, in polynomial time. The rest of the reduction would be nearly identical to the one presented here, differing only in that it does not require crossovers, and that the literal and clause components may have to have different shapes to accommodate the different incident edges.

We already noted that coin graphs are planar. On the other hand, no such restriction holds for disk graphs. However, note that an instance of ρ-BOUNDED DISK GRAPH RECOGNITION constructed by the reduction in this chapter is planar. This is true whether or not it has a realization, since it is always possible to embed (in the traditional sense of drawing a graph on the plane) all flippers inside their incident cages without crossing edges by making them sufficiently small. That is, ρ-BOUNDED DISK GRAPH RECOGNITION remains NP-hard even if the graph is planar.

An instance of ρ-BOUNDED COIN GRAPH RECOGNITION constructed by the reduction has a 3-dimensional realization. To see this, embed the cages in the (horizontal) plane, as usual. Then embed each flipper directly above its incident vertex on the cage using the third dimension. Align all the flippers to ensure that flippers on a common cage remain independent of one another. That is, pick some plane normal to the horizontal and, when embedding a flipper, make all of its points equidistant from that plane. Since the cages are all spaced by the large hexagonal grid, the flippers from different cages will be, too, and will consequently be independent. This implies, for example, that 2-SPHERICITY remains NP-hard even if the graph has sphericity at most 3.

We can express COIN GRAPH RECOGNITION and DISK GRAPH RECOGNITION as existentially-quantified formulae in the first-order theory of the reals (cf. [Can88]) as follows. Let G = (V, E) be a graph where V = {1, 2, ..., n}. Say that {f_v : v ∈ V} ⊆ R² is a set of (possible) disk locations. Similarly, say that {r_v : v ∈ V} ⊆ R is a set of (possible) disk radii. Then G is a coin graph if and only if

∃f_1 ∃f_2 ... ∃f_n ∃r_1 ∃r_2 ... ∃r_n  P_coin(f_1, f_2, ..., f_n, r_1, r_2, ..., r_n) ∧ P_Ē(f_1, f_2, ..., f_n, r_1, r_2, ..., r_n)

where

P_coin(f_1, f_2, ..., f_n, r_1, r_2, ..., r_n) = ⋀_{(u,v) ∈ E} ‖f_u − f_v‖ = r_u + r_v

and

P_Ē(f_1, f_2, ..., f_n, r_1, r_2, ..., r_n) = ⋀_{(u,v) ∈ Ē} ‖f_u − f_v‖ > r_u + r_v.   (3.8)

Since the existential theory of the reals is decidable in PSPACE [Can88], it follows that COIN GRAPH RECOGNITION is in PSPACE⁸.

⁸This should come as no surprise, since coin graphs can be recognized in linear time, as mentioned earlier.
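For a given assignment of locations and radii, the predicates P_coin and P_Ē are just finitely many distance comparisons; deciding whether any such assignment exists is what requires the existential theory of the reals. The following Python sketch, with illustrative names and a small tolerance for the equality test since the inputs are floating point, evaluates the two predicates (and, optionally, the ρ-boundedness condition) for a proposed coin realization.

    import math
    from itertools import combinations

    def is_coin_realization(f, r, edges, rho=None, eps=1e-9):
        """Evaluate P_coin and P_E-bar (Equation 3.8) for locations f[v] and radii r[v]:
        adjacent coins must touch, and non-adjacent coins must be strictly apart.
        If rho is given, also check the boundedness condition 1 <= 2 r(v) <= rho."""
        E = {frozenset(e) for e in edges}
        if rho is not None and not all(1.0 - eps <= 2 * r[v] <= rho + eps for v in r):
            return False
        for u, v in combinations(f, 2):
            d = math.dist(f[u], f[v])
            s = r[u] + r[v]
            if frozenset((u, v)) in E:
                if abs(d - s) > eps:          # P_coin: adjacent coins touch
                    return False
            elif d <= s + eps:                # P_E-bar: non-adjacent coins stay apart
                return False
        return True

    # Example: a path of three touching unit-diameter coins on a line.
    f = {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (2.0, 0.0)}
    r = {1: 0.5, 2: 0.5, 3: 0.5}
    print(is_coin_realization(f, r, [(1, 2), (2, 3)], rho=1.0))   # True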
^8 This should come as no surprise, since coin graphs can be recognized in linear time, as mentioned earlier.

We can add a predicate $P_p$ to constrain the radii of the disks. In particular, given a value $p$, let

$$P_p(r_1, r_2, \ldots, r_n) = \bigwedge_{v \in V} 1 \le r_v \le p. \tag{3.9}$$

Then G is a p-bounded coin graph if and only if

$$\exists f_1 \exists f_2 \cdots \exists f_n \exists r_1 \exists r_2 \cdots \exists r_n\;\; P_{coin}(f_1, \ldots, f_n, r_1, \ldots, r_n) \wedge P_{\overline{E}}(f_1, \ldots, f_n, r_1, \ldots, r_n) \wedge P_p(r_1, \ldots, r_n).$$

It follows that p-BOUNDED COIN GRAPH RECOGNITION is in PSPACE. Similarly, G is a disk graph if and only if

$$\exists f_1 \exists f_2 \cdots \exists f_n \exists r_1 \exists r_2 \cdots \exists r_n\;\; P_{disk}(f_1, \ldots, f_n, r_1, \ldots, r_n) \wedge P_{\overline{E}}(f_1, \ldots, f_n, r_1, \ldots, r_n)$$

where

$$P_{disk}(f_1, \ldots, f_n, r_1, \ldots, r_n) = \bigwedge_{(u,v) \in E} \|f_u - f_v\| \le r_u + r_v$$

and $P_{\overline{E}}$ is defined by Equation 3.8. It follows that DISK GRAPH RECOGNITION is in PSPACE. Again, we can add predicate $P_p$ (Equation 3.9). Then G is a p-bounded disk graph if and only if

$$\exists f_1 \exists f_2 \cdots \exists f_n \exists r_1 \exists r_2 \cdots \exists r_n\;\; P_{disk}(f_1, \ldots, f_n, r_1, \ldots, r_n) \wedge P_{\overline{E}}(f_1, \ldots, f_n, r_1, \ldots, r_n) \wedge P_p(r_1, \ldots, r_n).$$

It follows that p-BOUNDED DISK GRAPH RECOGNITION is in PSPACE.

Chapter 4

Cocomparability Graphs

Every r-strip graph is a cocomparability graph for all r ∈ [0, √3/2], as we established in Theorem 3.7. Therefore, the class of strip graphs naturally inherits the properties of, as well as any algorithms on, the class of cocomparability graphs. This chapter develops (in Section 4.2) polynomial time algorithms for several domination problems on cocomparability graphs. This chapter also establishes properties of cocomparability graphs that will be used later in the thesis to develop algorithms on strip graphs and two-level graphs. In particular, transitive orientations of the nonedges of strip graphs play a significant role in their characterization, as we will see in Chapter 5 and Chapter 6. Section 4.3 therefore explores the possible transitive orientations for the complements of cocomparability graphs.

The following brief introduction characterizes cocomparability graphs in terms of a linear order on the vertices of a graph. It also presents an algorithm for finding such an order. This characterization leads to efficient algorithms for several domination problems, which form the subject of Section 4.2. These algorithms, of course, are applicable to strip graphs and two-level graphs.

4.1 Introduction

Recall from Section 2.3 that a comparability graph is an undirected graph that has a transitive orientation, and that a graph is a cocomparability graph if its nonedges are transitively orientable (that is, if it is the complement of a comparability graph). Cocomparability graphs can be characterized in terms of possible linear orders on their vertices. In this chapter, we will see how this characterization leads to efficient algorithms.

Definition 4.1 A spanning order (V, <) of a graph G = (V, E) is a linear order on the vertices V of G such that, for any three vertices u < v < w, if u and w are adjacent, then v is adjacent to either u or w (or both). |

Figure 4.1 shows a small cocomparability graph and a spanning order on its vertices.

Figure 4.1: A cocomparability graph on nine vertices. The vertices are labelled in spanning order.
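Definition 4.1 translates directly into a brute-force test. The following sketch (illustrative Python; a naive cubic check on a small hypothetical example, not the recognition machinery discussed later) verifies that a candidate ordering is a spanning order.

```python
def is_spanning_order(order, edges):
    """Check Definition 4.1: for every triple u < v < w (in the given order)
    with u and w adjacent, v must be adjacent to u or to w."""
    adj = {frozenset(e) for e in edges}
    n = len(order)
    for i in range(n):
        for k in range(i + 2, n):
            if frozenset((order[i], order[k])) not in adj:
                continue
            for j in range(i + 1, k):
                v = order[j]
                if (frozenset((order[i], v)) not in adj
                        and frozenset((v, order[k])) not in adj):
                    return False
    return True

# Hypothetical example: a single edge (a, c) plus an isolated vertex b.
E = [("a", "c")]
assert is_spanning_order(["a", "c", "b"], E)       # a valid spanning order
assert not is_spanning_order(["a", "b", "c"], E)   # b lies between the adjacent pair a, c
```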
We will see that the vertices of every cocomparability graph can be labelled in span-ning order (Lemma 4.2), and that every graph whose vertices can be labelled in spanning order is a cocomparability graph (Theorem 4.3). Damaschke [Dam92] says this is well-known, and he states the result without proof. Nevertheless, your intuition may be strengthened by working through the following simple proofs. In essence, Theorem 4.3 says that the restriction of a spanning order to the complementary edges is a transitive orientation. L e m m a 4.2 ([Dam92]) Let G = (V,E) be a cocomparability graph. Every linear ex-tension of a transitive orientation of the complement of G is a spanning order of G. Proof: Let G = (V, E) be a cocomparability graph. That is, its complement G = (V, E) is a comparability graph. Let G = (V, E) be a transitive orientation of G, and consider Chapter 4. Cocomparability Graphs 148 any linear extension (V, <) of G. This is a spanning order of G. To see this, consider any three vertices u < v < w such that (u,w) G E but (u,v) fc E and (v,w) fc E. Then, by definition, (u,v) G E and (v,w) G E. By transitivity, (u,w) G E, and therefore (u,w) fc E, a contradiction. | Theorem 4.3 ([Dam92]) A graph is a cocomparability graph if and only if it has a spanning order. Proof: Every cocomparability graph has a spanning order by Lemma 4.2. Conversely, suppose a graph G = (V, E) has a spanning order. Construct a directed graph G = (V,E) such that (u,v) G E if and only if (u,v) fc E and u < v. This graph is a transitive orientation of G, for if (u,v) G E and (v,w) G E, then u < v < w and (u,v) fc E and (v,w) fc E. Therefore u and w could not be adjacent; otherwise —* the spanning ordering property would be violated. That is, (u,w) EE. So G is a comparability graph, and G is a cocomparability graph. | Defini t ion 4.4 A linearly-ordered graph G = (V,E,<) is a graph (V,E) and a linear order (V, <). A spanning-ordered graph G = (V,E,<) is a (necessarily cocomparability) graph (V, E) and a spanning order (V, <). | The algorithm below generates a spanning-ordered graph (several of the following sections require this) from a cocomparability graph. This algorithm may return a linearly-ordered graph even if G is not a cocomparability graph. This property allows us to use this linearly ordered graph, for example in the subsequent Steiner set algorithm (Section 4.2.5), to avoid having to recognize cocomparability graphs, a step that would add G(M(V)) time [Spi85]. ' Theorem 4.5 Given an arbitrary graph G, Algorithm OCC returns a spanning-ordered cocomparability graph if G is a cocomparability graph, or it returns a linearly-ordered Chapter 4. Cocomparability Graphs 149 Table 4.1: Algorithm: OCC(G) [spanning Ordered Cocomparability Graph] Input: A graph G = (V, E). Output: A linearly-ordered graph, or a message stating that G is not a cocomparability graph. 1 G f- the complement of G. 2 G <r- T R A N S I T I V E L Y - O R I E N T ( G ) 3 if the directed graph G does not contain a cycle, 4 then order the vertices of G by topologically sorting G. 5 return the now linearly-ordered graph G = (V, E, <). 6 else return " G is not a cocomparability graph." graph, or it prints a message stating that G is not a cocomparability graph, in 0(V2) time. Proof: Implement Step 2 with Spinrad's transitive orientation algorithm [Spi85], which will orient any undirected graph. The resulting directed graph is transitive if and only if G is a cocomparability graph [Spi85]. 
This graph is clearly not transitive if it contains a cycle; this observation justifies Step 3. If G is a cocomparability graph, the linearly-ordered graph in Step 5 is a spanning-ordered graph by Lemma 4.2. It is straightforward to implement Step 1 to run in 0(V2) time. Spinrad's algo-rithm also takes 0(V2) time [Spi85]. Step 4 can be implemented to run in 0(V + E) time ([CLR90] pages 485-488) using depth first search. Step 4 can simultaneously test for cycles in G, since such a cycle exists if and only if depth first search discovers a "back edge" (one directed to an already-visited vertex). This obviates the need for an explicit Step 3. I A graph is said to be chordal if it does not contain any induced cycles with four or more edges, that is, every such cycle has a "chord". Neither comparability graphs Chapter 4. Cocomparability Graphs 150 nor cocomparability graphs are necessarily chordal. In fact, the chordal cocomparability graphs are exactly the interval graphs [GH64]. However, cocomparability graphs are in some sense almost chordal. This is a simple consequence of spanning orders. Theorem 4.6 ([Gal67]) Cocomparability graphs do not have induced cycles with five or more edges. Proof: Let G = (V, E) be a cocomparability graph, and let C be any cycle with five or more edges, and therefore at least five vertices, in G. Consider now a spanning order (V, <) , which must exist for G by Theorem 4.3. Let s be the least vertex in C in this order, and t the greatest vertex in C. Then C forms two paths from s to t, as shown in Figure 4.2. There must be three vertices a < b < c such that b lies on one path, (a, c) b b b Figure 4.2: Every 5-cycle in a cocomparability graph has a chord: three cases in left-to-right order. is an edge on the other path, and b is not adjacent to a or c (that is, (a,b) E and (6, c) ^ E). This contradicts the properties of a spanning order. | A related idea is that of a dominating path. A path P = (s,...,t) in a graph G = (V, E) is dominating if every graph vertex is either equal or adjacent to a path vertex. Lemma 4.7 A path P in a spanning-ordered cocomparability graph dominates all vertices between the least vertex in P and the greatest vertex in P. Chapter 4. Cocomparability Graphs 151 Proof: A path in any graph dominates the vertices on the path. So let v be a vertex in G = (V, E, <), not in the path P , between the least vertex / and the greatest vertex t in P (that is, I < v < t). Then there exists some edge (u,w) of the path such that l<u<v<w<t. By the properties of the spanning order, v is adjacent to either u or w, so the path dominates v. | Corollary 4.7.1 A path in a spanning-ordered cocomparability graph dominates all vertices between its endpoints. 4.2 Algorithms for Dominating and Steiner Set Problems This section describes polynomial time algorithms for finding optimal dominating sets and Steiner sets in cocomparability graphs. The following table of contents lists the subsections in which each algorithm is developed. An algorithm for this set is presented in Subsection minimum cardinality connected dominating set (MCCDS) §4.2.1 minimum cardinality dominating set (MCDS) §4.2.2 minimum cardinality total dominating set (MCTDS) §4.2.3 minimum weight independent dominating set (MWIDS) §4.2.4 minimum weight Steiner set (MWSS) §4.2.5 Until recently [KS93], it was not known if these problems on cocomparability graphs were solvable in polynomial time. 
Every algorithm in this section has a better run time complexity, and is designed using a very different method, than the corresponding dynamic programming algorithm of Kratsch and Stewart [KS93]. Subsequently, Daniel Liang [Lia94] has achieved the same run time complexities as the algorithms in this section; his algorithms use dynamic programming and are similar to those of Kratsch and Stewart. A joint paper with Liang is in preparation. Chapter 4. Cocomparability Graphs 152 Table 4.2 summarizes the results of this section and compares the running times of algorithms for interval graphs and permutation graphs, both of which are subclasses of cocomparability graphs. Recall that interval graphs are exactly the chordal cocompara-bility graphs. Dominating sets for interval graphs have received considerable attention [RR88b] and, in fact, linear algorithms for several dominating set problems are known. These algorithms, however, also exploit additional properties of interval graphs, such as chordality. The complexity of these problems on other kinds of perfect graphs appear in the literature and are succinctly surveyed by Corneil and Stewart [CS90b]. A particularly relevant observation from this survey is that all of these problems are NP-complete for comparability graphs, the complements of cocomparability graphs. Table 4.2 assumes that a spanning order is available for the cocomparability graph algorithms, and that a defining permutation is available for the permutation graph al-gorithms. In both cases, these can be created in 0(V2) time (see [Spi85] and Algo-rithm OCC in Table 4.1), but notice that doing so would introduce an 0(V2) term to some of these algorithms. Incidentally, we may assume that our input graphs are con-nected, otherwise we can run the algorithms on the connected components and take the union of the solutions. In particular, this means |V| = 0(E). Problem \ Graph Cocomparability Permutation Interval MCCDS MCDS MCTDS MWIDS MWSS 0(VE) 0(V + E) [AR92] 0(V + E) [RR88b] 0(VE2) 0(V log log V) [TH90] 0(V + E) [RR88b] 0(VE2) 0(V2 + VE) [CS90b] 0(V + E) [RR88b] 0(V2-376) 0(V2) [BK87] 0(V + E) [RR88b] 0(V log V + E) 0(V log V + E) [AR92] Table 4.2: Complexity of Domination Problems Section 4.2.1 covers connected domination. Section 4.2.2 then develops an algorithm for finding a dominating set by exploiting the structure of a minimum cardinality con-nected dominating set. Subsequently, Section 4.2.3 reduces the problem of finding a total Chapter 4. Cocomparability Graphs 153 dominating set to that of finding a dominating set. The algorithm for finding a minimum weighted independent dominating set utilizes a new 0(M(V)) time algorithm for finding a minimum (or maximum) weight maximal clique in a comparability graph. Both algorithms are presented in Section 4.2.4. The Steiner set algorithm MWSS-CC presented in Section 4.2.5 is particularly flexible: given an arbitrary graph, the algorithm will either find a minimum weight Steiner set, or it will print a message stating that the graph is not a cocomparability graph. That is, the 0 ( M ( V ) ) time recognition step is not required. Section 4.2.6 is on the applicability of these algorithms to realizations for three well known subclasses of cocomparability graphs. It shows that the standard realizations of permutation, interval, and indifference graphs admit easily extracted spanning orders, and that the algorithms therefore run unchanged on these representations. 
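As a small illustration of this last point, consider indifference graphs, the simplest of these realizations: if each vertex is given by the position of its unit interval, then sorting the vertices by position already yields a spanning order, since a vertex lying between two adjacent vertices is within unit distance of both. The sketch below (illustrative Python, not the procedure of Section 4.2.6) extracts this order and the corresponding edge set directly from the geometric data.

```python
def indifference_spanning_order(points, closeness=1.0):
    """Spanning order for an indifference graph given its realization.

    `points` maps each vertex to the position of its unit interval's centre;
    u and v are adjacent iff |points[u] - points[v]| <= closeness.  Sorting
    by position gives a spanning order: if u < v < w in this order and u, w
    are adjacent, then v is within `closeness` of both, hence adjacent to both.
    """
    order = sorted(points, key=points.get)
    edges = [(u, v)
             for i, u in enumerate(order)
             for v in order[i + 1:]
             if abs(points[u] - points[v]) <= closeness]
    return order, edges

# Example: four transmitters on a line; only consecutive ones interfere.
order, edges = indifference_spanning_order({"p": 0.0, "q": 0.9, "r": 1.8, "s": 2.7})
# order == ['p', 'q', 'r', 's'];  edges == [('p', 'q'), ('q', 'r'), ('r', 's')]
```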
4.2.1 Connected Dominating Sets

This section describes a polynomial time algorithm for finding a minimum cardinality connected dominating set (MCCDS) in a cocomparability graph. Recall that a connected dominating set in a graph is a subset of vertices that induces a connected subgraph, and that is adjacent to every other vertex in the graph.

If a vertex in a linearly-ordered graph dominates all lesser vertices (in the linear order), refer to it as a left dominator. Similarly, refer to a vertex that dominates all greater vertices as a right dominator. Let V^-(G) denote the set of left dominators in the linearly-ordered graph G. Similarly let V^+(G) denote the set of right dominators. We will not write the graph argument if it is implicit (that is, we write V^+(G) = V^+ if G is understood). Also let V^-(S) (respectively V^+(S)) denote the set of left dominators (respectively right dominators) in the linearly-ordered graph induced by the linearly-ordered subset S of vertices V. More formally,

$$V^-(G) = \{v : (u, v) \in E(G) \text{ for all vertices } u < v\}, \quad\text{and}$$
$$V^+(G) = \{v : (u, v) \in E(G) \text{ for all vertices } u > v\}.$$

Corollary 4.7.1 implies that every path from a vertex in V^- to a vertex in V^+ forms a connected dominating set. The following lemma implies that such a path could serve as an MCCDS if an MCCDS does not have exactly two vertices. In the forthcoming material, let |P| denote the number of vertices in the set P, even if P is a path or similar structure.

Lemma 4.8 Let M be a connected dominating set in a spanning-ordered cocomparability graph G. Then there exists a path P from V^- to V^+ such that (i) |P| ≤ |M|, or (ii) |P| = 3 and |M| = 2.

Proof:
Case 1: M contains vertices u ∈ V^- and w ∈ V^+. Since M induces a connected subgraph, there is a path P from u to w using only vertices in M. Therefore |P| ≤ |M|.

Case 2: M does not contain any vertices from V^- but contains a vertex w ∈ V^+. Let u_1 be the least (in the spanning order) vertex in V. Trivially, u_1 ∈ V^- and so u_1 ∉ M. Since M is a dominating set, u_1 is adjacent to some vertex v_1 ∈ M. See Figure 4.3. Since v_1 ∉ V^-, there is a least vertex u_2 < v_1 that is not adjacent to v_1. It must be that u_2 ∈ V^- since every vertex u < u_2 is adjacent to u_2. To see this, note that such a vertex u is adjacent to v_1 since u_2 is the least vertex not adjacent to v_1. Then u < u_2 < v_1 implies that u_2 is adjacent to u according to the spanning order.

Figure 4.3: M does not contain any vertices from V^- but contains a vertex w ∈ V^+.

Again, u_2 is adjacent to some vertex v_2 ∈ M. Note that v_2 ≠ v_1 since u_2 is not adjacent to v_1. Without loss of generality, suppose that P' = (v_1, . . . , w) is the shortest path from either v_1 or v_2 to w in the subgraph induced by M (otherwise relabel the vertices by exchanging subscript labels 1 and 2). This path exists since M is connected. The path cannot contain v_2 for, if it did, the path from v_2 would be shorter. Therefore |P'| ≤ |M \ {v_2}| = |M| − 1. We can now create path P by appending u_1 to the beginning of P'. It follows that |P| = |P'| + 1 ≤ |M|.

Case 3: M contains a vertex w ∈ V^- but does not contain any vertices from V^+. This proof is symmetric to that for Case 2. Since M does not contain any vertices from V^+, we can find distinct vertices u_3 ∈ V^+ and u_4 ∈ V^+ that are adjacent to distinct vertices v_3 and v_4 in M respectively (see Figure 4.4). We can now construct a shortest path P from w to u_3 or u_4 so that, again, |P| ≤ |M|.
Case 4: M contains vertices from neither V~ nor V+. As for Case 2, we can find distinct vertices u\ € V~ and u2 £ V~ that are adjacent to distinct vertices v\ and v2 in M respectively (see Figure 4.5). And as for Case 3, we can find distinct vertices 113 £ V+ and u 4 £ V+ that are adjacent to distinct vertices U3 and v4 in M respectively (see Figure 4.5). Without loss of generality, let P' = (y\,..., v3) be Chapter 4. Cocomparability Graphs 156 Figure 4.4: M contains a vertex w £ V but does not contain any vertices from V+. Figure 4.5: M contains vertices from neither V nor V+. Chapter 4. Cocomparability Graphs 157 the shortest path, from either Vi or v2 to either v3 or v4, in the subgraph induced by M (otherwise relabel the vertices). As before, this path cannot contain v2, and it cannot contain v4 either. If Vi = u 3 , then P = (ui,vi = v3,u3) is a path in G that dominates G by the dominating properties of U i and u3 and by Lemma 4.7. Then | P | = 3. It cannot be that \M\ = 1, since otherwise the sole vertex would be in both V~ and V+. Therefore either \P\ < \M\ or | M | = 2. On the other hand, if Vi ^ u 3 , then v2 ^ v4, otherwise P' = (v2) would have been the shortest path. Therefore \P'\ < \M \ {v2,v4}\ = \M\ — 2. We can now create path P by appending Ui to the beginning of P', and u 3 to the end of P'. It follows that \P\ = \P'\ + 2 < | M | . | Figure 4.6 shows a graph with \P\ = 3 and \M\ =2 . The left-to-right order of the Figure 4.6: Lemma 4.8.(ii) A graph with | P | = 3 and \M\ = 2. vertices is clearly a spanning order. The two dark vertices form a connected dominating set, yet the shortest path from V~ to V+ has three vertices. Theorem 4.9 Let G be a connected spanning-ordered cocomparability graph, and let P be a shortest path from V~ to V+. Either P is a minimum cardinality connected dominating set, or \P\ = 3 and G is dominated by a single edge. Proof: Let P be a shortest path from V~ to V+. The path P is clearly connected, and it is dominating by the definitions of V~ and V + , and by Lemma 4.7. Chapter 4. Cocomparability Graphs 158 Now let M be a minimum cardinality connected dominating set of G. Then by Lemma 4.8, either \P\ < | M | , and therefore P is a minimum cardinality connected dominating set, or \P\ = 3 and \M\ = 2, so that G is dominated by a single edge. | Table 4.3: Algorithm: MCCDS-OCC(G) [Minimum Car dinality Connected Domi-nating Set for spanning-Ordered Cocomparability Graphs] Input: A connected spanning-ordered cocomparability j ?raph G = (V,E) Output: A minimum connected dominating set of G 1 V~ {v : v dominates all lesser vertices } 2 V+ <— {v : v dominates all greater vertices } 3 P <r- the shortest path from V~ to V+ in G 4 i f | P | = 3 5 then for all edges (u,v) G E 6 do if every vertex w G V is adjacent to u or v 7 then P <- {u,v} 8 break 9 return P. Theorem 4.10 Algorithm MCCDS-OCC computes a minimum cardinality connected dominating set of a connected spanning-ordered cocomparability graph G — (V,E, <) in 0(VE) time. Proof: The correctness of the algorithm follows immediately from Theorem 4.9. To ana-lyze the run-time complexity of the algorithm, assume that the input graph is represented by an adjacency list and adjacency matrix. Step 1 can be executed in 0(V + E) time by making use of the linear-order and the adjacency matrix. To test if a vertex is a left-dominator, we check all lesser vertices in order, stopping immediately if a vertex is not adjacent (i.e., not dominated). 
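In code, this scan might look as follows (an illustrative sketch, assuming an adjacency matrix adj and vertices 0, 1, . . . , n−1 listed in spanning order; Python's all stops at the first false value, which is exactly the early termination just described).

```python
def left_right_dominators(n, adj):
    """V- and V+ for a spanning-ordered graph on vertices 0..n-1.

    `adj[u][v]` is True iff u and v are adjacent (an adjacency matrix).
    A vertex is a left (right) dominator iff it is adjacent to every lesser
    (greater) vertex; each scan stops at the first non-neighbour found.
    """
    v_minus = [v for v in range(n)
               if all(adj[u][v] for u in range(v))]
    v_plus = [v for v in range(n)
              if all(adj[v][w] for w in range(v + 1, n))]
    return v_minus, v_plus

# Tiny example: the path 0 - 1 - 2 in spanning order.
adj = [[False, True, False],
       [True, False, True],
       [False, True, False]]
print(left_right_dominators(3, adj))   # ([0, 1], [1, 2])
```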
In this way every edge is visited at most once, and each vertex examines at most one non-adjacent Chapter 4. Cocomparability Graphs 159 Table 4.4: Algorithm: MCCDS-CC(G) [Minimum Cardinality Connected Dominat-ing Set for Cocomparability Graphs] Input: A cocomparability graph G = (V, E). Output: A minimum cardinality connected dominating set, or a message stating that no connected dominating set exists. 1 if G is connected, 2 then G <r- OCC(G) 3 return MCCDS-OCC( G ) 4 else return "No connected dominating set exists." vertex. Similarly, Step 2 can be executed in 0(V + E) time by checking all greater vertices. Step 3 can be implemented by adding a new "source" vertex s and a new "target" vertex t that is adjacent to the vertices in V~ and V+ respectively. Step 3 can then be executed in 0(V + E) time by running breadth first search starting at s. Finally, Step 6 runs in 0(V) time for a total of 0(VE) time since it is called for each edge. | Theorem 4 . 1 1 Given a cocomparability graph G, Algorithm MCCDS-CC computes a minimum cardinality connected dominating set, or prints a message stating that no connected dominating set exists, in 0(VE) time. Proof: Step 4 is correct since there is clearly no connected dominating set if G is not connected. On the other hand, if G is connected, V itself forms a connected dominating set. Step 2 correctly orders the vertices V of G by Theorem 4.5, so the set returned by Step 3 is a minimum cardinality connected dominating set by Theorem 4.10. Step 1 takes 0(V + E) using depth first search, Step 2 takes 0(V2) time by The-orem 4.5, and Step 3 takes 0(VE) time by Theorem 4.10. If G is connected, then \E\ > | V | — 1. Therefore, if the algorithm executes Steps 2 and 3, then it must be that 0(V2) = 0{VE). The total time taken is therefore 0(V + E + VE) = 0(VE). I Chapter 4. Cocomparability Graphs 160 4.2.2 Minimum Cardinality Dominating Sets We now know how to find a minimum cardinality connected dominating set in a cocompa-rability graph. On the other hand, there may be an even smaller set that dominates the graph but that does not induce a connected subgraph. This is the ordinary dominating set problem—to find a (not necessarily connected) minimum cardinality dominating set (MCDS) in a graph—which we now address. Let G = (V, E, <) be a spanning-ordered graph. To simplify the treatment of special cases, this section assumes that V has been augmented with two additional vertices s and t. These vertices 5 = 0 and t = \V\ + 1 are not adjacent to any vertices. We will find an MCDS in G by finding a shortest path in an auxiliary digraph G' = ( V ' , A ' ) , which we will refer to as a d-auxiliary graph There is a node in V for each vertex in V, and two nodes in V for each undirected edge in E. More precisely, V = VUVmUVout, where Vin = {e;„ : e £ E} and Vout = {eout • e e E}. The arc set A' depends only on the spanning-ordered graph G. However, the reader may find its presentation at this point somewhat mysterious, so we will instead present A' in conjunction with its properties. For now, let us just say that A' is the disjoint union of six sets of directed arcs A' = A0 U Ai U A2 U A3 U AA U A 5 . These sets are defined by the forthcoming Equations 4.10 through 4.15. Let M be a minimum cardinality dominating set of G. Write fc+i M= \JPU i=0 Chapter 4. Cocomparability Graphs 161 where each induced subgraph G(Pi) is a connected component of the induced subgraph G(M). Recall that N ( p ) = N(P,-, G) denotes the vertices dominated by P,- in G. 
Clearly, P_i is a minimum cardinality connected dominating set of the subgraph induced by N(P_i). By Theorem 4.9, we may assume that P_i is either a shortest path (l_i, . . . , r_i) from a left dominator l_i ∈ V^-(N(P_i)) to a right dominator r_i ∈ V^+(N(P_i)), or a single edge P_i = (l_i, r_i). We may further assume that l_i ≤ r_i in the spanning order for all paths P_i. We can clearly do so if P_i = (l_i = r_i) or if P_i = (l_i, r_i). For longer paths P_i = (l_i, . . . , r_i), we must have l_i < r_i, otherwise l_i and r_i would be adjacent by the definition of left and right dominators, and (l_i, r_i) would be a shorter path than P_i.

Furthermore, these paths do not overlap in the spanning order. That is, no vertex v ∈ P_i falls between the least and greatest vertices of any other path P_j. This is because such a vertex v would be dominated by P_j according to Lemma 4.7, contradicting the fact that G(P_i) and G(P_j) are connected components of G(M). We may therefore assume that

$$l_0 \le r_0 < l_1 \le r_1 < \cdots < l_{k+1} \le r_{k+1}$$

in the spanning order. Note that, since the vertices s and t are not adjacent to any vertices, P_0 = (l_0 = r_0 = s) and P_{k+1} = (l_{k+1} = r_{k+1} = t).

Define a paths dominator to be a set M that satisfies all of the above properties. We have established the following property of spanning-ordered graphs.

Property 4.12 Every spanning-ordered graph has a paths dominator.

We are now ready to derive the arcs of the d-auxiliary digraph. In order to do so, however, we will need the following technical lemma that explains why all vertices between two paths are dominated by the endpoints of the paths.

Lemma 4.13 Let v be a vertex between paths in a paths dominator M, that is, r_i < v < l_{i+1} for some i. Then v is adjacent to at least one endpoint of at least one of the two bracketing paths. More precisely,

1. If neither P_i nor P_{i+1} is an edge, then v is adjacent to r_i or l_{i+1}.
2. If P_{i+1} is an edge but P_i is not, then v is adjacent to r_i, l_{i+1}, or r_{i+1}.
3. If P_i is an edge but P_{i+1} is not, then v is adjacent to l_i, r_i, or l_{i+1}.
4. If both P_i and P_{i+1} are edges, then v is adjacent to l_i, r_i, l_{i+1}, or r_{i+1}.

Proof: Since M is a dominating set, the vertex v is adjacent to some vertex u ∈ M. If u ∈ P_i, then we are done since either r_i ∈ V^+(N(P_i)) or P_i = (l_i, r_i). Similarly, we are done if u ∈ P_{i+1}, since either l_{i+1} ∈ V^-(N(P_{i+1})) or P_{i+1} = (l_{i+1}, r_{i+1}). Otherwise u is in some other path P_j, so that u < r_i and (u, r_i) ∉ E, or l_{i+1} < u and (u, l_{i+1}) ∉ E, since P_i and P_j do not overlap in the spanning order. In the first case, where u < r_i < v, there is an edge between u and v, but (u, r_i) ∉ E. Therefore (r_i, v) ∈ E by the spanning order. Similarly, in the second case, (l_{i+1}, v) ∈ E. |

We want to define the arcs A' of the d-auxiliary graph G' so that for every paths dominator M, there is a path P' from s to t in G' with the same number of vertices as M. Essentially, we want P' to simulate the sequence

$$S = (s = l_0 = r_0, l_1, \ldots, r_1, \ldots, l_i, \ldots, r_i, \ldots, l_{k+1} = r_{k+1} = t),$$

where (l_i, . . . , r_i) is the path P_i. More precisely, let σ : V' → V denote the significance of a vertex in G', defined by

$$\sigma(v') = \begin{cases} v' & \text{if } v' \in V \\ u & \text{if } v' = e_{in}, \text{ where } e = (u, v) \text{ and } u < v \\ v & \text{if } v' = e_{out}, \text{ where } e = (u, v) \text{ and } u < v. \end{cases}$$

We want the significance of the path P' = (s = p'_1, p'_2, . . . , p'_n) to be S, that is, we want

$$S = (\sigma(p'_1), \sigma(p'_2), \ldots, \sigma(p'_n) = t),$$

or more compactly S = σ(P'), by a traditional abuse of notation.
To achieve this, we need an arc from u' to v' in G' if o~(u') and o~(v') are consecutive in S. We will use dominating properties to determine which nodes should be adjacent. To this end, let u and v be two consecutive vertices in the sequence S. If u and v belong to some path P,-, then (u,v) £ E, and by the spanning order, {u,v} dominates all vertices between u and v. Therefore define A \ U A 2 to be the set of arcs whose endpoints dominate all vertices in between. The set A 2 is just the reversal of A \ . More precisely, A\ = {(u, u) : u,v € V, u <v, and for all i £ V, u < i <v implies (i,u) £ E or (i,v) £ P} (4-10) A 2 = {(v,u):(u,v)€A!}. (4.11) If u and v do not belong to a path, then u = rt- and u = for some paths P s and Pi+\. We must consider four cases, depending on whether P,- or P j + i is an edge. Case 1: Neither Pi nor P 8 + i is an edge. In this case, {r 8-,/ J +i} dominates all vertices in between by Lemma 4.13.1. Note that ( r ; , / ;+ i )£ Ax. Case 2: P,- + 1 = ( / ; + 1 , r ! + 1 ) = e is an edge but P,- is not. The set {r;} U r , + 1 } dominates all vertices between r,- and by Lemma 4.13.2. In this case, we want an arc from r; to e j n. More precisely, A3 = {(w, e,-n) : to £ V, e = (u, u) £ E, w < u < v, and for all i £ V, w <i <u implies (i,u) £ E, (i,v) £ or (i,w) £ i?}. (4-12) Also, we wish to ensure that any path in G' through e;n also includes eout. To do so, we define the arcs AQ between such pairs. For every edge e £ E, this will be the only arc Chapter 4. Cocomparability Graphs 164 from e{n and the only arc to eout. More precisely, Ao = {( e £ E}. (4.13) Case 3: P 8 = (Z,-,rj) = e is an edge but Pj+i is not. The set {/,-, rj} U {Zj+i} dominates all vertices between r2- and Zj+i by Lemma 4.13.3. In this case, we want an arc from eout to Zj + 1 . More precisely, A 4 = {(eout, iu) : w G V, e = ( « , » ) e £ , u < v < w, and for all i £ V, v <i <w implies (z',tt) G i£, (z,f) G i?, or (i,w) G £ } . (4-14) Again, the arcs in AQ ensure that any path in G' through eout includes e ! n. Case 3: Both P,- = (Zj,rt-) = e and P ! + 1 = (Z 8 + i , r , + i ) = / are edges. The set {Z,-, r,-}U{Z,-+i, r,-+i} dominates all vertices between rt- and Z;+i by Lemma 4.13.4. In this case, we want an arc from eout to /,-n. More precisely, A 5 = {(e o u i , /i„) : e = (it, u), f = (w,x) £ E, u < v < w < x, and for all i G V, v < i <w implies (i,u) £ E, (i,v) £ E, (i,w) £ E, or (i,x) £ E}. (4.15) This completes our description of the arc sets Ao through A 5 in A'. The preceding discussion establishes the following lemma. Lemma 4.14 For every paths dominator M of a spanning-ordered graph, there is a path P' in G' that satisfies \P'\ < \M\. The reader may suspect from our construction that every path in G' also corresponds to a dominating set in G. The following lemma confirms this suspicion. Lemma 4.15 Let P' be a path from s to t in G'. The vertices ofV corresponding to the nodes of P' dominate G. Chapter 4. Cocomparability Graphs 165 Proof: Let S = {o~(v) : v £ P'} be the set of vertices corresponding to the nodes in the path P', and let j be an arbitrary vertex in V. We claim that 5" dominates j. If j E 51, then we are done. Otherwise, s < j < t in the spanning order, so there exists a pair of vertices i and k in P' such that (i, k) £ A' and either o~(i) < j < cr(k) or a(k) < j < <r(i). If (i, k) £ Ao, then (o~(i), cr(fc)) £ so j is dominated by cr(z') or cr(k) by the spanning order. If (i,k) £ A i or (i,k) £ A 2 then (j,a(i)) £ E or (j,a(k)) £ E by the definition of the sets. 
If (i, A:) £ A3, then (i, k) = (to, e,n) for some vertex u; and edge e = (u, u) that satisfies w < j < u < v. Therefore j is adjacent to u, u, or u; by the definition of A3. Since node ein has outdegree 1 in G', it must be that eout £ P'. Therefore v = cr(eout) £ S and, since we already know that u = (j(e,n) = cr(A:) and w = o~(i) are in S, it follows that j is dominated by S. If (i,k) £ A 4 , then (i,k) — (eout,w) for some vertex w and edge e = (u,v) such that u < v < j < w. Therefore j is adjacent to u, v, or 10 by the definition of A 4 . Since node eout has indegree 1 in G', it must be that e;„ £ P'. Therefore u = cr(e,-n) £ 5* and, since we already know that v = o~(eout) = o~(i) and w — o~(k) are in 5, it follows that j is dominated by S. Finally, if (i, A;) £ A 5 , then (i, A;) = (eout, /;„) for some edges e = (u, u) and / = (w, x) such that u<v<j<w<x. Therefore j is adjacent to u, v, w, or x by the definition of A 5 . Since node e o u i has indegree 1 and node / j n has outdegree 1 in G', it must be that ein £ P' and / o u t £ P'. Therefore u = cr(e4„) £ S and x = cr(fout) £ 5* and, since we already know that v = o~(eoui) = a(i) and w = cr(/,-n) = c(k) are in S1, it follows that j is dominated by S. | Theorem 4.16 The set of vertices corresponding to the nodes in a shortest path from s Chapter 4. Cocomparability Graphs 166 to t in G' is a minimum cardinality dominating set. Proof: Such a set is dominating by Lemma 4.15 and of minimum cardinality by Lemma 4.14. I Let us now examine a straightforward algorithm for constructing the d-auxiliary di-graph G'. To enhance the legibility of the algorithm, we will construct a subalgorithm for each of the arc sets Ao through A$. An implementation of this algorithm could combine the subalgorithms to capitalize on loops that are shared by more than one subalgorithm. The most straightforward of the subalgorithms is the one for Ao- It clearly executes in 0(E) time. Table 4.5: A l g o r i t h m : AO-MCDS-OCC(G) [Construct arc set A0] Input: A spanning-ordered graph G = (V, E, <). Output: The arc set A0 (Equation 4.13) 1 A 0 <- 0 2 for al l edges e € E where e = (u, v) and u < v 3 do A 0 f- A 0 U {(e 4 re turn A0 To construct the arcs for A\, we only need to determine which pairs dominate all vertices in between. This is certainly true for the endpoints of edges in G. Otherwise, Algorithm A 1 - M C D S - 0 C C (Table 4.6) tests if a vertex is not dominated by the pair. Step 2 calls Step 4 0(V2) times and calls Step 5 0(E) times. Step 5 calls Step 6 0(V) times. Therefore Algorithm A l - M C D S - O C C runs in 0(V2 + VE) = 0(V3) time. Note that there are 0(V2) arcs in A\. To construct the arcs for A 3 , we need to determine which pairs of vertices and edges dominate everything in between. Again, Algorithm A3-MCDS-OCC (Table 4.7) tests if a vertex is not dominated by the pair. Step 2 calls Step 3 0(E) times, which calls Step 5 Chapter 4. Co-comparability Graphs 167 Table 4.6: Algorithm: Al -MCDS-OCC(G' ) [Construct arc set Ay) Input: A spanning-ordered graph G = (V, E, <). Output: The arc set Ax (Equation 4.10) 1 Ay <- 0 2 for all pairs of vertices u, v € V where u < v 3 do Ay <- A i U {(tt,u)} 4 i f ( u , u ) g £ 5 for all z such that u < i < v 6 if (i,u) fc E and (i,v) fc E 7 then Ay <- Ay\ {(u,v)} 8 return A i 0(V) times, which calls Step 6 0(V) times. Therefore Algorithm A3-MCDS-OCC runs in 0(V2E) time. Note that there are 0{VE) arcs in A 3 . 
Table 4.7: Algorithm: A3-MCDS-OCC(G) [Construct arc set A 3 ] Input: A spanning-ordered graph G = (V, E, <). Output: The arc set A 3 (Equation 4.12) 1 A 3 <- 0 2 for all edges e £ E where e = («, v) and u < u 3 do for all vertices w such that w < u < v 4 do A 3 <- A 3 U{( to , e i n ) } 5 for all vertices z such that w < i < u 6 do if (i,iw) ^ (z',u) ^ and ( i,v)fcE 7 then A 3 «- A 3 \ {(to, e i r >)} 8 return A 3 Algorithm A4-MCDS-OCC (Table 4.8) is symmetric with Algorithm A3-MCDS-OCC. It therefore also runs in 0(V2E) time, and A 4 has 0(VE) arcs. In Algorithm A5-MCDS-OCC (Table 4.9), Step 2 calls Step 3 0(E) times, which calls Chapter 4. Cocomparability Graphs 168 Table 4.8: Algorithm: A4-MCDS-OCC(G) [Construct arc set A4] Input: A spanning-ordered graph G = (V, E, <). Output: The arc set A 4 (Equation 4.14) 1 A 4 f- 0 2 for all edges e £ E where e = (u, v) and u < v 3 do for all vertices w such that v < w 4 do A 4 < - A 4 U {(e o u i,u;)} 5 for all vertices i such that v < i < w 6 do if (i,u) £ E, (i,v) E, and (i,w) £ E 7 then A 4 A 4 \ {(e o u i , ^)} 8 return A 4 Step 5 0(E) times, which calls the constant-time Step 6 step 0(V) times. Therefore Algorithm A 5 - M C D S - 0 C C executes in 0(VE2) time. Note that there are 0(E2) arcs in A5. Table 4.9: Algorithm: A5-MCDS-0CC(G) [Construct arc set A5] Input: A spanning-ordered graph G = (V, E, <). Output: The arc set A5 (Equation 4.15) 1 A 5 4 - 0 2 for all edges e £ E where e = (u, v) and u < v 3 do for all edges f £ E where / = (w, x) and u < v < w < x 4 do A5 <- A 5 U { ( e o u t , / , • „ ) } 5 for all vertices i such that v < i < w 6 do if (i,u) ^ E, (i,v) ^ E, (i,w) £ E, and (i,x) £ E 7 then A 5 <- A5 \ {(eout, fin)} 8 return A5 Lemma 4.17 Algorithm Aux-MCDS-OCC (Table \.10) constructs the d-auxiliary graph in 0(VE2) time. Chapter 4. Cocomparability Graphs 169 Table 4.10: Algorithm: A u x - M C D S - O C C ( G ) [d-Auxiliary Graph for Minimum Car-dinality Dominating Set for spanning-Ordered Cocomparability Graphs] Input: A spanning-ordered graph G = (V, E, <). Output: The d-auxiliary digraph G' = (V, A'). 1 V <- V U Ein U Eout 2 A 0 <- AO-MCDS-OCC(G) 3 A i f- A l - M C D S - O C C ( G ) 4 A 2 4-Ar 1 5 A 3 4-A3-MCDS-OCC(G) 6 A 4 4-A 4 - M C D S - O C C ( G ) 7 A 5 4-A5-MCDS-OCC(G) 8 A ' 4-A 0 U A i U A 2 U A 3 U A 4 U A 5 9 return G' = ( V , A') Proof: It is straightforward to verify that the algorithm correctly constructs arc sets Ao through A 5 . Of course, the algorithm refers to these sets for clarity only. An implemen-tation would just add the arcs directly to the set A'. Implement the input and output graphs as adjacency matrices. Testing if two vertices are adjacent in G, or adding an arc to G' therefore take constant time. Step 1 amounts to allocating memory for an 0((V + 2E) x (V + 2E)) matrix, and initializing it in 0(V2 + E2) time. It is easy to verify that Step 7 dominates the running time of the algorithm and takes at most 0(VE2) time from the discussion above. | Theorem 4.18 Algorithm MCDS-OCC finds a minimum cardinality dominating set of a spanning-ordered cocomparability graph in 0(VE2) time. Proof: The algorithm is correct by Theorem 4.16. Step 1 takes 0(VE2) time by Lemma 4.17. Step 2 can be implemented to run in 0(V'2) = 0(V2 + E2) time us-ing breadth first search[CLR90], and Step 3 clearly runs in 0(V) time. Step 1 dominates the run time of the algorithm, so the theorem follows. | Chapter 4. 
Co-comparability Graphs 170 Table 4.11: Algorithm: MCDS-OCC(G') [Minimum Cardinality Dominating Set for spanning-Ordered Cocomparability Graphs] Input: A spanning-ordered graph G = (V, E, <). Output: A minimum cardinality dominating set of G. 1 G' <- MCDS-OCC-Aux(G) 2 P ' 4— a shortest path from s to t in G'. 3 S <- {<r(v) : u G P'} ' 4 return 51. 4.2.3 Total Dominating Sets Recall from the introduction (§1.3.2) that a total dominating set of a graph G = (V, E) is a subset D C V such that every vertex in V is adjacent to some vertex in D. Contrast this with an ordinary dominating set, which either contains oris adjacent to every graph vertex. The total domination problem is to find a total dominating set of minimum cardinality. This section presents an efficient algorithm for total domination in cocomparability graphs. It does this by reducing the problem to (ordinary) domination in cocompara-bility graphs. Given an arbitrary graph G, we begin by constructing an (undirected) auxiliary graph G' that we will refer to as a t-auxiliary graph. We will see that if G is a cocomparability graph, then so is G'. Furthermore, there is a minimum cardinality dominating set M of G' that corresponds to a minimum cardinality total dominating set of G. That is, M solves the total domination problem on G. Definition 4.19 Let G = (V, E) be an arbitrary graph. The t-auxiliary graph G' is two copies of the graph G. In addition, two vertices in different copies are adjacent if the corresponding vertices in G are adjacent. More formally, G' is defined by the following Chapter 4. Cocomparability Graphs 171 equations. G' = ( V , E') where V = VlUV2 E' = E1 U E2 U E12 V1 = {v1 : v G V} V2 = {u 2 : u G V} E1 = { ( u 1 , U 1 ) : ( W , U ) e E } E2 = {{u2,v2):(u,v)eE} E12 = {(u\v2) : (u,v) e E} I To illustrate this definition, Figure 4.7 shows the t-auxiliary graph of the graph in Figure 4.1. For the rest of this subsection, define the significance o~(v') of a vertex in V V1 Figure 4.7: Reducing total domination to domination. This is the t-auxiliary graph of the cocomparability graph in Figure 4.1. The vertices V1 are on the top, and the vertices v2 are on the bottom. Similarly, the edges E1 are on the top and the edges E2 are on the bottom. The edges E12 go between the two layers of vertices. to be the corresponding vertex in V. More formally, let a : V —> V be a function that satisfies <j{yl) = cr(v2) = v for all vertices v G V. We can now make three simple observations. Note that Observations 4.21 and 4.22 can be viewed as corollaries of Observation 4.20. Chapter 4. Cocomparability Graphs 172 Observation 4.20 Every pair of vertices u',v' £ V satisfies (u',v') £ E' if and only if (a{u'),a(v')) £ E. Observation 4.21 Vertices v1 and v2 are not adjacent in G' for every v £ V. Observation 4.22 A vertex w' £ G' is adjacent to v1 if and only if w' is adjacent to v2. That is, Adj(V,G") = Adj(u 2,G") for every v £ V. Lemma 4.23 The t-auxiliary graph of a cocomparability graph is also a cocomparability graph. Proof: Let G = (V, E) be a cocomparability graph, and let G' = (V, E') be its t-auxiliary graph using the notation in Definition 4.19. If (V, <) is a spanning order of G, then the linear order ( V , -<), where u' -< v1 if and only if cr(u') < cr(u'), or u' = v1 and v' = v2 for some v £ V, is a spanning order of G'. To see this, let a -< b -< c be three vertices in G' such that a and c are adjacent. Then o~(a) < cr{b) < a(c), and (a(a),a(c)) £ E by Observation 4.20. 
Therefore, if cr(6) is equal to one of o~(a) or cr(c), then it is adjacent to the other. On the other hand, if a(b) is distinct from o~(a) and <r(c), then a(b) is also adjacent to <r(a) or <r(c) by the spanning order on G. Therefore b is adjacent to a or c by Observation 4.20. If follows from Lemma 4.3 that G' is a cocomparability graph. | Suppose now that M 1 2 C V1 U V2 is a dominating set of G ' . Then there is a subset M 1 C V1 that is also a dominating set of G', and that satisfies |Af11 < | M 1 2 | . We will see shortly (Lemma 4.27) that a(M1) is a total dominating set for G. But first, we need to see how to construct M 1 from a dominating set M 1 2 . We will do this by "lifting" vertices from M 1 2 n V2 into V1. Let v2 be an arbitrary vertex in M 1 2 fl V2. If v1 fc M 1 2 , then lift v2 to v1. Otherwise, v1 £ M 1 2 . Since G has no isolated vertices, v1 must have a neighbour w1 £ V1. In this case, lift v2 to vertex to1 (any such neighbour will do). More Chapter 4. Co-comparability Graphs 173 formally, let M 1 = / ( M 1 2 ) where / : V —> V1 is defined by the following equation. v1 if v' = v1 f(v') = I v1 iiv' = v2 and v1 £ M12 (4.16) w if t>' = v2, v1 £ M 1 2 , and (v1,™1) £ E1 For example, the circled vertices in Figure 4.8 form a dominating set (actually a Figure 4.8: The effect of function / on a minimum cardinality dominating set. minimum cardinality dominating set) of the graph in Figure 4.7. The arrows in Fig-ure 4.8 show the effect of the function / . Since / maps vertices already in the set V1 to themselves, the figure does not show arrows from (or to) these vertices. Observation 4.24 IfV € M 1 2 or v2 e M12, then v1 e / ( M 1 2 ) . Observation 4.25 |Af11 < | M 1 2 | . Proof: The observation follows since / (Equation 4.16) is a function. | L e m m a 4.26 Let G' be the t-auxiliary graph of a cocomparability graph G with no isolated vertices. If M12 is a dominating set of G', then Ml = / ( M 1 2 ) is a dominating set of G'. Proof: Let v' be an arbitrary vertex in V. Since M 1 2 is a dominating set, either (1) vertex v' is adjacent to some vertex u' E M 1 2 , or (2) v' £ M12. Case 1: v' is adjacent to some vertex u' £ M 1 2 (see Figure 4.9). Chapter 4. Cocomparability Graphs 174 Figure 4.9: Case 1: Vertex v' must be adjacent to u1 G M 1 . The circled vertex is in M 1 2 . Either u' — u1 or u' = u2. Therefore v' is adjacent to u 1 by Observation 4.22, and u 1 G M 1 by Observation 4.24. It follows that v' is dominated by M 1 . Case 2.1: v' G M 1 2 and v1 <£ M12 (see Figure 4.10). u1 v' = v2 u' = u2 Figure 4.10: Case 2.1: Case 1 applies again. The circled vertices are in M 1 2 . Since vl is not in M 1 2 , vertex v1 must be adjacent to (dominated by) some vertex u' G M 1 2 . Therefore v' = v2 is also adjacent to u' by Observation 4.22, and the previous case applies. Case 2.2: v' G M 1 2 and v1 G M 1 2 (see Figure 4.11). Figure 4.11: Case 2.2: Vertex v' is adjacent to a neighbour w1 of u 1 , both of which are in f(M12). The circled vertices are in M 1 2 . If v' — v1, then we are done. Otherwise, v' = v2 and, by Equation 4.16, f(v') — w1, Chapter 4. Co-comparability Graphs 175 where w1 £ V 1 is some neighbour of v1. Vertex v2 is also adjacent to w1 by Ob-servation 4.22. Therefore v' is dominated by M 1 = / ( M 1 2 ) since w1 = f(v') and / K ) £ / ( M 1 2 ) . | Corollary 4.26.1 If M12 is a minimum cardinality dominating set of a cocompara-bility graph G with no isolated vertices, then M1 — f(M12) is a minimum cardinality dominating set of G'. 
Proof: Immediate from the lemma and Observation 4.25. I Lemma 4.27 Let G = (V, E) be a cocomparability graph with no isolated vertices. If M1 C V1 is a minimum cardinality dominating set of the t-auxiliary graph G', then o^M 1 ) is a minimum cardinality total dominating set of G. Proof: To see that a(M1) is a total dominating set for G, let v be an arbitrary vertex in V. Then v2 is adjacent to some vertex u 1 £ M 1 , since M 1 dominates v2 but M 1 does not contain any vertices from V2. Therefore v = cr(v2) is adjacent to o-(ul) by Observation 4.20. Since <r(u1) £ cr(Ml), it follows that u ( M 1 ) is a total dominating set of G. To see that a(Ml) is of minimum cardinality, let T be a minimum cardinality total dominating set of G. By definition, every vertex in V is adjacent to some vertex in T. Let T 1 be the subset of V1 that corresponds to T. That is, let T1 = {t1 : t £ T}. Clearly, every vertex in vl £ V1 is adjacent to some vertex in T1. Now consider an arbitrary vertex v2 £ V2. This vertex v2 is adjacent to some vertex in T1 since v1 is adjacent to some vertex in T 1 and Adj(u 1 , G') = Adj(u 2 , G') by Observation 4.22. Therefore T 1 is an (ordinary) dominating set of G', and \<r{Mx)\ = \MX\ < IT1] = \T\ by Corollary 4.26.1. I Lemma 4.28 Algorithm MCTDS-OCC (Table J..2.3) prints a message stating that no total dominating set exists if and only if no total dominating set exists. Chapter 4. Cocomparability Graphs 176 Table 4.12: Algorithm: MCTDS-O.CC(G) [Minimum Cardinality Total Dominating Set in spanning-Ordered Cocomparability Graphs] Input: A spanning ordered cocomparability graph G. Output: A minimum cardinality total dominating set M of G, or a message stating that no total dominating set exists. 1 if G has no isolated vertices, 2 then Construct the spanning-ordered t-auxiliary graph G'. 3 M 1 2 «- MCDS-OCC(G") 4 M 1 4- {fiy) : v € M 1 2 } as defined by Equation 4.16. 5 return a(M1). 6 else return "No total dominating set exists." Proof: An isolated vertex of G cannot be total dominated, therefore no total dominating set exists in this case. On the other hand, G(V) total dominates G(V) if there are no isolated vertices, hence a total dominating set exists in this case. This condition is tested in Step 1. | Theorem 4.29 Algorithm MCTDS-OCC returns a minimum cardinality total dominat-ing set of a spanning-ordered cocomparability graph if and only if one exists in 0(VE2) time. Proof: The algorithm returns a set if and only if a minimum cardinality total dominating set exists by Lemma 4.28. The t-auxiliary graph G' constructed in Step 2 is a spanning-ordered graph by Lemma 4.23. Therefore the set M 1 2 constructed in Step 3 is a minimum cardinality dominating set by Theorem 4.18. Since G does not have any isolated vertices, the set M 1 constructed in Step 4 is a minimum cardinality dominating set by Lemma 4.26. Finally, t r ( M 1 ) is a minimum cardinality total dominating set by Lemma 4.27. To achieve the stated run time, represent all graphs with adjacency lists. Then Step 1 can be checked in 0(V) time. To construct the t-auxiliary graph in Step 2, begin Chapter 4. Cocomparability Graphs 177 by copying the graph in 0(V + E) time. Then construct E' by examining every edge (u,v) <G E in 0(V + E) time and adding the edges (u1^1), (u2,v2), and (ul,v2) in constant time. There are 2\E\ edges in E12, and edges in each of E1 and £ 2 , for a total of 4\E\ edges in E'. That is, G' has 0(£ ' ) edges, and since there are 2 |V| vertices in V , graph G" has 0(V) vertices. 
Therefore, Step 3 can be executed in 0(VE2) time by Theorem 4.18. Finally, the output in Steps 4 and 5 can be constructed in 0(V) time by examining each vertex in V and using the first available neighbour when one is needed. Step 3 is the time critical step and accounts for the complexity of the algorithm. | 4.2.4 Weighted Independent Dominating Sets Until now, the problems in this section entailed finding minimum cardinality subgraphs. This section and the next (§4.2.5) generalizes this type of optimization problem by finding minimum weight subgraphs in graphs with weighted vertices. The weighted versions of the problems studied so far, namely minimum weight connected dominating set, minimum weight dominating set, and minimum weight total dominating set, all on cocomparability graphs, have been shown to be NP-complete by Maw Shang Chang [Cha94]. This section presents an efficient polynomial-time algorithm for finding a minimum weight independent dominating set in a weighted cocomparability graph. It does so by reducing the problem to finding a minimum weight maximal clique1 in a comparability graph. It then exhibits a new efficient algorithm for finding a minimum weight maximal clique in a comparability graph. Could we not negate all weights and find a maximum weight clique? Yes, certainly, but are standard maximum weight clique algorithms, for example the one in [Gol80], applicable here? They are not, because they are content to find a maximum weight clique, not necessarily a maximal one. A maximum weight clique is also a maximal 1 Recall that a clique is a complete subgraph. Chapter 4. Cocomparability Graphs 178 clique if its weights are positive. However, a clique found by such an algorithm would not contain any vertices with negative weights, even though such a clique might not be maximal. This is because the weight of any subgraph can be increased by removing any negative weight vertices. On the other hand, the new algorithm presented in this section can also find a max-imum weight maximal clique if the weights are first negated. Let us begin by exploring some relevant properties of independent dominating sets. Observation 4.30 An independent set is a dominating set if and only if it is a maximal independent set. Proof: Let I be an independent set. If I is a dominating set, then every vertex not in I is adjacent to some vertex in I. Therefore, I is a maximal independent set. Conversely, if I is a maximal independent set, every vertex not in I must be adjacent to some vertex of I, and therefore is dominated by I. Since I trivially dominates all vertices in 7, it is a dominating set. | Observation 4.31 A set is independent in a graph if and only if it is completely con-nected in the complement of the graph, and a set is maximal independent in a graph if and only if it is a maximal clique in the complement. The two observations above imply that we can find independent dominating sets by looking for maximal cliques in the complement. The complement of a cocomparability graph is, of course, a comparability graph. The next lemma shows what maximal cliques look like in comparability graphs. It requires the following definition from Section 1.3.1 (where it is phrased more generally in terms of relations). Definition 4.32 A transitive reduction of a directed graph G is any directed graph, with Chapter 4. Cocomparability Graphs 179 the least number of edges, whose transitive closure is isomorphic to the transitive closure of G. 
| Under this most general definition, a transitive reduction need not even be a subgraph of G. If, however, G is finite and acyclic, then the transitive reduction is a unique sub-graph and may be obtained by systematically removing all transitively implied edges from G [AGU72]. Therefore the notion of the transitive reduction of a transitively oriented comparability graph is well defined. L e m m a 4.33 The maximal cliques in a comparability graph are exactly the maximal paths in the transitive reduction of any transitive orientation of the graph. P r o o f : Let G = (V, E) be a comparability graph, and let T = (V, A) be any transitive orientation of G. Let the linear order (V, <) be any linear extension of T. Let C be any maximal clique of the comparability graph G. To see that C is a maximal (directed) path in the transitive reduction of T, let u and v be two vertices in C such that u < v. Then (u, v) G A. Therefore the vertices of C , taken in their linear order, form a path in T. Furthermore, every arc (u,v) of this path must be in the transitive reduction of T. If it were not, there would be a path from u to v in T with at least one intermediate vertex, w. By the linear order, u < w < v, so that w fc C. But by the transitivity of T, w would be adjacent in G to every vertex in C. That is, C U {w} would be completely connected in G, contradicting the maximality of C. Finally, every path in T, and therefore in the transitive reduction also, is a completely connected subgraph of G by transitivity. Therefore C is a maximal path in T since any superpath of C would contradict the maximality of C in G. Conversely, let P = (pi,... ,pk) be a maximal path in the transitive reduction of T. Then P is a clique in G by transitivity, and pi < p2 < • • • < Pk- Suppose that P is not a maximal clique in G. Then there exists a vertex v G V \ P that is adjacent in G to every Chapter 4. Cocomparability Graphs 180 vertex in P . If v < p i , then there is an arc from v to pi in T. Therefore there is a path from v to pi in the transitive reduction of T. This contradicts the assumption that P is a maximal path. If v > Pk, then a similar contradiction arises. Finally, if pi < v < pk, then there are two vertices u and w in P such that u < v < w. Therefore there are arcs from u to v, and from v to w, in T. But this contradicts the assumption that (u,w) is an arc in the transitive reduction of T. | These lemmas have set the stage for the polynomial time algorithm below. They show that independent dominating sets in cocomparability graphs are precisely the paths from minimal elements to maximal elements in the transitive reduction of a transitive orientation of the complement of the graph. To simplify the handling of special cases2, augment the transitively oriented graph with two sentinel vertices, s = 0 and t = \V\ + 1, that have 0 weight. Add an arc from s to every other vertex (including £), and add an arc to t from every other vertex (including s). Therefore 5 is the only minimal element and t is the only maximal element. Note that s and t must appear in every maximal clique C and that C \ {s,t} is a maximal clique in the unaugmented graph. We want only the minimum weight maximal clique, that is, the minimum weight path from s to t. The details of the algorithm appear in Table 4.2.4. Theorem 4.34 Algorithm MWMC-C finds a minimum weight maximal clique in a weighted comparability graph G = (V, E,w) in 0(M(V)) time. Proof: Correctness follows from Lemma 4.33 and the discussion concerning augmenting the digraph. 
Step 1 runs in 0(V2) time [Spi85], and Step 2 runs in 0(V) time. Since the augmented 2 We can achieve the same effect by augmenting the transitive reduction R with s and t. In this case we would add arcs only from s to minimal elements of R and from maximal elements of R to t. Instead, we have chosen to let the transitive reduction step find the minimal and maximal elements for us. Chapter 4. Cocomparability Graphs 181 Table 4.13: Algorithm: M W M C - C ( G ) [Minimum Weight Maximal Clique for Com-parability Graphs] Input: A weighted comparability graph G. Output: A minimum weight maximal clique in G. 1 T 4— a transitive orientation of G. 2 augment T with 5 = 0 and t = | V | + 1. 3 R 4— the transitive reduction of T. 4 C f- the least weight path from s to t in R. 5 return C \ {s, t}. T is already transitively closed, the transitively implied edges are given by T2. We can find this by creating the adjacency matrix for T and multiplying this by itself in 0(M(V)) time. Then R = T\T2 can be computed in 0(V2) time, and so Step 3 runs in 0(M(V)) time 3. Let the weight of an arc (u,v) S R be w((u,v)) = w(u). Then the edge weight of a path from 5 to t is equal to its vertex weight since w(s) = w(t) = 0. Therefore Step 4 can be implemented to run in 0(V + E) time since R is a directed acyclic graph [CLR90]. The entire algorithm therefore runs in 0(M(V)) time. | We are now ready to construct algorithm MWIDS-CC for finding an optimal inde-pendent dominating set in a cocomparability graph, as shown in Table 4.14. Theorem 4.35 Algorithm MWIDS-CC finds a minimum weight independent dominating set in a weighted cocomparability graph G = (V,E,w) in 0(M(V)) time. Proof: Correctness follows from Observation 4.30, Observation 4.31, and Theorem 4.34. Step 1 runs in 0(V2) time, and Step 2 runs in 0(M(V)) time (Theorem 4.34). | 3The transitive reduction of an acyclic digraph D = (V, A) can actually be computed in O(kR) = 0(VR) = 0(V A) time [Sim88], where k is the number of paths in a path decomposition of D. Since R is not reflected in the output of Algorithm MWMC-C, this proof uses the 0(M(V)) = 0{V2376) bound. Chapter 4. Cocomparability Graphs 182 Table 4.14: Algorithm: MWIDS-CC(G) [Minimum Weight Independent Dominat-ing Set for Cocomparability Graphs] Input: A weighted cocomparability graph G. Output: A minimum weight independent dominating set of G. 1 G <— the complement of G. 2 I <- M W M C - G ( G ) 3 return /. 4.2.5 Weighted Steiner Sets This section describes a polynomial time algorithm for finding a minimum weight Steiner set (MWSS) in a cocomparability graph. This algorithm can be implemented to run in 0(V log V + E) time on a vertex-weighted spanning-ordered cocomparability graph G = (V, E, <, w). The algorithm terminates in 0(V2) time even if a spanning order is not available. Furthermore, given an arbitrary graph, the algorithm will either find a minimum weight Steiner set, or it will be print a message stating that the graph is not a cocomparability graph. That is, the algorithm does not need to recognize cocomparability graphs, a step that would add 0(M(V)) to the run-time complexity. These results simplify and generalize the permutation graph algorithms obtained by Colbourn and Stewart [CS90a] (0(V3) time) and by Arvind and Rangan [AR92] ( 0 ( V l o g V + E) time). They also extend an 0(V + E) algorithm for minimum car-dinality Steiner sets in cocomparability graphs due to Colbourn and Lubiw (described in [KS93]). Definition 4.36 Let G = (V, E, w) be a graph with nonnegatively weighted vertices. 
Let R C V be a "required" set of vertices in G. A Steiner set (of G and R) is a set S of vertices such that (1) S C V \ R and (2) the subgraph G(S U R) induced in G by S U R Chapter 4. Co-comparability Graphs 183 is connected. The weight w(S) of a set of vertices S is defined, as usual, to be the sum of the weights of its elements. Assume without loss of generality that w(v) = 0 for all vertices v £ R. The weighted Steiner set problem is to find a minimum weight Steiner set in such a vertex weighted graph. Finally, if G is linearly ordered, let s = min R be the least point in R and t = max R be the greatest point in R. | L e m m a 4.37 Let P be a path from s to t in a spanning-ordered cocomparability graph. Then S = P \ R is a Steiner set. Proof: Since S C V \ R, we only need to show that S + R is connected. First note that S + R = P U R. Since P is connected, it suffices to show that every vertex v £ R \ P is adjacent to some vertex in P. This follows from Lemma 4.7. | L e m m a 4.38 Let G be an arbitrary weighted graph G = (V,E,w) and R C V be a required set. Let P be a least weight path from some vertex u £ R to some vertex v £ R. If S is a Steiner set of G and R, and S C P, then S is a minimum weight Steiner set. Proof: Let S' be a minimum weight Steiner set of G and R. Since the subgraph induced by S' U R is connected, there is a path P' from u to v in this subgraph, and w(P') < w(S' U R). Since P is a least weight path, w(P) < w(P'). Now w(S' U R) = w(S') since w(v) = 0 if v £ R. Finally, w(S) < w(P) since S C P. It follows that w(S) < w(P) < w(P') < w(S' U R) = w(S'). That is, S is also a minimum weight Steiner set. | Theorem 4.39 Let P be a least weight path from s to t in a spanning-ordered cocom-parability graph G. Then S = P \ R is a minimum weight Steiner set. Proof: The set S is a Steiner set by Lemma 4.37 and therefore a minimum weight Steiner set by Lemma 4.38. I One way to use this result for an arbitrary graph is to first check if G is a cocompa-rability graph, then construct a spanning order, and find a least weight path. The best Chapter 4. Cocomparability Graphs 184 known upper bound for recognizing an order n cocomparability graph is the same as that for n x n matrix multiplication [Spi85]. Since this bound is currently 0(n2'376) [CW87], recognition is the limiting step in this approach. Spinrad [Spi85] shows that the complex-ity of recognizing comparability graphs and transitive digraphs is the same.4 Therefore, we are no better off by assuming a linearly ordered graph as input, since recognizing a spanning-ordered graph would entail recognizing the transitively oriented complement. Fortunately, there is another approach, which does not require recognition. Table 4.15: Algorithm: MWSS-OCC(G, R) [Minimum Weight Steiner Set for span-ning-Ordered Cocomparability Graphs] Input: A linearly-ordered vertex-weighted graph G = (V, E,<,w) and a required set of vertices R C V. Output: A minimum weight Steiner set S, or a message stating that no Steiner set exists, or a message stating that G is not a spanning-ordered cocomparability graph. 1 if R is empty, 2 then return the empty set 3 halt. 4 if R is not a subset of a connected component of G, 5 then return "No Steiner set exists." 6 halt. 7 F f - a least weight path from min R to max R in G. 8 S <- P\R 9 if the subgraph induced by S + R is connected, 10 then return S 11 else return " G is not a spanning-ordered cocomparability graph." 
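To make Table 4.15 concrete, the following Python sketch implements the same steps. It is only an illustration: the data layout (an adjacency dictionary, the linear order given as a list, vertex weights in a dictionary with w(v) = 0 for v in R) and the use of a binary-heap Dijkstra for Step 7 are choices of this sketch, not part of the algorithm's specification, so it runs in O((V + E) log V) time rather than the O(V log V + E) bound established below in Theorem 4.43.

```python
import heapq

def mwss_occ(adj, weight, order, R):
    """Sketch of Algorithm MWSS-OCC (Table 4.15).

    adj    -- dict: vertex -> set of adjacent vertices
    weight -- dict: vertex -> nonnegative weight, with weight[v] == 0 for v in R
    order  -- list of all vertices in the given linear order
    R      -- set of required vertices
    """
    if not R:                                     # Steps 1-3
        return set()

    # Steps 4-6: R must lie in a single connected component of G.
    start = next(iter(R))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    if not R <= seen:
        raise ValueError("No Steiner set exists.")

    pos = {v: i for i, v in enumerate(order)}
    s, t = min(R, key=pos.get), max(R, key=pos.get)

    # Step 7: least vertex-weight path from s to t.  Charging each vertex's
    # weight as it is entered makes the accumulated value equal to the path's
    # vertex weight, since weight[s] == weight[t] == 0.
    dist, pred = {s: 0}, {s: None}
    heap = [(0, s)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue
        for u in adj[v]:
            nd = d + weight[u]
            if nd < dist.get(u, float("inf")):
                dist[u], pred[u] = nd, v
                heapq.heappush(heap, (nd, u))
    path, v = [], t
    while v is not None:
        path.append(v)
        v = pred[v]
    P = set(path)

    S = P - R                                     # Step 8

    # Steps 9-11: does S + R induce a connected subgraph?
    keep = S | R
    start = next(iter(keep))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for u in adj[v]:
            if u in keep and u not in seen:
                seen.add(u)
                stack.append(u)
    if seen != keep:
        raise ValueError("G is not a spanning-ordered cocomparability graph.")
    return S
```

As Lemma 4.41 below shows, whenever this routine returns a set it is a minimum weight Steiner set, even if the input order later turns out not to be a spanning order.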
Lemma 4.40 Given a linearly-ordered graph G and required vertices R, Algorithm 4More precisely, if transitive digraphs can be recognized in 0(f(V)) time, then comparability graphs (and therefore also cocomparability graphs) can be recognized in 0(f(V) + V2) time. Conversely, if comparability graphs can be recognized in 0(f(n)) time, then transitive digraphs can be recognized in 0(f(n) + E) time. Chapter 4. Cocomparability Graphs 185 MWSS-OCC (Table 4-2.5) prints a message stating that no Steiner set exists if and only if no Steiner set exists. Proof: A Steiner set exists if and only if the required vertices R can be augmented to induce a connected subgraph. This is true if and only if the vertices of R all lie in the same connected component of G. This condition is checked in Step 4. | L e m m a 4.41 Given a linearly-ordered graph G and required vertices R, if Algorithm MWSS-OCC returns a set S, then S is a minimum weight Steiner set. Proof: Suppose the algorithm returns a set S. This can only happen in Steps 2 and 10. If S is returned by Step 2, then R is empty. A minimum weight Steiner set can therefore also be empty since all vertex weights are nonnegative. If S is returned by Step 10, then the subgraph induced by 5" + R is connected. So S is a Steiner set by definition, and therefore a minimum weight Steiner set by Lemma 4.38. I Theorem 4.42 Given a spanning-ordered cocomparability graph G and required set R CV, Algorithm MWSS-OCC returns a minimum weight Steiner set of G and R if and only if it exists. Proof: If a minimum weight Steiner set does not exist, then no Steiner set exists since the graph is finite. By Lemma 4.40, a message is printed and the algorithm halts at Step 3. Suppose, on the other hand, that such a set does exist. If R is empty, a suitable minimum weight Steiner set, the empty set, is returned by Step 2. Otherwise, the set S = P\R constructed in Step 8 is a minimum weight Steiner set by Theorem 4.39. That is, the subgraph induced by S + R is connected. So S is correctly returned by Step 10. I Chapter 4. Cocomparability Graphs 186 Theorem 4.43 Given any linearly-ordered graph G = (V, E, <, w), Algorithm MWSS-OCC runs in 0(V log V + E) time. Proof: Use linked lists to represent all sets, and represent the graph as an adjacency-list. Then Step 1 can be executed in constant time. Step 4 can be implemented by marking all vertices in the same connected component as the first vertex in R using a single phase of depth first search in 0(E) time. It then suffices to check, in O(R) time, that all vertices in R are marked. Step 4 therefore executes in 0(V + E) time. The extreme vertices mini? and max 7? can easily be found in O(R) = 0(V) time. Step 7 uses Dijkstra's algorithm with a Fibonacci heap priority queue [CLR90] to find a least weight path between min R and max R in 0(V log V + E) time. Dijkstra's algorithm expects an edge-weighted digraph as input. So, implicitly set the weight of a directed edge to be w((u,v)) = w(u). Since w(m'mR) = w(maxR) = 0, the edge weight of a simple path from mini? to maxi? is equal to its vertex weight. Step 8, setting S = P\R, can be computed in 0(P + R) = 0(V) time. For example, the algorithm could mark all vertices in R and let S be the set of unmarked vertices in P. Finally, Step 9 represents the induced subgraph in 0(S + R) = 0(V) time by marking the inducing vertices in G. It then tests the induced subgraph for connectivity in 0(V+E) time by depth first searching the marked vertices. 
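The weighting trick used in Step 7 of this proof, charging each arc the weight of its tail so that w((u, v)) = w(u), is worth isolating, since it lets any off-the-shelf shortest-path routine for nonnegative arc weights compute least vertex-weight paths. The helper below is a sketch of that reduction only; the dictionary-based data layout is an assumption of the sketch, not the thesis's.

```python
def vertex_to_arc_weights(adj, weight):
    """Give every arc (u, v) the weight of its tail vertex u.

    Along a simple path from s to t the arc weights then sum to weight[s]
    plus the weights of the interior vertices, which equals the path's total
    vertex weight whenever weight[t] == 0 (and here weight[min R] ==
    weight[max R] == 0 by assumption).
    """
    return {(u, v): weight[u] for u in adj for v in adj[u]}
```

The same table of arc weights serves both Step 7 here (Dijkstra on a spanning-ordered graph) and Step 4 of Algorithm MWMC-C (relaxation over a directed acyclic graph).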
| Although Algorithm MWSS-OCC may work—find minimum weight Steiner sets— for some arbitrary linearly-ordered graphs and sets of required vertices, the class of graphs for which it works for all sets of required vertices is exactly the spanning-ordered cocomparability graphs. This is restated by the following lemma. Chapter 4. Cocomparability Graphs 187 Lemma 4 . 4 4 For every linearly-ordered graph that is not a spanning-ordered cocom-parability graph, there exists a required set such that Algorithm MWSS-OCC prints the message stating that the graph is not a spanning-ordered cocomparability graph. Proof: Let G = (V, E, <,w) be a linearly-ordered graph that is not a spanning-ordered graph. Then there exists three vertices a < b < c that violate the spanning order, that is, (a,c) £ E but (a, b) fc E and (6,c) fc E. Let R = {a,o,c}. Algorithm MWSS-OCC will find the least weight path P = (min R, max R) = (a, b) in Step 7, and it will construct the empty set S in Step 8. But the subgraph induced by S + R = R is not connected since b is not adjacent to either a or c. Hence the message will be printed in Step 11. | We now have enough tools to find a minimum weight Steiner set in a cocomparability graph. Again, the input to the algorithm will be an arbitrary graph; the algorithm does not require a recognition step. Table 4.16: Algorithm: MWSS-CC(G, R) [Minimum Weight Steiner Set for Co-comparability Graphs] Input: A weighted graph G = (V, E, w) and a required set of vertices RC.V. Output: A minimum weight Steiner set 5", or a message stating that no Steiner set exists, or a message stating that G is not a cocomparability graph. 1 G <- OCC(G) 2 return MWSS-OCC(G, R) Theorem 4 . 4 5 Given a weighted graph G and a required set of vertices R, Algorithm MWSS-CC (Table 4-2.5) computes a minimum weight Steiner set, or prints a message stating that no Steiner set exists, or prints a message stating that G is not a cocompara-bility graph, in 0(V2) time. Chapter 4. Cocomparability Graphs 188 Proof: By Theorem 4.5, Step 1 either returns a linearly-ordered graph, or correctly prints a message stating that G is not a cocomparability graph. If Step 2 returns a set S, then S is a minimum weight Steiner set by Lemma 4.41. If it prints a message stating that no Steiner set exists, then no Steiner set exists by Lemma 4.40. Finally, if it prints a message stating that G is not a spanning-ordered cocomparability graph, then G is also not a cocomparability graph for, if it were, it would have been spanning-ordered by Step 1. Step 1 takes 0(V2) time by Theorem 4.5, and Step 2 takes 0(V\ogV + E) time by Theorem 4.43. I 4.2.6 Applications We can use the algorithms for spanning-ordered graphs to solve the corresponding prob-lems for permutation graphs, interval graphs, and indifference graphs, given permutation, interval, or indifference realizations of the graphs. These graphs are all cocomparability graphs, and as shown here, a spanning order can easily be extracted from their realiza-tions. Theorem 4.46 All spanning-ordered cocomparability graph algorithms work without change for permutation graphs given a permutation realization linearly ordered by vertex label. Proof: Let G = (V, E) be a permutation graph, and let the labelling of the vertices and 7T be a permutation realization. That is, n is a permutation of {1 ,2 , . . . , ra} such that (u,v) 6 E if and only if [u — i>)(7r_1(u) — 7T - 1 (u)) < 0. Then the vertex labelling is a spanning order. 
To see this, let u < v < w be three vertices such that u and w are adjacent in G. It follows that (u — ti>)(7r_1(w) — 7r - 1 (io)) < 0, which implies that 7r _ 1 (u) > -IT~1(W) since Chapter 4. Cocomparability Graphs 189 u < w. Either 7r-1(i>) < 7r _ 1(u) or 7r - 1 (u) > 7r _ 1 (u ) . If 7r _ 1(u) < 7r _ 1(u), then (u,v) £ _E since u < v. Otherwise, 7r _ 1(u) > 7r _ 1 (u) and 7r _ 1(u) > ir~1(w) since (tf,u;) € Therefore 7r _ 1(u) > 7r_ 1(u;) so that (v,w) £ E. That is, u and u are adjacent, or v and u; are adjacent. | Theorem 4.47 All spanning-ordered cocomparability graph algorithms work without change for interval graphs given a set of intervals in nondecreasing order of left endpoint. Proof: Such a linear ordering of intervals is a spanning order. Let u < v < w be three intervals such that u and w are adjacent. Then the left endpoint of w must be to the left of the right endpoint of u. Therefore the left endpoint of v lies between the left endpoint of u and the right endpoint of u. That is, u and v are adjacent. | Theorem 4.48 All spanning-ordered cocomparability graph algorithms work without change for T-strip graphs, given a set of points in order of nondecreasing x-coordinate, for all r 6 [0 ,73/2] . Proof: Such a linear ordering of points is a spanning order. Let / : V —> {0,r} be a strip realization. Let u < v < w be three points such that u and w are adjacent. Then by the definition of r-strip graphs, Xf(w) — xj(u) < 1. This implies that either xl(v) ~ xf(u) < 1/2 or xj(w) — xj(v) < 1/2. If the former, then \\f(v)-f(u)\\2 = (xf(v)-xf(u)Y-r(yf(v)-yf(u)Y < 1/4+ r 2 < l/4 + (V3/2)2 = 1. That is, u and v are adjacent. On the other hand, if Xf(w) — xj{y) < 1/2, then v and w are adjacent. | Chapter 4. Co-comparability Graphs 190 Corollary 4.48.1 All spanning-ordered cocomparability graph algorithms work without change for indifference graphs given a set of points in nondecreasing order. Proof: Indifference graphs are 0-strip graphs. | 4.3 Transitive Orientation, Implication Classes, and Dominating Paths This section explores the possible transitive orientations for the nonedges of cocompa-rability graphs. Chapter 5 and Chapter 6 use the transitive orientations of the comple-ments of r-strip graphs and two-level graphs in characterizing these classes of graphs. For example, Chapter 6 uses the results of this section to show that two-level graphs have es-sentially one such orientation that is compatible with a given assignment of y-coordinates for the vertices. Subsection 4.3.1 establishes some basic properties of transitive orientations in co-comparability graphs. In particular, it shows how the orientations of some nonedges "force" the orientation of others. With this background, Subsection ,4.3.2 then studies cocomparability graphs that have induced dominating paths with four or more vertices. 4.3.1 Definitions and Basic Properties The following notation follows that of Golumbic ([Gol80] pages 105-148), where it is used for comparability graphs. Since r-strip graphs are cocomparability graphs, this section modifies Golumbic's notation for the complement of comparability graphs. Let G = (V, E) be an undirected graph. Define the complementary arcs (nonedges) E to be the set of distinct ordered pairs not in E, that is, E = {(u,v) :M/D and (u,v) £ E}. Definition 4.49 Write (a, 6)T(c, d), and say that arc (a, 6) directly forces arc (c, cf), if Chapter 4. Cocomparability Graphs 191 1. a = c and (6, d) E, or 2. b = d and (a, c) ^ That is, T is a binary relation on E. 
Say that (a,b) forces (c,d), and write (a,b)T*(c,d), if there exists a sequence of arcs (a forcing chain of length k > 0) such that (a, b) = (a0,b0)T(a1,b1)T • • • T(ak,bk) = (c,d). That is, T* is the transitive closure of T. | Note that T is reflexive and symmetric. Note also that (a,6)T(c, d) if and only if (b,a)T(d,c). The following lemma explains the intent of these definitions. Lemma 4.50 Let G = (V, E) be a cocomparability graph and G = (V, E) be a transitive orientation of G. If(a,b) forces (c,d), then (a, b) £ E if and only if (c, of) £ E. Proof: The lemma follows by induction on the length k of the forcing chain. The base case k = 0 is trivial. For the inductive case, consider the forcing chain (a, b) - (a 0,6 0)r(ai, b^T • • • T(ak-i, bk_1)T(ak, bk) = (c, d), and assume (a0,b0) £ E if and only if (ak-i,bk_i) £ E. Since (ak^i,bk-i)T(ak,bk), either a f c_i = ak and (bk-i,bk) <£ E, or = bk and (ak_i,ak) £ E. If ak_x = ak and (bk-i,bk) £ E, then (0^-1,6^-1) £ Z? if and only if (ak,bk) £ E. For otherwise, transitivity of E implies (bk-i,bk) £ E or £ Z2, and therefore bk) £ Z?; see Figure 4.12. Similarly, if bk_i = bk and ( a^ - i , ^ ) ^ Z2, then (ak-i,bk-i) £ Z? if and only if (ak,bk) £ Z£. It follows that (ao,&o) € Z? if and only if (ak,bk) EE. | Note that V* is an equivalence relation, i.e., reflexive, symmetric, and transitive. It therefore partitions E into equivalence classes that we refer to as implication classes. Let [e] denote the implication class generated by e £ E. The following theorems will be useful later, when orienting two-level graphs in Chapter 6. Chapter 4. Cocomparability Graphs 192 Figure 4.12: (a,k-i,bk-i) £ E if and only if (a^,^) £ E. Theorem 4.51 ([Gol80], page 107) Let A be an implication class of [the complemen-tary arcs of] an undirected graph G. If G has a transitive orientation F, then either F fl (A U A~l) = A or F n (A U A'1) = A'1 and, in either case, A 0 A - 1 = 0. Theorem 4.52 ([Gol80], page 122) Let G = (V, E) be a graph. Then G is a cocom-parability graph if and only if [e] fl [e] _ 1 = 0 for all e £ E. To illustrate Theorem 4.52 in action, let us reprove Theorem 4.6 (from the introduc-tion to this chapter). An examination of both proofs will also associate the notion of forcing chains with the notion of spanning order. Theorem 4.6 ( § 4 . 1 ) Cocomparability graphs do not have induced cycles with five or more edges. Proof: Let (u1? v2,..., vn, Vy) be an induced cycle in a graph G, where n > 5, as shown in Figure 4.13. Then (vy,vs) £ E and (v3,vy) £ [(^1,^3)], as demonstrated by the forcing chain (vy,V3)T(vy,V4)r . . . T(vy , Vn_y ) Y(v2, Vn_y)T(v2, Vn_y)r(v2, Vn)r(v3, Vn)r(v3, Vy). Therefore [(^1,^3)] H [(^ I,^)] - 1 is not empty, and G is not a cocomparability graph by Theorem 4.52. | Chapter 4. Cocomparability Graphs 193 Figure 4.13: An induced cycle on n > 5 vertices. The arrows show that (vi, Vs)T*(v3, Vi). 4.3.2 Transitive Orientation and Dominating Paths When there is a long dominating path in a graph, every nonedge either has its orientation forced by the nonedge joining the path's endpoints, or belongs to a nearly-oriented square or claw. Recall that a path is chordless if it is simple and if its vertices are adjacent in the graph only if they are adjacent in the path. For example, consider the connected T-strip graph (0 < T < y/3/2) generated by some set of points in a strip. A shortest path from the leftmost point to the rightmost point in the strip is chordless, otherwise there would be a shorter path. 
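As a small computational illustration of this claim (a sketch of my own, not part of the argument), the following fragment builds the unit disk graph of a point set lying in a strip, takes a breadth-first shortest path from the leftmost point to the rightmost point, and confirms that the path is chordless: any edge between two path vertices joins consecutive ones.

```python
from collections import deque
from math import dist  # Python 3.8+

def leftmost_rightmost_path_is_chordless(points):
    """points: list of (x, y) pairs lying in a strip.  Returns True if the
    BFS shortest path from the leftmost to the rightmost point is chordless,
    and None if the two points lie in different components."""
    n = len(points)
    adj = [[v for v in range(n) if v != u and dist(points[u], points[v]) <= 1]
           for u in range(n)]
    s = min(range(n), key=lambda u: points[u][0])      # leftmost point
    t = max(range(n), key=lambda u: points[u][0])      # rightmost point

    pred, queue = {s: None}, deque([s])                # BFS: fewest edges
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in pred:
                pred[v] = u
                queue.append(v)
    if t not in pred:
        return None
    path, v = [], t
    while v is not None:
        path.append(v)
        v = pred[v]

    # A chord would join two path vertices that are not consecutive.
    return all(abs(i - j) <= 1
               for i in range(len(path)) for j in range(len(path))
               if path[j] in adj[path[i]])
```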
Furthermore, if the leftmost and rightmost points are more than three Euclidean units apart, then the path has at least three edges (and at least four vertices). The complementary arc between the endpoints of a chordless path force all other complementary arcs with endpoints on the path, as shown by the following lemma. This easy lemma will be so useful to us that we will often use it without reference. Lemma 4.53 (The Path Lemma) If (s = po,Pi, • • • ,Pk = t) is a chordless path in an arbitrary graph, then (s,t) forces (pi,Pj) whenever i < j. Proof: Since the pi are distinct and adjacent only to their neighbours in the path, there Chapter 4. Cocomparability Graphs 194 is a sequence of arcs (M) = (s,pk)T(s,pk-1)r---T(s,pj+1)r(s,pj)1 so that (s,t) forces (s,pj), as shown in Figure 4.14. There is also a sequence of arcs (S,PJ) = (/^,Pi)r(pi,pj)r---r(p,-_1,p j)r(p,-,p j), so that (s,pj) forces (pi,Pj). The lemma follows since T* is transitive. | S = Po o>-! H D t = Pk Figure 4.14: The endpoints of a chordless path force the chords in the complement. Definition 4.54 A colour class of a graph G is the symmetric closure of an implication class. That is, if [a] is an implication class of G = (V, E), where a G E, then [a] = [a]U[a]" 1 is a colour class. | The following theorem relates colour classes to cocomparability graphs. Theorem 4.55 ([Gol80], page 109) Each [undirected subgraph induced by a] colour class of a graph G either has exactly two transitive orientations, one being the reversal of the other, or has no transitive orientation. If in G there is a colour class having no transitive orientation, then G fails to be a cocomparability graph. Chapter 4. Co-comparability Graphs 195 It is sometimes the case that a graph has only one colour class. If such a graph is a cocomparability graph, then it is said to be uniquely complement orientable5. This section deals with graphs that are not so strongly constrained. We will see (Theorem 4.57) that if a graph contains a sufficiently long chordless dominating path, then every complementary arc is either forced or captured—trapped in a small, otherwise forced subgraph. That is, the graph has an implication class such that every other arc is captured by it. The following definition is more precise. Definition 4.56 Let G = (V,E) be a graph, and let (s,t) be an arc in its complement G = (V, E). Say that (s, t) captures (it, v) if (it, v) G E is part of one of the three induced subgraphs in Figure 4.15, where the directed nonedges are forced by (s,t). | v V .Q Figure 4.15: (s,t) captures (u,v). Theorem 4.57 (The Capture Theorem) Let G = (V,E) be a connected graph, and let P = (s , . . . ,t) be a chordless dominating path in G. If P has at least four vertices, then for all (u,v) G E, either (s,t) forces (u,v); (s,t) forces (v,u); or (s,t) captures (u,v). 5Golumbic [Gol80] would refer to the complement, which is a comparability graph, as uniquely par-tially orderable. I have chosen not to use this term since it is easily confused with the spanning order, which is a linear order. Chapter 4. Cocomparability Graphs 196 Proof: Linearly order the vertices in P according to their distance from s (so that s is the least vertex and t the greatest). Let it' be the least vertex in P that dominates it, as shown in Figure 4.16. It may be that it' = it, but in any event (it, it') ^ E. u v Figure 4.16: Definition of it' and v'. Assume without loss of generality that v is not dominated by any vertex in P less than u' (otherwise swap labels it and v and recompute it'). 
Let v' be the greatest vertex in P that dominates v. Note that v' is not less than it'. Again, it may be that v' = v, but in any event (v,v') (fc E. However, u ^ v', since (it,i>) £ E but (it, it') ^ E. Similarly, v ^ it'. The definitions and assumptions above also pertain to Lemmas 4.58 through 4.61. There are three alternatives: (it', v') £ E, (u', v') £ E, or u' = v'. The following three lemmas prove the theorem: if (it', u') £ E, then Lemma 4.58 applies; if (it', i/) £ E, then Lemma 4.60 applies; and if v! = v', then Lemma 4.61 applies. | Chapter 4. Cocomparability Graphs 197 L e m m a 4.58 Let G = (V, E) be a connected graph, and let P = (s,..., i) be a chordless dominating path in G. If P has at least four vertices, then for all (u, v) G E for which (u',v') G E, either (s,t) forces (u,v), or (s,t) captures (u,v). Proof: Case 1: (u',v) G E. Then (s, t)F*(u', v')T(u', v)T(u, v), so that (s,t) forces (u,v). o • s u 0 - 6"-u' v •59 o Case 2: (u,v') G E. Then (s, t)T*(u', v')F(u, v')T(u, v), so again, (s,t) forces (u,v). u 0. o 5 v •O 6 u t Case 3: (u',v) £ E and (u,v') <£ E. Then since (u,v) G E, u' ^  u and v' ^  v. Therefore {u,v,u\ v'} induces a square in G, and (s,t) captures (u,v). u v o s I Chapter 4. Cocomparability Graphs 198 Lemma 4 . 5 9 Let G = (V, E) be a connected graph, and let P = (s , . . . , t) be a chordless dominating path in G. If P has at least four vertices, then for all (u,v) G E for which (u',v') G E and u' = s, either (s,t) forces (u,v); (s,t) forces (v,u); or (s,t) captures (u,v). Proof: Since u' = s and P has at least four vertices, P — (s — u',v',v",v"',. . . ,t). Let us proceed by case. Case 1: (u,v") <£E. In this case, (s,t) forces (v,u) since (s,t)T*(v',v"')T(v,v'")T(v,v")T(v,u). Case 2 : (u,v") G E and (u,v') fc E. Then {v',u,v,v"} induces a claw in G. Furthermore, (s, t)F*(u', v")V(u, v") and (s,t)r*(v',v"')T(v,v'")T(v,v"), so (s,t) captures (u,v). U V Chapter 4. Cocomparability Graphs 199 Case 3: (u,v") G E and (u,v') G £. Here, (s,t) forces (u,u) since (s, t)V*(u', v")T(u, u")r(u, v')F(u, v) u v Cy. *Q u 3> -o • • o I Coro l l a ry 4.59.1 Let G = (V, E) be a connected graph, and let P = (s,...,t) be a maximum dominating path in G. If P has at least four vertices, then for all (u,v) G E for which (u',v') G E and v' = t, either (s,t) forces (u,v); (s,t) forces (v,u); or (s,t) captures (u, v). Proof: The lemma applies if we reverse path P , swap labels s and t, and swap labels u and v. | L e m m a 4.60 Let G = (V, E) be a connected graph, and let P — (s,..., t) be a chordless dominating path in G. If P has at least four vertices, then for all (u, v) G E for which (u',v') G E, either (s,t) forces (u,v); (s,t) forces (v,u); or (s,t) captures (u,v). Proof: If u' = 5 or v' = t, then Lemma 4.59 or its corollary applies, and we are done. So assume P = (s,...,it",u', v', v",... ,t). Note that (u", v) G E since v is not dominated by any vertex less than u'. Case 1: (u',v) G E. Here (s,t) forces (u,v) since (s,t)T*(u",v')r(u",v)T(u',v)T(u,v). U V 0 o Chapter 4. Cocomparability Graphs 200 Case 2 : (u', v) fcE~ and (u, v') fc E. It cannot be that v! = u since (u,v) £ E, but (it', u) fc E. Therefore {u',u", u,v} induces a claw in G. Furthermore, (s,t)T*(u",v')T(u",v), and (s,t)T*(u",v')T(u",u). Therefore, (s,t) captures (u,u). u v Case 3: (u',v) fc E, (u,v') € E, and (u,v") € E. Again, (s,t) forces (u,v) since (s,t)T*(u',v")T(u,v")r(u,v')T(u,v). u v o o Case 4: (u',v) fc E, (u,vf) e E, and (u,v") fc E. Here, (s,t) forces (v,u) since (s,t)T*(u',v")T(v,v")r(v,u). 
u v o o s u" I Lemma 4.61 Let G = (V, E) be a connected graph, and let P — (s,... , i ) be a chordless dominating path in G. If P has at least four vertices, then for all (u,v) £ E for which u' = v', either (s,t) forces (u,v); (s,t) forces (v,u); or (s,t) captures (u,v). o Chapter 4. Cocomparability Graphs 201 Proof: Since P has at least four vertices, either u' is at least two edges from 5 in P, or v' is at least two edges from t. We can assume the latter, since the claim for the other possibility follows from symmetry. Again, let us proceed by case. Case 1: (u, v") £ E. Here (s, t)T*(v', v'")T(v, v'")T(v, v")T(v, u), so (s,t) forces (u,u). U V o s u o Case 2 : (u,v") E E and (u,v"') <£ E. Again, (s,t) forces (v,u) since (s,t)T*(v',v"')r(v,v'")T(v,u). u o s u o Case 3 : (u,v") G E and (u,v"') G E. Clearly, u 7^  u' since (u,v) G Z? but (u,u') (fc E. Similarly, v ^ v'. Therefore, \u' = v',u,v,v"} induces a claw in G. Furthermore, (5, t)T*(v', v'")T(u, v"')Y(u, v") and (s,t)r*(v',v'")V(v,v'")T(y,v"). Therefore (s,<) captures (u,u). o s I Chapter 4. Cocomparability Graphs 202 4.3.3 Implications for Cocomparability Graphs Let G = (V, E) be a connected cocomparability graph, and let G = (V, A) be any transitive orientation of E. If (po,Pi, • • • ,Pk) is a shortest path in G from a source p0 in G to a sink pk in G, then P is dominating. For, if v is any undominated vertex, then (po, v)T(pi,v)T • • • T(pk,v). So either po is not a source or pk is not a sink. Furthermore, such a path is chordless, simply because it is a shortest path. This argument establishes that every connected cocomparability graph has a chordless dominating path. It is instructive to see that this result also follows from the spanning order character-ization of cocomparability graphs. A connected cocomparability graph has a spanning order by Theorem 4.3. Since the graph is connected, there is a shortest path from its least vertex to its greatest vertex. This path is dominating by Lemma 4.7. Chapter 5 Strip Graphs Recall from Definition 1.3 that a graph G = (V, E) is a T-strip graph if there is a function (a r-strip realization) / : V —> R x [0, r] such that (u, v) € E if and only if ||/(u) — /(t>)|| < 1. A graph is a strip graph if it is a -\/3/2-strip graph. Recall that every \/3/2-strip graph is a cocomparability graph (Theorem 3.7) but that for every r > \/3/2 there is a r-strip graph that is not a cocomparability graph (Theorem 3.8). In this chapter, we will see how algorithms can exploit a strip-realization for strip graphs. In particular, Section 5.1 develops an algorithm for finding optimal Steiner sets whose run-time is independent of the number of edges. The rest of this chapter is concerned with characterizing strip graphs; one facet of this is creating a realization given a graph. Section 5.2 characterizes strip graphs as those that have only positive-weight cycles in a corresponding digraph. The algorithm for constructing this digraph assumes knowledge of the y-coordinates of all vertices, as well as the left-to-right order of the realization of nonedges. Section 5.3 immediately applies this characterization to small stars, and shows that, for example, is a r-strip graph for some but not all r . Section 5.4 shows how the characterization distinguishes between indifference graphs and other strip graphs. Specifically, it shows that a strip graph is not an indifference graph precisely if it contains a claw or a square. 
Finally, Section 5.5 characterizes trees in strip-graphs, showing that all such trees are equivalently caterpillars of degree at most 4. Let us begin with a brief example. We will see later (Property 5.21) that the bound 203 Chapter 5. Strip Graphs 204 in this example is tight, that is, that Kit4 is not a T-strip graph for any r < ^5 /8 . E x a m p l e 5.1 J ^ 1 ) 4 is a r-strip graph for all r > v /5/8. Proof: Let {a, b, c, d} be the independent set of vertices in the star A ' i ) 4 , and let h be the star's "hub". Let dy be any value satisfying y^5/8 < dy < min{y^5/4, r} . In particular, if ^5/8 < r < ^ / 5 / 4 , then d„ = r will suffice. Define 4 = ^ ( 4 - d2y)/9. Then the assig nment / : V 4 R x ( T ] is a strip realization for Kij4, where r > ^5/8 , and /(a) = (0,d y) /(&) = (4,0) / ( C ) = (2dx,dy) f(d) = (34,0) /(/*) = (34/2,^/2) Figure 5.1 shows the resulting strip realization when dy = T — 0.8. The construction a c b d Figure 5.1: Ki>4 is a 0.8-strip graph. ensures that ||/(a) - /(d)|| = 2 so that \\f(a) - f(h)\\ = \\f(h) - f(d)\\ = 1. To see this, note that \\f(a)-f(d)\\2 = {Zdxf + dl = 4. Chapter 5. Strip Graphs 205 The construction also ensures that — f{h)\\ = \\f(h) — /(c)| | < 1 since \\m-f(h)\\2 = (djzy + di 4 - d 2 4-9 + y = (4 + 37d2)/36 < 1 if d 2 < (36 - 4)/37 = 32/37. Conversely, the construction ensures that ||/(a) — f(b)\\ = \\f(b) — /(c)| | = ||/(c) /(d)|| > 1 since | | / ( a ) - / ( 0 ) | | 2 = = d2x + d2y 4 - d2 9 y 4 + 8d 2 9 > 1 if d2y > (9 - 4)/8 = 5/8. Finally, the construction ensures that ||/(a) - /(c) || = ||/(6) /(d)|| > 1 since l l / ( * ) - / ( c ) i r = = ( 2 4 = 4 2 4 - d 2 y 9 > 1 i f d 2 < ^ = 5/4. | 5.1 Exploiting a Geometric Model Since strip graphs are both unit disk graphs and cocomparability graphs, we can com-bine algorithms from both classes of graphs. We will see how do this by solving the Chapter 5. Strip Graphs 206 minimum weight Steiner set problem on strip graphs in 0 (Vlog V) time, given a geo-metric representation. This result improves on the 0(V\ogV + E) time bound for the more general cocomparability graphs in Chapter 4. In fact, we will simply implement the cocomparability algorithm using the unit disk graph data structure from Chapter 3. 5.1.1 Review of Cocomparability Results The following algorithm and terms from Chapter 4 are reproduced here to keep the present discussion self-contained. Let G = (V,E,w) be a graph with nonnegatively weighted vertices. Let R C V be a "required" set of vertices in G. A Steiner set (of G and R) is a set of vertices S C V \ R such that the subgraph induced in G by S U R is connected. The weight w(S) of a set of vertices S is defined, as usual, to be the sum of the weights of its elements. Assume without loss of generality that w(v) — 0 for all vertices v £ R. The weighted Steiner set problem is to find a minimum weight Steiner set in such a vertex weighted graph. Theorem 4.42 [Chapter 4] Given a spanning-ordered cocomparability graph G and required set R C V, algorithm MWSS-OCC returns a minimum weight Steiner set of G and R if and only if it exists. Theorem 4.43 [Chapter 4] Given any linearly-ordered graph G = (V, E,<,w), algo-rithm MWSS-OCC runs in 0(V log V + E) time. 5.1.2 Application to Strip Graphs We want to modify this algorithm to work for strip graphs. The bottleneck in Algorithm MWSS-OCC is finding a lightest path in Step 7. This is precisely the operation that Algorithm M I N - P A T H from Chapter 3 was designed to handle for unit disk graphs. 
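Before exploiting that machinery, note that explicit constructions such as the realization of K_{1,4} in Example 5.1 are easy to validate mechanically. The following sketch (my own check, with tau = d_y = 0.8 as in Figure 5.1) confirms that the hub is within unit distance of every leaf while the four leaves are pairwise more than unit distance apart, and that all five points lie in the strip.

```python
from itertools import combinations
from math import dist, sqrt

def check_k14_realization(dy=0.8):
    """Verify the K_{1,4} realization of Example 5.1 for tau = dy = 0.8."""
    dx = sqrt((4 - dy ** 2) / 9)
    pts = {"a": (0, dy), "b": (dx, 0), "c": (2 * dx, dy),
           "d": (3 * dx, 0), "h": (1.5 * dx, dy / 2)}
    eps = 1e-9            # ||f(a)-f(h)|| and ||f(h)-f(d)|| equal 1 exactly
    adjacent = {frozenset(pair): dist(pts[pair[0]], pts[pair[1]]) <= 1 + eps
                for pair in combinations(pts, 2)}
    hub_ok = all(adjacent[frozenset(("h", leaf))] for leaf in "abcd")
    leaves_ok = not any(adjacent[frozenset(pair)]
                        for pair in combinations("abcd", 2))
    in_strip = all(0 <= y <= dy for _, y in pts.values())
    return hub_ok and leaves_ok and in_strip

print(check_k14_realization())     # True
```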
A strip graph is of course a unit disk graph where all vertex points are confined to an Chapter 5. Strip Graphs 207 Table 5.1: Algorithm: MWSS-OCC(G, R) [Minimum Weight Steiner Set for span-ning-Ordered Cocomparability Graphs] Input: A linearly-ordered vertex-weighted graph G = (V, E, <, w) and a required set of vertices R CV. Output: A minimum weight Steiner set S, or a message stating that no Steiner set exists, or a message stating that G is not a spanning-ordered cocomparability graph. 1 if R is empty, 2 then return the empty set 3 halt. 4 if R is not a subset of a connected component of G, 5 then return "No Steiner set exists." 6 halt. 7 P <— a least weight path from min R to max R in G. 8 S <- P\R 9 if the subgraph induced by S + R is connected, 10 then return S 11 else return " G is not a spanning-ordered cocomparability graph." Chapter 5. Strip Graphs 208 infinite strip r units thick. Assume the vertices are linearly ordered along the axis of the strip, left to right in this case. This order is a spanning order, by the following lemma. Contrast this lemma with Lemma 3.6, which states that the compatible orientation (as opposed to linear order) associated with any strip realization of a graph is a transitive orientation. Lemma 5.2 Every left-to-right-ordered strip graph is a spanning-ordered cocomparabil-ity graph. Proof: Let / : V —> [0, r] be a r-strip realization of a graph G = (V,E), where r 6 [0,^/3/2]. Let i i , v, and w be three vertices where u and w are adjacent, and Xj(u) < xf(v) < xj(w). Since f(u) and f(w) are at most unit distance apart, their projections, x/(u) and x/(w), on the x-axis are at most unit distance apart, too. Since xj(v) lies between xj(u) and xj(w), it must lie within half a unit of either xj(u) or xj(w), say xj(u), without loss of generality. Therefore the square of the distance between f(u) and f(v) is given by (xf(U)-xf(v)f + (yj(U)-yf(v)f < ( l / 2 ) 2 + r 2 < (1/2) 2 +(x/3/2) 2 = 1. That is, u and v are adjacent, and the vertex order is a spanning order. | Create Algorithm MWSS-S (Minimum Weight Steiner Set on Strip Graphs) by re-moving Steps 9 through 11 from Algorithm MWSS-OCC, and by inserting Step 0, which sorts the strip graph vertices from left to right (i.e., by x-coordinate). Theorem 5.3 Given a strip graph G, represented as a set of points in a strip, and required vertices R, Algorithm MWSS-S returns a minimum weight Steiner set if and only if it exists, and runs in O ( V l o g V ) time. Chapter 5. Strip Graphs 209 Proof: Correctness follows from Theorem 4.42 and Lemma 5.2. Steps 9 through 11 are not required by Lemma 5.2. Represent all sets as arrays with V elements; linked lists and arrays are easily inter-converted where required in 0(V) time. Step 0 takes 0(VlogV) time, and Step 1 takes 0(V) time. Step 4 can be implemented by marking all vertices in the same connected component as the first vertex in R using a single phase of depth first search in 0(VlogV) time (Theorem 3.16). It then suffices to check, in O(R) time, that all vertices in R are marked. Step 4 therefore executes in 0 ( V l o g V) time. The extreme vertices mini? and maxi? can easily be found in O(R) = 0(V) time. Step 7 uses algorithm M I N - P A T H (Table 3.3) to find a least weight path between mini? and maxi?, which takes 0 ( V l o g V) time by Theorem 3.14. Step 8, setting S = P \ R, can be computed in 0(V) time by filling in the array for S. 
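In outline, Algorithm MWSS-S is a preprocessing step in front of the generic routine. The sketch below (my own, reusing the mwss_occ sketch given after Table 4.15) takes the points of a strip realization, sorts them by x-coordinate (Step 0), and builds the unit disk adjacency explicitly. That adjacency construction costs O(V^2), so this simplified version does not attain the O(V log V) bound of Theorem 5.3, which relies on the Chapter 3 data structures behind Algorithm MIN-PATH.

```python
from math import dist

def mwss_s(points, weight, R):
    """Simplified sketch of Algorithm MWSS-S.

    points -- dict: vertex -> (x, y) with 0 <= y <= tau <= sqrt(3)/2
    weight -- dict: vertex -> nonnegative weight (0 on required vertices)
    R      -- set of required vertices
    """
    order = sorted(points, key=lambda v: points[v][0])        # Step 0
    adj = {u: {v for v in points
               if v != u and dist(points[u], points[v]) <= 1}
           for u in points}
    # By Lemma 5.2 the left-to-right order is a spanning order, so the final
    # connectivity check inside mwss_occ (Steps 9-11) never rejects the input.
    return mwss_occ(adj, weight, order, R)
```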
I 5.2 Characterization Via Cycles in Weighted Digraphs Let G = (V, E) be a r-strip graph, and let / : V —> R x [0, r] be a realization of G. Let us suppose that we know the y-coordinates of / ( V ) . Then we know something about the x-dimension between pairs of vertices, as clarified by the following definition and its property. Definition 5.4 For every pair of vertices u,v 6 V , call aUiV = y \ l — (yj{u) — yj{y))2 the critical x-dimension corresponding to the pair. Note that, if 0 < r < \/3/2, then 1/2 < aUtV < 1 for all pairs u,v. | Chapter 5. Strip Graphs 210 Property 5 .5 Let G = (V, E) be a r-strip graph, and let f : V -» R x [0,r] 6e a realization of G. For every distinct pair u,v G V, (u,v) £ E if and only if xf(u) - xj(v) < auv (5-17) and xj(v) - xj(u) < cxuv. (5-18) Proof: By Definition 5.4, (u,v)€E if and only if - / ( u ) | | < 1, if and only if |x/(w) - xf(v)\2 + \yf(u) - yf(v)\2 < 1, if and only if \xj(u) — xj(y)\ < auv. I Suppose also that we know the left-to-right order of the endpoints of each nonedge. Note that this is weaker than knowing the left-to-right order of all of the vertices. That is, let (V, E) be the strict partial order where (u,v) G E if and only if (u,v) G E and xj(u) < Xf(v). Note that E is an orientation of E. Property 5 .6 For every pair u,v G V, (u,v) G E if and only if xf(v) - xf(u) > auv equivalently, if and only if xf(u) - xj(v) < -auv. (5.19) Chapter 5. Strip Graphs 211 Proof: By Property 5.5, (u,v) £ E if and only if \xj[u) — xj(y) \ > auv. | Note that inequalities 5.17 through 5.19 generate a system of \E\ + \E\ + ( |V | 2 - | V | - \E\) = \V\2 - \V\ + \E\ difference constraints. Let us make some of these notions more formal. Definition 5.7 A (T) levelled graph G = (V, E,l) is a graph G — (V,E) and a levelling function I : V —>• [0, r] that maps vertices to y-coordinates. | Definition 5.8 A levelled graph G = (V,E,l) is a levelled r-strip graph if there exists a r-strip realization / : V —> R x [0,r] for G such that y/(v) = l(v), for every vertex v £ V. | Recall from Chapter 1 that an oriented graph is a directed graph G = (V, A) where A is asymmetric. Also from Chapter 1, an orientation of an undirected graph G = (V, E) is an oriented subgraph H = (V, A) where E = A + A"1 (the arc set A is also called an orientation of the edge set E). Definition 5.9 A complement oriented graph G = (V, E, E) is a graph G = (V, E) and an orientation G = (V, E) of the complement G = (V, E). | Definition 5.10 A complement oriented graph G = (V,E,E) is a complement oriented r-strip graph if there exists a r-strip realization / : V —> R x [0, r] for G such that xj(v) < Xf(v) whenever (u,v) £ E. | By the above discussion, a strip graph can be levelled and complement oriented such that Inequalities 5.17 through 5.19 are satisfied. Conversely, if the inequalities corresponding to a levelled, complement oriented graph are satisfied by some assignment, then that assignment, together with the levelling, is also a strip realization. We have established the following theorem: Chapter 5. Strip Graphs 212 Theorem 5.11 Given a levelled, complement oriented graph G = (V,E,l,E), let B be the corresponding system of Q(V2) difference constraints (5.17 through 5.19). A function f is a strip realization for G if and only if {xu : f(u) = (xu, l(u))} is a feasible solution for B. Corollary 5.11.1 A graph is a r-strip graph if and only if it can be r-levelled and oriented such that the corresponding system of difference constraints has a solution. 
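Corollary 5.11.1 suggests a purely mechanical first step: given a levelling and a complement orientation, write the constraint system down explicitly. The sketch below does only that; the encoding (a list of tuples, with a flag distinguishing strict from nonstrict inequalities) is an assumption of the sketch, and deciding whether the resulting system is feasible is the subject of the next subsection.

```python
from math import sqrt

def strip_constraints(V, E, level, nonedge_order):
    """Generate the difference constraints (5.17)-(5.19) of Theorem 5.11.

    V             -- list of vertices
    E             -- set of frozenset({u, v}) edges of G
    level         -- dict: vertex -> y-coordinate l(v) in [0, tau]
    nonedge_order -- set of ordered nonedges (u, v) meaning x_f(u) < x_f(v)

    Each constraint is returned as (i, j, bound, strict), encoding
    x_j - x_i < bound when strict is True, and x_j - x_i <= bound otherwise.
    """
    def alpha(u, v):                   # critical x-dimension of Definition 5.4
        return sqrt(1 - (level[u] - level[v]) ** 2)

    constraints = []
    for u in V:
        for v in V:
            if u == v:
                continue
            if frozenset((u, v)) in E:
                # (5.17)/(5.18): adjacent vertices differ by at most alpha in x.
                constraints.append((v, u, alpha(u, v), False))  # x_u - x_v <= alpha
            elif (u, v) in nonedge_order:
                # (5.19): u lies strictly more than alpha to the left of v.
                constraints.append((v, u, -alpha(u, v), True))  # x_u - x_v < -alpha
    return constraints
```

Feeding these tuples to a shortest-path computation on the corresponding constraint digraph, as developed in Section 5.2.1, then either produces x-coordinates for a realization or exhibits a cycle witnessing that none exists.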
5.2.1 Solving Systems of Difference Constraints According to Theorem 5.11, we can recognize levelled, complement oriented, r-strip graphs by generating and solving a system of inequalities. This section develops an algorithm that solves systems of mixed difference constraints such as Inequalities 5.17 through 5.19. We will begin by describing Bellman's solution of nonstrict difference constraints [Bel58], and then extend his methodology to include strict inequalities. The extension is self-contained, so Bellman's results will be described but not proved. Bellman's Solution The following terminology follows [CLR90]. A system of difference constraints is a set of m inequalities (on n variables) of the form Xj — xt- < bk, where 1 < i,j < n and 1 < k < m. A vector of values x = ( x i , . . . ,xn) is a feasible solution of a system of difference constraints if it satisfies the system. If x; — Xj < bk is a constraint in the system, then all other constraints x4- — Xj < bi, where 6/ > bk, are redundant in terms of feasible solutions. Therefore we may assume that any system has at most one constraint of the form Xj — x,- < bk, and at most one constraint of the form x,- — Xj < bi, for each pair i,j. The corresponding constraint digraph is a weighted directed graph D = (V, A, w) Chapter 5. Strip Graphs 213 where: V = {vo,vu . .. ,vn}, A = {(vi, Vj) : Xj — X{ < bk is a constraint} U {{vo, V{) : 1 < i < n} w(vi,Vj) = bk if Xj — Xi < bk is a constraint w(vo,Vi) = 0 where 1 < i < n The weight of a path (or cycle) in D is the sum of the weights of its arcs. Let 8(u,v) denote the least weight of any path from u to v in D. Theorem 5.12 ([CLR90], page 542) Given a system of difference constraints, let D be the corresponding constraint digraph. If D contains no negative-weight cycles, then x = (8(v0,vi),8(y0,v2),... ,8(v0,vn)) is a feasible solution for the system. If D contains a negative-weight cycle, then there is no feasible solution for the system. Adding Strict Inequalities Let us begin by generalizing the notion of difference constraints. Let Xj — X{ < bk denote a constraint of the form Xj — X{ < bk or Xj — X{ < bk- A system of mixed difference constraints is a set of m inequalities (on n variables) of the form Xj — Xi < bk, where 1 < i,j < n a n d 1 < k < m. Again, we can assume that there is at most one constraint Xj — Xi < bk and at most one Xi — Xj < bi for each pair The corresponding mixed constraint digraph is a weighted directed graph D = (V, A, w) where: V - { u 0 , u i , . . . , u n } , Chapter 5. Strip Graphs 214 A — {(i>,-, Vj) : Xj — xi < bk is a constraint} U {(v0,Vi) : 1 < i < n} w(vi,Vj) = bk if Xj — Xi <bk is a constraint w(yo, Vi) = 0 where 1 < i < n A loose constraint is one with a strict inequality; the other constraints are tight. A loose arc is one corresponding to a loose constraint, and a loose cycle is one containing a loose arc. L e m m a 5 .13 Given a system B of mixed difference constraints, let D be the corre-sponding mixed constraint digraph. If B has a feasible solution, then D contains no negative-weight cycles, and all loose cycles have positive weight. Proof: Suppose that B has a feasible solution, and let C = ( c 0 , C i , . . . , c p _i , c 0 ) be any directed cycle in D with p > 1. Let — Xi < be the constraint corresponding to (c;, C j + i ) . By construction and the feasibility of the solution, ti>(c,, c 8 + 1) = >x,-+1 — Xi. 
Therefore, the weight of the cycle satisfies
\[
\sum_{i=0}^{p-1} w(c_i, c_{i+1}) \;\ge\; \sum_{i=0}^{p-1} (x_{i+1} - x_i) \;=\; -x_0 + x_0 \;=\; 0,
\]
where subscript arithmetic is modulo p. That is, every cycle has nonnegative weight. If C contains at least one loose arc, then the inequality is strict. That is, every loose cycle has positive weight. ∎

The proof and extended statement of the converse of Lemma 5.13 require the following notions. Let D = (V, A, w) be the weighted digraph corresponding to a set B of mixed difference constraints. Our goal is to create a digraph for a system of tight difference constraints that has a feasible solution if and only if B has a feasible solution. To do so, we first convert loose arcs to tight arcs by inserting a necessary "space" $\Delta$ into the constraints. If D has no loose cycles, define $\Delta$ to be any positive constant. Otherwise, let $C_0$ be a least-weight loose cycle, and let $\Delta$ be any value that satisfies
\[
0 < \Delta < \frac{w(C_0)}{|V| - 1}. \tag{5.20}
\]
Now create a new directed graph D' from D by subtracting $\Delta$ from the weight of every loose arc.

Definition 5.14 The tightened constraint digraph corresponding to D = (V, A, w) is the weighted digraph D' = (V, A, w') where
\[
w'(u,v) = \begin{cases} w(u,v) - \Delta & \text{if } (u,v) \text{ is a loose arc} \\ w(u,v) & \text{if } (u,v) \text{ is a tight arc.} \end{cases}
\]

Theorem 5.15 Given a system B of mixed difference constraints, let D be the corresponding mixed constraint digraph, and let D' be the corresponding tightened constraint digraph. If D contains no negative-weight cycles, and all loose cycles have positive weight, then $x = \{x_u : u \in V \text{ and } x_u = \delta'(v_0, u)\}$ is a feasible solution of B. If D contains a negative-weight cycle, or a loose cycle with non-positive weight, then there is no feasible solution for the system B.

Proof: The tightened constraint digraph D' has no negative-weight cycles. To see this, let C be a cycle in D'. If C is not loose, then it has the same nonnegative weight in D' that it has in D. Otherwise, assume it has j loose arcs. Then, since $j \le |V| - 1$,
\[
w'(C) \;=\; w(C) - j\Delta \;>\; w(C) - j\,\frac{w(C_0)}{|V|-1} \;\ge\; w(C) - w(C_0) \;\ge\; 0.
\]
Therefore the least path weight $\delta'(u,v)$ from u to v in D' is well defined. Now let (u, v) be any arc in D'. Then $\delta'(v_0, v) \le \delta'(v_0, u) + w'(u,v)$, for otherwise the path from $v_0$ through u to v (with weight $\delta'(v_0,u) + w'(u,v)$) would be lighter than the lightest path from $v_0$ to v (with weight $\delta'(v_0,v)$). It follows that
\[
x_v - x_u = \delta'(v_0, v) - \delta'(v_0, u) \le w'(u,v).
\]
Therefore $x_v - x_u < w(u,v)$ if (u, v) is loose, and $x_v - x_u \le w(u,v)$ otherwise. It follows that x is a feasible solution of B. The second claim follows immediately from Lemma 5.13. ∎

5.2.2 Constraint Digraphs Corresponding to τ-Strip Graphs

For strip graphs, it is convenient to go directly to the corresponding constraint digraph by combining Theorem 5.11 and Theorem 5.15 as follows.

Definition 5.16 The (mixed constraint) digraph corresponding to a levelled, complement oriented graph $G = (V, E, l, \vec{E})$ is a weighted, directed graph $D = (V_D, A, w)$ where
\[
V_D = V \cup \{v_0\}, \qquad A = A^+ \cup A^- \cup \{(v_0, v) : v \in V\},
\]
\[
A^+ = \{(u,v) : (u,v) \in E\}, \qquad A^- = \{(u,v) : (v,u) \in \vec{E}\}.
\]
The weight w(u, v) of each directed arc (u, v) in D is
\[
w(u,v) = \begin{cases} \alpha_{uv} & \text{if } (u,v) \in A^+ \\ -\alpha_{uv} & \text{if } (u,v) \in A^- \\ 0 & \text{if } u = v_0, \end{cases}
\]
where $\alpha_{uv} = \sqrt{1 - (l(u) - l(v))^2}$. See Figure 5.2 for a small example. ∎

Figure 5.2: The weighted digraph (b) corresponding to the star (a). The eight arcs with positive weight in Figure (b) are shown as solid arrows. Note that there are two solid arrows between every pair of vertices that are adjacent in Figure (a).
The six arcs with negative weight in Figure (b) are shown as dotted arrows. Note that there is exactly one dotted arrow between every distinct pair of vertices that are not adjacent in Figure (a). The solid arrows all have weight a ' = y j \ — ( r /2) 2 . The horizontal dotted arrows all have weight — 1 = — 0 2. The diagonal dotted arrows all have weight — a = — y / l — r2. Arcs in A+ are called positive, and arcs in A~ are called negative. Note that, for each edge in E, there are two oppositely directed arcs in A+ with the same positive weight. Note also that the negative arcs A~ in D are exactly the loose arcs. So, if C is a cycle in D with no loose arcs, then it must have positive weight. Theorem 5.17 Given a r-levelled, complement oriented graph G, let D be the cor-responding mixed constraint digraph, and let D' be the corresponding tightened con-straint digraph. If every cycle in D has positive weight, then f : V —± R 2 , where Chapter 5. Strip Graphs 218 f(u) = (S'(u),l(u)) for all u £ V, is a strip-realization ofG. If D contains a nonpositive-weight cycle, then there is no strip-realization with the given levelling and complement orientation. Proof: Let B be the corresponding system of 0(V 2 ) difference constraints (5.17 through 5.19). By Theorem 5.11, / is a strip realization for G if and only if \xu : f(u) — (xu,y(u))} is a feasible solution for B. The digraph D corresponding to the levelled and complement oriented graph G also corresponds to this system of constraints B. If every cycle in D is positive, {xu : xu = S'(vQ,u) and u £ V} is a feasible solution of B by Theorem 5.15. Finally, if D contains a nonpositive-weight cycle C , then C must be loose. By Theo-rem 5.15, there is no feasible solution for B, and the theorem follows by Theorem 5.11. I Corollary 5.17.1 A graph is a r-strip graph if and only if it can be r-levelled and com-plement oriented such that every cycle in its corresponding digraph has positive weight. 5.2.3 An 0(V3) Algorithm for Laying Out Levelled, Complement Oriented, Strip Graphs Theorem 5.17 promises an algorithm to lay out a strip graph once it has been levelled and complement oriented. Algorithm S T R I P - L A Y O U T is a straightforward implementation of the ideas behind the theorem. Theorem 5.18 Algorithm STRIP-LAYOUT recognizes and produces a r-strip realization for a levelled, complement oriented graph G = (V,E,l,E) in 0(V3) time. —* Proof: Given a levelled, complement oriented graph G — (V,EJ,E), Step 1 can build an re x n matrix representation of the corresponding mixed constraint digraph D = Chapter 5. Strip Graphs 219 Table 5.2: A l go r i t hm: STRIP-LAYOUT(G) [Levelled, Complement Oriented, Strip Graph Recognition] Input: A levelled, complement oriented graph G = (V, E, I, E) and a real number r . Output: A shortest least-weight cycle in D and a realization / : V —> R x [0, T] if G is a T-strip graph, or evidence (a negative cycle in D) that G is not a r-strip graph. 0 VD <- V U {v0} 1 D = (VD,A, W) <— the digraph corresponding to G. 2 {6(u,v):u,v G VD} 4 - F L O Y D - W A R S H A L L ( D ) 3 u;o = min{5(t>, u) : f G V} 4 return the cycle in D corresponding to IUO. 5 i fw o >0 6 then A <-u;o/(|V| - 1) 7 D' = (VD,A,W') 4 - the tightened digraph corresponding to G. 8 {<£'(«, u) : u, v G Vb} ^ - F L O Y D - W A R S H A L L ( D' ) 9 return function / , where f(v) = (5'(v0lv)J(v)) for all v G V . Chapter 5. Strip Graphs 220 (VD,A,W) in O(Vp) = 0(V2) time. 
Let us assume that w(v,v) = oo for all v £ Vo-Step 2 can then run the dynamic programming Floyd-Warshall all-pairs shortest-paths algorithm ([CLR90] pages 558-565) on D(V) in 0(V3) time. Step 3 can then set w0 = mm{8(v,v) : v £ V} from the result of Step 2 in 0(V) additional time. Implement the Floyd-Warshall algorithm to maintain, for all pairs u, v, the prede-cessor p(u, v) of v in the (computed) shortest path from u to v. This does does not increase the asymptotic run time, but allows Step 4 to extract the cycle corresponding to Wo in linear time ([Tar83] pages 94-95). The Floyd-Warshall algorithm has the property that this cycle is negative if Wo < 0, in which case G has no layout compatible with its levelling and complement'orientation, again by Theorem 5.17. If w0 > 0, then D has no negative cycles, and S(u,v) correctly represents the least path-weight from u to v for all such pairs. In particular, w0 is the weight of a least-weight cycle in D, and A in Step 6 satisfies Inequality 5.20. Step 7 can now construct the tightened constraint digraph D' = (VD, A, w') corresponding to D in 0(VD + A) = 0(V2) time. Finally, Step 8 again1 executes the Floyd-Warshall algorithm in 0(V3) time. The entire recognition and layout algorithm for levelled, complement oriented strip graphs therefore runs in 0(V3) time. | 5.3 Stars in Strip-Graphs As an example of the weighted cycle mechanism of the previous section, this section shows which stars are r-strip graphs as the thickness r of the strip varies. Property 5.19 The star K~ii5 is not a r-strip graph for any r 1Since the time-critical step in this algorithm is Step 2, it suffices for Step 8 to execute in 0(V3) time. If you have a faster way to find the lightest cycle in Step 2, then you should also use a faster algorithm for Step 8. One such algorithm is the Bellman-Ford algorithm for the single-source least path-weight problem ([CLR90] pages 532-544). The Bellman-Ford algorithm runs in 0(VA) = 0(V3) time, which is faster for sparse graphs. Chapter 5. Strip Graphs 221 Proof: Consider any levelling and complement orientation of K\$. Its 5 independent vertices are then linearly ordered, say a<b<c<d<e. Then C = (h, e, d, c, 6, a, h) is a cycle in the corresponding digraph, where h is the "hub" of the star. This cycle has weight w(C) = Q>he — Oted — CX.dc ~ &cb ~ &ba + « a A < 1 - 1 / 2 - 1 / 2 - 1 / 2 - 1 / 2 + 1 = 0 since, for all arcs (u,v), \/\ — r2 < auv < 1 and \ / l — r 2 = yj\ — (3/4) = 1/2. Hence A+,5 is not a strip graph by Theorem 5.17. | Property 5 .20 The star is not a r-strip graph for any r < y 5 / 9 . Proof: Again, consider any levelling and complement orientation of A+ ) 4 such that C = (h, d, c, b, a, h) is a cycle in the corresponding digraph where a < b < c < d are the independent vertices and h is the hub. This cycle has nonpositive weight w(C) = o>hd - Oidc - acb - aba + aah < 1 - 2 / 3 - 2 / 3 - 2 / 3 + 1 = 0 since, for all arcs (u, t>), \ A — r 2 < auv < 1 and \ / l — r 2 = yjl — (5/9) = 2/3. | We saw in Example 5.1 that A+ j 4 is a strip graph for all strips of thickness greater than y/5/8, and in the previous property, that Kii4 is not embedable in strips thinner than ^5 /9 . Is one of these bounds tight? As it turns out, the r > ^5/8 bound is tight. The weighted cycle mechanism from the previous section does not provide enough power to prove this by itself. We must therefore resort to more ad hoc arguments. Chapter 5. Strip Graphs 222 Property 5 . 2 1 The star Kit4 is not a r-strip graph for any r < y5/8. 
Proof: The diameter of any set of points realizing A+ i 4 must be at most 2 since every pair of vertices is joined by a path of length at most 2. In particular, any realization of the four (independent) leaves must have diameter at most 2. However, we will see that any 5/8-realization of four independent vertices has diameter greater than 2. To this end, let {a, b, c, al} be a set of points that realizes four independent vertices in a y/5/8-strip. Without loss of generality, we may assume that x(a) < x(b) < x(c) < x(d), so that \\a — d\\ is the diameter. Therefore x(b) — x(a) > yjl — (5/8) = /^3/8 > 1/2, and this relation also holds for x(c) — x(b) and x(d) — x(c). It follows that x(d) — x(a) > 3-^ 3/8. We may also assume that a and d minimize \\a — d\\, and that b and c maximize ||fe — c|| (over all possible realizations). Then all points in {a,b, c, d} lie on the boundaries of the strip. To see this, suppose first that a does not lie on the boundary, as shown in Figure 5.3. Then we can perturb a such that the resulting realization has a smaller diameter \\a — d\\, - 0 b Figure 5.3: If point a is not on the boundary, then rotate it about b to decrease \\a — d\\. The figure shows two arcs, centered at b and d, and passing through a. In the figure, b lies below segment ad so rotate a clockwise along the arc about b. The arc about d delimits the points that are farther from d from those that are closer to d. as follows. If b lies on or below the line segment ad, as shown in Figure 5.3, then rotate a clockwise about 6, thereby moving a closer to d. This process does not change the distance \\a — b\\ > .1, so that x(b) — x(a) remains greater than yJs/S. It follows that Chapter 5. Strip Graphs 223 x(c) — x(a) = (x(c) — x(b)) + (x(b) — x(a)) > 1/2 -f 1/2 = 1, so that a and c remain independent. By a symmetric argument (reflect the points about the horizontal), if b lies above segment ad, then decrease the diameter by rotating a counterclockwise about b. Similarly, if d does not lie on the boundary, then decrease the diameter by rotating it about c, either clockwise or counterclockwise, depending on whether c lies above or below segment ad, respectively. Now suppose that b does not lie on the boundary. If b lies on or below the line segment ac, as shown in Figure 5.4, then rotate b clockwise about a, thereby moving b farther from Figure 5.4: If point b is not on the boundary, then rotate it about a to increase ||6 — c||. The figure shows two arcs, centered at a and c, and passing through b. In the figure, b lies below segment ac, so rotate b clockwise along the arc about a. The arc about c delimits the points that are farther from c from those that are closer to c. c. Again, this process does not change the distance \\a — b\\ > 1, so that a and b remain independent. Also x(d) — x(b) = (x(d) — x(c)) + (x(c) — x(b)) > 1/2+1/2 = 1, so that 6 and d remain independent. By a symmetric argument (reflect the points about the horizontal again), if b lies above segment ac, then increase ||6 — c|| by rotating b counterclockwise about a. Similarly, if c does not lie on the boundary, then increase ||6 — c\\ by rotating it about d, either clockwise or counterclockwise, depending on whether c lies below or above segment bd, respectively. Chapter 5. Strip Graphs 224 Now that we know that {a,b,c,d} lies on the boundary, we must analyze the possi-bilities. Suppose a and d lie on different levels. 
Then the distance between them satisfies ||« - d|| 2 > (3x/378)2 + (5/8) = 22 so that the diameter of the point set is greater than 2. On the other hand, suppose that a and d lie on the same level. Then the points in at least one pair (a, 6), (6, c), or (c, d) must also lie on the same level (as each other). The x-distance between this pair is the same as the distance between the pair, which must be greater than 1. Therefore the x-distance between a and d satisfies: x(d) - x(a) > (2^3/8) + 1 > 2.22 so the diameter is again greater than 2. | The following theorem summarizes Example 5.1 and Property 5.21. Theorem 5.22 The star K~i>4 is a r-strip graph precisely when r > Property 5.23 The star Kii4 is not an L\ r-strip graph for any r < 1/2. Proof: This proof is analogous and nearly identical to that of Property 5.21, so it is worded as similarly as possible to emphasize this analogy. Again, we will show that any four independent points {a,b, c, d} under the L\ metric in a strip at most 1/2-unit thick has diameter greater than 2. Again assume without loss of generality that x(a) < x(b) < x(c) < x(d). Therefore x(b) — x(a) > 1 — (1/2) = 1/2, and this relation also holds for x(c) — x(b) and x(d) — x(c). It follows that x(d) — x(a) > 3(1/2). This time, assume that a and d minimize the x-span x(d)—x(a), and that b and c maximize x(c) — x(b) (over all possible realizations). Then all points in {a,b, c, d} lie on the boundary of the strip. Chapter 5. Strip Graphs 225 o d Figure 5.5: If point a is not on the boundary, then rotate it about b to decrease x(d) — x(a) while not decreasing \\a — d\\\. The figure shows two arcs, centered at b and d, and passing through a. In the figure, b lies below a so rotate a clockwise along the arc about b. The arc about d delimits the points that are farther from d from those that are closer to d. To see this, suppose first that a does not lie on the boundary, as shown in Figure 5.5. Then we can perturb a such that the resulting realization has a smaller value x(d) — x(a), as follows. If b lies at or below y(a), as shown in Figure 5.5, then rotate a clockwise about b, thereby moving a closer to d in the x-dimension. Note that this may decrease the L\ distance between a and d, or it may leave it unchanged. However, this process does not change the distance \\a — b\\i > 1, so that x(b) — x(a) remains greater than 1/2. It follows that x(c) — x(a) = x(c) — x(b) + (x(b) — x(a)) > 1/2 + 1/2 = 1, so that a and c remain independent. By a symmetric argument (reflect the points about the horizontal), if b lies above a, then decrease the x-span by rotating a counterclockwise about b. Similarly, if d does not lie on the boundary, then decrease the x-span by rotating it about c, either clockwise or counterclockwise, depending on whether c lies above or below d, respectively. Now suppose that b does not lie on the boundary. If b lies at or below y(a), as shown in Figure 5.6, then rotate b clockwise about a, thereby moving b farther from c. Note that this may increase the L\ distance between b and c, or it may leave it unchanged. Again, this process does not change the distance \\a — b\\ > 1, so that a and b remain independent. Also x(d) — x(b) = x(d) — x(c) + (x(c) — x(b)) > 1/2 + 1/2 = 1, so that b and d remain independent. By a symmetric argument (reflect the points about the horizontal Chapter 5. Strip Graphs 226 a o o d Figure 5.6: If point b is not on the boundary, then rotate it about a to increase x(c) — x(b). 
The figure shows two arcs, centered at a and c, and passing through b. In the figure, b lies below segment ac, so rotate b clockwise along the arc about a. The arc about c delimits the points that are farther from c from those that are closer to c. again), if b lies above point d, then increase x(c) — x(b) by rotating b counterclockwise about a. Similarly, if c does not lie on the boundary, then increase x(c) — x(b) by rotating it about d, either clockwise or counterclockwise, depending on whether c lies below or above point d, respectively. Now that we know that {a, b, c, d} lies on the boundary again, we must analyze the possibilities. Suppose a and d lie on different levels. Then the distance between them satisfies | | a - d | | i > 3(1/2) + (1/2) = 2 so that the diameter of the point set is greater than 2. On the other hand, suppose that a and d lie on the same level. Then the points in at least one pair, (a, b), (b,c), or (c, d) must also lie on the same level (as each other). The x-distance between this pair is the same as the distance between the pair, which must be greater than 1. That is, the x-distance between a and d satisfies: x{d) - x(a) > 2(1/2) + 1 = 2 so the diameter is again greater than 2. | Theorem 5.24 The star (claw) is a r-strip graph precisely when r > 0. Chapter 5. Strip Graphs 227 Proof: The claw is easy to realize if r > 0 as shown in Figure 5.7. However it is not -e- -e-Figure 5.7: A claw in a thin strip a 0-strip (indifference) graph by Theorem 2.3.4. We can prove this directly, using the weighted cycle mechanism. Consider any orientation of A ^ such that C = (h,c,b,a,h) is a cycle in the corresponding digraph where a < b < c are the independent vertices and h is the hub. This cycle has nonpositive weight w(C) — ahc — acb — aba + aah = 1 - 1 - 1 + 1 = 0 since, for all arcs (if, u), ctUjV = y/l — r 2 = \ / l — 0 = 1. I 5.4 Distinguishing Strip Graphs and Indifference Graphs The weighted digraph corresponding to a r-strip was defined previously (Definition 5.16). This section continues to examine cycles in these weighted digraphs. Let us say that a strip graph is proper if it is not a strip graph for r = 0, that is, if it is not an indifference graph. This section defines the notion of a dangerous cycle, and shows that there is a dangerous cycle in every weighted digraph that corresponds to a proper strip graph. We will see that if such a digraph has a dangerous cycle, then it has one with four arcs. Consequently proper strip graphs contain either a square or a claw, since a dangerous 4-cycle must correspond to one or the other. Chapter 5. Strip Graphs 228 A graph or digraph is signed if there is a positive (+) or negative ( —) sign associated with each edge or arc of the graph. In this section, it is often easier to think of our weighted digraphs as signed, depending on whether the arc weight is positive or negative. Definition 5.25 A cycle in a signed digraph is dangerous if it contains at least as many negative arcs as positive arcs, that is, if ^ > 1, where n is the number of negative cycle arcs and p is the number of positive cycle arcs. | 5.4.1 Dangerous 4-Cycles Lemma 5.26 Let D be the digraph corresponding to a levelled, complement oriented graph G.IfC is a positive, dangerous cycle in D, then C has at least four arcs. Proof: By construction, D has no loops and therefore no 1-cycles, dangerous or other-wise. Similarly, 2-cycles correspond to edges of G, and therefore have two positive arcs. 
Finally, if C = (a,b,c) were a dangerous 3-cycle (where at most arc (a,b) is positive), then it would have weight w(C) < aab — ctbc — dca < 1 - 1 / 2 - 1 / 2 = 0. This contradicts the fact that C has positive weight. | We already know by Theorem 3.7 that every strip graph is a cocomparability graph. However, Lemma 5.26 further illuminates this property, as the following alternative proof illustrates. Theorem 3.7 (Chapter 3) Strip graphs form a subclass of cocomparability graphs. Chapter 5. Strip Graphs 229 Proof: We need to show that the complement of a strip graph can be transitively oriented. If G is a strip graph, then by Theorem 5.17 it can be levelled and complement oriented such that its corresponding digraph has only positive cycles. This orientation is transitive, for otherwise the corresponding directed graph would have a dangerous triangle, contradicting Lemma 5.26. | L e m m a 5.27 Let D be the directed graph corresponding to a levelled, complement ori-ented strip graph G. If D has no dangerous cycles, then G is an indifference graph (a one-level graph). Proof: Write D = (V,A,w) and define a new weighted digraph D0 = (V, A,w0) where 1 if w(u, v) > 0 w0(u,v)=l - 1 \fw(u,v)<0 0 if w(u, v) = 0 i.e., if u = VQ Now consider any cycle C in Do (and therefore also in D). Since C is not dangerous, p > n. Therefore, Wo{C) = p — n > 0. Hence D0 has only positive cycles. Since Do corresponds to G relevelled so that y(v) = 0 for all vertices v G V, it follows that G is an indifference graph, that is, a 0-strip graph, by Corollary 5.17.1. | By the previous lemma, the digraph of every proper strip graph has a dangerous cycle. This dangerous cycle must have at least four arcs by Lemma 5.26. You may therefore wonder how large the smallest such dangerous cycle could be. This is answered by the following lemma. L e m m a 5.28 If the digraph corresponding to a levelled, complement oriented graph has a positive dangerous cycle, then it has a dangerous cycle with four arcs. Proof: Let D be the directed graph corresponding to a levelled, complement oriented graph G = (V, E). Let C be a dangerous cycle in D with the least number of arcs. We Chapter 5. Strip Graphs 230 may assume that C is simple, for otherwise C can be decomposed into two cycles, at least one of which is dangerous. By Lemma 5.26, C has at least four arcs. Since C is dangerous, it has at least one negative arc. On the other hand, if C had only negative arcs, its weight would be negative. Therefore C has at least one positive arc. In particular, it has a negative arc (£•, c) followed by a positive arc (c, of); see Figure 5.8. Let a precede b in C. Now, either (a,b) is positive or it is negative, and either (a, of) is an edge in G or it is not. This gives us four cases to consider: Case 1: (a, b) is positive and (a,d) 6 E. See Figure 5.8. a Figure 5.8: Case 1: (a, b) is positive, and (a, of) is positive. Since (a, of) is an edge in G, it follows that (a, of) is a positive arc in D. So replacing (a, 6, c, d) with (a, d) in C results in a dangerous2 cycle with fewer arcs, contradicting the assumption that C is a shortest dangerous cycle. Case 2: (a, 6) is positive and (a,d) £ E. See Figure 5.9. Since (a, d) is not an edge of G, either (a, d) or (d, a) is a negative arc in D. If (a, d) is a negative arc in D, then replacing (a,b,c,d) with (a, of) in C results in a dangerous3 cycle with fewer arcs, again contradicting the assumption that C is a shortest dangerous 2It has n — 1 negative arcs and p — 1 positive arcs. 
3It has n negative arcs and p — 2 positive arcs. Chapter 5. Strip Graphs 231 cycle. On the other hand, if (d, a) is a negative arc in D, then (a, 6, c, d, a) is a dangerous 4-cycle. Figure 5.9: Case 2: (a,b) is positive, and (a,d) or (d, a) is negative. Case 3: (a, 6) is negative and (a,d) 6 -E. See Figure 5.10. Since (a,d) is an edge in G, it follows that (d,a) is a positive arc in D. In this case, (a, 6, c, d, a) is a dangerous 4-cycle. Figure 5.10: Case 3: (a, b) is negative, and (d, a) is positive. Case 4: (a,b) is negative and (a,d) ^ See Figure 5.11. Either (a,d) or (d, a) is a negative arc in D. If (a, d) is a negative arc in D , then Chapter 5. Strip Graphs 232 replacing (a,6, c, d) with (a,d) in C results in a dangerous4 cycle with fewer arcs, again contradicting the assumption that C is a shortest dangerous cycle. Finally, if (d, a) is a negative arc in D , then (a,b,c,d,a) is a dangerous5 4-cycle. a b c d Figure 5.11: Case 4: (a, b) is negative, and (a,d) or (d, a) is negative. I Lemma 5.29 Let D be the directed graph corresponding to a levelled, complement ori-ented graph G.IfC is a positive, dangerous 4-cycle in D, then C corresponds to a square or a claw in G. Proof: Let C = (a, b, c, d, a) be a positive, dangerous 4-cycle. It cannot have four nega-tive arcs, denoted ( —, —, —, —), since such a cycle would have negative weight. Similarly, it cannot have 3 negative arcs and one positive arc, denoted (+, —, —, —), since its weight would be negative: w(C) = aab - abc - acd - ada < 1 - 3(1/2) = - 1 / 2 . It follows that C has exactly two negative arcs and two positive arcs, isomorphic to either ( - , + , - , + ) or ( - , - , + , + ) . Suppose C = ( — ,+,—,+) as shown in Figure 5.12. By the construction of D, the 4It has n — 1 negative arcs and p — 1 positive arcs. 5It has 3 negative arcs and 1 positive arc. Such a cycle is not only dangerous, it is unrealizable. Chapter 5. Strip Graphs 233 D G Figure 5.12: ( — , + , — , + ) in D corresponds to a square in G. pairs (a, d) and (b,c) are edges in G, and (a, 6) and (c, d) are not. Now, arc (a,c) cannot be negative for, if it were, (a, c, d, a) would be a dangerous triangle, contradicting Lemma 5.26. Its opposite orientation, arc (c, a) also cannot be negative for, if it were, (a, 6, c, a) would be a dangerous triangle. Therefore (a,c) is positive and therefore in G. Similarly, arcs (6, d) and (d, b) are not negative, since they would imply dangerous triangles (a, b, d, a) and (6, c, d, 6). Therefore (6, <f) is in G, and G corresponds to a square. On the other hand, suppose C = ( — ,—,+,+) as shown in Figure 5.13. By the D G Figure 5.13: ( — , —, +, +) in D corresponds to a claw in G. construction of D, the pairs (a,d) and (c,d) are edges in G, and (a, 6) and (6, c) are not. Chapter 5. Strip Graphs 234 Now, edge (a, c) cannot be in G for, if it were, arc (c, a) would be positive in D, and (a, 6, c, a) would be a dangerous triangle. Similarly, arcs (6, d) and (d, 6) are not negative, since they would imply dangerous triangles (a,b,d,a) and (b,c,d,b). Therefore (b,d) is in G, and C corresponds to a claw. | Theorem 5.30 Any strip graph that is not also an indifference graph contains a square or a claw. Proof: Let G be a strip graph. It can therefore be levelled and complement oriented such that its corresponding directed graph D has only positive cycles. If G is not a one-level graph, then D has at least one dangerous cycle. By Theorem 5.28, D contains a dangerous 4-cycle, which corresponds to a square or a claw by Lemma 5.29. 
| 5.4.2 Connection with Indifference Graph Characterization Another way of arriving at Theorem 5.30 is to examine the characterization of indif-ference graphs. Roberts characterized indifference graphs as claw-free interval graphs (Theorem 2.3.4). Also, Gilmore and Hoffman characterized interval graphs as chordal cocomparability graphs [GH64]. Therefore, indifference graphs are equivalently chordal, claw-free, cocomparability graphs. We know that all strip graphs are cocomparability graphs (Theorem 3.7). Hence, if a strip graph is not an indifference graph, it must either contain a claw or not be chordal. If it is not chordal, it must contain a square, since cocomparability graphs do not have induced cycles with more than four edges (Theorem 4.6). This higher-level proof clearly hides many of the details inherent in dangerous cycle proof of Theorem 5.30. In fact, we could prove Roberts's result (Theorem 2.3.4) using dangerous cycles. Nevertheless, the higher-level proof serves to relate strip graphs with other familiar results. Chapter 5. Strip Graphs 235 5.5 A Characterization of Trees in Strip Graphs What sort of trees are strip graphs? If a tree has any "body" to it, that is, if it contains the subtree in Figure 5.14, then it cannot be a strip graph. The leaves of this subtree have the o 9 Figure 5.14: The leaves of this tree form an asteroidal triple. property that between any pair of them, there is a path that avoids the neighbourhood of the third. A set of three vertices in a graph that has this property is called an asteroidal triple [LB62]. It is well known that all cocomparability graphs are asteroidal-triple free [Gal67]. This is an easy consequence of Corollary 4.7.1, that a path in a (scanning) ordered cocomparability graph dominates all vertices between its endpoints. Since all strip graphs are cocomparability graphs (Theorem 3.7), it follows that strip graphs are also asteroidal-triple free, and therefore free of the tree in Figure 5.14. Trees that are free of this tree (Figure 5.14) are sometimes called caterpillars: remov-ing their leaves results in a path (the spine). This definition implies that every spine vertex has degree at least 2. But for purposes of exposition, assume that the spine is ordered and that no leaves are adjacent to the first and last vertices, i.e., they have degree 1. We already know that K^^ is not a strip graph (Theorem 5.19). Since strip graphs are hereditary (i.e., all induced subgraphs of a strip graph are themselves strip graphs), it Chapter 5. Strip Graphs 236 follows that any tree that contains an induced K i t $ subgraph could not be a strip graph, either. The only trees not ruled out by this brief analysis are caterpillars all of whose vertices have degree at most 4. Remarkably, all such trees are strip graphs, as shown by the following subsection. 5 .5 .1 A Universal Embedding for Caterpillars Any caterpillar with degree at most 4 is isomorphic to an induced subgraph of the in-finite caterpillar shown in Figure 5.15. Any embedding of this infinite caterpillar also d2 9 ds 9 d4 9 O-c 0 -0- -O-c 2 -0-c 3 -0 c4 0 6 a-2 6 « 3 9 a4 Figure 5.15: The infinite degree-4 caterpillar specifies an embedding for an induced subgraph, and therefore for any caterpillar via its subgraph isomorphism. We will now see how to embed the infinite caterpillar. A l l points corresponding to vertices labelled c will lie strictly inside the strip, except for Co which lies on the upper level. 
Points corresponding to leaves labelled a will always lie on the lower level of the strip. Points corresponding to leaves labelled d will always lie on the upper level of the strip. The following technical definition will be useful before proceeding. Say that three points a, b, and c in the strip satisfy the iterative conditions if Chapter 5. Strip Graphs 237 (11) points a and b lie on the lower level, (12) point b is more than unit distance to the right of a, and (13) point c is less than unit distance from both a and b, as shown in Figure 5.16. In any such triple of points, b will be just a construction point, a h Figure 5.16: Points a, b, and c satisfy the iterative conditions. but point a will correspond to a leaf of the infinite caterpillar labelled a, and point c will correspond to an interior vertex (of degree 4) labelled c. The Initial Step To embed the infinite caterpillar, we will proceed iteratively, from left to right in the strip. To this end, place c 0 on the upper level, at location (0,y/3/2). Place ai slightly more than unit distance from c 0, on the lower level to the right, and by slightly more than unit distance from ai, also on the lower level to the right. More concretely, place ai at (0.51,0) and bi at (1.52,0), as shown in Figure 5.17. We can then place c\ within unit distance of C o , ai, and by. Again, more concretely, place c\ at (0.75,0.5). Clearly, points ai, bi, and C i satisfy the iterative conditions. Chapter 5. Strip Graphs 238 c 0 ai bx Figure 5.17: Embedding the first four vertices of the infinite degree-4 caterpillar in the strip. The Iterative Step We can always continue embedding the caterpillar from three points a, b, and c that satisfy the iterative conditions as follows. Place a new point d on the upper level, between a and 6, exactly unit distance from b, as shown in Figure 5.18. Note that xd — xb — 1/2 a h Figure 5.18: Placing point d, which corresponds to a leaf. since the thickness of the strip is y/%/2, and that xd > xa + 1/2 by condition (i2). It follows6 that d is more than unit distance from a. Then point c is within unit distance of d. If it were not, \xc — xd\ would be greater than 1/2, so that either \xc — xa\ > 1 or \xc — xb\ > 1, which contradicts the proximity of point c to points a and b. It follows that 6Note well that this conclusion does not follow if the thickness of the strip is less than y/3/2. Conse-quently this construction is not applicable to thinner strips. Chapter 5. Strip Graphs 239 vertices a and d are independent but adjacent to vertex c in the strip graph generated by points a, c, and d. We are now ready to construct a new set of points a', b', and d!. Let C(p) denote the unit radius circle about point p, where p is any point in the strip. By construction, C(d) intersects the lower level at point 6, as shown in Figure 5.19. a h Figure 5.19: The unit' circle about d intersects the lower level at point b. By the iterative conditions, C(c) intersects the lower level at a point x to the right of b, as shown in Figure 5.20. Therefore, there is an arc A of C(c) that (1) lies above the lower level, (2) lies to the right of the circle about c, (3) has positive length, and (4) has a tangent with finite positive slope at all points. Figure 5.20 shows arc A in bold. d Figure 5.20: The unit circle about c intersects the lower level at point x. Arc A is the bold part of C(c). 
Now place b' on the lower level to the right of x such that C(b') intersects arc A at its midpoint, as shown in Figure 5.21, although any point of intersection suffices. Chapter 5. Strip Graphs 240 The circle C(b') intersects the lower level at a point y to the right of point x since the a h h Figure 5.21: Circle C(b') intersects arc A at its midpoint, and it intersects the lower level at point y. tangent to C(b') is vertical at the lower level, but the tangent to C(c) has positive slope. Therefore let a' be the midpoint between x (on the left) and y (on the right), as shown in Figure 5.22, although any point between x and y will suffice. This ensures that a' and b' satisfy conditions (il) and (i2). Figure 5.22: Point c' is on C(c) and inside C(b') Finally, let c' be the point on A midway between the top of A and the intersection of A and C(b'), as shown in Figure 5.22, though any point on A that lies inside C[b') will suffice. Since c' is inside C(b'), it is within unit distance of b'. We need to show that c' is also within unit distance of a'. Since any continuous intersection of a circle (centered in the strip) with the strip (of width \/3/2) has diameter at most 1, it follows that arc A has diameter at most 1. As a consequence, c' is within unit distance of x. Therefore c' Chapter 5. Strip Graphs 241 is within unit distance of a', which lies between x and b', by convexity of the unit-radius disk about c'. In summary, a', and d satisfy the iterative conditions. Furthermore, a' is not adjacent to a or d since it lies to the right of 6, and it is not adjacent to c since it lies to the right of x. Similarly, c' is not adjacent to a or d since it lies outside of C(d). Figure 5.23 shows that portion of the caterpillar generated by points {a, c, d, a', c', a"}, d d' a a' Figure 5.23: The strip graph generated by {a, c, d, a', c', d'}. where d' is unit distance from b'. Note that d' is not adjacent to a, 6, or c. To see this, recall that points a, b and c have x-coordinates less than xai — 1/2, and that d' has an x-coordinate greater than xai + 1/2. It follows that a, b and c have x-coordinates less than Xd< — 1 and therefore lie more than unit distance from d'. We have established the following theorem. T h e o r e m 5 .31 A tree is a y/3/2-strip graph if and only if it is a caterpillar with degree at most 4-Chapter 6 Two-Level Graphs This chapter studies two-level strip graphs, that is, graphs that have strip-realizations with only two distinct y-coordinates. Since the y-coordinates of arbitrary strip graphs can take on an infinite number of values, it is difficult to see the combinatorial aspects of the y-dimension. To emphasize these combinatorial aspects, we define a notion of fc-level graphs, in which the y-coordinates take on at most k distinct values. If k = 1, these one-level graphs are exactly the indifference graphs. For k — 2, the main object of attention in this chapter, each of the two "levels" can be thought of as an indifference graph. But still, two-level graphs are not indifference graphs. It therefore behooves us to to better understand the relation between indifference graphs and two-level graphs. We will make significant steps towards characterizing, recognizing, and laying out (realizing) two-level graphs. We will also see how to develop algorithms for this class of graphs. After a few definitions, we will look at some basic examples in Section 6.2. 
In par-ticular, we will see that both the square and the claw are two-level graphs (although neither are indifference graphs), but that their realizations are highly constrained. This immediately applies to show that A+,4, and consequently any star with more than three leaves, is not a two-level graph. Finally, we will look at a small graph whose realization is completely constrained, that is, whose levels and complement orientation is completely determined. This small graph will be useful for designing larger graphs in which we wish to restrict the choice for the level of a vertex. Next, we will see that the geometric realization of A;-level graphs can be used to 242 Chapter 6. Two-Level Graphs 243 develop efficient algorithms. Section 6.3 illustrates this fact by developing an efficient algorithm for solving a dominating set problem. This problem, finding a minimum weight independent dominating set, was solved (less efficiently) in the more general context of cocomparability graphs by Chapter 4. The problem is equivalent to finding a minimum weight maximal clique in the complement of a two-level graph; this formulation yields the necessary insights for solving the problem. Section 6.4 examines the relation of two-level graphs with other classes of graphs. In particular, it places two-level graphs farther down a small hierarchy of previously studied perfect graphs that contains cocomparability graphs. We will see that a two-level graph under the Euclidean metric is also a two-level graph under the L\ "city block" metric. We will also see how to compute two illustrative parameters of two-level graphs, by showing that both the interval number and the boxicity of two-level graphs is at most two. Finally, we will see that the edges of a two-level graph can be bipartitioned such that one set induces an indifference graph and the other induces a bipartite permutation graph. The last section, Section 6.5, examines the problem of recognizing two-level graphs. We will begin by assuming that we know which level every vertex must be on (i.e., the graphs are striated), even though we might not know the y-coordinate of that level. We will see that the complements of striated two-level graphs are uniquely orientable. Striated &-level graphs, for k > 2, are not in general uniquely complement orientable, nor are two-level graphs that have not been striated. However, we will see that any transitive orientation of the complement of an unstriated two-level graph is compatible with some realization. Unfortunately, we will also see that, unlike indifference and cocomparability graphs, there is no forbidden ordered-triple characterization for two-level graphs. We will examine the structure of the class of two-level graphs as the thickness of the strip varies. In particular, we will see that the classes for any two distinct thicknesses are Chapter 6. Two-Level Graphs 244 incomparable. Nevertheless, if a graph is both a two-level Ti-strip graph, and a two-level T2-strip graph, then it is a two-level r-strip graph for all values of r between r\ and r 2 . The section concludes by showing how to recognize striated two-level graphs in 0 ( V 3 l o g V) time with the mechanism developed for levelled, complement oriented strip graphs. 6.1 Definitions Let G = (V, E) be a r-strip graph, and let / : V —> R x [0, r] be a realization of G. 
As usual, we write /(it) = (x j(u),y f(u)), and ctuv — \Jl - (yf(u) ~ Vf{v))2-Definition 6.1 A r-strip graph G — (V,E) is a k-level (r-strip) graph if there is a set { y i , i/2, • • •, Vk} of real values where 0 = yi < y2 < • • • < yk = T, such that G has a realization / satisfying y/(u) G { y i , y2,..., yk} for every vertex it E V. Say that vertex v is on level i , or on the ith level, if yj(v) — yi- I Note that auv = auw whenever v and w are on the same level. Sometimes we would like to require the vertices to be on certain levels, without actually saying what the y-values of those levels are. To this end, let us define a notion of striation, which captures this intuition. Definition 6.2 A k-striation is a function s : V —> { 1 , 2 , . . . , A;}. A graph G = (V, E, s) is k-striated if it includes a fc-striation function s. A striated graph G = (V, E, s) is a striated k-level (r-strip) graph if there is a realization / : V —>• R 2 and a set of k y-values (striae) {j / i ,y 2 , • • • ,Vk} such that 0 = yx < y2 < • • • < yk = r , and yf(u) = y s ( u ) , for every vertex u G V. | Definition 6.3 If the endpoints of an edge in a striated graph (or levelled graph) are on the same stria (or level), call it a level edge, otherwise call it a cross edge. | Chapter 6. Two-Level Graphs 245 One-level graphs are of course (one-dimensional) indifference graphs; there is no harm in assuming that the unused y-coordinate of all vertices is 0, and that all edges are level edges. Two-level graphs, for which k = 2, are clearly included in these definitions. For example, if / is a two-level realization for G = (V, E), then vertex v E V is on level 1 if yj(v) = 0, and on level 2 if yj{v) — T. In addition, since two-level graphs are discussed extensively in this chapter, it is convenient to have the following definition. The intention behind this definition is given by Property 6.5 below. Definition 6 . 4 Let a = y/l — r2 be the critical x-dimension corresponding to the strip thickness r. Note that, if 0 < r < y/3/2, then 1/2 < a < 1. Note also that, if the variable r is symbolic (its value has not been specified), then so is a. | Property 6 .5 Let f be a two-level realization for G. For all vertices u and v on the same level, (u,v) £ E if and only if \xj(u) — xj(v)\ < 1. For all u and v on different levels, (u,v) £ E if and only if \xf(u) — xj(v)\ < a. In either case, (u,v) £ E if \xf(u) - x}(v)\ < a. Proof: If u and v are on the same level, then yj(u) — y/(v) = 0. It follows that Similarly, if u and v are on different levels, then \yj(u) — y/(v)\ — T. It follows that (u, v)£E if and only if ||/(w) - < 1 if and only if J{xf(u) - xf(v))2 + (y/(u) - yj(v))2 < 1 if and only if J(xj(u) — xj(v))2 < 1 if and only if \xj(u) — xj{v)\ < 1. (u,v)£E if and only if - f(v)\\ < 1 if and only if J (xf(u) - xs(v))2 + (yf(u) - yf(v))2 < 1 if and only if J(xj(u) — xj[y))2 + r 2 < 1 Chapter 6. Two-Level Graphs 246 if and only if (x/(u) — xj(v))2 + r 2 < 1 if and only if \xf(u) — Xj[v)\ < V l — r2 if and only if \x/(u) — Xf(v)\ < a. In either case (u, v) G E if \xj{u) — xj(v)\ < a since a < 1. | 6.2 Examples Consider some realization of a two-level graph G = (V, E). Any cycle in the complete graph K(V) on V (or any graph on V, for that matter) must have an even number of cross edges. This is because any cycle must begin and end on the same level. In particular, this is true of any directed cycle in the weighted digraph (Definition 5.16) corresponding to G, the left-to-right orientation, and this levelling. 
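Before turning to the arc weights, here is the adjacency test of Property 6.5, together with the signed weights described in the next sentences, as a small Python sketch. The function names and the dictionary-based encoding of a realization (explicit x, level, and tau arguments) are my own illustration, not notation from the thesis.

```python
from math import sqrt

def adjacent(u, v, x, level, tau):
    """Property 6.5: in a two-level realization, same-level vertices are adjacent exactly
    when their x-distance is less than 1, and cross-level vertices exactly when it is
    less than alpha = sqrt(1 - tau^2)."""
    threshold = 1.0 if level[u] == level[v] else sqrt(1.0 - tau * tau)
    return abs(x[u] - x[v]) < threshold

def arc_weight(u, v, x, level, tau):
    """Weight of the left-to-right arc on the pair {u, v}: magnitude 1 for a level pair
    and alpha for a cross pair, taken positively for an edge and negatively for a
    non-edge (this is the weighting discussed in the next paragraph)."""
    magnitude = 1.0 if level[u] == level[v] else sqrt(1.0 - tau * tau)
    return magnitude if adjacent(u, v, x, level, tau) else -magnitude
```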
Cross arcs in the digraph have weight ± Q , and level arcs have weight ± 1 by Property 6.5. Recall that a = \ / l — r 2 and that 1/2 < a < 1. Furthermore, arcs in the digraph that correspond to edges in the graph have positive weight, and arcs that correspond to nonedges have negative weight. The next two lemmas rely on Corollary 5.17.1, that a graph is a r-strip graph if and only if it can be r-levelled and complement oriented such that every cycle in its corresponding weighted digraph has positive weight. Lemma 6.6 Let (a,b,c,d,a) be a chordless 4-cycle (a square) in a two-level graph G = (V,E). Then (a,d) and (6, c) are level edges, and the others are cross edges, for every two-level realization f of G for which xj(a) < xj(c) and xj(b) < Xf(d). That is, the orientation of a square determines its levels. Proof: Let the vertex set {a,b,c,d} induce a square in a two-level graph G as shown in Figure 6.1.(i). Let / be a two-level realization for which xj(a) < xj(c) and xj(b) < Chapter 6. Two-Level Graphs 247 b b b c a c Y + d a c Y 1 d a d 0) (iii) Figure 6.1: An induced square in a complement oriented two-level graph, and a cycle in the corresponding weighted digraph. x/(d). Then (a,d,b,c,a) is a cycle in the weighted digraph corresponding to graph G and realization / . Arcs (a,d) and (b,c) correspond to edges in G and so have positive weight, while arcs (d, b) and (c, a) have negative weight, as shown in Figure 6.1.(ii)- The cycle must have an even number of cross arcs. However, it cannot have 0 or 4 cross arcs, since its weight would then be 0. Therefore it has exactly two cross arcs and two level arcs. The level arcs must be the positive arcs (a, d) and (6, c) as shown in Figure 6.1.(iii), since otherwise the cycle would have nonpositive weight. | Lemma 6.7 Let G({a,b,c,d}) be an induced claw, where b is the "hub", in a two-level graph G — (V,E). Then (a,b) and (6, c) are level edges, and (b,d) is a cross edge, for every two-level realization f of G for which xj(a) < xj(d) and x/(d) < xj(c). That is, the orientation of a claw determines its levels. Proof: Let the vertex set {a,b,c,d} induce a claw in a two-level graph G as shown in Figure 6.2.(i). Let / be a two-level realization for which £/(a) < x/(d) and xj(d) < xf(c). Then (a, b, c, d, a) is a cycle in the weighted digraph corresponding to graph G and realization / . Arcs (a, 6) and (6, c) correspond to edges in G and so have positive weight, Chapter 6. Two-Level Graphs 248 (i) (ii) (iii) Figure 6.2: An induced claw in a complement oriented two-level graph, and a cycle in the corresponding weighted digraph. while arcs (c, d) and (d,a) have negative weight, as shown in Figure 6.2.(ii). The cycle must again have an even number of cross arcs, and again it cannot have 0 or 4 cross arcs, since its weight would then be 0. Therefore it has exactly two cross arcs and two level arcs. The level arcs must be the positive arcs (a, b) and (6, c) as shown in Figure 6.2.(iii), since otherwise the cycle would have nonpositive weight. | Lemma 6.8 A+]4 is not a two-level graph. Proof: Suppose there is a two-level realization for K i ^ . Either there is an induced claw with all vertices on the same level, or there is an induced claw with two leaf vertices on a level different from the hub's level. Neither case is allowed by Lemma 6.7. | Lemma 6.9 Let {a,b,c,d,e, /} induce the subgraph G' in Figure 6.3 in some two-level graph. Then any two-level realization orients and levels G' as shown in the figure (orient all complement edges left-to-right). 
Proof: The subgraph G' is clearly a two-level graph by the realization shown in the figure. It is therefore a cocomparability graph. In fact, its complement is uniquely Chapter 6. Two-Level Graphs 249 Figure 6.3: A uniquely levelled, uniquely complement oriented, two-level graph. orientable. To see this, start with nonedge (a,f). Recall from Definition 4.49 that an arc (a,b) directly forces arc (c, d) (that is, (a,b)T(c,d)) if 1. a = c and (6, d) (fc E, or 2. b = d and (a, c) £ E. The following relations are immediate. (a,/) T (a,c) T (d,c) (a,/) T (a,e) («,/) r (6,/) («,/) r (d,/) (d,c) T (e,c) (d, c) r (d,6) Therefore (a,/) forces the seven remaining nonedges. For example, (a,f)T(a,c)r(d,c)T(e,c). Consequently, if x(a) < x(f) for some realization, then x(a) < x(e) < x(c), x(a) < x(f), x(d) < x(b) < x(f), x(d) < x(c). Chapter 6. Two-Level Graphs 250 Now since (a, 6, d, e,a) and (6, c, e,/, 6) are chordless 4-cycles, two applications of Lemma 6.6 show that (a, 6), (d, e), (6, c), and (e,/) are level edges, and all others are cross edges. | 6.3 E x p l o i t i n g a Geomet r ic Real iza t ion This section develops a data structure that implicitly represents the oriented complement of a £:-level graph G = (V,E,E) given its realization f : V —¥ R 2 . We will see that this data structure also efficiently represents the transitive reduction (Definition 4.32) of the oriented complement G = (V,E). In fact, the forthcoming Theorem 6.22 proves that the transitive reduction (V,R) of G can be computed in 0 ( V l o g V + k2V + R) time. This compares with 0(M(V)), the best known time for the transitive reduction of arbitrary digraphs [AGU72], and 0(VR), the best known time for transitively oriented comparability graphs (§4.2.4). This section applies the data structure to solve the weighted independent dominating set problem on fc-level graphs in 0(k2V + V2) time. In particular, it solves this prob-lem on two-level graphs in quadratic time. This problem is equivalent to the weighted maximal clique problem on the complements of A;-level graphs. For fixed k, these al-gorithms therefore improve on the 0(M(V)) time algorithms for cocomparability and comparability graphs in Chapter 4. 6.3.1 fc-Level Graphs and their Oriented Complements Defini t ion 6.10 A digraph G = (V, E) is the oriented complement (of strip graph G = (V, E) and realization f)\iE = {(u,v) : (u,v) G E and £/(u) < Xf(v)}. | The next property follows immediately from this definition and that of auv (Defini-tion 5.4). Chapter 6. Two-Level Graphs 251 Proper ty 6.11 Let G = (V,E) be the oriented complement of a k-level graph G and realization f. The ordered pair (it, u) is an arc in E if and only if xj(v) — x/(u) > auv. Another immediate corollary is due to k-level graphs also being strip graphs. P rope r ty 6.12 If G = (V,E) is the oriented complement of k-level graph G = (V, E) and realization f, then E is a transitive orientation of E. 6.3.2 A n Impl ic i t Representat ion of the Oriented Complement This subsection shows how to create a representation of the oriented complement G = (V, E) given its realization / . The forthcoming algorithms are easier to express if we augment V with two sentinel vertices s,- and ti on each level i. To this end, let V' = V + {si : 1 < i < k} + ; : 1 < i < k}. Note that \V'\ = | V | + 2k — 0(V) since k < V. Ensure that each S j is adjacent to every other vertex in G by locating it more than unit distance to the left of the leftmost vertex in V. 
That is, set x_f(s_i) such that x_f(s_i) < min{x_f(u) : u ∈ V} − 1 for all i. Similarly, ensure that each t_i is adjacent to every other vertex in G by locating it more than unit distance to the right of the rightmost vertex in V. That is, set x_f(t_i) such that x_f(t_i) > max{x_f(u) : u ∈ V} + 1 for all i.

Assume that V is ordered by level, that is, u ≺ v if and only if y_f(u) < y_f(v), or y_f(u) = y_f(v) and x_f(u) < x_f(v), for all vertices u, v ∈ V. The following observation is an immediate consequence of this assumption.

Observation 6.13 Let u, v ∈ V be two vertices on the same level. If u ≺ v, then x(u) < x(v).

Definition 6.14 Let V be a set of vertices ordered by level. Let L, p, and n be arrays, where L_v denotes the level of vertex v, p_i denotes the first (the premier) vertex of level i, and n_i denotes the number of vertices on level i. |

The following straightforward algorithm computes these values in a single pass over V, once ordered. Notice that s_i = p_i and that t_i = p_i + n_i + 1. Notice also that {p_i + 1, p_i + 2, ..., p_i + n_i} are the vertices from V on level i, and that Σ_{i=1}^k n_i = 2k + V = O(V).

Table 6.1: Algorithm: INITIALIZE-ORIENTED-COMPLEMENT(V, f)
Input: The vertices V and a realization f of a k-level graph G = (V, E).
Output: Arrays L_v, p_i, and n_i (Definition 6.14)
1   V ← V ∪ {s_i : 1 ≤ i ≤ k} ∪ {t_i : 1 ≤ i ≤ k}
2   Order (relabel) V by level so that V = {1, 2, ..., |V′|}.
3   i ← 0                           ▷ i is the current level
4   y_0 ← −1                        ▷ Another sentinel
5   for v ← 1 to |V|
6       do if y_f(v) > y_i          ▷ Check if v is on a new level
7              then i ← i + 1
8                   n_i ← 0
9                   p_i ← v
10      L_v ← i
11      n_i ← n_i + 1
12  return (L, p, n)

Lemma 6.15 Algorithm INITIALIZE-ORIENTED-COMPLEMENT(V, f) computes the arrays L, p, and n in O(V log V) time.

Proof: Correctness follows from Definition 6.14. Since the realization function y is onto, Step 9 sets p_i every time Step 7 changes the level. Step 2 is time critical, and takes O(V′ log V′) = O(V log V) time. Steps 7, 8, and 9 are each executed k times. The other atomic steps in the loop are executed once for each vertex in V. |

Definition 6.16 For every vertex u ∈ V and level 1 ≤ i ≤ k, define F_i[u] to be the least vertex on level i for which (u, F_i[u]) ∈ E. |

Figure 6.4 illustrates this notion for two levels of a graph.

Figure 6.4: Definition of array F: value F_i[u] points to the first vertex on level i that is adjacent to u in the oriented complement graph.

The presence of vertex t_i ensures that F_i[u] is always well defined. We can implement array F as a one-dimensional array containing k one-dimensional arrays of size n_i each, where 1 ≤ i ≤ k. Recall that Σ_{i=1}^k n_i = 2k + V so that |F| = O(k + (2k + V)) = O(V). The arrays F and L constitute an implicit representation of the oriented complement G with O(V) size, as illustrated by the following simple observation.

Lemma 6.17 Let G = (V, E) be an oriented complement graph. The ordered pair (u, v) is an arc in E if and only if F_i[u] ⪯ v, where i = L_v.

Proof: If (u, v) ∈ E, then F_i[u] ⪯ v by Definition 6.16. Conversely, if F_i[u] ⪯ v, then x(F_i[u]) ≤ x(v) by Observation 6.13. Furthermore (u, F_i[u]) ∈ E by Definition 6.16, so x(u) + α_{u,F_i[u]} < x(F_i[u]) by Property 6.11. Finally, α_uv = α_{u,F_i[u]} (see Definition 5.4) since v and F_i[u] are on the same level. It follows that x(u) + α_uv = x(u) + α_{u,F_i[u]} < x(F_i[u]) ≤ x(v). Therefore (u, v) ∈ E, also by Property 6.11. |
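To make the bookkeeping concrete, here is a small Python sketch of the same setup. It is an illustration only: the function and variable names are mine, the realization is passed as a dictionary of coordinates, and exact floating-point y-values are assumed, none of which is prescribed by the thesis. The second function is the arc test of Lemma 6.17; it assumes the array F has already been built (a sketch of that construction follows Algorithm GENERATE-F below).

```python
from math import sqrt

def initialize_oriented_complement(points, level_ys):
    """Order the vertices "by level" (level first, then x), add the sentinels s_i and t_i
    more than unit distance to the left/right of every real vertex, and record the arrays
    L (level of each vertex), p (position of the first vertex of each level) and
    n (number of real vertices on each level).  `points` maps vertex -> (x, y) and
    `level_ys` is the sorted list y_1 < ... < y_k of the realization's y-values."""
    k = len(level_ys)
    xs = [x for x, _ in points.values()]
    far_left, far_right = min(xs) - 2.0, max(xs) + 2.0
    coords = dict(points)
    for i, y in enumerate(level_ys):
        coords[('s', i)] = (far_left, y)      # sentinel s_i: left of everything on level i
        coords[('t', i)] = (far_right, y)     # sentinel t_i: right of everything on level i
    # "ordered by level": level is the primary key, x the secondary key
    order = sorted(coords, key=lambda v: (coords[v][1], coords[v][0]))
    pos = {v: idx for idx, v in enumerate(order)}
    L = {v: level_ys.index(coords[v][1]) for v in coords}   # assumes exact y-values
    p = {}
    n = {i: 0 for i in range(k)}
    for idx, v in enumerate(order):
        p.setdefault(L[v], idx)               # first (premier) vertex of the level
        if v in points:
            n[L[v]] += 1                      # count only the real vertices of the level
    return order, pos, coords, L, p, n

def is_complement_arc(u, v, F, L, pos):
    """Lemma 6.17: (u, v) is an arc of the oriented complement iff F_i[u] comes no later
    than v in the level order, where i is v's level.  F[i][u] is assumed to have been
    built already (see the sketch after Algorithm GENERATE-F below)."""
    return pos[F[L[v]][u]] <= pos[v]
```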
The remaining algorithms in this section do not make explicit use of L_v, since they know the level of every vertex. We conclude by showing that inequalities are preserved by "taking F of both sides", if both sides are on the same level.

Lemma 6.18 Let G = (V, E) be an oriented complement graph. For all levels i, if u ⪯ v and u and v are on the same level, then F_i[u] ⪯ F_i[v].

Proof: Suppose u and v are on the same level j and that u ⪯ v. Then x(u) ≤ x(v) by Observation 6.13. Now consider any level i. Since (v, F_i[v]) ∈ E and u and v are on the same level, Property 6.11 implies that x(F_i[v]) − x(v) > α_{v,F_i[v]} = α_{u,F_i[v]}. Combining these inequalities,

x(F_i[v]) − x(u) ≥ x(F_i[v]) − x(v) > α_{u,F_i[v]}

implies that (u, F_i[v]) ∈ E by Property 6.11. Finally, F_i[u] ⪯ F_i[v] by Lemma 6.17. |

6.3.3 An Algorithm for Constructing the Representation

We are now ready to describe an algorithm for constructing the array F (Definition 6.16). We will begin by describing an algorithm that, given levels i and j, computes F_i[u] for every vertex u on level j.

Theorem 6.19 Given an oriented complement, represented by array p (Definition 6.14) and realization f, of a k-level strip graph, Algorithm TWO-LEVEL-F generates the ith row of array F (Definition 6.16) in O(n_i + n_j) time.

Table 6.2: Algorithm: TWO-LEVEL-F(i, j, p, n, f)
Input: Level numbers i and j, arrays p and n (Definition 6.14), and realization f of a k-level graph.
Output: The row F_i of array F (Definition 6.16).
1   v ← p_i + 1                     ▷ Recall s_i = p_i has no incoming arcs in G
2   F_i[p_j] ← v                    ▷ Recall s_j = p_j is adjacent to all vertices in G
3   for u ← p_j + 1 to p_j + n_j    ▷ u is on level j
4       do while ||f(u) − f(v)|| < 1
5              do v ← v + 1
6          F_i[u] ← v
7   return F_i

Proof: The algorithm computes F_i[u] for vertices u on level j. It does so by simultaneously scanning the jth level with vertex u, and the ith level with vertex v, which is a candidate for F_i[u]. Step 1 initiates the process of scanning v along level i by pointing v to the first "real" vertex on level i. Step 3 steps through each vertex u on level j. The algorithm maintains the invariant that v ⪯ F_i[u], in particular at Step 3; this follows from Lemma 6.18. Steps 4 and 5 then scan v to the right until an arc (u, v) ∈ E is discovered. By the invariant, v is the first vertex on level i that is adjacent from vertex u, as required.

To compute the run time, note that u is set n_j times by Step 3, and v is set at most n_i times by Step 5 (remember to count the sentinel t_i) for a total of n_i + n_j operations. More formally, the expression below is the number of times the algorithm executes Step 5; the sums correspond to Lines 3 and 4. We simply note that Step 5 is called F_i[u] − F_i[u−1] times by Step 4, and that p_i + 1 ⪯ F_i[u] ⪯ t_i = p_i + n_i + 1.

Σ_{u=p_j+1}^{p_j+n_j} Σ_{v=F_i[u−1]+1}^{F_i[u]} 1 = Σ_{u=p_j+1}^{p_j+n_j} (F_i[u] − F_i[u−1]) ≤ p_i + n_i + 1 − (p_i + 1) = n_i |

We are now ready to compute all of F. The algorithm below simply calls algorithm TWO-LEVEL-F as a subroutine for every pair of levels i and j.

Table 6.3: Algorithm: GENERATE-F(V, f)
Input: Array V of vertices, and k-level realization f of a k-level graph.
Output: The array F (Definition 6.16).
1   (L, p, n) ← INITIALIZE-ORIENTED-COMPLEMENT(V, f)
2   for j ← 1 to k
3       do for i ← 1 to k
4              F_i ← TWO-LEVEL-F(i, j, p, n, f)
5   return F
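Continuing the Python sketch started earlier in this section (again, the names and data layout are my own illustration, not the thesis's), the two algorithms above can be rendered as follows. The advance test in the inner scan is written in the x-coordinate form of Property 6.11, i.e. v is advanced while (u, v) is not yet an arc of the oriented complement.

```python
from math import sqrt

def two_level_F(i, j, order, coords, p, n, F):
    """Fill F[i][u] for every vertex u of level j with one left-to-right merge of the two
    levels, mirroring Algorithm TWO-LEVEL-F.  `order`, `coords`, `p` and `n` are the
    outputs of initialize_oriented_complement in the earlier sketch."""
    def alpha(a, b):                                   # Definition 5.4
        return sqrt(1.0 - (coords[a][1] - coords[b][1]) ** 2)

    v = p[i] + 1                                       # first "real" vertex of level i
    F[i][order[p[j]]] = order[v]                       # s_j = p_j lies left of everything, so its entry is that first real vertex
    for upos in range(p[j] + 1, p[j] + n[j] + 1):      # the real vertices of level j, left to right
        u = order[upos]
        # advance v while (u, order[v]) is not an arc of the oriented complement
        # (Property 6.11: it is an arc exactly when x(v) - x(u) > alpha_uv)
        while coords[order[v]][0] - coords[u][0] <= alpha(u, order[v]):
            v += 1                                     # the sentinel t_i stops this scan
        F[i][u] = order[v]                             # least level-i vertex adjacent from u in the complement
    return F[i]

def generate_F(order, coords, p, n, k):
    """Algorithm GENERATE-F: one call of two_level_F per ordered pair of levels."""
    F = {i: {} for i in range(k)}
    for j in range(k):
        for i in range(k):
            two_level_F(i, j, order, coords, p, n, F)
    return F
```

For two-level graphs (k = 2) the whole construction is dominated by the initial sort, matching the O(V log V) bound of Corollary 6.20.1 below.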
Theorem 6.20 Given an oriented complement of a k-level strip graph represented by a realization f, Algorithm GENERATE-F generates array F (Definition 6.16) in O(V log V + kV) time.

Proof: Correctness follows from Lemma 6.15 and Theorem 6.19. Step 1 takes O(V log V) time by Lemma 6.15. By Theorem 6.19, Step 4 takes O(n_i + n_j) time. The sum of the number of vertices on each level over all levels is just the total number of vertices. This observation yields the following expression for the total time due to Step 4.

Σ_{j=1}^k Σ_{i=1}^k O(n_i + n_j) = Σ_{j=1}^k O(V + k·n_j) = O(kV) |

Corollary 6.20.1 Given an oriented complement of a two-level graph represented by a realization f, Algorithm GENERATE-F generates array F (Definition 6.16) in O(V log V) time.

6.3.4 An Implicit Representation of the Transitive Reduction

Interestingly, array F is also an implicit representation of the transitive reduction of the oriented complement. With the aid of Lemma 6.21 below, Algorithm kTRANS (Table 6.4) generates, for any constant number k of levels, all arcs of the transitive reduction in constant time each, given the array F.

Lemma 6.21 Let G = (V, E) be the oriented complement of a k-level graph, and let w be a vertex on level i. The ordered pair (u, w) is an arc in the transitive reduction of G if and only if F_i[u] ⪯ w ≺ F_i[F_j[u]] for all levels j.

Proof: Suppose first that (u, w) is an arc in the transitive reduction of G. Then (u, w) ∈ E, so that F_i[u] ⪯ w by Lemma 6.17. Also, if F_i[F_j[u]] ⪯ w for any level j, then (F_j[u], w) ∈ E by Lemma 6.17, which together with (u, F_j[u]), would transitively imply (u, w). Therefore w ≺ F_i[F_j[u]].

Conversely, suppose that (u, w) is not an arc in the transitive reduction of G. Then either (u, w) ∉ E or (u, w) is transitively implied. If (u, w) ∉ E, then w ≺ F_i[u] by Lemma 6.17. On the other hand, suppose (u, w) is transitively implied by some chain of arcs (u, v), (v, v_1), (v_1, v_2), ..., (v_k, w) in E. Then (v, w) ∈ E since E is a transitive orientation. Therefore (u, w) is transitively implied by (u, v) ∈ E and (v, w) ∈ E. Then F_i[v] ⪯ w by Lemma 6.17. If j = L_v, as shown in Figure 6.5, then F_j[u] ⪯ v by Lemma 6.17, and taking F of both sides, F_i[F_j[u]] ⪯ F_i[v] by Lemma 6.18. Therefore F_i[F_j[u]] ⪯ F_i[v] ⪯ w. |

Figure 6.5: Arc (u, w) is transitively implied by (u, v) and (v, w).

Table 6.4: Algorithm: kTRANS(V, f)
Input: The vertices V and a realization f of a k-level graph G = (V, E).
Output: The arcs in the transitive reduction of G, where G is the orientation of G that is compatible with f.
1   F ← GENERATE-F(V, f)
2   for u ∈ V
3       do for i ← 1 to k
4              do r_iu ← min{F_i[F_j[u]] : 1 ≤ j ≤ k}
5                 for v ← F_i[u] to r_iu − 1
6                     do return (u, v)

Theorem 6.22 Given a k-level strip graph, represented by vertices V and realization f, Algorithm kTRANS generates all arcs in the transitive reduction (V, R) of the graph's oriented complement in O(V log V + k²V + R) time.

Proof: Correctness follows from Lemma 6.21 and Theorem 6.20. In particular, Step 6 is executed |R| times. Step 1 takes O(V log V + kV) by Theorem 6.20. Computing the minimum value of r_iu in Step 4 takes k operations. Since this step is called |V|k times, this accounts for O(k²V) operations.
Note that the same storage location can be used for all values r,-u; that is, we can replace all variables r J U with a variable r. | Corollary 6.22.1 Given a two-level graph, represented by vertices V and realization f, Algorithm kTRANS generates all arcs in the transitive reduction (V, R) of the graph's oriented complement in 0(V log V + R) time. 6.3.5 Weighted Cliques and Independent Dominating Sets We are now ready to compute a minimum weight independent dominating set in a &-level graph. An independent set is dominating if and only if it is maximal (Observation 4.30 in Chapter 4), and a set is maximal independent if and only if it is a maximal clique in the complement. We will therefore develop an algorithm that finds a minimum weight maximal clique in the complement of the fc-level graph. To do so, we will simply imple-ment Algorithm M W M C - C , minimum weight maximal clique for comparability graphs from Table 4.34, to run in 0(V2) time, given the structure developed in this section. This solves the domination problem since the graph representation V and / also represents the complement of the graph. For convenience, the algorithm is reproduced below. Theorem 6.23 Algorithm MWMC-k finds a minimum weight independent dominating set in (a minimum weight maximal clique in the complement of) a weighted k-level graph in 0(k2V + V2) time. Chapter 6. Two-Level Graphs 260 Table 6.5: Algorithm: M W M C - k ( V , / ) [Minimum Weight Maximal Clique in the complement of a A;-level graph] Input: The weighted vertices V and a realization / of a (vertex) weighted fc-level graph G = (V, E). Output: A minimum weight maximal clique in G. 1 T 4— a transitive orientation of G. 2 augment T with 5 = 0 and t = | V | + 1. 3 (V, R) <— the transitive reduction of T. 4 C 4— a least weight path from s to t in (V, R). 5 return C \ {s, t}. Proof: Since every k-level graph is a cocomparability graph, Theorem 4.34 in Chapter 4 establishes the correctness of the algorithm. Let G = (V, E) be the input fc-level graph, and T = (V, E) be its oriented complement. Then E is a transitive orientation of E by Property 6.12. Let s be any vertex ss-, and let t be any vertex U in Algorithm I N I T I A L I Z E - O R I E N T E D - C O M P L E M E N T , Step 1. Then implement Steps 1, 2, and 3 with a single call to Algorithm k T R A N S . This call takes 0(V log V+k2V+R) time according to Theorem 6.22. As before (Algorithm M W M C - C in Chapter 4), Step 4 executes in 0(V + R) time since R is a directed acyclic graph ([CLR90], page 536-538). The theorem follows since \R\ < \E\ = 0(V2). I Let us now examine some of the consequences of Theorem 6.23. The parameter k does not figure in the complexity of the weighted clique algorithm if k is fixed, since it is consumed by the constants in the big-oh notation. In particular, the algorithm on two-level graphs has a better time complexity. Corollary 6.23.1 Algorithm MWMC-k finds a minimum weight independent dominating set in (a minimum weight maximal clique in the complement of) a weighted two-level graph in 0(V2) time. Chapter 6. Two-Level Graphs 261 Remark The proof of Theorem 6.23 argues that \R\ < \E\ = 0 ( V 2 ) , that is, that the size of the transitive reduction is at most the number of arcs in the graph, which for one-level (indifference) graphs. The oriented complement of the indifference graph shown in Figure 6.6 is the complete bipartite graph Kntn, where n — \V\/2. 
Clearly, Figure 6.6: The oriented complement of this indifference graph is an oriented complete bipartite graph and is transitively reduced. this oriented complement has n2 arcs. Furthermore, no arc is transitively implied, so the transitive reduction also has n2 = \V\2/4 arcs. 6.4 Relation to Indifference Graphs A one-level graph is clearly an indifference graph. A two-level graph is in some ways like two indifference graphs, one on each level. This initial observation can be misleading, though, since a two-level graph is not just the union of these two graphs. As discussed in Section 2.3.2, indifference have several characterizations and efficient algorithms for many problems. Since we are interested in these things on two-level graphs, it behooves us to study more closely how two-level graphs are related to indifference graphs. Recall (§2.3.2) that indifference graphs are equivalently claw-free interval graphs, that is, claw-free chordal cocomparability graphs. Since two-level graphs are strip graphs, they Chapter 6. Two-Level Graphs 262 are also cocomparability graphs. On the other hand, since C4 is a two-level graph, two-level graphs are not chordal. Therefore they are not interval graphs. Contrast this with indifference graphs, which are in fact unit interval graphs. In this section, we will see that the class of two-level graphs is: • equivalently the intersection graphs of a set of triangles, which are themselves properly included in the class of PI* graphs (§6.4.1), • properly included in the class of unions of two indifference graphs (§ 6.4.3), and • properly included in the class of intersections of two indifference graphs (§ 6.4.4). Furthermore, we will see how to factor a two-level graph G, given its realization, into two indifference graphs whose intersection is G. We will also see that the level-edges—those that go between vertices on the same level—-of a two-level graph induce an indifference graph, and that the remaining edges induce a bipartite permutation graph (§6.4.2). 6.4.1 Trapezoid Graphs and PI* Graphs We already know that two-level graphs are generalizations of indifference graphs, and specializations of cocomparability graphs. This section further restricts two-level graphs in a class hierarchy by demonstrating that they are specializations of trapezoid graphs. In fact, they are equivalently a restricted form of P/*-graphs. Trapezoid graphs were independently defined by Corneil and Kamula [CK87] and by Dagan, Golumbic, and Pinter [DGP88]. Given two horizontal lines in the plane, specify a trapezoid Tv by its four corners [av, bv,cv, dv] where av and bv are x-coordinates on the upper line, and cv and dv are x-coordinates on the lower line, as shown in Figure 6.7. The intersection graph of such a trapezoid representation is called a trapezoid graph (or an II graph in [CK87]). Chapter 6. Two-Level Graphs 263 Figure 6.7: The convention used to specify a trapezoid. Note that a trapezoid graph is an interval graph if av = cv and bv = dv for all vertices v, and a permutation graph if av = bv and cv = dv for all vertices v. Corneil and Kamula have also identified two other subclasses of trapezoid graphs: PI graphs, where av = bv for all vertices v, and PI* graphs where av = bv or cv = dv for all vertices v. The union of interval and permutation graphs is properly contained in PI graphs, which are properly contained in PI* graphs, which are properly contained in trapezoid graphs. In the other direction, Dagan et al. show that trapezoid graphs are cocomparability graphs. 
In fact, they prove the following theorem. Theorem 6.24 ([DGP88]) Trapezoid graphs are the cocomparability graphs of partially ordered sets with interval order dimension at most 2. The following theorem is very similar in spirit, and could really be considered a corollary of the proof presented by Dagan et al. Instead, the following proof is self contained and follows theirs almost verbatim. Theorem 6.25 A graph is a PI graph if and only if it is the cocomparability graph of the intersection of a linear order and an interval order. Chapter 6. Two-Level Graphs 264 Proof: Let L = (V, <) be a linear order and let I = (V, -<) be an interval order. Let G = (V, E) be the cocomparability graph of L fl I. Then there is a set of real numbers {rv : v £ V} and a set of intervals {Iv : v £ V} such that (u, u) £ E if and only if ru < rv and /„ -< Iv, or r„ < ru and 7^  -<; Iu (note the overloaded inequality/precedes symbols). Define the triangle (degenerate trapezoid) Tv = [av,bv,cv,dv] where dy — by — rv and [c„, dv] = Iv. Then Tu 0 Tv = 0 if and only if u < v and u -< u, or u < u and f - ( w. Thus, G is a PI graph. Conversely, let {Tv : v £ V} be a trapezoid representation of G. For each v £ V , let r„ = a„(= 6„) and 7^  = [cw,4]. Since Tu fl T„ = 0 if and only if r„ < r„ and /„ -< Iv, or r„ < ru and Iv ~( Iu, it follows that G is the cocomparability graph of the intersection of a linear order and an interval order. | Cheah's PhD thesis [Che90] shows how to recognize trapezoid graphs in 0(V3) time. Cheah also mentions that the PI graph and PP graph recognition problems are still open, as of December 1990. Independently, Ma and Spinrad [MS91] developed an 0(V2) time algorithm for recognizing trapezoid graphs. They did this by developing an 0(V2) algorithm for determining if the interval dimension of a partial order is at most 2. By Theorem 6.24, a graph is a trapezoid graph if and only if it is the cocomparability graph of a partial order with interval dimension at most 2, and Ma and Spinrad's trapezoid graph recognition algorithm follows. Triangle Graphs and Two-Leve l Graphs Defini t ion 6.26 A triangle graph is a trapezoid graph for which there is a trapezoid representation and a constant a such that, for all vertices v, 1 . dy by ? 2. av — cv = a, and Chapter 6. Two-Level Graphs 265 3 • dy Cy 1 j or 1 c = d X . K>V U,y , 2. cv — av — a, and 3. Oy d y — 1, as shown in Figure 6.8. | Figure 6.8: Definition of triangle graphs. A l l triangles in a triangle graph are congruent to an upright triangle or its reflection about the horizontal axis. A triangle graph is called acute if its constant a satisfies 1/2 < a < 1. Theorem 6.27 A graph is a two-level graph if and only if it is an acute-triangle graph. Proof: Let G — (V, E) be a two-level graph. Then there is a realization / : ] / - > R x { 0 , r} by Definition 6.1. As before, let a — y/1 — r 2 , and remember that 1/2 < a < 1. Chapter 6. Two-Level Graphs 266 We now represent a vertex on the lower level with the triangle on the left of Figure 6.8, and a vertex on the upper level with the triangle on the right of Figure 6.8. More formally, for each vertex v, define the representing triangle (trapezoid) Tv = [av,bv,cv,dv], where • Cy ~ X ( U ) , • av = by = cv + a, and • dy = Cy + 1 if y(v) = 0; and • dy = X(V), • cv = dv = av + a, and • by — dy + 1 if y(v) — T. It is easy to verify that Tu fl Tv ^ 0 if and only if ||/(u) — 7(^ )11 < 1. Conversely, let G = (V, E) be an acute-triangle graph. 
Then there exists a constant a and a set of similar triangles {Tv : v 6 V} such that Tu fl Tv ^ 0 if and only (u, v) € E. Set T = y/\ - a2. Note that 0 < T < V3/2 . Assume triangle Tv = [av,bv,cv,dv] as in Figure 6.8. Now define a two-level realization / : V —> R x {0, r} as follows. (C„,0) if dy = by /(") = \ (av,r) if Cy = dv Again, it is easy to verify that Tu fl Tv ^ 0 if and only if ||/(u) — f(v) || < 1. | Corollary 6.27.1 The class of two-level graphs is properly included in the class of PI* graphs (and trapezoid graphs). Chapter 6. Two-Level Graphs 267 Proof: By definition, an acute-triangle graph is also a PI* graph and a trapezoid graph. On the other hand, Ki^ is not a two-level graph (Lemma 6.8). However, it is a per-mutation graph, and therefore a PI* graph as shown by the permutation diagram in Figure 6.9. | Figure 6.9: is a permutation graph and therefore also a PI* graph. Coro l l a ry 6.27.2 Every two-level graph is the cocomparability graph of the intersection of two interval orders. Proof: Since every two-level graph is a trapezoid graph by Corollary 6.27.1, it is the intersection of two interval orders by Theorem 6.24. | 6.4.2 Cross Edges Induce a B ipa r t i t e Permuta t ion G r a p h A realization of a two-level graph will put some vertices on one level and some vertices on the other. As a result, some edges will join two vertices on the same level, and some edges will join two vertices on different levels. We want to see what kind of graph is induced by each of these groups. To this end, let G — (V, E) be a two-level graph, and / be a two-level realization. Define the level edges L C E to be edges between vertices on the same level. That is, let L = {(u, v) : ( M , V) G E and yf(u) = yf(v)}. (6.21) Chapter 6. Two-Level Graphs 268 The associated level-edge graph GL = (V, L) is the graph induced by these edges. Property 6.28 A (not necessarily connected) graph is a level-edge graph if and only if it is an indifference graph. Proof: An indifference graph (Definition 2.2) is trivially a level-edge graph, since it can always be realized on one level (of the two available). Conversely, let G = (V, E) be a two-level graph, and / be its corresponding two-level realization. The function xj is almost an indifference mapping for Gx, we just need to be careful to make the points corresponding to one level far enough away from those corresponding to the other. We can do this by suitably displacing one level. For example, set a displacement 5 > 1 + M — m, where M = max{x/(?j) : yj(u) = 0} and m = m.in{xf(u) : yj(u) = r} . Then the indifference realization fi, : V —> R suffices, where for all v G V, Define the cross edges X of a two-level graph to be those edges between vertices on different levels. That is, let Note that E = X + L. Let T identify the vertices on the top level, that is, let T = {v : y/(v) = T}. Let B identify the vertices on the bottom level, that is, let B = {v : yj(v) = 0} . The cross-edge graph Gx = (T, B, X) induced by these edges is clearly bipartite. We will see (Corollary 6.31.1) that Gx is also a permutation graph. Again, this graph may be disconnected. Spinrad, Brandstadt, and Stewart [SBS87] show that bipartite permutation graphs can be recognized in linear time, and that some problems that are NP-complete even = < xi(v) if y/(v) = ° xf(v) + 8- if yf(v) = r. I X = {(u,v) : (u,u) e E and yf(u) ^ y/(u)}. (6.22) Chapter 6. 
Spinrad, Brandstädt, and Stewart [SBS87] show that bipartite permutation graphs can be recognized in linear time, and that some problems that are NP-complete even for bipartite graphs, for example Hamilton Circuit, can be solved in polynomial time for bipartite permutation graphs. Note, however, that both bipartite and permutation graphs are perfect.

Indifference graphs and permutation graphs are incomparable: the graph shown in Figure 6.10 is an indifference graph but not a permutation graph (it is not even a comparability graph), and the claw $K_{1,3}$ is a permutation graph but not an indifference graph. Since interval graphs are chordal, and bipartite graphs have no odd cycles, a bipartite interval graph can have no cycles. Furthermore, since indifference graphs are claw-free, a bipartite indifference graph can have no vertices of degree 3 or more. Therefore a bipartite indifference graph is just a set of disjoint paths, and therefore also a permutation graph. Just as clearly, not all bipartite permutation graphs are bipartite indifference graphs; consider the claw $K_{1,3}$ and the square $C_4$, for example.

Figure 6.10: An indifference graph that is not a permutation graph.

There are graphs, namely the bipartite tolerance graphs¹, that are closely related to bipartite indifference graphs, and are again exactly the bipartite permutation graphs. The following definition is taken [almost] verbatim from Brandstädt, Spinrad, and Stewart [BSS87], who state that Derigs, Goetke and Schrader [DGS84] had studied this family of graphs earlier.

¹ As seems to happen often in the graph theory literature, there is a conflict of definitions here. Bipartite tolerance graphs should be considered their own class of graphs, and should not be confused with tolerance graphs [GM82] that happen to be bipartite.

Definition 6.29 ([BSS87]) A bipartite tolerance graph is a bipartite graph $G = (P, Q, E)$ for which there exists a real-valued function $f$ on the vertices such that for all $p \in P$ [and] $q \in Q$, [the pair] $(p, q) \in E$ if and only if $|f(p) - f(q)| \le \epsilon$ for some prespecified tolerance $\epsilon$. □

Property 6.30 A graph is a cross-edge graph if and only if it is a bipartite tolerance graph.

Proof: Let $G_X = (V, X)$ be a cross-edge graph, and let $g$ be its corresponding two-level realization. Define a real-valued tolerance function $f$ such that $f(v) = x_g(v)$ for every vertex $v \in V$. Then you can readily verify that $G' = (P, Q, X)$ is a bipartite tolerance graph, where $P = \{v : y_g(v) = 0\}$, $Q = \{v : y_g(v) = \tau\}$, and $\epsilon = a$.

Conversely, let $G' = (P, Q, X)$ be a bipartite tolerance graph and let $f$ and $\epsilon$ be as in Definition 6.29. Pick a value for $a$ in the interval $[1/2, 1]$ and set $x_g(v) = a f(v)/\epsilon$ and $\tau = \sqrt{1 - a^2}$. Then $g : V \to \mathbb{R} \times \{0, \tau\}$ is a two-level realization for $G_X = (V, X)$, where
$$g(v) = \begin{cases} (x_g(v), 0) & \text{if } v \in P \\ (x_g(v), \tau) & \text{if } v \in Q. \end{cases}$$
This is also easy to see. Two vertices on different levels of $G$ are adjacent precisely when $|f(u) - f(v)| \le \epsilon$, therefore precisely when $|a f(u)/\epsilon - a f(v)/\epsilon| \le a$, or $\|g(u) - g(v)\| \le 1$. Hence $G_X = (V, X) = G' = (P, Q, X)$. □
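The scaling argument in the converse direction of Property 6.30 can be written out directly. The sketch below (my own, with a hypothetical tolerance model and an arbitrary choice of $a$) converts a bipartite tolerance model $(f, \epsilon)$ into a two-level realization $g$ and checks that the cross edges under $g$ are exactly the tolerance edges.

```python
# Conversion from the proof of Property 6.30: x_g(v) = a*f(v)/eps, levels 0 and
# tau = sqrt(1 - a^2); adjacency across levels then matches |f(p) - f(q)| <= eps.
import math

def tolerance_to_two_level(P, Q, f, eps, a=0.75):
    """Place P on the bottom level and Q on the top level after scaling by a/eps."""
    tau = math.sqrt(1 - a * a)
    return {v: (a * f[v] / eps, 0.0 if v in P else tau) for v in P | Q}

# Hypothetical tolerance model: parts P and Q, values f, tolerance eps.
P, Q = {'p1', 'p2'}, {'q1', 'q2'}
f = {'p1': 0.0, 'p2': 3.0, 'q1': 1.0, 'q2': 5.0}
eps = 1.5
g = tolerance_to_two_level(P, Q, f, eps)

# Cross edges under g coincide with the tolerance edges, because
# (a*|f(p)-f(q)|/eps)^2 + tau^2 <= 1 holds exactly when |f(p)-f(q)| <= eps.
for p in P:
    for q in Q:
        assert (abs(f[p] - f[q]) <= eps) == (math.dist(g[p], g[q]) <= 1)
print(g)
```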
Theorem 6.31 ([BSS87]) Let $G = (P, Q, E)$ be a bipartite graph. Then $G$ is a bipartite permutation graph if and only if $G$ is a bipartite tolerance graph.

Proof: We will prove only one direction, that $G$ is a bipartite permutation graph if it is a bipartite tolerance graph. See [BSS87] for a proof of the converse. The following proof differs from that of Brandstädt et al. in that it demonstrates a connection between cocomparability graphs and bipartite tolerance graphs (and therefore cross-edge graphs). It is included in this thesis to further emphasize these connections.

Let $G = (P, Q, E)$ be a bipartite tolerance graph. A graph $G$ is a permutation graph if and only if $G$ and $\overline{G}$ are comparability graphs [PLE71]. All bipartite graphs are comparability graphs (orient the edges from one part to the other; this orientation is trivially transitive). Therefore $G$ is a comparability graph. It remains to show that $\overline{G}$ is a comparability graph. Let $f$ be the tolerance function corresponding to $G$. Transitively orient $\overline{E}$ by defining the relation
$$F = \{(u, v) : (u, v) \in \overline{E} \text{ and } f(u) < f(v)\}.$$
To see that $F$ is transitive, consider $(a, b) \in F$ and $(b, c) \in F$. If $a$, $b$, and $c$ are in the same part ($P$ or $Q$), then $(a, c) \in \overline{E}$, and $f(a) < f(b) < f(c)$, so $(a, c) \in F$. Otherwise, $b$ and $a$ are in different parts, or $b$ and $c$ are in different parts. If $b$ and $a$ are in different parts, then $f(b) - f(a) > \epsilon$, so $f(c) - f(a) = f(c) - f(b) + f(b) - f(a) > 0 + \epsilon$, and again $(a, c) \in F$. If $b$ and $c$ are in different parts, then a symmetric argument shows that $(a, c) \in F$. □

Corollary 6.31.1 A graph is a cross-edge graph if and only if it is a bipartite permutation graph.

Proof: Follows from Property 6.30 and the theorem. □

Although this corollary tells us that every cross-edge graph corresponds to two permutations, its nonconstructive nature does not tell us what these permutations are. To construct a permutation model corresponding to a cross-edge graph, let us prove the following property constructively, even though it is an immediate consequence of Corollary 6.31.1.

Property 6.32 The edges that go between the levels of a realization of a two-level graph induce a bipartite permutation graph.

Proof: We will show that $G_X$ is a bipartite permutation graph by constructing two permutations $P$ and $Q$, which we will think of as lists of vertices. Construct $P$ by starting with $T$ in order of increasing $x$-coordinate. Then add vertices from $B$ in order of decreasing $x$-coordinate. Insert each vertex from $B$ into $P$ immediately after its rightmost neighbour in $T$; see Figure 6.11, for example. Note that this process preserves the left-to-right order of $B$ in $P$.

Figure 6.11: Constructing permutations P and Q from a two-level graph.

Similarly, construct $Q$ by again starting with $T$ in order of increasing $x$-coordinate. But now, add vertices from $B$ in order of increasing $x$-coordinate. Insert each vertex $v \in B$ into $Q$ immediately before its leftmost neighbour in $T$. Note that this process also preserves the left-to-right order of $B$ in $Q$.

To see that $P$ and $Q$ provide a permutation model for $G_X$, note first that no two vertices in $T$ (resp. in $B$) are adjacent, since vertices from $T$ (resp. from $B$) have the same left-to-right order in both $P$ and $Q$. Now consider an edge $(v_P, v_Q)$ in the permutation diagram corresponding to a vertex $v \in B$. By construction, this edge crosses precisely those edges corresponding to vertices between the leftmost neighbour of $v$ and the rightmost neighbour of $v$ inclusive. It is easy to verify (by examining the two-level realization) that these are exactly the neighbours of $v$. □
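The proof of Property 6.32 is constructive, and the construction is straightforward to implement. The following sketch (mine; the vertex names and coordinates are a toy example, and it assumes every bottom vertex has at least one neighbour on the top level, as in Figure 6.11) builds the permutations $P$ and $Q$ from a two-level realization and checks that the crossing pairs of the resulting permutation diagram are exactly the cross edges.

```python
# Build the permutation model of Property 6.32: insert each bottom vertex after its
# rightmost top neighbour in P, and before its leftmost top neighbour in Q.
import math

def cross_permutations(top, bottom, a):
    """top, bottom: dicts vertex -> x-coordinate; a = sqrt(1 - tau^2), so a top and a
    bottom vertex are adjacent exactly when their x-coordinates differ by at most a."""
    T = sorted(top, key=lambda t: top[t])                 # top vertices by increasing x
    nbrs = {b: [t for t in T if abs(top[t] - bottom[b]) <= a] for b in bottom}

    P = list(T)
    for b in sorted(bottom, key=lambda b: -bottom[b]):    # bottom vertices, decreasing x
        P.insert(P.index(nbrs[b][-1]) + 1, b)             # after rightmost neighbour
    Q = list(T)
    for b in sorted(bottom, key=lambda b: bottom[b]):     # bottom vertices, increasing x
        Q.insert(Q.index(nbrs[b][0]), b)                  # before leftmost neighbour
    return P, Q, nbrs

def crossing_pairs(P, Q):
    """Pairs whose segments cross in the permutation diagram (order inverted in Q)."""
    pP = {v: i for i, v in enumerate(P)}
    pQ = {v: i for i, v in enumerate(Q)}
    return {frozenset((u, v)) for u in P for v in P
            if u != v and (pP[u] - pP[v]) * (pQ[u] - pQ[v]) < 0}

# Toy two-level realization with tau = 0.6, hence a = 0.8.
top = {'t1': 0.0, 't2': 1.0, 't3': 2.0}
bottom = {'b1': 0.5, 'b2': 1.7}
P, Q, nbrs = cross_permutations(top, bottom, a=0.8)

cross_edges = {frozenset((t, b)) for b in bottom for t in nbrs[b]}
assert crossing_pairs(P, Q) == cross_edges   # the model represents exactly G_X
print(P, Q)
```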
Recall from Section 1.3.1 that the union $G_1 \cup G_2$ of two graphs (binary relations) $G_1 = (V, E_1)$ and $G_2 = (V, E_2)$ on the same vertex set is the graph $G = (V, E_1 \cup E_2)$. The union is called (edge) disjoint if $E_1 \cap E_2 = \emptyset$.

Theorem 6.33 Every two-level graph is the edge-disjoint union of an indifference graph and a bipartite permutation graph.

Proof: Property 6.28 and Property 6.32. □

6.4.3 Short Edges Induce an Indifference Graph

We will now see that every two-level graph is the union of two indifference graphs. An easy corollary of this result is that the unit-interval number (forthcoming Definition 6.36) of every two-level graph is at most two.

Let $G = (V, E)$ be a two-level $\tau$-strip graph, and let $f$ be a corresponding two-level realization. Let $S$ be the set of "short" edges, that is,
$$S = \{(u, v) : (u, v) \in E \text{ and } |x_f(u) - x_f(v)| \le a\}, \qquad (6.23)$$
where $a = \sqrt{1 - \tau^2}$ as always. The associated short-edge graph $G_S = (V, S)$ is the graph induced by these edges.

Property 6.34 A graph is a short-edge graph if and only if it is an indifference graph.

Proof: Let $G_S = (V, S)$ be a short-edge graph, and let $G = (V, E)$ and $f$ be the defining two-level graph and its realization. Obtain the indifference mapping $f_S$ for $G_S$ simply by scaling $x_f$. That is, $f_S(v) = x_f(v)/a$. Then
$$|f_S(u) - f_S(v)| = \frac{|x_f(u) - x_f(v)|}{a},$$
so that $|f_S(u) - f_S(v)| \le 1$ if and only if $|x_f(u) - x_f(v)| \le a$, as required.

Conversely and similarly, let $G = (V, E)$ be an indifference graph and $f$ its realization. Construct a two-level representation $g$ for some $a$ by scaling $f$ and putting all the vertices on one level. That is, $g(v) = (a f(v), 0)$. Then $|f(u) - f(v)| \le 1$ if and only if $|x_g(u)/a - x_g(v)/a| \le 1$ if and only if $|x_g(u) - x_g(v)| \le a$, as required. □

Theorem 6.35 Every two-level graph is the union of two indifference graphs.

Proof: Let $G = (V, E)$ be a two-level graph, and let $f$ be a corresponding two-level realization. Let $G_L = (V, L)$ be the subgraph induced by the edges on the same level (Equation 6.21), and let $G_S = (V, S)$ be the subgraph induced by the short edges (Equation 6.23). The graphs $G_L$ and $G_S$ are indifference graphs by Properties 6.28 and 6.34 respectively. It remains to show that $G = G_L \cup G_S$, that is, that $E = L \cup S$. Clearly, $L \cup S \subseteq E$. To see that $E \subseteq L \cup S$, consider $(u, v) \in E$. Either $y_f(u) = y_f(v)$ or $y_f(u) \neq y_f(v)$. If $y_f(u) = y_f(v)$, then $(u, v) \in L$. If, on the other hand, $y_f(u) \neq y_f(v)$, then $|x_f(u) - x_f(v)| \le a$, so $(u, v) \in S$. □

The converse of this theorem is not true. For example, $K_{1,4}$ is not a two-level graph (Lemma 6.8), but it is the union of two edge-disjoint paths that share a common intermediate vertex. Each such path, together with the remaining isolated vertex, is an indifference graph.
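A short sketch (my own; the realization is a toy example) illustrating Property 6.34 and Theorem 6.35: the short edges form an indifference graph under $f_S(v) = x_f(v)/a$, and together with the level edges they cover every edge of the two-level graph.

```python
# Short edges S (x-coordinates within a of each other) and level edges L of a toy
# two-level realization; E = L ∪ S, and f_S = x_f / a is an indifference mapping for G_S.
import math

tau = 0.6
a = math.sqrt(1 - tau * tau)                                   # a = 0.8

f = {'u': (0.0, 0.0), 'v': (0.5, tau), 'w': (0.9, 0.0), 'z': (1.6, tau)}
pairs = [(p, q) for p in f for q in f if p < q]
E = {(p, q) for p, q in pairs if math.dist(f[p], f[q]) <= 1}
L = {(p, q) for p, q in E if f[p][1] == f[q][1]}               # level edges, Equation 6.21
S = {(p, q) for p, q in E if abs(f[p][0] - f[q][0]) <= a}      # short edges, Equation 6.23

assert E == L | S                                              # Theorem 6.35: E = L ∪ S

# f_S is an indifference mapping for the short-edge graph G_S.
fS = {p: f[p][0] / a for p in f}
assert S == {(p, q) for p, q in pairs if abs(fS[p] - fS[q]) <= 1}
print(sorted(L), sorted(S))
```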
A related notion is that of the unit-interval number.

Definition 6.36 ([And88]) A unit $t$-representation of a graph assigns each vertex up to $t$ closed unit intervals, with two vertices adjacent precisely when some interval assigned to one intersects an interval assigned to the other. The unit-interval number $i_u(G)$ is the minimum value of $t$ such that $G$ has a unit $t$-representation. □

Andreae [And88] showed that $i_u(G) \le \lceil \frac{1}{2}(n - 1) \rceil$ for all graphs on $n$ vertices. He also showed that the extremal graphs are stars² (and also $C_4$). Since two-level graphs are $K_{1,4}$-free, and $i_u(K_{1,4}) = 2$, this might lead us to believe that $i_u(G) \le 2$ for all two-level graphs. However, Andreae showed that the unit-interval number can grow without bound even for claw-free graphs. Nevertheless, the unit-interval number of two-level graphs is at most two; this is a corollary of the previous theorem.

² In this context, a graph of order $n$ is extremal if it has unit-interval number $\lceil \frac{1}{2}(n - 1) \rceil$. More accurately, Andreae showed that the extremal graphs are $K_{1,n-1}$ and $C_4$ if $n$ is even, graphs with an induced $K_{1,n-2}$ if $n \ge 5$ is odd, and graphs with an induced $C_4$ if $n = 5$.

Corollary 6.36.1 Let $G$ be a two-level graph. Then $i_u(G) \le 2$, and this bound can be reached.

Proof: Since $G$ is the union of two indifference graphs $G_L$ and $G_S$, each vertex $v$ of $G$ corresponds to two unit intervals, $[f_L(v), f_L(v) + 1]$ and $[f_S(v), f_S(v) + 1]$ respectively. Clearly, we can realize these intervals simultaneously by ensuring that the intervals for $G_L$ are disjoint from those for $G_S$ (with a suitable displacement, for example). The resulting set of intervals is a two-interval representation for $G$. The bound is reached, for example, by $C_4$ and $K_{1,3}$. Both are two-level graphs for which $i_u(G) = 2$; recall that neither is an indifference graph. □

6.4.4 Every Two-Level Graph is the Intersection of Two Indifference Graphs

The following theorem shows that a graph has a two-level realization under the Euclidean metric if and only if it has a two-level realization under the Manhattan metric. Recall that this property does not hold for unit disk graphs. For example, $K_{1,5}$ is an $L_2$ unit disk graph, but it is not an $L_1$ unit disk graph by the Star Lemma (Lemma 3.1). It also does not hold for $\tau$-strip graphs. For example, $K_{1,4}$ is an $L_2$ strip graph (Example 5.1), but it is not an $L_1$ strip graph (Lemma 5.23).

Theorem 6.37 The class of $L_2$ two-level graphs is equivalent to the class of $L_1$ two-level graphs. In particular, every $L_2$ two-level $\tau$-strip graph is isomorphic to an $L_1$ two-level $(1 - a)$-strip graph.

Proof: Let $G = (V, E)$ be an $L_2$ two-level graph, and let $f_2 : V \to \mathbb{R} \times \{0, \tau\}$ be a corresponding two-level realization, for some $\tau \in [0, \sqrt{3}/2]$. To avoid multiple subscripts, write $x_{f_i} = x_i$ and $y_{f_i} = y_i$ for $i = 1, 2$. Define $f_1 : V \to \mathbb{R} \times \{0, 1 - a\}$ by setting
$$f_1(v) = \left( x_2(v), \left( \frac{1 - a}{\tau} \right) y_2(v) \right)$$
for every vertex $v \in V$. Note that $0 \le 1 - a \le 1/2$. To see that $f_1$ is an $L_1$ two-level realization for $G$, first consider two vertices $u$ and $v$ on the same level, so that $y_2(u) - y_2(v) = 0$. Then
$$\begin{aligned}
\|f_1(u) - f_1(v)\|_1 \le 1 &\iff |x_2(u) - x_2(v)| + \left( \tfrac{1 - a}{\tau} \right) |y_2(u) - y_2(v)| \le 1 \\
&\iff |x_2(u) - x_2(v)| \le 1 \\
&\iff (x_2(u) - x_2(v))^2 \le 1 \\
&\iff (x_2(u) - x_2(v))^2 + (y_2(u) - y_2(v))^2 \le 1 \\
&\iff \|f_2(u) - f_2(v)\|_2 \le 1 \\
&\iff (u, v) \in E.
\end{aligned}$$
Similarly, consider two vertices $u$ and $v$ on different levels, so that $|y_2(u) - y_2(v)| = \tau$. Then
$$\begin{aligned}
\|f_1(u) - f_1(v)\|_1 \le 1 &\iff |x_2(u) - x_2(v)| + \left( \tfrac{1 - a}{\tau} \right) |y_2(u) - y_2(v)| \le 1 \\
&\iff |x_2(u) - x_2(v)| + (1 - a) \le 1 \\
&\iff |x_2(u) - x_2(v)| \le a \\
&\iff \|f_2(u) - f_2(v)\|_2 \le 1 \\
&\iff (u, v) \in E.
\end{aligned}$$
In both cases, $\|f_1(u) - f_1(v)\|_1 \le 1$ if and only if $(u, v) \in E$.

Conversely, let $G = (V, E)$ be an $L_1$ two-level graph, and let $f_1 : V \to \mathbb{R} \times \{0, \epsilon\}$ be a corresponding two-level realization, where $f_1(v) = (x_1(v), y_1(v))$ for all vertices $v \in V$, and $\epsilon \in [0, 1/2]$. Then, by a similar argument, $f_2 : V \to \mathbb{R} \times \{0, \tau\}$ is an $L_2$ two-level realization for $G$, where
$$f_2(v) = \left( x_1(v), \left( \frac{\tau}{\epsilon} \right) y_1(v) \right)$$
for every vertex $v \in V$. Here, $a = 1 - \epsilon$ and $\tau = \sqrt{1 - a^2}$. Note that $0 \le \tau \le \sqrt{3}/2$, as required. □

Recall from Section 1.3.1 that the intersection $G = G_1 \cap G_2$ of two graphs $G_1 = (V, E_1)$ and $G_2 = (V, E_2)$ on the same vertex set is defined by $G = (V, E_1 \cap E_2)$. The sphericity (respectively cubicity, boxicity) of a graph $G$ is the sm