Exploring Ecostructure: Developing an Ecostructural Framework as an Approach to a Mass Wasting Hazard Assessment

by

Russell Paul Maynard
B.A., Simon Fraser University, 1995

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS in The Faculty of Graduate Studies (Resource Management and Environmental Studies)

THE UNIVERSITY OF BRITISH COLUMBIA

January 2007

© Russell Paul Maynard, 2006

Abstract:

A mass wasting event (a landslide) on Fitzsimmons Creek near the Resort Municipality of Whistler, B.C. threatens property and public safety through its potential to release again. The 'Fitz Slump', as it is locally known, is a landslide that occurred during unseasonably heavy rains in late August 1991. The Slump is located in a mountainous watershed characterized by steep forested slopes and thick glacial moraines. Several engineering reports have been completed on the Slump. These investigations included sampling, measurement, and extrapolation of discrete sample results to estimate the geotechnical details of the regolith, and importing these measurements into computational models to estimate the probable risks of a recurrence. This engineering approach is utilized as an informative and quantitative assessment of the multivariate dynamics of gravity-induced slope failure. An alternative, the ecostructural approach, also considers these complex dynamics but in contrast explicitly considers ecological influences on slope stability. These influences are described by three major structural processes that provide an example of a stabilizing eco-infrastructure, via development of: (1) In-stream Structures, (2) Forest Root Systems, and (3) Preferential Flow Path Systems. These examples illustrate the coevolution of network-forming processes that synergistically produce a stabilizing ecological infrastructure herein termed 'ecostructure'. Ecostructures are the physical components of an ecosystem that form 'conservational' structures. These structures underpin the ecosystem services that develop as an emergent result. Ecostructure is the analog to institutional infrastructure, with certain caveats. The ecostructural approach utilizes a hierarchical template to organize the complexity of a large scale analysis, and attempts to identify explicit examples of ecosystem dynamics that can be characterized as having networked architectures. In this work, and in ecostructure as a concept, the salient points that create a generalizable model are:

•  that complexity can be managed using hierarchy theory as a framework for organizing the analysis

•  that important properties are recognizable only at functional levels of organization and at similarly functional scales of organization

•  that self-organizing systems tend to exhibit networked architecture

•  and that this architecture would seem to have general properties that can be useful in terms of managing for ecosystem services

Keywords: Ecostructure, Watershed Dynamics, Preferential Flows, Self-organization, Near-decomposability, Networks, Headwater Hydrology, Headwater Geomorphology, Scale, Hierarchy Theory, Sustainability.
Table of Contents

Abstract  ii
List of Tables  vi
List of Figures  vii
Acknowledgements  ix
1.0 Introduction  1
2.0 Case Study: The Fitzsimmons Slump  6
  2.1 A Natural History Background  8
  2.2 The 'Slump' is Really a Series of Landslides  10
  2.3 Risk Analysis - Engineering Approach  15
3.0 The Ecostructural Approach - Case Study Specific  21
  3.1 Self-organizing Ecostructures  24
  3.2 Self-organizing Instream Structures  30
  3.3 Lateral Tree Root Reinforcement  33
  3.4 Preferential Flow Systems  35
4.0 The Ecostructural Approach - Generalized  41
  4.0.1 The Upper Level  47
  4.0.2 The Focal Level  47
  4.0.3 The Lower Level  50
  4.1 Ecostructural Analysis - Hydraulic Flow Dynamics  51
  4.2 Ecostructural Analysis - Hierarchical/Network Model  57
  4.3 An Ecostructural Narrative  60
  4.4 Ecostructures as a GIS  65
5.0 Discussion/Conclusion - Leveraging Knowledge  68
  5.1 An Ecostructural Index  69
  5.2 Climate Change  70
  5.3 Future Research  70
  5.4 Risk Analysis  74
Conclusion  75
References  78
Appendix A  91
  Introduction to 'non-linear'  91
  Introduction to 'Complexity'  94
Appendix B  97
  Introduction to 'Hierarchy Theory'  97
  Nested and Decomposable  100
  Hierarchy Theory and Near-Decomposability (ND)  104
Appendix C  110
  Introduction to 'Network Theory'  110
  Topology  113
  Networks - A Nascent Theory  114
  Small Worlds  116
  Preferential Attachment  133
  Complex (Hierarchical) Network Models  134

List of Tables

Table 1 - Conventional Engineering vs. Ecostructural Approach

List of Figures

Figure 1 - Map of Case Study Area  7
Figure 2 - Air Photo of Slides Near the Fitz Slump  11
Figure 3 - A Cross-Section of the Fitzsimmons Slump  13
Figure 4 - The Toe of Slide 2  14
Figure 5 - Schematic of the Sliding Surface of the Fitz Slump  16
Figure 6 - Schematic Showing the Outline of the Fitz Slump  17
Figure 7 - Diagram of Headwater Coupling  29
Figure 8 - Instream Structures  32
Figure 9 - Root Reinforcement  34
Figure 10 - Strain, Stress and Failure of Roots  35
Figure 11 - Conceptual Model of Preferential Flow Pathways  36
Figure 12 - Hydrogeomorphic Conceptual Model  37
Figure 13 - The 'Focal' Level in a Nested Hierarchy  43
Figure 14 - Schematic of Hebbian Network Development  49
Figure 15 - Precipitation Data  63
Figure 16 - Precipitation Data for August  65
Figure 17 - Comparison of 24-hour Total Monthly Rainfall  64
Figure 18 - Hierarchical Organization of Modularity in Networks  72
Figure 19 - Complexity Pyramid  76
Figure A1 - Hierarchy Schematic  104
Figure A2 - Seven Bridges of Königsberg (Topology)  112
Figure A3 - Small Worlds Networks  116
Figure A4 - Power Law Distributions  120
Figure A5 - Scale-Free Networks
Figure A6 - Hierarchical Scale-Free Network Models

Acknowledgements

To my Committee:

Kurt Grimm (Supervisor)
Les Lavkulich
John Richardson

Thank you for all your patience and perseverance with a person such as myself.

Most of all thank you to my partner, Michelle Kotko, for whom the above reference to patience and perseverance with me is a daily struggle, and without whom I would probably not have finished!

Efforts to solve a problem must be preceded by efforts to understand it. - H. A. Simon[1]
1.0 Introduction

The interface of humanity and renewal-rate-limited natural systems is increasingly recognized as problematic. Climate change is only one of the most widely recognized amongst many large-scale systems that are exhibiting stress and potentially deep change as a result of human development.[2] The sustainability initiative has emerged during the last few decades as a manifestation of this recognition; but the movement has largely focused on 'engineer-oriented' solutions proposing, for example, ecological footprint analysis as the answer. While this facet of the sustainability challenge is important, it does not address the more primary problem of organizing society's material well-being within the constraints posed by Earth's topologically-organized[3] ecosystem dynamics.

This work is part of a growing recognition and development of a modern worldview that is increasingly recognizing the significant value of modelling or mapping topological attributes in addition to the conventional approach of a principally topographical view of geography, ecology, and systems in general. Examples of this are the research focusing on the surprising fecundity of seamounts[4] as nodes of ecological activity far out in the seas.

[1] Epigraph: Simon, H.A. 1996. The Sciences of the Artificial (3rd ed.). Cambridge, MA: The MIT Press.

[2] For instance: Fisheries around the world are showing signs of decline and stress (Pauly et al., 2002; Constance and Bonanno, 2000). Urban centres are becoming large enough to influence local climate change (Changnon, 1992). For the first time in history humans dominate many major (large scale) processes (Vitousek et al., 1997).

[3] Topology, as it is used here, is the study of the way in which the modules or constituent parts of a system are interrelated and arranged. One can refer to the topographically-organized nature of an ecosystem: a spatial emphasis. Or one can refer to the topological organization of an ecosystem: a relational emphasis. Probably the most readily recognized use of a topological mapping versus a topographical mapping is the subway map that exists in so many modern cities. A subway map shows only the relationship of the stops in terms of how many stops to a destination, which line to use, etc. The maps omit geographical scale, focusing on ordinal information.

[4] See, for example: http://www.ices.dk/marineworld/seamounts.asp.
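The subway-map contrast in footnote 3 can be made concrete in a few lines of code. The sketch below is illustrative only - the station names, coordinates, and connections are invented - but it shows how a topological (relational) representation answers a "how many stops?" question with no geographic information at all:

```python
# A minimal sketch of the topology vs. topography distinction (footnote 3).
# Station names, coordinates, and connections are hypothetical examples.
from collections import deque

# Topographical view: where each stop sits in space (spatial emphasis).
coordinates = {
    "A": (49.28, -123.12),  # latitude, longitude
    "B": (49.27, -123.10),
    "C": (49.25, -123.07),
}

# Topological view: which stops connect to which (relational emphasis).
# Geographic distance is discarded; only adjacency (ordinal information) remains.
connections = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B"],
}

def stops_between(network: dict, start: str, end: str) -> int:
    """Count stops between two stations by breadth-first search --
    a purely topological question a subway map is designed to answer."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        station, hops = queue.popleft()
        if station == end:
            return hops
        for nxt in network[station]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    raise ValueError("no route")

print(stops_between(connections, "A", "C"))  # -> 2 stops, regardless of distance
```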
Climate dynamics are another important area of research relating topology to topography: where sources and sinks of carbon influence climate far beyond point dynamics, creating non-linear relationships. Such a perspective recognizes that the contribution of an ecological setting may be far more substantial than would be suggested by physical size or location alone. This work will characterize the crucial aspects of an ecosystem as ecological 'nodes' in an ecological network; further, these nodes can be thought of as contributing to and creating ecological infrastructures, or what will be herein referred to as 'ecostructures'. Similarly, 7  ecosystem stability, it will be argued, is a distributed quality and an understanding of the networked topology of the system is crucial to a successful transition to a sustainability paradigm. I am proposing that the primary 'conservational'  5  See Appendix A for a brief digression on linear vs. non-linear.  Specific to lotic headwater systems is research into the relative role of headwater streams to the river continuum. For example, invertebrates such as suspension-feeders (e.g. black fry larvae) are known to capture and ingest fine particulate organic matter (FP0M) and dissolved organic matter (DOM) in the water column. They assimilate a small fraction and egest the remainder as compacted faecal pellets with a volume 10 -10 greater than the food particles ingested. (Malmqvist and Wotton, 2002) The over-all importance of this dynamic is still unclear but the scaling magnitudes may be typical of trophic dynamics in general. The topological factor integrating organic resources would appear to be much more relevant than any topographical relationship here (See Rooney et al., 2006). Network theory in ecology is most productive in the area of trophic dynamics (See for example Strogatz, 2001; Holt, 2006). 6  5  6  The term 'ecostructure' originates, as far as I can tell, with Peter Warshall. He wrote about ecological infrastructures as early as 1976 (per. comm.). See Warshall, 1998 . Warshall's use of the term is analogical, as is mine. I have tried to add more structural and organizational detail than I have ever seen elsewhere. My introduction and use of hierarchy and network theory is unique to this thesis. Ecostructure, then, is the analogue of infrastructure in an ecological setting. Ecostructures differ importantly though since they are a product of ecological components that combine synergistically, spatially and temporally, to create important physical properties ('conservational') that confer resiliency on a system. 7  2  structures of distributed ecological networks can be recognized and labeled as ecostructures. A recognition of tangible ecostructural value could contribute 8  significantly to resource valuation debates and decision making. Ecostructure is the focus for a conceptual tool that models the primary components of an ecosystem as one might model the role of infrastructure in a human settlement. The eco-infrastructure plays a similar role; it is a tangible locus along which goods and services are created and exchanged. Just as infrastructures such as pipelines, energy grids, and highways are basic to a modern economy, ecostructures are basic, or primary, structures in an ecological setting. Ecostructures are the primary structures that facilitate the conservation of resources in an ecosystem setting. 
Ecostructures have the effect of creating relatively stable sources of primary production that then give rise to nested secondary and tertiary levels of ecological dynamics. These could be trophic dynamics or more generally, energetic gradients (Schneider and Kay, 1994; Schneider and Sagan, 2006). When considering economic or industrial infrastructures, recent research suggests that the topological relations constituting the infrastructure are important to the understanding of the system's resiliency. Ecostructure is an important 9  concept because it pertains explicitly to scaJing issues in ecology; ecostructures emerge across a distributed network of ecological components. What is interesting and non-intuitive is the hierarchical nature of the network topology - this is where I propose that ecostructure, as an emergent property of an ecosystem, originates. The defming of ecostructures in differing ecosystem settings should be achieved explicitly through interdisciplinary teams of researchers. As this is the In the plainest language: ecostructures are the building blocks of an ecosystem. These components have the important role of conserving resources. That is, the ecostructure is made up of components that retain, entrain, and make available over time, resources for the further development of higher levels in an ecosystem setting. 8  Region wide power outages are usually the result of malfunctions that cascade through the network. Energy grids incorporate increasingly sophisticated technology and algorithms that try to diffuse these cascades before they get out of control (Amin, 2003). The 9/11 attacks, it has been posited, targeted the World Trade buildings for economic-based topological, not topographical, reasons. See also Seranno and Boguna, 2003. 9  3  first description of the concept of ecostructure, I will attempt to build a model that is interdisciplinary but lacks the team component. In this regard the effort would undoubtedly be vastly improved by a team iteration. The model here will consider what I argue are the three most important distributed qualities conferring ecological and physical stability in a steep, forested headwater (located in the Pacific Northwest). The three components overlap with a 'warp and woof quality to produce an example of an ecostructure. In a broader context, ecostructure addresses the Achilles Heel in the sustainability literature as it stands today: the sustainability movement has, by and large, focused on engineered solutions to questions of pollution, energy use, and other problems that may be generalized as issues pertaining to the 'ecological footprint' of society (Wackernagel and Rees, 1996). This approach is crucial and necessary, though not sufficient, to address the challenges of creating a transition to a sustainable development paradigm. The engineering approach to sustainabihty is not sufficient because of the glaring and gaping hole in our understanding of what makes a natural system the type of system that possesses the plasticity that allows perturbations to the system to, often times, be absorbed into responses that are governed through positive/negative feedback mechanisms. In other words, why and how do natural systems absorb perturbations, seemingly store information about them, and remain intact and resilient over significant periods of time? 
Ecostructure - as an analytical tool and as a concept - is a distillation and synthesis of some of the major ecological sciences initiatives of the past several decades with the phenomenology of complexity. The recognition of ecosystems as complex systems requiring an iterative approach to management that utilizes ongoing feedback, led to a school of thought known as 'adaptive management' (Holling, 1978; Walters, 1986). Adaptive management does not reject the use of environmental impact analysis, but rather stresses the need for fundamental understanding of the structure and dynamics of ecosystems. A broadening of the influences considered (largely sociological) led to the 'ecosystems approach'. Concurrently, research in the somewhat broad area of complexity sciences pro4  duced compelling insights with varying degrees of pragmatic success (Simon, 1996). This area of research has demonstrated a cognitive maturation through the ongoing developments in network theory. Network theory is contributing to a deepening appreciation for the role of topology in non-linear systems in general and ecological analysis in particular. Statistics, such as power law distributions, error and attack tolerance analysis, clustering coefficients, and more, are providing the central tools for analyzing network topologies (Barabasi et al., 2000; Barabasi, 2005; Newman, 2003) . This research has made a significant contribution to our ability to consider and model networks as dynamic and evolving systems. Prior to the recent work in this area, networks were modelled as static entities. Most natural science modelling is based on static 'snapshots' of data and the dynamics are extrapolated. Indeed, the engineering approach in the case study is used as an example of why this approach has serious limitations. Network theory offers nascent tools to move to the next stage in our ability to model dynamic network conditions. In short, a synthesis of concepts of complexity (hierarchy) and network theory with a contemporary understanding of ecosystem dynamics has yielded the ecostructural perspective described herein and demonstrated in the case study. Incorporating the ecostructural approach into the engineering community is an essential step towards authentic sustainability planning. In this paper's case study the geotechnical data becomes a quasi-static variable, whereas the biotic component becomes a system leverage point where land use planners can intervene with a long-term objective in mind. The thesis is set up as a comparison and contrast between what I am calling the engineering approach and the ecostructural approach. The engineering approach is the norm in slope stability analysis and its highly quantitative techniques are based almost solely on geotechnical data. The case study will be introduced as it typically would be in an engineering report. This discussion will consider the quality and limitations of the geotechnical data. A brief consideration of risk 5  analysis from the most recent engineering report on the case study site will he presented. Following this, the ecostructural approach to the case study will be introduced by bunding an argument for the use of three ecological components that are presented as being the primary components that synergistically produce hillslope stability. Having laid out the three major components a hierarchical model of the ecostructural approach to the case study will be presented. 
The modelling exercise will close with a brief description of a possible format for a GIS rendering of the case study using an ecostructural approach. Finally, a discussion is offered of the implications of an ecostructural approach to resource management and decision maJfeing. The paper proceeds from this introduction sequentially through the following steps: 2.0  Case Study: The Fitzsimmons Slump The Engineering Approach to the Slump Risk Analysis  3.0  Ecostructural Approach - Case Study Specific Introduce the three large scale processes above Link them into an ecostructural model of hillslope stability  4.0  The Ecostructural Approach - Generalized Hierarchical outline Integration of Ecostructure modules  5.0 Discussion/Conclusion  2.0 C a s e S t u d y : T h e F i t z s i m m o n s S l u m p  A potential mass wasting event (a landslide) on the Fitzsimmons (Fitz) Creek in the Resort Municipality of Whistler, B.C. threatens property and public safety. The setting for the event, known locally as the 'Fitz Slump', is located in a headwater characterized by steep forested slopes and glacial (lateral) moraines.  6  Fitz Creek drains a watershed that is approximately 100 k m and the river that 2  is Fitz Creek rims largely constrained between Blackcomb and Whistler mountains - two of the most high profile ski destinations in Canada and in fact the world (Whistler is the site for the 2010 Winter Olympic Games). These two coastal mountains frame the location of the Resort Municipality of Whistler (RMOW), B.C., Canada. Whistler is approximately 115 kilometers (72 miles) north of Vancouver, B.C., the largest city in western Canada and the third largest city in the country.  Figure 1. Fitzsimmons Slump is located approximately 2 kilometers above the Whistler Village on the western slope of Fitzsimmons Creek (Photo adapted from EBA, 2005).  The Fitz Slump is a landslide area located about 2 km above Whistler village (Fig. 1). Whistler is a small city really; with a permanent population of only 10,000, but this population supports about 2 million visitors annually. In August of 1991,  7  after 5 days of unseasonably heavy rains, the Fitz slide was mobilized. An area of second-growth forested slope approximately 200 m across and 300 m in length let go under rainstorm-induced elevated pore pressure and slid, dropping about 1.5 m and moving horizontally about the same distance. The slide created a debris dam that eventually became a debris flow, contributing the sediments to a flood in Whistler that caused approximately 2.1 million dollars of damage (EBA, 2005). The Slump has remained largely intact and is moving slowly, with periodic bursts, into the Fitz channel. The 1991 slide movement was estimated to be approximately 6 metres; further large movements have been recorded in 1996-97 (4 metres) and 2002-3 (4.5 metres). The Slump is estimated to contain about 700,000 to 1,000,000 m (EBA, 2005) of glacial morraine sediments. 3  2.1 T h e F i t z S l u m p : R N a t u r a l H i s t o r y B a c k g r o u n d  The engineering approach to a risk analysis situation such as the Fitz Slump would rely primarily on geotechnical data combined with precipitation records. This detailed description would include the geological history and development of the area. A probabilistic rendering of failure potential and probable failure volumes, based on a computer model, would then be presented. 
What follows in this section is based upon various engineering reports focussing on the Fitz Slump, as well as other resources considering the geological history of the region. The Coast Mountains of British Columbia, the region where the Fitzsimmons Creek valley is situated, is underlaid by a plutonic complex (i.e. originated through volcanic magma likely dating to the Mesozoic era) known as the Coast 10  Plutonic Complex. Overlying this base is a very unstable upper-layer of metamorphic rock. This upper-layer is made up of various types of jointed and foliated 11  18  rock, exhibiting a wide range of rock instability processes (Bovis, 1999). These processes - rock slides, avalanches, and rock toppling - are the primary source The Mesozoic Era is divided into three time periods: the Triassic (245-208 Million Years Ago), the Jurassic (208-146 Million Years Ago), and the Cretaceous (146-65 Million Years Ago). 1 0  1 1  Rock formed through high heat and pressure.  1 8  Foliation - rock formations that are made up of layers or laminae.  8  for debris flows. Debris flow is a geomorphological term for landslides that liquify and mobilize geological materials essentially as flowing mud. A second source for these debris flows are the unconsolidated Quaternary glacial drift and colluvium. These are the sediments left behind as the glaciers retreated after 13  the last Ice Age. Quaternary glaciations have left indelible physical formations that have 14  strongly influenced the geomorphic conditions in the Coast Mountains (Ibid.). The geological profile of the area is extremely heterogeneous. Glacial drifts and colluvium, where thick, create a regolith that is well mixed, porous, and unstable. Other areas can be made up of thinner deposits of glacial sediments, while yet other areas are made up of large sections of bedrock, which is very stable. All of these conditions can, and often are, found in close proximity, making for a very heterogeneous geographical landscape. In the distant past the primary forces of geomorphology were volcanism, plate tectonics and glaciation. The primary forces of geomorphology now are the interface of elevation and slope (the legacy of the past), and more to the point in this study, precipitation. Precipitation-induced mass wasting events are a natural mobilization of hillslope resources into the stream arid river systems of the Pacific Northwest. Located on the northeastern flank of the Fitzsimmons Range, the Fitz Creek originates in the glacier fields that he about 14 km to the southeast of Whistler Village. Fitz Creek flows northwest through a linear and well defined valley. The creek flows through Whistler Village along an engineered floodway and empties into Green Lake. The area of the watershed that is above the Slump is estimated (Golder, 1993) to be approximately 68 km , so about 2/3 of the watershed is lo2  cated above the Slump area. The river's longitudinal profile has a distinct nick-  A heterogeneous mixture of material that as a result of gravitational action has moved down a slope and settled at its base. 1 3  1 4  The last major glaciations, which retreated > 10,000 ybp.  9  point where the gradient of the river changes from about 3% above the nick-point (the upper valley) to about 10% below the nick-point. 
This nick-point, a point where the valley floor drops suddenly, corresponds roughly to the demarcation between the upper and lower portions of the Fitzsimmons Valley. The upper valley is dominated by bedrock and a relatively low profile of glacial till. The lower valley is typical of the area that we find around the Slump: mixed and deep glacial till deposits. Just above the Village of Whistler the river begins its transition to an alluvial fan that ultimately drains into Green Lake.

The valley, near the Slump area, has a wide U-shaped contour that is approximately 4 km wide and about 600 m deep (see Fig. 3). The smooth contour of the valley indicates the glacial history, as does the thick mantle of till. The till consists in the main of a sandy, dense silt, mixed with a variable gravel grain size including copious cobble and boulder. The lower valley walls are characterized by thick lateral moraines. The river has incised into the sediments of this lower half of the valley, creating a narrower, steeper valley which is about 30 m high at the Slump site (Fig. 4).

2.2 The 'Slump' is Really a Series of Landslides

The situation in the Fitz Slump area is reasonably well documented, and the cumulative studies, dating back to the late 1980s, provide a picture of a river valley that is 'in process' (EBA, 2005). The use of the label 'in process' is intended to convey that the area is still explicitly unstable and evolving geomorphically. Golder (1993) is the most detailed of the series of reports that exist and has become the benchmark for information on the Slump. The most recent study, the EBA (2005) report, relies heavily on the Golder report for background and detail, but the EBA report is written more than ten years later and is valuable for what it adds to the earlier perspectives. Probably the most valuable perspective it contains is the report's emphasis on the unstable nature of much of the surrounding area of the Slump. The Golder report mentions the wider condition of multiple landslides at various degrees of development, but it is the EBA report that provides more detail and explicitly links these other slides to the evolution of the Fitz Slump.

Figure 2 - The fractured and unstable neighbourhood of the Fitz Slump (slide 9, in red). The Fitz Slump is the focus of the concern, but the area around the Slump exhibits slides in various stages of development. The scale for this photo has not been determined; the river segment is approx. 2 km. (Photo adapted from EBA, 2005)

The EBA report recognizes at least 9 large landslides contiguous to, or near, the Slump. Figure 2 is an aerial photograph of the valley at the Slump area, with the Fitz Creek running northwest towards Whistler Village (left to right in the photo). As the photo shows, the area has been heavily logged in the past along the lower slopes of the valley. As well, the area has seen a considerable amount of development in the form of access roads and ski runs being cut into the slopes above the lower elevations. These landslides differ by degree in terms of the various stages that each slide is exhibiting:

•  Slide 1 appears to be larger than the Fitz Slump, but is relatively old and is considered to be 'mature'. Mature being essentially stable, having found a new seat so to speak.
•  Slide 2, just upstream of slide 1, is considered to be recently 'reactivated', meaning it is currently unstable and moving towards the stream. Figure 4 (below) is a photo of the toe of slide 2. Notice the thickness of the lateral moraine in the vicinity of the Fitz Slump. The river has cut through much of the moraine at the valley floor.

•  Slides 3, 4, and 5. The EBA report cites these three slides as 'mature'. They are, relatively speaking, small slides exhibiting rounded crowns located just below the road.

•  Slide 6, located just opposite of the Fitz Slump, is a large event. This slide could potentially be larger than the Fitz, and is considered to be at an 'intermediate' stage of development. The EBA report postulates that this slide may have obstructed the river at one point, pushing the flow into the bank of the Fitz Slump section. This displacement of the original path of Fitzsimmons Creek may have under-cut the bank, triggering the Fitz Slump. EBA estimates this activity to have occurred about 50-100 years ago.

•  Slides 7 and 8, downstream of the Fitz, are smaller slides at a more mature stage. Slides 7 and 8 could be considered to be at a similar stage as slides 3 to 5.

•  Slide 9 is the Fitzsimmons Slide (Slump).

The disaggregated material that one would see from the stream looking east at the edge, or crumbling face, of Slide 6 probably represents colluvium that originally existed adjacent to the western slope of the ancient channel (the slope that is now the Fitz Slump). Figure 4 is a good representative photograph of the thickness of some of the lateral moraines in the vicinity of the Fitz Slump. It is also indicative of the erosional activity ongoing at this point in the valley.

Figure 3 - A cross-section of the Fitzsimmons Slump, showing the elevation of the framing mountains (Whistler and Blackcomb) and a schematic drawing showing the present creek and, to the right, the armour of boulders that may have been the creek bed in the distant past before the creek flow was forced west. (Schematic adapted from EBA, 2005)

Figure 4 - The toe of Slide 2, downstream of the Fitz Slump on the opposite side; see Figure 2 for context. This image provides a sense of the thickness of the glacial till in the vicinity of the Fitz Slump. (Photo taken from EBA, 2005)

This scenario, regardless of the degree to which the historical details are correct, is indicative of the complex interactions occurring at the valley bottom, where the dynamic character of the area is a result of geology, hydrology and geomorphology interacting through time. As the east valley wall crumbles into the stream, pushing it into the west wall, the west wall weakens and creates the conditions that aggravate the possibility of a slide. The slide could occur should the regolith above undergo a process where the soil pore pressure is stressed (increased) to the point where the regolith matrix undergoes a critical but subtle shift allowing the imbricated structure of the regolith to 'unlock'. The engineers that authored the EBA report have described and labeled the series of slides individually.
This is likely to draw attention to the differing degrees of evolution that the various slides represent. The salient point here is that this area of the valley is very unstable, and a failure of any one of these slides could potentially release one, some, or all of the others. The evolution of each of these slides has the potential to affect the evolution of the area.

As we will see, how water, as precipitation, moves through these media, from the high elevations to the valley bottoms, is crucial to understanding the mechanisms driving the steep forested mass wastings and debris flows such as the Slump.

This geological portrait provides a natural history of the region that begins with the major geomorphological processes being tectonic and magmatic. These give way to epochs of glaciation where the major geomorphic drivers are the weight and movement of glaciation compressing and eroding the bedrock into U-shaped valleys such as the site for the Fitz Slump. These processes, although still active to a degree, are now minor drivers. In contrast to the engineering perspective, the ecostructural perspective views the major geomorphological drivers of the present as first and foremost climatic, in the form of precipitation, and to a significant degree biological: the coastal rainforest ecosystem is an active player in the determination of the retention of slope sediments.

2.3 Risk Analysis - Engineering Approach

All the engineering reports reviewed on the Fitz Slump agree that the principal driver of the mass wastings was and is precipitation. The dominant type of model then, for considering the likelihood of a debris flow in a setting such as this, steep and forested, is to characterize the geophysics of the regolith combined with elevation data and precipitation data (Bovis, 1999). The various reports looking at the Fitz Slump have used variations on this model and all conduct their probabilistic analysis and conclusions on the physical structure of the geology of the valley combined with actual and modelled precipitation data. In a standard geophysical engineering model precipitation "influences slope stability indirectly, through its effect on pore water conditions in the slope material... [this] has led to the use of terms like 'critical rainfall' or 'triggering influence'" (Caine, 1980).

The Golder report (1993) had postulated a deep sliding surface, yet the EBA engineers discounted the likelihood of this based on a reconstruction of the vector inclinations. The depth of the sliding surface will impact the calculation of the amount of debris potentially available to enter the river.

Figure 5 - A schematic showing the plausible depths of the sliding surface of the Fitz Slump. Note the close proximity of the borehole locations. (Schematic adapted from Golder, 1993)

The data collected soon after the initial recognition of the slide were largely built around the borehole data and the slide monitoring survey station data collected over the following year. The remainder of the data used to create a quantitative analysis was based on precipitation records that span 36 years and the slide movement records that can be inferred from after-the-fact measurements. The problem with the data is the quality. Figures 5 and 6 show the borehole locations as being very tightly clustered, and all located on the access road.
Access for the cumbersome borehole equipment is indeed an issue, but it delivers a very limited level of detail in terms of characterizing the regolith profile and the bedrock surface characteristics.

Figure 6 - Schematic showing the outline of the Fitz Slump and the location of the slide monitoring stations as well as the tight cluster of borehole sites (labeled BH 1, BH 2, etc., near the middle of the slide on the access road, which is shown as broken lines traversing the slide). (Schematic adapted from Golder, 1993)

Likewise, the paucity of detail in precipitation data (covering approx. 35 years) creates a calculated return period for the anomalous August 1991 rainstorm event (that triggered the Fitz Slump) of approximately 700 years. The Golder report concedes that this is essentially a meaningless calculation (due to a lack of long term data) and instead settles for a less quantitative statement that labels the storm event as "exceptionally high for an August rainfall" (Golder, 1993). The data picture determined from measurements like the size of the scarp head, the curve of the toe, etc. is limited in quality since it cannot be known for certain when the slide began (decades, months, or days before the slide was recognized).

In an attempt to identify the timing and magnitude of unusual rainstorms, Golder (1993) analyzed the historical series of greatest 24-hour rainfall totals on record. The analysis indicated that unusually large rainfall events have occurred during the May to September snow-free period only in August 1991 (the Fitz Slump storm) and September 1957. However, storm events of this magnitude occur much more frequently in the typical rain season from October to April (Fig. 17).
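The arithmetic behind a figure like the 700-year return period is worth seeing explicitly. The sketch below applies a standard annual-maxima (Gumbel) fit by the method of moments; the rainfall values are invented placeholders, not the Whistler record, and the point is precisely the one Golder concedes: a 35-year series cannot constrain such an extrapolation.

```python
# A sketch of the standard annual-maxima / return-period calculation that
# underlies estimates like Golder's ~700 years. The rainfall values below
# are invented placeholders, NOT the Whistler precipitation record.
import math
import random

random.seed(1)
# Pretend 35 years of greatest 24-hour rainfall totals (mm), one per year.
annual_maxima = [random.gauss(60, 15) for _ in range(35)]

# Fit a Gumbel (extreme value type I) distribution by the method of moments.
n = len(annual_maxima)
mean = sum(annual_maxima) / n
std = math.sqrt(sum((x - mean) ** 2 for x in annual_maxima) / (n - 1))
beta = std * math.sqrt(6) / math.pi   # scale parameter
mu = mean - 0.5772 * beta             # location parameter (Euler-Mascheroni)

def return_period(x_mm: float) -> float:
    """Average recurrence interval (years) of a 24-hour total >= x_mm."""
    cdf = math.exp(-math.exp(-(x_mm - mu) / beta))
    return 1.0 / (1.0 - cdf)

# An event far beyond the observed record yields a huge return period, but
# with only 35 years of data the extrapolation is statistically meaningless.
print(round(return_period(150.0)), "years")
```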
The following is the EBA (2005) report description of the slump:

It involves about 0.7 to 1 million cubic metres of glacial drift that is currently moving down slope into Fitzsimmons Creek at a rate of several centimetres per year, with intermittent, significant short term movements. The RMOW monitors the slide, and has recorded 4 major movements in the past 26 years. A flood in 1991 caused estimated damages downstream of $2.1 million, and was associated with slide movement estimated at about 6 m. Recorded movement of 4 m in 1996-97, and 4.5 m in 2002-03, indicates a continuing risk to the resort area.

The EBA (2005) engineering report then concludes by offering three scenarios pertaining to the probabilities of debris flow severity that are broken down into:

(I) The 1 in 10 Year Event - Past observations indicate that displacements of the order of 1 to 3 m, triggered by rain storms or spring melt, occur on average with approximately 10 year intervals (1.5 m in 1997, 3 m in 2003). Such displacements may lead to the partial blockage of the channel, as small landslides descend from the toe scarp, covering the channel with up to a 3 to 5 m thickness of debris. The maximum debris volume from Fitzsimmons Slide would be 6,000 to 18,000 m³.

(II) The 1 in 50 Year Event - A greater displacement, in the order of up to 10 m, would correspond to a major infiltration event, comparable to the August 1991 slide. Based on precipitation analysis carried out by Golder (1993), this has a return period in the order of 50-100 years and produces a displacement of the Fitzsimmons landslide in the order of 10 m. Associated toe sliding could create a dam in the order of 5 to 8 m high blocking the creek. The total amount of material deposited in the creek could range up to 60,000 m³, although all of this is unlikely to enter the stream at once. More likely, the immediate sediment influx from the toe of the slide would be in the order of 20,000 to 40,000 m³, with additional material eroded over subsequent years, as the channel re-establishes itself.

(III) The 1 in 100 Year Event - From the dynamic analysis, we estimate that a large-scale movement, leading to a complete blockage of the channel, would require either extreme groundwater infiltration, or possibly a strong earthquake. Either condition is subjectively estimated to have an annual probability of approximately 1 in 100 years. In such an event, the channel would be completely blocked, and the creek would be forced to flow over the surface of the terrace, which is partly protected by existing boulder deposits. The terrace would likely be subject to backward erosion, since the substantially deeper present channel serves as the erosion base. Conservatively, one may consider that the terrace deposits would act as a landslide dam with a crest height of about 10-15 metres and a crest width of approximately 100 to 200 m, measured in the downstream direction. Thus, the dam would likely erode relatively slowly, producing an estimated debris volume of 70,000 to 100,000 m³. Again, only a fraction of this volume would be expected to enter the stream instantaneously. A subjective estimate is up to 30,000 m³. The remainder would be eroded over a few years.

I am left with the conclusion that while the work done on the Fitz Slide is valuable and contributes to understanding the risks involved, it falls short in developing a credible quantified risk assessment. Statements like the description of the slide creeping at a rate of 'several centimetres per year' - with the caveat that it moves by several metres during certain rain storm events - leave one wondering just what the slide is capable of at any given time. The Slump's creep is punctuated at best; a measure of a mean annual movement here is meaningless. Given the lack of detail in the knowledge of the bedrock topography (insufficient number of boreholes and a dearth of grain in terms of site placement), the thickness and well mixed nature of the unstable till mantle, and the large and significant variation in the actual movement of the slide and its neighbouring slides - a meaningful risk assessment seems elusive to say the least.

This does not mean that decision-makers are left with no credible alternatives. The following sections will describe a pragmatic alternative that builds on the knowledge of the Golder and EBA reports to deliver a dynamic analysis that will allow the RMOW to understand the slide's behaviour in terms of the geological characteristics and, importantly, in terms of the ecological characteristics. The former attempts to determine the stability of the regolith, while the latter attempts to determine how precipitation moves into and through the regolith. The former is immune to resource management to a very large degree; the latter is not.

As we move to the consideration of the ecostructural approach to the slide, I would ask that the reader keep in mind "What would an engineering analysis have concluded about the slide area a year before it let go?" Would it have concluded that the risk was very high? Would it have compelled decision-makers to avoid the damages that ensued during the flood that occurred as a result of the 1991 slide?
These questions are addressed as we move through the following sections.

To conclude this section, the Fitzsimmons Slump is a landslide area that is made up of thick mixed glacial deposits underlain by bedrock. The Fitzsimmons Valley, longitudinally, has a gentle slope from the glacier sources to a nick point just above the Slump site. The area above the nick point has little or only shallow glacial tills deposited and is quite stable. Below the nick point the glacial deposits become thicker and the valley grade becomes considerably steeper, causing the river to incise into the valley bottom. This erosional process is, in places, undercutting the hillslope walls, destabilizing the areas above. The Fitz Slump is such an area. It was the site of a large failure in late August of 1991, the result of an anomalous rainfall event that lasted 5 days.

Standard engineering analysis has characterized the situation in some geotechnical detail and has provided a good foundation from which to move from this secondary level of analysis to a tertiary level which builds the modelling of the valley from a 3-dimensional analysis into a process-driven 4-dimensional analysis.[15] To accomplish this we will employ the concept of the building of ecostructure. Ecostructure is the self-organizing development of the abiotic/biotic interface in a complex organizational manner that literally creates and maintains, in this case, slope stability. To describe ecostructure in this situation we will employ three overlapping concepts that have significant traction in the literature:

(I) Instream Structures - these structures play an important role in diffusing the hydrological erosive capacity of stormwater flow

(II) Lateral Root Reinforcement - the root mats of the forested hillslopes entrain the regolith surface and form networks of roots that act as structural anchors

(III) Hillslope Hydrology Preferential Flow Paths - a model of hillslope hydrology that recognizes the important role that root systems play in the self-organizing diffusion of potential pore pressure gradients

[15] By 4-dimensional I am referring to the explicit modelling of the temporal scale and the effect land-use decisions will have on long-term structures such as tree root distributions, ergo PFs, etc.

3.0 The Ecostructural Approach - Case Study Specific

The ecostructural approach is premised on the idea that a complex ecosystem (in this case a steep, forested watershed) is effectively modelled as a system of modular components. These components are organized in such a way that they form a hierarchy of interdependent and coevolving processes. Issues of scale, and therefore of hierarchy, are central to ecological organization. Hierarchical organization is prevalent in complex systems, and in biological and ecological systems this organizational structure favours stability and evolution (Simon, 1962, 1973).[16] The components or modules of the system interact to form levels organized around shared rates (frequencies) of interaction. The rates of interaction are the common denominator used to ascertain organizational levels. For instance, the symbiotic interaction between the soil fungal population and the fine tree root development is closely coupled (Campbell et al., 2004). Fungal associations, such as ectomycorrhizae, are thought to be tree species specific (Griffiths et al., 1991 and
1996), and the removal of forest stands via harvesting has been shown to affect the distribution and availability of ectomycorrhizal mats (Swift et al., 2000); the availability of hyphal symbionts would seem to strongly influence the distribution of conifer seedlings (Griffiths, 1991). We are therefore interested in the distribution of ectomycorrhizae/fine roots as a result of land use decisions, since the fine root process will determine, or initiate, the distribution of tree root preferential flow paths (PFs). In this relative way the ectomycorrhizal association with the fine roots of a tree species can be grouped as a module that considers fine root development and distribution. The relatively high rate of symbiotic interaction is such that the ectomycorrhizal associations are tightly grouped into one module, whereas the interactions between species' roots (competitive) occur at a lower rate, so the modules could be separated if that level of detail were germane to the analysis.

[16] A more in-depth synopsis of hierarchy theory can be found in the appendix. The contribution of a hierarchical organization to stability and evolution is exemplified in the parable of the watchmakers Hora and Tempus (Simon, 1973 and 1996). Simon's use of a watchmaker in this context is wonderfully tongue in cheek given the historical role of the Watchmaker in debates of evolution.

What is important here is that we are able to manage complexity by grouping interactions according to strength (interaction rates) and create a hierarchy of modules such that we are focusing on the crucial dynamics according to our goals: modelling the physical ecological structures that attenuate hillslope hydrology. In Hierarchy Theory this partitioning of components is based on what is conceptually referred to as being a nearly-decomposable system. A non-linear system is not completely decomposable because the components interact. But the researcher can determine which components interact strongly (thus forming a module or component) and which interact weakly, to achieve a decomposition that is near enough to complete so as to be meaningful. By decomposing the complex system into modules or components we achieve a simplification. Our organizational technique recognizes that the modules interact over time and space, but possibly at such a weak level as allows us to ignore the interaction until a different scale of analysis is chosen. The litmus test then is whether we can use our knowledge of a system to decompose our organization into modules when the separation is meaningful (useful), as opposed to intra-modular separation, which is problematic as the mutual interaction is so tightly coupled as to be difficult to distinguish.[17]

[17] This is actually very common. Consider atomic theory: we arbitrarily group energy and mass into subatomic particles (the differing types of particles are grouped into modules like quarks or neutrinos), which when combined based on interaction rates make up the next level of organizational scale: the atom. Atoms that interact strongly are grouped into modules (elements) of carbon, oxygen, etc. Atoms that interact strongly are combined into the next level of meaningful organization. This level is labeled molecules. At each level we are focusing on interactions that explain behaviour while ignoring detail that is, in Ockham's sense, irrelevant.
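The partitioning logic just described can be sketched in a few lines of code. The components and interaction rates below are invented for illustration; the point is only the rule: links stronger than a chosen threshold bind components into one module, and weaker links are set aside until a different scale of analysis is chosen.

```python
# A minimal sketch of near-decomposability as a partitioning rule: components
# whose interaction rate exceeds a threshold are grouped into one module;
# weaker interactions are ignored at this level of analysis. The components
# and interaction strengths here are invented for illustration only.

# Interaction rates between ecosystem components (symmetric, 0..1).
interactions = {
    ("ectomycorrhizae", "fine_roots"): 0.9,          # tight symbiotic coupling
    ("fine_roots", "lateral_roots"): 0.8,            # same root system
    ("lateral_roots", "other_species_roots"): 0.2,   # weak competitive link
    ("other_species_roots", "soil_fauna"): 0.7,
}

def decompose(pairs: dict, threshold: float) -> list:
    """Group components connected by interactions stronger than threshold,
    using a union-find structure to merge strongly coupled components."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for (a, b), rate in pairs.items():
        find(a), find(b)                   # register both components
        if rate >= threshold:
            parent[find(a)] = find(b)      # strong link: merge modules
    modules = {}
    for node in parent:
        modules.setdefault(find(node), []).append(node)
    return list(modules.values())

# At threshold 0.5 the fungus/root components form one module and the
# competitor's roots plus soil fauna another; lowering the threshold lets
# the weak competitive link re-couple them at a coarser scale of analysis.
for module in decompose(interactions, threshold=0.5):
    print(sorted(module))
```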
Ecostructures are the integration of ecological components in a synergetic fashion. There are many common examples of components that together provide emergent properties, yet alone possess none.[18] This perspective is hierarchical, as the components at each level of the hierarchy provide the building blocks (modules) for the emergent properties at the level above. For example, reduce a large forest to a small stand and at some point you lose the ability of the forest to retain moisture, humidity and a concomitant average temperature.[19] As we will see, the ability of a forest to entrain soil and regolith, and to strongly influence the character of the hydrology in that forest, is an example of ecostructure at the level of the forest or landscape in a hierarchical perspective. From this level emerge higher levels that include regional dynamics such as climatic and carbon cycling characteristics.[20]

[18] Atoms only form molecules in specific ratios; the words on a page have far less meaning individually.

[19] At the risk of over-simplifying, the physical forest structure breaks desiccating winds at the periphery, and the canopy helps to shade and hold moisture. This creates an ambient humidity and temperature in a forest that is much less variable than exists just outside the forest. Forests at the regional level influence the climate, entrenching the cycle.

[20] At the small scale (e.g. plot level) carbon cycling dynamics might focus on populations of detritivores, but at the regional level the focus would likely be on climatic variables (see Wiens, J. A. 1989. Spatial Scaling in Ecology).

An attempt to explicitly apply the above as a case study is presented. The study considers a common natural hazard problem in the mountainous Pacific Northwest (PNW) that has the potential to cause loss of property and life, and considers the situation from two perspectives: (i) a standard, well recognized engineering perspective that focuses on the physical properties of the regolith (described in section 2.0); and (ii) a second perspective that extends the engineering perspective by creating a model that shifts the focus to the major properties that actively contribute to hillslope stability via the hillslope hydrology, regardless of the type of substrate.

We are contrasting two methods of analysis here: (1) the engineering approach, which is the established method for characterizing risks in such a setting. The engineering approach is based on material properties and geotechnical theory; and (2) the second method, which is the focus of this thesis and is referred to as the ecostructural approach. This approach originates in this work. The ecostructural approach utilizes and builds on the first approach, but also contrasts with it significantly. The ecostructural approach, in this instance, considers explicitly the system's dominant biotic characteristics that affect the drivers initiating the potential for mass wasting. In this setting - a steep forested watershed in the Pacific Northwest (PNW) - all the research areas involved (hydrology, geomorphology, forest ecology) agree that the principal driver of mass wasting events, given the underlying geology, is precipitation (Bovis and Jakob, 1999; Caine, 1980; Church, 1999; Gomi et al., 2002; Montgomery and Buffington, 1997; Golder, 1993). In particular, precipitation events (rain, rain-on-snow) create a threshold condition in terms of substrate particle pore pressure, and if breached the regolith may let go and slide, even liquify.
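This pore-pressure threshold can be made concrete with the textbook infinite-slope stability model, in which rising pore pressure reduces the effective normal stress holding the regolith in place. The sketch below is an idealization for illustration, not a calculation from the Golder or EBA reports, and every parameter value is a placeholder.

```python
# The pore-pressure threshold made concrete with the standard infinite-slope
# stability model. A textbook idealization, not a calculation from the Golder
# or EBA reports; all parameter values are illustrative placeholders.
import math

def factor_of_safety(slope_deg: float, depth_m: float, water_depth_m: float,
                     cohesion_kpa: float = 5.0, friction_deg: float = 34.0,
                     unit_weight: float = 19.0,        # soil, kN/m^3
                     water_unit_weight: float = 9.81): # water, kN/m^3
    """Infinite-slope factor of safety; FS < 1 implies failure.

    Pore pressure from a water table water_depth_m above the failure plane
    (slope-parallel seepage) reduces the effective normal stress."""
    beta = math.radians(slope_deg)
    phi = math.radians(friction_deg)
    normal = unit_weight * depth_m * math.cos(beta) ** 2
    pore_pressure = water_unit_weight * water_depth_m * math.cos(beta) ** 2
    resisting = cohesion_kpa + (normal - pore_pressure) * math.tan(phi)
    driving = unit_weight * depth_m * math.sin(beta) * math.cos(beta)
    return resisting / driving

# A storm that raises the water table above the failure plane can push the
# same slope across the FS = 1 threshold without any change in the geology.
for water_table_m in (0.5, 2.5):
    fs = factor_of_safety(slope_deg=35.0, depth_m=3.0,
                          water_depth_m=water_table_m)
    print(f"water table {water_table_m} m: FS = {fs:.2f}")
```

Note that in this idealization the geotechnical parameters (cohesion, friction angle, unit weight) are effectively fixed properties of the regolith; the variable the storm acts on is the water, which is exactly the leverage point the ecostructural approach targets.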
The ecostructural approach takes the consensus on the role of precipitation to be the starting point of the dynamic analysis[21] and examines the primary processes that affect the movement of precipitation through the case study setting. The traditional approach to such a situation (the engineering approach) is a detailed assessment that characterizes the geology, and in particular the geological profile, creating a geotechnical database that considers how it is impacted by elevation and precipitation. The ecostructural approach presents an assessment based upon the notion that the ecological system in question can be decomposed into a manageable few parts that can then be understood as providing structure and stability to the system.[22] The ecostructural approach differs considerably from the engineering perspective since, for example, it takes the forest to be a major structural influence on precipitation distribution in terms of the over-all watershed hydrology.

[21] Dynamic analysis here refers to the characteristics of the system that are occurring on a time scale that is within the resource management parameters. Characteristics such as the make up of the geological profile are taken as constants. The density, age, and management of the timber resource will likely be within the scale of the management mandate, as will land-use issues. These are the dynamic variables.

[22] The rationale for creating a minimum number of components to consider, and over what scales, is based on the work of H. A. Simon and his writing on hierarchy theory, in particular his notion of nearly decomposable systems (Simon, 1962, 1973, and 2002).

3.1 Self-organizing Ecostructures

Ecostructure is an integrative concept: it is an emergent property[23] made up of ecosystem modules integrated across scales. Ecostructure is defined here as the integration of ecological components that integrate via a networked architecture to provide recognizable ecosystem services and functionality. In plain language, ecostructure for an ecological setting is the analog to institutional infrastructure, with certain caveats. Ecostructure is a distributed quality that contributes to ecological homeorhesis.[24]

[23] I use emergent property to describe a property that arises as a result of a synergy of components. The property must be important enough that it is a feedback mechanism that strengthens the system's ability to persist.

[24] Homeorhesis, derived from the Greek for "similar flow", is a concept encompassing dynamical systems which return to a trajectory, as opposed to systems which return to a particular state, which is termed homeostasis. In ecology the concept is important as an element of the Gaia theory, where the system under consideration is the ecological balance of different forms of life on the planet. (http://en.wikipedia.org/wiki/Homeorhesis)

Consider that organisms are centrally controlled biological entities with various mechanisms to maintain a homeostasis; whereas ecosystems, in contrast, achieve a distributed control that aims, self-organizationally, at a homeorhetic goal.[25] Ecostructures are made up of ecological modules that integrate synergistically to achieve a homeorhetic system which creates a positive feedback in terms of ecosystem development. It is explicitly a subjective concept - the measure of which is whether or not it is useful. Ecostructure is most valuable when it causes the analysis to explicitly consider scale issues and the modular make up of an ecosystem resource.

[25] This is a controversial description but it works well with an ecostructural perspective. Forests, for example, occur all across the globe with amazing levels of general homogeneity, from species make up to seral processes. There would seem to be a high level of self-organization that is tightly evolved so as to produce very predictable dynamics, whether in boreal Russia or northern Ontario. See Dempster, 2000 for an interesting concept of distributed ecological process control.

Most ecologists, and biologists for that matter, recognize the existence of so-called redundancy in ecosystems or biological systems. Junk DNA or the 'aeroplane and rivet' theory of biodiversity are examples of potential redundancy.[26]

[26] Paul Ehrlich's "rivet hypothesis". The theory states that removing species from an ecosystem is like extracting rivets from an airplane in midair. The plane can lose a few rivets without failure, but at some point a wing falls off.
The role of redundancy, and its value, are poorly understood; for our purposes it is enough to recognize the existence of a heterogeneous continuum of members or components of an ecosystem and the role that ecostructure can play in teasing out the way such components are organized, grouped, and topologically integrated. This is an explicit attempt to further what Holling (1978) referred to as the need to stress the fundamental understanding of the structure and dynamics of ecosystems.

The way in which components are organized in an ecosystem is not centrally directed as it is in an organism; in an organism the DNA molecular code largely directs development. In an ecosystem setting the development of the system (say a forest) occurs through a distributed genetic process. Each member of the system has a developmental response to conditions that arise. An ecosystem, therefore, has no central genome for directing development. Instead the system uses a distributed genome, the genome of its constituents. Inherent in this distributed genome are all the limitations and constraints of each species. This provides for a great deal of variability. Because system development is unidirectional, the process is sensitive to contingency: it matters what happens through time. The geological history in the case study sets parameters (along with climate), and the ecostructural response has as its 'goal' to create a stable ecosystem in which each component has the opportunity to perpetuate. The manner in which ecosystem components act upon and react to contingency and energetic forcing is a process of self-organization (Schneider and Kay, 1994; Schneider and Sagan, 2005).

[Footnote: There is a concept of ecosystem development called 'sympoietic'. Sympoietic systems are homeorhetic (returning to a trajectory rather than to a particular state), evolutionary, distributively controlled, unpredictable and adaptive.]

The following case study is set in a challenging environment that exemplifies strong contingency and forcing (steep gravitational gradients, storm precipitation events, etc.). The Pacific Northwest (PNW) is a mountainous region that is tectonically active and still populated with glaciated high-elevation valleys. Typically, the glacier-fed headwaters of a watershed in this region tend to be steep and heavily forested - steep being defined as a gradient too large to allow for the build-up of much in the way of alluvial sediments.

[Footnote: The Pacific Northwest is a region extending from northern California to southeastern Alaska.]

[Footnote: Alluvial sediments are those placed as a result of hydrologic influence, as compared to colluvial sediments, which occur from the influence of gravity (i.e. rocks and sediments falling down the slope).]
In this mountainous geography, headwater streams that drain the hillslopes are usually small due to the extreme topography. The gullies that drain the headwater areas are mostly steep, narrow, incised channels that form the dominant pathways for the transfer of sediments from the hillslopes to the stream channels (Nistor and Church, 2004) and on to the rivers. The PNW is typified by relatively dry summers and a wet, mild fall/winter season that can bring monsoon-like rain events. The exaggerated, vertically undulating surface of the hillslopes produces many small catchments that drain the immediate area, creating small streams (gullies) that are tightly coupled to the surrounding hillslopes (Fig. 7); the hydrology is such that stream flows are 'flashy'. That is, stream volumes and velocities can change quickly, by orders of magnitude, during a storm event. It is this tight coupling that makes a headwater system more important to the river continuum (Vannote et al., 1980) than might be obvious at a cursory consideration (Wipfli, 2006). The headwater system can make up 70-80% of the watershed catchment area (Gomi et al., 2002). These many small catchments integrate to become the source areas and transient sinks for water, nutrients, sediments, and biota (Wipfli, 2006); this pattern of source and sink replicates in forest ecosystems and coastal waters worldwide (Sidle et al., 2000). As such, the small physical presence of a headwater stream belies its cumulative impact as one scales up from the headwater catchment, to the watershed, to the region.

The primary driver in these watershed settings, whether considering cumulative impacts or local stability issues, is the hydrologic response. The hydrologic response controls the timing and fluxes of water, sediments and nutrients/pollutants (Sidle, 2000). The surrounding forested hillslopes are the source of nearly all that makes a headwater stream unique: low light levels, allochthonous organic inputs as nutrients, sediments originating via mass wasting, and the structural role the forest plays in enhancing the surprising level of slope stability.

[Footnote: Allochthonous materials are derived from outside a system, such as the leaves and stems of terrestrial plants that fall into or are transported into a stream. Autochthonous materials are those produced inside the system. In a stream example such materials would be photosynthetic plants like mosses and periphyton, and biofilms that are a mix of algae and diatoms (and others).]

I say surprising because the steep, high-elevation topography that characterizes headwater streams in the PNW operates under some of the largest gravitational potential energy gradients on Earth and produces, in general, "the largest sediment fluxes of any terrestrial landscapes" (Church, 1999).

[Footnote: In a personal communication Roy Sidle has taken issue with this quote from Church. Regardless of whether the sediment fluxes are 'the largest', they are indeed large and the major geomorphological action in the PNW.]

Consequently, it is important to consider how such a dynamic system, occurring amidst such steep potential energy gradients (gravity, winds and intense precipitation), maintains such a stable, biomass-rich ecosystem (Waring and Franklin, 1979). Lotic ecosystems in general are some of the most persistent ecosystems on Earth.

[Footnote: Lotic ecosystems are freshwater flowing environments. Lentic ecosystems are relatively still, such as a lake, pond, or swamp.]
Above, the primary driver of watershed dynamics in these settings was described as the hydrologic response: the hydrologic response controls the timing and fluxes of the hillslope hydrology. Without a strong understanding of the processes, controls, and architecture of these systems, long-term planning and management of watersheds will suffer (Sidle, 2000). Given this recognition we can ask questions such as: Can we articulate the sources of this unlikely stability? Do the sources of this stabilizing effect have a sensitivity with regard to time? Are these organizational attributes being taken into careful consideration when we practice resource management? This work will tackle these issues.

Figure 7 presents the coupling that exists in a PNW watershed. The success of downstream fisheries and ecosystems is a product of the upstream mechanisms that create a relatively stable regime of water, integrated nutrient supply, and sediment fluxes (Wipfli, 2006). The graphs on the right of the diagram reflect the attenuation that the hydrologic response helps create. Without the ecostructure provided by the forested slopes, the main stem hydrograph would look more variable (flashier). The ecostructures we are about to delve into help to entrain sediments and nutrients, delivering them downstream in a significantly more manageable form than would exist in a headwater denuded of trees.

Figure 7 - Map and diagrammatic schematic views of a drainage basin to illustrate the concept of 'coupling' between a stream channel and adjacent hillside slopes. On the left side of the diagram are schematic graphs of characteristic grain size distributions through the channel system. In each graph, the next upstream distribution is shown (dashed line) so the intervening modification by stream sorting processes may be directly appraised. On the right hand side of the diagram the graphs illustrate the attenuation of sediment movement down the system. (Schematic adapted from Church, 2002)

The graphs in figure 7 are of sediments, but their magnitudes correlate well with the hydrograph, as it is the hydrologic component that entrains the sediments. The characteristics of coupling are synonymous with our mechanism of ecostructure.

[Footnote: Coupling refers to the very tight relationship that exists between the hillslope runoff and the character of the stream hydrology. The steep forested hillslopes give rise to first-order (small) streams that receive most or all of their nutrients/sediments from the surrounding slopes. As well, the stream hydrographs are much more reflective of the slope hydrology than those of larger streams and rivers (Gomi et al., 2002).]

Without the ecostructural component the graphs on the right would be far less contrasting. The physical structure of the ecostructural component entrains both sediments and nutrients; it also acts as a source of inputs (LWD, leaf litter, etc.). So the ecostructural component strongly influences the delivery of sediments and nutrients and as such is an important mechanism in the character of 'coupling' in headwater streams. In figure 7 we can imagine the grey area as the forested watershed slopes that we are considering.
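The attenuation idea can be illustrated with a toy calculation of my own (it is not part of Church's schematic): the same storm is routed through a small storage, standing in for a denuded slope, and a large storage, standing in for the canopy, litter, and PF storage of a forested slope.

    # Toy linear-reservoir comparison (illustrative only): one storm routed
    # through a small storage (denuded slope, flashy response) and a large
    # storage (forested slope with canopy, litter, and PF network).
    def route(rain, k):
        """Linear reservoir: outflow each step is a fraction 1/k of storage."""
        storage, out = 0.0, []
        for r in rain:
            storage += r
            q = storage / k
            storage -= q
            out.append(q)
        return out

    storm = [0, 0, 30, 50, 20, 5, 0, 0, 0, 0]   # mm per step, hypothetical
    flashy = route(storm, k=1.5)                # little storage: flashy
    damped = route(storm, k=6.0)                # large storage: attenuated
    print("peak (flashy): %.1f mm" % max(flashy))
    print("peak (damped): %.1f mm" % max(damped))

The damped hydrograph peaks at roughly a third of the flashy one and releases the same water over a longer tail - a crude numerical stand-in for the contrast between the two hydrographs sketched in figure 7.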
To model the stabilizing dynamics of a headwater system we will focus on three major components of the system. I will introduce the three areas and then begin to build the three into a model of ecostructure.

3.2 Self-organizing Instream Structures

As Church (1999) describes it, "Rivers [and streams] are self-organized, conditionally stable systems". A headwater stream may be described as a combination of interacting morphological elements, the end product being a system capable of locally maximizing energy degradation through turbulence. These elements are morphologies such as channel width and substrate roughness - the cross-sectional variation that creates, among other frictions, the spill resistance from protruding boulders, rocks, and large woody debris (LWD).

[Footnote: Large woody debris (LWD): logs, limbs, or root wads 4 inches or larger in diameter, delivered to river and stream channels from streamside forests (in the riparian or upslope areas) or from upstream areas. LWD provides streambed stability and habitat complexity. LWD recruitment refers to the process whereby streamside forests supply wood to the stream channel to replenish what is lost by decay or downstream transport. (www.stateofthesalmon.org/resource/glossary.asp)]

The structure-building mechanism is a hydrological/gravitational force that arranges clasts and LWD in such a way that they form interlocking structures capable of withstanding forces that a Shields formula would predict to entrain or move (Fig. 8).

[Footnote: Clast - an individual constituent, grain or fragment of a sediment or rock, produced by the weathering of a larger rock.]

[Footnote: Shields formula: the ratio of the bed shear stress (the stress on the river bed caused by the flowing water) to the resisting forces of the bed material.]

If the stream is not configured into a state of maximum energy degradation then the system is locally unstable and there will be an increased propensity for the system to shift until it finds a locally maximized state (Church, 1999; Halwas and Church, 2002). Church attributes the quasi-stability of channels to "a succession of processes, beginning with the arrival in the channel of a keystone that the flows cannot normally move. Other stones moving along the bottom of the channel become imbricated behind this stone." (Church, 2001). As well, large woody debris dams (logjams) may form and entrain sediments and nutrients, creating habitat for invertebrates as well as long-term instream structure. Land-use activities such as timber harvesting often strongly influence LWD and sediment recruitment in terms of these instream structures and overall reach hydrogeomorphology (Gomi et al., 2003). But, as will become apparent, I will go further and extend the 'succession of processes' to include tree root mat formation and its effect on hillslope hydrology. These latter processes restrict the abiotic hillslope resources that enter the stream, thereby affecting the "succession of processes". The emphasis on a local, as opposed to a global, maximum refers to the recognition that instream structures are limited to the resources that enter the stream.

A stream will undergo a process of scouring and imbrication over time and through storm events; this process eventually 'finds' a local maximum that contributes to a stable riparian environment. Should a storm event occur that is large enough to reset the instream structures, the process would begin anew. In this way streams are self-adjusting, conditionally stable systems.

Figure 8 - Examples of imbricated, channel-spanning structures that are conditionally self-organized. In steep headwater streams these structural inputs are largely colluvial; generally speaking they enter the streams during relatively rare high-magnitude stormwater events. The largest clasts in the stream become 'keystones'. Schematic of a typical step-pool reach. (Schematic adapted from Church, 2002)

To sum up, the largest clasts in the stream become 'keystones' (Halwas and Church, 2002):

• These large clasts are immovable to most flows, and once in place the keystone encourages the imbrication of gradually smaller clasts until a channel-spanning structure has formed.

• These instream structures act as hydraulic energy diffusers, creating a flow that is sub-critical (Froude number < 1) and therefore minimally erosive (a worked example follows this list).

[Footnote: The Froude number is a dimensionless value that describes different flow regimes of open channel flow; it is the ratio of inertial to gravitational forces: Fr = V/(gD)^(1/2), where V = water velocity, D = hydraulic depth (cross-sectional area of flow / top width), and g = gravitational acceleration. Gravity moves water downhill; inertia reflects its willingness to do so. When Fr = 1 the flow is critical; Fr > 1 is supercritical (fast, rapid) flow; Fr < 1 is subcritical (slow, tranquil) flow.]

• Colluvial inputs, the source materials for instream structures, can be a result of mass wasting, and this wasting mechanism is influenced directly by the hillslope stability and indirectly by the hillslope hydrology.
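As an illustration of the hydraulics invoked above, the sketch below computes a Froude number for a pool and for a step, and a dimensionless (Shields-type) bed shear stress for a keystone-sized clast. The geometry and numbers are hypothetical, and the critical Shields value of roughly 0.045-0.06 is a commonly cited range rather than a site-specific figure.

    # Illustrative hydraulics of a step-pool reach (hypothetical numbers):
    # the Froude number classifies the flow; a Shields-type check asks
    # whether the flow can entrain a clast of a given size.
    import math

    G, RHO_W, RHO_S = 9.81, 1000.0, 2650.0   # gravity, water and clast density

    def froude(velocity_m_s, depth_m):
        return velocity_m_s / math.sqrt(G * depth_m)

    def shields_stress(depth_m, slope, grain_m):
        """Dimensionless bed shear: tau* = rho*g*R*S / ((rho_s - rho)*g*D).
        Flow depth stands in for hydraulic radius in this sketch."""
        tau = RHO_W * G * depth_m * slope
        return tau / ((RHO_S - RHO_W) * G * grain_m)

    # Pool below a step: slow and deep -> subcritical, boulders stay put.
    print(froude(0.6, 0.5))                   # ~0.27, Fr < 1: subcritical
    print(shields_stress(0.5, 0.05, 0.512))   # ~0.03: under the ~0.045-0.06
                                              # critical range, keystone holds
    # Same discharge forced over the step: fast and shallow -> supercritical.
    print(froude(2.5, 0.15))                  # ~2.1, Fr > 1: supercritical

The step-pool alternation of supercritical and subcritical flow is exactly the turbulence-driven energy degradation the text describes: the structure spends the flow's energy in short supercritical bursts so that the reach as a whole remains minimally erosive.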
A stream will undergo a process of scouring and imbrication over time and through storm events, this process eventually 'finds' a local maximum that contributes to a stable riparian environment. Should a storm event occur that is large enough to reset the instream structures the process would begin anew. In this way streams are self-adjusting, conditionally stable systems.  Shield's Formula: Ratio of the Bed Shear Stress (the stress on the river bed caused by the flowing water) to the resisting forces of the bed-material. 3 7  31  Figure 8 - Examples of imbricated, channel-spanning structures that are conditionally self-organized. In steep headwater streams these structural inputs are largely colluvial; generally speaking they enter the streams during relatively rare high magnitude stormwater events. The largest clasts in the stream become 'keystones'. Schematic of a typical step-pool reach (Schematic adapted from Church, 2002).  To sum up, the largest clasts in the stream become 'keystones' (Halwas and Church, 2002): /  these large clasts are immovable to most flows and once in place the keystone encourages the imbrication of gradually smaller clasts till a channel-spajining structure has formed.  /  These instream structures act as hydraulic energy diffusers creating a flow that is sub-critical (Froude # <1)  38  and therefore minimally ero-  sive.  The Froude number is a dimensionless value that describes different flow regimes of open channel flow. The Froude number is a ratio of inertial and gravitational forces. 3 8  Gravity (numerator) - moves water downhill  Fr = V/(gD)  1/2  Inertia (denominator) - reflects its willingness to do so. Where: V = Water velocity; D = Hydraulic depth (cross sectional area of flow / top width); and g = Gravity When:  Fr = 1,  critical flow,  Fr > 1, Fr < 1,  supercritical flow (fast rapid flow), subcritical flow (slow / tranquil flow)  32  /  Colluvial inputs, the source materials for instream structures, can be a result of mass wasting, and this wasting mechanism is influenced directly by the hillslope stability and indirectly by the hillslope hydrology.  3.3 Lateral Tree Root Reinforcement The concept of lateral root reinforcement dates back to the 1950s in forestry biology. Initially the emphasis was on basal root development as it was thought that the stability of hillslope soils originated with the basal roots penetrating fractured bedrock (Krogstad, 1995) (see #3, Fig. 9).  39  This model of stability may work well in shallow soils but it is inadequate when considering thicker soils and as well it seems to negate the effects of the considerable amount of lateral root structure. Most of the root mass of a tree occurs near the stump (#1 in Fig. 9), forming a densely woven root-saturated soil that is far more stable than the soil around it. From this base form the lateral roots, some of which extend to the neighbouring trees - even several trees away (#2, Fig. 9). These lateral roots interweave, creating a network that spans potential failures and produces a holding pattern on the hillslope. This pattern will be uneven, or heterogeneous, across space though the more stable blocks downslope can "act to buttress a less stable block upslope, and prevent slope failure" (Ibid.). Figure 9 illustrates the mechanisms of lateral root reinforcement. Lateral reinforcement may be highly significant due to its pervasiveness, especially in areas such as zero-order basins where the soil may be too deep for basal root structures to be important. 
When a slope failure begins to occur, it is the roots crossing the failure boundary that offer resistance. Resistance is a function of root orientation and strain (Krogstad, 1995; see Fig. 10). The possible root orientations and sinuosities, or tortuosity, create the potential for a threshold effect: if sinuous roots dominate over straight roots then there will be a threshold response, as the straightest roots are strained first.

Figure 9 - Mechanisms of root reinforcement: (1) strong, dense distribution of basal roots creates a 'block' of well entrained soil; (2) roots crossing and over-lapping connect the 'blocks'; (3) deep roots may penetrate bedrock cracks, helping anchor soil blocks; (4) stable blocks can buttress less stable blocks above. (Adapted from Krogstad 1995)

After a stand has been harvested the fine roots die back, "dwindling in number and strength" (Ibid.; Sidle and Ochiai, 2006). The result is reduced integrity of the soil-root matrix as individual root systems begin to resemble islands rather than a patchwork. The role of root/shear strength is not well articulated. Tests have traditionally been applied in a mechanical fashion to single roots in various media. It is likely that root strength is complex, but that the formation of mats is a crucial aspect. Lateral root mats, then, have the potential to form an interwoven mesh. A mature Douglas-fir stand, if viewed from above, would resemble a patchwork quilt of individual root systems joined at the edges by the mass of fine roots, which are so entangled as to make it extremely difficult to discern where the fine roots originate. As Krogstad points out in his thesis, "This may seem unlikely in drier sites, where trees are widely spaced, but even on these dry sites, several trees must be killed before a gap is created even temporarily between the root systems of the living trees (Parsons, et al., 1994)".

[Figure panels, top to bottom: initial conditions give way to stress and strain of the root network as slope failure proceeds.]

Figure 10 - Strain, stress and failure of roots with different orientations and sinuosity. Straight roots oriented parallel to the strain (a) begin to strain immediately and can break while more sinuous roots (c) or obliquely oriented roots (b) are still straightening out. (Adapted from Krogstad 1995)
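A toy model of the sinuosity threshold described above (my own construction, not Krogstad's): each root carries no load until its slack is pulled out, then loads elastically until it breaks. Straight roots resist first and snap first, so total resistance peaks and then decays as displacement grows. All slacks, stiffnesses, and breaking loads are invented for illustration.

    # Toy root-reinforcement model (not from the thesis): tensile resistance
    # across a shear plane depends on each root's slack (sinuosity).
    def root_force(displacement_mm, slack_mm, stiffness_n_mm, break_n):
        """Force in one root: zero until its slack is pulled out, then
        elastic up to a breaking load, then zero (snapped)."""
        stretch = displacement_mm - slack_mm
        if stretch <= 0:
            return 0.0
        force = stiffness_n_mm * stretch
        return force if force < break_n else 0.0

    # Hypothetical population: slacks from 0 (straight) to 12 mm (sinuous).
    roots = [(0, 50, 900), (2, 50, 900), (5, 50, 900), (12, 50, 900)]
    for d in range(0, 30, 5):  # mm of slope displacement
        total = sum(root_force(d, *r) for r in roots)
        print(f"displacement {d:2d} mm -> total resistance {total:6.1f} N")

Running this shows resistance climbing as more roots engage, peaking, and then collapsing as the straight roots break while the most sinuous roots are still straightening - the progressive, threshold-like failure that Figure 10 depicts.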
3.4 Preferential Flow Systems

Mass wasting events are one of the dominant geomorphological processes in a forested headwater system (Church, 1999; Wu and Sidle, 1995), the other being stormwater events. Stormwater events are the regular, seasonal, monsoon-like precipitation storms that swell streams to their bank-full levels, often initiating erosional processes. These erosional processes are not the result of unsustainable pore pressures in the regolith - the ecosystem is able to throughput the hydrologic gradient, moving the precipitation into storage and streams. Mass wasting and stormwater events are thus intimately tied: mass wasting happens when stormwater events saturate the soil profile, raising pore pressure in the soil matrix to unsustainable levels. Areas of least stability may undergo a transition where the soil matrix lets go and a slide initiates. Preferential flows serve to attenuate stormwater dynamics by creating more efficient routes for hillslope hydrology to migrate downslope. Preferential flow paths help to prevent the build-up of hyper-saturated hillslope soils, a precursor to slope failure.

Sidle et al. (2000) have been developing a conceptual hydrogeomorphic paradigm that models the hydrologic dynamics of steep forested hillslopes as a network of flows: overland, subsurface, and subsurface preferential flows (PFs).

[Footnote: That the authors choose to label their work a 'paradigm' speaks to their strong feeling that this model of hillslope hydrology is timely and important in terms of how we understand hillslope hydrology. The so-called paradigm shift is in recognizing hillslope hydrology as being a product of spatial and temporal effects as riparian zones, hillslopes and geomorphic hollows (zero-order basins) link up via PFs through the wet season.]

[Footnote: If the rate of rainfall exceeds the soil infiltration rate (the rate at which soil absorbs water), then water pools on the soil surface. If the soil surface is sloped, the pooled water flows downhill toward the channel system. This is referred to as overland flow, sheet flow, or surface runoff. It is also called Hortonian flow after R.E. Horton, the hydrologist who first described this process in the 1930s. Subsurface flow occurs on hillslopes with shallow permeable soil layers overlying low-permeability layers.]

Figure 11 - Sidle et al.'s (2001) conceptual model of preferential flow (PF) pathways. Pathways include (1) PF occurring in the organic horizon - an extremely porous organic soil allowing high infiltration; (2) macropores interacting with surrounding mesopores to enlarge PFs during wet conditions; (3) connection of individual macropores by physical interaction creating a PF network; (4) connection through porous zones of buried organic matter; (5) contact with ground water at the bedrock or other lithic boundary; (6) PF through bedrock fractures; (7) exfiltration of water from shallow bedrock fractures; (8) flow over microchannels on the surface of bedrock or other substrate (i.e. substrate topography control). White shaded zones represent possible linked or connected preferential flow paths; broken lines delineate specific 'connected' pathways. (Adapted from Sidle et al., 2000)

Preferential flow paths (Fig. 11) are a network of porous pathways that increase, through channelling, the efficiency of subsurface flows. PFs are largely made up of macropores, which consist of live and decaying roots, animal burrows (and other bioturbations), subsurface erosion features, and surface bedrock fractures (Ibid.).

Conceptual model of a watershed PF network linking up through time:

Figure 12 - Hydrogeomorphic conceptual model of sources and pathways for stormflow generation during a sequence of increasing antecedent moisture: (a) dry; (b) slightly wet; (c) wet; (d) very wet. (Adapted from Sidle et al., 2000)

This is thought to encourage lateral flows of stormwater. Macropores tend to be more densely and evenly developed in the topsoil (O and A horizons) than in the subsoil. Both hillslope stability and watershed hydrology are strongly influenced by the pathways that precipitation takes as it makes its way through the headwater system.
PF systems are dynamic; the extent of their hydrologic activity is a function of the increase or decrease in antecedent moisture conditions.

[Footnote: Antecedent moisture is the moisture in the regolith that remains as a result of previous precipitation events. As antecedent moisture levels increase, more and more incoming precipitation must move down the slope.]

As antecedent wetness builds from dry soil conditions to increasingly wet conditions - the result of serial precipitation events (Fig. 12) - the macropores are activated through a gradual process of wetting and are induced into action through the moisture-induced networking of the pores (Ibid.; Gomi et al., 2002). The increase in moisture levels creates a situation where the "'area of influence' expands (i.e., connecting with adjacent 'mesopores')" (pers. comm., Sidle).

Figure 12 is a schematic of Sidle et al.'s (2000) hydrogeomorphic model. The model illustrates a spatially distributed, temporally lagged response to antecedent moisture. Generally, precipitation events occurring after a dry period will contribute a relatively small percentage of the rain to stormflow in the headwater streams. As antecedent wetness builds, a non-linear effect occurs as progressively more of the incoming precipitation makes it into the streams as stormflow. Characteristics of a steep forested headwater - the high infiltration capacity of the forest floor, the hydraulic conductivity of forest soils, geomorphic hollows acting as reservoirs, and the network of macro- and mesopores - create a lagged response in a typical hydrograph.

Headwater streams in the Pacific Northwest originate on steep hillslopes; these slopes contain relatively small zero-order basins (as Sidle 2000 and 2001 refers to them), or geomorphic hollows. These common, but little-known, basins occur where the topography creates a slight hollow that acts to accumulate sediments and water. But unlike the smallest (first-order) streams, these hollows are too slight to create a surficial stream. The hollows may not be visible if they have filled with sediments over time.

These hollows are determined by the evolution of hillslope processes, often reflecting structural weakness or accelerated weathering (Church, 1999; Sidle, 2000). Hillslope hydrology drains into these declivities and creates saturated pockets on the hillside. As well, hollows are places where soil accumulates, whether through creep, weathering, or slippage. Such sites of soil and moisture build-up are excellent locations for the development of large, healthy trees. The unfortunate corollary is that this combination of water accumulation and soil build-up induces a propensity to fail in the event of storms. Tree root preferential flows may have a significantly ameliorating impact on this propensity to fail by providing a diffuse network that moves precipitation downslope gradually as the hollows fill to capacity, becoming linked to the hillslope PF network. The primary manifestation of zero-order basins in hillslope hydrology is to create a non-linear contribution to the time lag that is so pronounced in watershed hydrology in the Pacific Northwest. The build-up of antecedent moisture creates an opportunity for the hollows to link up into the macro/mesopore network of PFs (Sidle, 2000).
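The non-linear linking-up of the PF network can be caricatured in a few lines (my illustration, not Sidle et al.'s model): if a downslope flow path requires a chain of macropore segments to be wetted 'active', and wetting activates segments independently, then whole-path connectivity rises steeply only at high antecedent moisture.

    # Toy connectivity sketch (not from the thesis): a downslope flow path
    # is L macropore segments that must all be "active" to deliver
    # stormflow. If antecedent wetting activates each segment independently
    # with probability p, whole-path connectivity is p**L - a sharply
    # non-linear response like the dry/wet/very-wet sequence of Fig. 12.
    L = 8                                    # segments per hillslope flow path
    for p in (0.2, 0.5, 0.7, 0.9, 0.97):     # hypothetical activation levels
        print(f"segment activation {p:.2f} -> path connectivity {p**L:.4f}")

At low activation almost no complete paths exist; near saturation nearly all do. The jump between the two regimes is the toy counterpart of the lagged, threshold-like stormflow response described above.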
The zero-order basins have a role, then, that begins as upslope storage but becomes downslope supply as the basin water table fills and antecedent wetness builds to create a network of linked nodes - at first discontinuous, then forming a continuous subsurface flow dissipating the hydrologic/gravitational energy gradient. Zero-order basins are places of local positive feedback, accumulating resources that are then liberated in rain events.

During dry conditions, very typical of summers in the PNW, the water yields from the hillslopes are low and the majority of stormflow is generated from channel interception and saturated overland flow in the riparian zone (Ibid.). What doesn't become yield becomes potential antecedent moisture for the next event. As antecedent wetness increases, the response becomes significantly non-linear. This lagged response reflects the cumulative effect of antecedent moisture build-up in a PF network. Sidle's research in the steep, forested headwaters of Japan produced a data set of measurements with preferential flows accounting for up to 25% of subsurface flows by the time the hillslope was 'very wet'; this is in contrast to no PF yields in dry antecedent conditions, and less than 2% in wet conditions. "This increase in preferential flow is attributed to an expansion of macropore networks in time and space. Such expansion may occur through a series of complex mechanisms that allow these systems to become self-organized as antecedent moisture increases." (Ibid.)

In summation, I am focusing on three watershed dynamics that I will argue are sufficient to model the ecostructure that creates the stable ecosystem we recognize in a steep forested watershed in the Pacific Northwest:

1. Instream structures, primarily imbricated clasts and LWD, combine to create a self-organizing, conditionally stable system through the diffusion of the potential hydrological energy gradient. This creates and provides a dense reticulate stream network that efficiently drains the headwaters and allows for a stable riparian ecosystem to develop.

2. A stable riparian ecosystem begets the conditions for the entrainment of soils. This is a positive feedback mechanism in the watershed ecosystem. The development of a stable riparian area encourages the development of upslope forestation and further entrainment of hillslope sediments. This is the build-out of the lateral tree root component. The development of the root system strongly (a non-linear effect) influences the effectiveness of the contribution of tree roots to shear strength and the preferential flow mechanism - mature root systems are far more effective than immature root systems.

3. PF mechanisms are sensitive to antecedent moisture conditions and are dynamic through time, as opposed to static conditions such as regolith composition, elevation, slope, etc. It is likely that root development is influenced through time by the PF of hillslope hydrology, further reinforcing the cycle.

4.0 The Ecostructural Approach - Generalized

Ecostructure, as an analytical tool and as a concept, is a distillation and synthesis of the major ecological science initiatives of the past several decades, and it also incorporates the sciences of complexity. The recognition of ecosystems as complex systems requiring an iterative approach to management has led to a school of thought known as 'adaptive management' (Walters, 1986).
Further, the recognition of the need to integrate various areas of research in order to implement adaptive management has led to an interdisciplinary technique known as the 'ecosystems approach'.

[Footnote: Adaptive management at its most facile is learning by doing. More specifically, adaptive management is a rigorous combination of management, research, monitoring, and means of changing practices so that credible information is gained and management activities are modified by experience. Uncertainty is explicitly acknowledged. Ecosystems management explicitly recognizes the social component in resource management, blending social, economic, physical, and biological needs and values. Network theory has been used to model trophic networks, protein networks, disease vectors, and most famously, the WWW.]

Hierarchy theory dates back to the post-war era and the origins of the digital revolution. Hierarchy theorists such as H. A. Simon were pioneering organizational techniques to manage the complexity of building machines (programs, really) that could solve problems (Simon, 1962). More recently, research in the complexity sciences and complex adaptive systems has produced varying degrees of pragmatic tools; some of the most readily available tools originate in network theory. Network theory has produced a deepening appreciation for the role of topology in ecological analysis (Jordan and Scheuring, 2002) and, more specifically, has offered statistics such as power law distributions, error and attack tolerance analysis, and clustering coefficients as the initial tools in mapping the hierarchical topology of networks (Barabasi, 2002 and 2005).

The appendix provides an outline of hierarchy and network theory; what is especially germane here is the organizational aspect of hierarchy theory and the topological perspective of network theory. In the past, network researchers assumed a static and reasonably homogeneous structure for networks. Modern research has shown, however, that networks are often distributed in a very skewed manner and that they have architectures reflecting preferential attachment and 'small world' pathways that greatly increase a network's efficiency and robustness. These insights play an important role in an ecostructural perspective.
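For readers unfamiliar with these statistics, the snippet below computes two of them on a synthetic preferential-attachment graph using the networkx library. The graph is illustrative only; it stands in for whatever ecological network is being mapped.

    # Degree distribution and clustering on a synthetic preferential-
    # attachment network (networkx); the skewed histogram - many
    # small-degree nodes, rare hubs - is the signature of the power-law-
    # like architectures discussed above.
    import networkx as nx
    from collections import Counter

    G = nx.barabasi_albert_graph(n=500, m=2, seed=7)  # grows by pref. attachment

    degrees = [d for _, d in G.degree()]
    hist = Counter(degrees)
    print("max degree:", max(degrees))                 # a few well-connected hubs
    print("median degree:", sorted(degrees)[len(degrees) // 2])
    print("clustering coefficient:", round(nx.average_clustering(G), 3))
    for k in sorted(hist)[:5]:                         # the long low-degree tail
        print(f"degree {k}: {hist[k]} nodes")

The same calculations run on a random (non-preferential) graph would show a narrow, symmetric degree distribution; the contrast between the two is what the 'error and attack tolerance' literature exploits, since skewed networks are robust to random loss but sensitive to the removal of hubs.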
The ecostructural approach begins with a decomposition of the watershed into a hierarchical framework (Fig. 13). The first step in a hierarchical analysis is to determine an appropriate 'focal' level (Wu and David, 2002). The goal when determining a focal level is twofold: first, to simplify the complexity of the system being examined through organizational techniques; and second, to explicitly consider scale multiplicity. The latter is a central objective of landscape ecology and a major challenge in resource management in general. Scale issues are often central to complexity issues and are closely related to hierarchical properties of landscapes: the relationship between pattern and process is scale dependent (Allen and Hoekstra, 1992) and can be thought of as being made up of two general classes of components: synchronic and diachronic (Salthe, 2005). Synchronic complexity is the dimension of complexity that originates as the nested levels of a hierarchy that operate across different scales. Diachronic complexity is essentially developmental - it originates as the structure of change: seral development such as the seres of a forest.

Ecostructure explicitly considers both of these general classes: synchronic, across scales; diachronic, through time. In the case study, complexity arises, for example, from the scale issues around hillslope heterogeneity; it also arises as a result of contingency, as land use decisions impact the development of the ecostructural capital.

Identifying the focal level is a matter of identifying the spatial and temporal scale over which the system in question is active relative to the issue at hand. In the case study we are considering changes in the stability of forested hillslopes that are triggered by extreme precipitation events. The focal level sits between a higher level that represents context, constraint and control of the system (Fig. 13) and a lower level that represents the initiating conditions and mechanisms that create the system dynamics.

These levels are organized into a nested hierarchy; levels tend to be asynchronous through time and spatial scale. Lower levels reflect higher frequencies (shorter temporal scales) and smaller spatial scales; the inverse is true of higher levels. The interaction of these components within a level, and the way in which the various levels of the hierarchy interact, is such that, given a specified focus of level and scale - in this case a large-scale watershed (landscape level) - the dynamics of the modules and levels above and below can be given as constants in our analysis. The levels below happen at a fast enough rate, relative to the focal level, to be taken as variables that reflect the average of the higher frequencies, and the levels above occur at a slow enough rate to likewise be taken as averages, in this case quasi-static constants.

[Footnote: This 'focal' approach can be seen in Wu and David, 2002. The idea of relative rates of change and quasi-variables is a hybrid of concepts from Simon's (1962; 1973) near-decomposability and, to a lesser extent, Wu and Sidle (1995).]

[Figure: two schematic representations of a nested hierarchy. Level +1: control, containment, boundary conditions. Level 0: the focal level. Level -1: components, mechanisms, initiating conditions.]

Figure 13 - The 'focal' level in a nested hierarchy. (Schematic adapted from Wu and David, 2002)

Upper levels generally act as constraints to the focal level, whereas lower levels reflect the mechanisms that give rise to the structure and function of the focal level. In hierarchy theory this is known as vertical near-decomposability, and it allows for the detail at upper and lower levels to be averaged or taken as quasi-static. This is a crucial step in managing a complex system into an intelligible model that produces pragmatic results. Articulating a focal level and rationalizing upper and lower levels is crucial and somewhat subjective.

The spatial scale of the hierarchical model is taken as the (approximately) 100 km² watershed that Fitzsimmons Creek drains. The temporal scale is approximately 100+ years, to reflect both forestry dynamics and a significant return period for precipitation/climate data. Our focus is on understanding the trigger mechanism for initiating slides. Resource managers can't influence the precipitation regime, but we can influence the hillslope hydrology to some extent through land use decisions. The upper level will be primarily a geological level. This is the area that an engineering approach focuses on. The nature of the regolith and bedrock structure are, for all practical purposes, static.
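The three-level template can also be written down directly as a configuration. The sketch below is my own rendering of the decomposition in Figure 13 (the rates are hypothetical); the point it encodes is that the rate separation between levels is what justifies treating the upper level as constants and the lower level as averages.

    # The three-level hierarchy as a data structure (my rendering of Fig. 13;
    # timescales are hypothetical order-of-magnitude values).
    from dataclasses import dataclass

    @dataclass
    class Level:
        name: str
        role: str
        timescale_years: float
        treatment: str

    hierarchy = [
        Level("upper (+1)", "constraint: geology, regolith, aspect, elevation",
              1e4, "quasi-static constants"),
        Level("focal (0)",  "dynamics of interest: hillslope hydrology, PF network",
              10.0, "modelled explicitly"),
        Level("lower (-1)", "mechanisms: fine roots, mycorrhizae, bioturbation",
              0.01, "averaged into rate parameters"),
    ]

    for lv in hierarchy:
        ratio = lv.timescale_years / 10.0   # rate separation from the focal level
        print(f"{lv.name:10s} ~{lv.timescale_years:g} yr "
              f"({ratio:g}x focal) -> {lv.treatment}")

Because the upper level runs roughly a thousand times slower than the focal level and the lower level roughly a thousand times faster, neither needs to be resolved in detail: this is the near-decomposability argument in tabular form.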
Elevation, aspect and the distribution of the lateral moraines are the boundary conditions in the analysis. These characteristics of the system are beyond management issues and change at rates that are magnitudes slower than the hydrological cycles we are focusing on.

The lower level in the hierarchy considers the processes that occur at rates magnitudes faster than those explicitly affecting the PF hydrology of the hillslopes. We include them in our analysis since it is at this level that the conditions for PF development are laid. The initiating lower-level dynamics are the bioturbations and fine root development of the forest and understory root mats.

[Footnote: Bioturbations are the disturbance of sedimentary deposits by living (biological) organisms.]

These processes are extremely detailed, and one would need to consider soil types, pH balances, tree species and ages, relative health or vitality of trees and shrubs, and many more variables that really would not provide us with data or detail that we could practically use to model or understand the hillslope hydrological character of the valley that is our spatial extent. Temporally, this level is cycling monthly, daily and hourly, as growth cycles, symbionts, and daily temperatures/hydrological properties determine the activity of fine root development. We can utilize the dynamics of this level to inform the analysis in terms of relative contributions. An understanding of the average growth rates and the gross influences of ectomycorrhizal and fine root formations allows us to model the hydrophilic nature of the forest floor and PFs.

The focal level, then, will be the intermediate scale of the hillside: the forested slopes of the Fitzsimmons valley. Typically this would consider a grain size of square kilometres - the grain extent that reflects the discontinuous nature of the regolith and the preferential flow pathways in a watershed, and such important influences as canopy density/tree stand density. The temporal scale of our focal level will be typical of land use planning - decadal.

Traditional engineering analysis of slope stability/failure issues typically does not include the hillslope vegetation, or if it does, it gives it only a cursory examination. The Golder and EBA reports on the Fitz Slump are a case in point. In the academic literature more and more studies and models are incorporating vegetation parameters into their analysis - often referred to as 'root cohesion factors' (Wu and Sidle, 1995; Glade and Crozier, 2006). The vegetation parameter in modern models acknowledges a 'root cohesion' value or coefficient, whereas the ecostructural approach goes much farther and, in contrast, builds a model (in this case), or at the very least a resource management technique, that utilizes data at a resolution that is practical and, most importantly, dynamic with respect to time.

The question was posed above: what would a traditional engineering approach have concluded in a study done one year prior to the significant slide in 1991? Would an engineering approach have concluded that the likelihood of failure was low, as it did after the fact? Would the slide occurring have changed the conclusions of engineers after the fact? It would not in the case of an ecostructural approach. An ecostructural approach would have raised a flag concerning the effects on the hillslope hydrology that were initiated by the history of timber extraction and the development of the ski hills above.
A report based on an ecostructural technique would have concluded that much of the lower valley, with its thick lateral moraines and compromised PF ecostructure, was at risk of failure or, at the very least, of active development in terms of erosion rates.

The hierarchy schematic from figure 13 will be used to organize the analysis. Our focal level, as has been discussed, will be the active components of the hydrological dynamics that are well recognized as the driver of slope instability: precipitation events and their influence on soil pore pressure. The determinants of pore pressure are manifold - a very good example of what Weaver (1948) referred to as organized complexity (see Appendix). At the focal level the choice is to focus on a temporal and spatial scale that is relevant to a resource manager/decision maker concerned with adjudicating competing land-use options. That is, the temporal/spatial scale is one in which resource management issues can be instrumental: generally decadal, to reflect land use issues around forestry and the management of a ski resort, and over hectares and tens of hectares for similar reasons.

The ecostructural approach encourages the analysis to keep the hierarchy to three levels: the upper level, as a quasi-static top-down constraint on the focal level; the lower level, as the bottom-up processes that ultimately build the biotic network topology of the ecosystem; and the focal level itself. The focal level is the level of interest. As the level of interest changes, so too would the upper and lower levels, as each is relative. Once again, the value of this approach is to simplify the analysis and thereby recognize the relative detail that one needs at the upper and lower levels, as dictated by the degree of near-decomposability of the system at the focal level.

4.0.1 The Upper Level

Much of the data used in an engineering approach would inform the upper levels of the analytical hierarchy. These are largely geological processes and geotechnical data whose characteristics are static (geotechnical properties, aspect, elevation, gradient, etc.) or happen on much slower time scales and over larger spatial scales than would normally be the purview of resource managers. These factors are termed 'quasi-static' in models incorporating a root cohesion coefficient, and in this respect the engineering perspective is very valuable in terms of recognizing their contribution to landslide susceptibility (Wu and Sidle, 1995).

The engineering reports discussed have established the inherent instability of the thick glacial tills that line the lower elevations of the lower half of the Fitzsimmons valley. These thick tills have been the site of evergreen forest ecosystems since the retreat of the last glaciation. The area in question was the site of extensive clear-cut forestry harvesting during the period 1959 through 1963. The lower elevations are well covered by second-growth forest, while old-growth covers the upper slopes. The extent of the harvested area and its proximity to the area in question is evident in figure 7, while the thickness of the underlying till is evident in figure 8. The quasi-static soil pore properties of the thick till base, along with the range of other static variables (slope, aspect, etc.), are the purview of the engineering approach; but left on its own this approach is an inherently rear-view-mirror perspective.
Resource managers and/or decision makers cannot do anything about these factors, and the analysis does little to inform decision-makers as to how land-use decisions will affect the geotechnical conditions over time.

4.0.2 The Focal Level

The trend, noted above, towards 'root-cohesion coefficients' showing up in technical analyses of slope stability lends credence to the ecostructural approach of focusing on the dynamics that initiate slope failures. Viewing root cohesion as having a significant role as a stabilizing structure leads quite naturally to an ecostructural approach, which recognizes this area's influence on the hydrological dynamics of hillslope hydrology and therefore follows up by surveying the research on the mechanisms that affect these dynamics.

As discussed earlier, conifer canopies in the PNW can affect a hydrological mass balance study in a significant and quantitatively meaningful way. Incoming precipitation as throughfall can be reduced by an average of 25%. The throughfall is also diffused in terms of velocity and eventually encounters a forest floor that is porous and, as a result, hydrophilic. The organic horizon contains voluminous hyphae that encourage the path of the incoming precipitation to be vertical, initially, as opposed to a strictly gravity-induced horizontal path. The hydrological path is then strongly encouraged down into the organic horizon, where at the soil/rhizosphere region the pathway takes a preferential route (via bioturbations, roots, etc.) that has developed in response to a circuitous 'precipitation/abiotic environment/forest development' feedback network. We have seen that the temporal scale is important in this regard because the PF network, as a product of the forest root ecosystem, is sensitive to contingencies that persist for very long periods.

The model of development being utilized here is reminiscent of a type of neural network development known as 'Hebbian learning'. Hebb was a psychologist working at McGill during the 1940s and 50s. His interests were focused on how people learn and how that is expressed synaptically.

[Footnote: His theory became known as Hebbian theory, and the models which follow this theory are said to exhibit Hebb's rule. This method of learning is often expressed as: "When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." This is more often paraphrased as "Neurons that fire together wire together." Hebbian learning, then, refers to the interactive and plastic nature of neural development.]

His work influenced much of the theory involved in mathematical neural network modelling, and the term Hebbian learning is common in this area. In neural net modelling, pathways are weighted to exhibit preferred paths that strengthen as they are used.

[Footnote: It is an interesting digression here to note that in neural net research the 'black box' nature of neural nets is considered a weakness. The discerning of strong versus weak pathways in a neural network is held up as a viable solution; mechanistic relationships may be gleaned from such insights into modelled data sets.]
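The textbook form of Hebb's rule is a one-line weight update; it is shown here only to make the 'paths strengthen with use' analogy concrete (the learning rate and activity values are arbitrary).

    # Minimal Hebbian update (standard textbook form): a pathway's weight
    # grows in proportion to correlated activity at its two endpoints -
    # "paths that are used strengthen."
    def hebbian_step(w, pre, post, lr=0.1):
        """w: pathway weight; pre/post: activity at either end of the path."""
        return w + lr * pre * post

    w = 0.1
    for _ in range(10):      # repeated co-activation, e.g. over a wet season
        w = hebbian_step(w, pre=1.0, post=0.8)
    print(round(w, 2))       # 0.9: the repeatedly used pathway has thickened

In the ecological reading suggested below, 'pre' and 'post' correspond to recurring water delivery at either end of a subsurface route, and the weight to the route's developing conductance.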
They are poorly understood and difficult to quantify as they occur as highly pulsed (power law distributed), hydrologically driven, seasonally sensitive dynamics (Wipfli et al., 2006).  •fr  pat«b. developments  Figure 14. Schematic showing network development. Dots represent the system components and each arrow symbolizes a process by which one component influences another, (a) Early ecosystem (inchoate stage) development typically will have many potential trophic relationships that grow as the system builds biomass and diversifies; maturation of the system will lead to feedback mechanisms 'pruning' trophic structure that is less efficient in the given context, (b) The thicker arrows represent increases in through-put at the expense of the alternate route (Schematic adapted from Ulanowicz, 1997).  This Hebbian/Ulanowicz model of development may be supported by the decadal persistence of PFs (stable PFs that persist and build over time) and the fact that these pathways consistently show significantly higher levels of soil organic carbon when compared to the surrounding soil matrix (Hagedorn and Bundt, 2002; Bundt et al., 2001a,b) . The build up of organic carbon along PF paths, indicating 48  ecological activity, is analogous to the thick arrows in figure 14 (b), whereas the soil matrix flow would be the thin arrows.  It is an interesting digression here to note that in neural net research the 'black box' nature of neural nets is considered a weakness. The discerning of strong versus weak pathways in a neural network is held up as a viable solution; mechanistic relationships may be gleaned from such insights into modelled data sets. 4 7  4 8  Bundt et al. refer to PF paths as biological 'hotspots' in soils. (Bundt et al., 8001b)  49  The ecostructural approach models the forest ecosystem as a network based on a nested hierarchy of interacting structures and processes. Figure 14 alludes to the development process of the PF network. These dynamics may be modelled as a PF module in the focal level of the ecostructure hierarchy. Along with the modules of precipitation and first order stream hydrology, forest floor and litter structure, zero-order basin dynamics, etc., these modules reinforce each other through time and space and create emerging levels of infrastructure. Figure 14, in this context, represents the interplay of precipitation and PF networks as they develop and strengthen in the regolith. The hydrological PF pathways and the root cohesion dynamics are intimately tied to the precipitation regime (Bowling et. al., 2002) via Hebbian learning (Fig. 14). By this I am referring to the developmental linkage between seasonal precipitation regimes and the development of root systems to intercept and exploit this resource. Root networks develop, in part, to capture soil moisture and these networks form and strengthen along repetitious routes of climatically and topographically stable mcoming precipitation (including stored precipitation in the form of snow/glacial melt). The distribution of which is controlled top-down by the constraints of the abiotic environment (elevation, slope, substrate type, etc.) and the bottom-up development of positive feedback from the gradual building of system-reinforcing hydrophilic conditions such as forest floor litter that acts as both a physical barrier to erosion and as a source of rhizospere organic nutrients.  
4.0.3 The Lower Level

Soil respiration rates, as discussed, are largely attributed to fine root development, which is placed in the lower level of the hierarchy model. The frequency of interaction at the fine root level is too high to be considered in detail; that is to say, a detailed rendering would not further the goals of the modelling exercise. But the effect of this component of the ecosystem is relevant at the spatial scale we are interested in at the focal level. Fine root development is an initiating process in the development of root PF pathways. Over the temporal/spatial scale under consideration these fine root dynamics eventually determine the larger root distribution that Sidle et al. (2000) modelled in the hydrogeomorphic paradigm.

This is also the scale of plot-level research on instream structural dynamics. Large precipitation events will create hydraulic forces that entrain sediments and possibly reset the instream structure of the low-order reticulate network of streams. Localized adjustments will be made to the structure of the headwater streams, changing the local habitat regimes and altering flow dynamics at the small scale; but the overall effect will average out to maintain a stable hydraulic regime that does not threaten hillslope stability in a major way.

This lower level is of interest to resource managers insofar as these are the initiating conditions for a vibrant rhizosphere, and thus a productive PF network. Land use decisions that complement the dynamics at this level will contribute to the success and resilience of the focal level.

4.1 Ecostructural Analysis - Hydraulic Flow Dynamics

An ecostructural approach to the Fitz Slump would begin with a geotechnical report, just as would be the case in a traditional engineering scenario. But rather than that being the extent of the analysis (even with a root cohesion factor), an ecostructural approach would then exploit the point of consensus that exists among slope stability specialists, forestry ecologists, hillslope hydrologists, etc.: debris flow initiation dynamics are strongly determined by hillslope hydrology behaviour (Glade and Crozier, 2006). So what determines this hydraulic flow behaviour?

Beginning at the origin of the path of hydraulic flow in our setting - the forest canopy - precipitation in any form undergoes an important spatial redistribution that occurs by throughfall (Keim et al., 2005; Savenije, 2004).

[Footnote: Throughfall is the precipitation that reaches the ground, as opposed to total precipitation; some portion of the total will never reach the ground but instead will evaporate from the canopy - this is interception.]

A forest canopy of the type typical in the Pacific Northwest produces a surface area far in excess of its areal dimension (Waring and Franklin, 1979).

[Footnote: The Pacific Northwest is unique for its forest biomass. This includes a canopy area that has, on average, a leaf area index of 10:1 (and often more) to its m² of ground surface (see Waring, 1979). This logarithmic relationship takes time to develop - 50 years is typical - and is a further illustration of the importance of the temporal scale in ecostructural analysis.]

The appropriate dimension here is fractal; the increased surface area catches rain/snow and contributes to evapotranspiration processes, reducing the amount of precipitation reaching the ground by some 25% (Spittlehouse, 1996).

[Footnote: Evapotranspiration is an inexact term used commonly in hydrology to refer to a group of processes. Evapotranspiration is the sum of evaporation at the canopy (interception), transpiration (a physiological process), and surface and/or open water evaporation. The relative contributions of each change with ambient conditions; the sum of these is evapotranspiration.]

It has been well recognized since the mid-20th century that forest canopies play an important role in modifying the intensity and erosivity of precipitation (Keim et al., 2004a; Keim and Skaugset, 2003). The precipitation that enters the forest beyond the canopy, the throughfall, has undergone a reduction in velocity, and this portion of the rain event encounters a forest floor that is a mat of forest litter - rarely is it exposed soil.
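A back-of-envelope throughfall estimate using the ~25% interception figure cited above; the storm depth is hypothetical, and a fuller treatment would add a canopy storage term, since (as a later footnote notes) a canopy saturates during heavy rain and the marginal loss then shrinks.

    # Back-of-envelope throughfall sketch using the numbers cited in the
    # text; the storm depth is hypothetical.
    INTERCEPTION = 0.25      # average fraction lost to the canopy
                             # (Spittlehouse, 1996)

    def throughfall_mm(gross_precip_mm):
        """Precipitation reaching the forest floor after canopy losses."""
        return gross_precip_mm * (1.0 - INTERCEPTION)

    storm = 120.0                                # mm, hypothetical fall storm
    print(f"{storm:.0f} mm gross -> {throughfall_mm(storm):.0f} mm throughfall")
    # In large storms the canopy saturates, so a storage-capacity term
    # (rather than a flat fraction) would be the next refinement.

Thirty millimetres withheld from a single storm is a non-trivial term in a hillslope water balance, which is why the mass-balance role of the canopy appears at the top of the ecostructural accounting.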
This is the first step in an ecostructural recognition of the role of the forest canopy in reducing "peak rainfall intensities sufficiently to reduce the probability [my bold] of landslides initiating by infiltrating rainfall" (Ibid.).

[Footnote: Unfortunately, and surprisingly, I can find no research on the direct effects of canopy structure on precipitation velocity. In discussions with Sidle he noted that a canopy quickly becomes saturated during heavy rain, thus all subsequent rain would reach the ground - I am only arguing in terms of canopy effects on the velocity of the throughfall.]

Researchers have studied throughfall looking for PFs, in terms of patterns of throughfall being dictated by canopy architecture (Keim et al., 2005). While the canopy architecture undoubtedly influences throughfall, predictable patterns are very difficult to discern due to the canopy-level effects of rainfall intensity, wind, and the species and morphology of a forest canopy. It may be easier to link the persistence of throughfall to the decadal-scale temporal stability of preferential flow paths (Hagedorn and Bundt, 2002).

The understory that is less protected by the canopy will likely be a site for brush and smaller perennial forms of forest succession species; but open areas quickly 'green up', and the ecostructural properties will still largely remain intact as long as these openings are not so large as to threaten the PF network's viability.
Although the ectomycorrhizal role in terms of hydrology is unclear, its role in terms of a network feedback mechanism is recognized from the perspective of encouraging the domination of conifers in the Pacific Northwest (Griffiths et al., 1991). What is clear is that the forest floor, at the organic horizon, is a fractal surface with a high fractal dimensionality (it is fissured); this surface and its porous nature demand the vertical sorption of the incoming precipitation.

Ecosystem respiration is the product of metabolism. Soil respiration is the largest component of forest ecosystem respiration. Root, fine root, and associated mycorrhizal respiration produce roughly half of soil respiration, with much of the remainder derived from decomposition of recently produced root and leaf litter (Ryan and Law, 2005). These indicators of root and mycorrhizal activity point to a rhizosphere that is significantly biologically active, fixing carbon below ground, the result being root development and respiration (Campbell et al., 2004). Research tracing the soil organic carbon (SOC) in preferential flow paths and the forest soil matrix has likewise pointed to the significant role PF paths play in determining concentrations of organic carbon and nitrogen available to plants for uptake in relatively short temporal cycles (Bundt et al., 2001). The extent and importance of the fine root/mycorrhizal network is supported by the fact that in conifer-dominated forests such as in the PNW the average Leaf Area Index (LAI) to ground area (m²) is approximately 10:1, and has often been found to be much greater (Waring and Franklin, 1979). Such large LAIs must have some physical relationship to root mass in order to support respiration. Research has shown that, using soil respiration as a proxy for root metabolism, there is indeed a relationship, but it is not straightforward. The slow growth of conifer needles means that seasonal and annual soil respiration fluxes are difficult to link to the considerable activity at the fine root/mycorrhizal level. The relationship has been found to work well at the large scale, where the proportion of LAI and root mass can be better compared (Campbell et al., 2004). A large LAI may imply a likewise large and productive fine root/mycorrhizal rhizosphere. For this reason LAI measures may be a reasonable proxy for tree root distribution in an index of ecostructure (more on this in the discussion).

To summarize: precipitation throughfall is significantly reduced and diffused at the canopy level due to interception and evaporation. The forest floor litter forms a highly fractal matrix that is very porous and permeable to the throughfall, which is readily sorbed into the organic soil layer. The sorption mechanism begins the vertical hydrologic pathway, minimizing the horizontal surficial alternative that would induce erosion. Indeed, it is relatively rare to see surface flow during rainstorms in a forested setting; when it does occur it usually indicates a compacted soil or some comparable situation (e.g. bedrock). By and large, the hydrologic path during a precipitation event is initially vertical: down into the rhizosphere, where the flow is then channeled into the PF pathways that have developed along the successional trajectory of the forest.

[53] During the drought conditions of summer, naturally occurring waxes and oils in forest soils may produce a hydrophobic effect, further lengthening the time needed during dry periods for PF antecedent wetting conditions to build to levels where PF networks link across the hillslope.
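The vertical partitioning just summarized can be made concrete with a back-of-the-envelope calculation. The sketch below (Python) is purely illustrative: the 25% interception loss follows Spittlehouse (1996), while the infiltration capacity of the organic horizon is an assumed placeholder chosen to exceed typical throughfall intensities, reflecting how rarely overland flow is observed on an intact forest floor.

# Illustrative partitioning of a rain event at the canopy and forest floor.
# Parameter values are assumptions, not site measurements.

def partition_rainfall(gross_intensity_mm_h,
                       interception_frac=0.25,           # ~25% canopy loss (Spittlehouse, 1996)
                       infiltration_capacity_mm_h=50.0):  # assumed organic-horizon capacity
    """Split gross rainfall intensity into interception, throughfall,
    vertical infiltration, and any residual overland flow (all mm/h)."""
    intercepted = gross_intensity_mm_h * interception_frac
    throughfall = gross_intensity_mm_h - intercepted
    infiltrated = min(throughfall, infiltration_capacity_mm_h)
    overland = throughfall - infiltrated   # ~0 unless the floor is compacted or shallow
    return intercepted, throughfall, infiltrated, overland

for intensity in (5.0, 20.0, 80.0):
    i, t, f, o = partition_rainfall(intensity)
    print(f"{intensity:5.1f} mm/h gross -> throughfall {t:5.1f}, "
          f"infiltrated {f:5.1f}, overland {o:5.1f}")

Only the most extreme intensity in the example exceeds the assumed floor capacity and produces any overland flow, which is the qualitative behaviour described above.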
This detail is meant to set the stage for the coarser grained model to follow; in a hierarchical setting much of this detail would correspond to the level below our focal level (processes and component interactions happening at a significantly faster rate than the focal scale). This lower level is the level of organization at which much of the PF mechanisms are initially developed and maintained. An ecostructural model of a watershed hydrology like the one we are considering would proceed thus:

Instream Structures:

• The steep slopes initially drain into developing gullies that we recognize as 1st order streams
• These streams go through a process of self-organization of in-stream structures that, through time, create semi-permanent, energy-diffusing, turbulence-inducing instream structures
• These in-stream structures create a first-order stream network that efficiently drains headwaters, minimizing erosion of the stream banks, ergo allowing a riparian zone to develop

Forest Root Reinforcement and PFs:

• The stable riparian area encourages the development of the conifer and hardwood species that further stabilize the riparian area
• The forest develops upslope, creating an ecological positive feedback-style development that encourages the spread of hyphal mats, ergo tree seedlings, ergo the PFs that co-evolve with the development and morphology of the forest. This development also creates a forest litter organic soil horizon that is highly effective in terms of reducing overland flow

This model of hillslope/watershed stability is sensitive to the temporal scale. Consider a forest fire: the level of erosion and the effect of this on hillslope stability will change through time as the forest undergoes successional development. PFs are thought to be stable for decades (Hagedorn and Bundt, 2002); this, combined with the time it takes to grow a mature, stabilizing rhizosphere, would lead one to conclude that preferential flow pathways are sensitive to land use planning that would affect the volume and density distribution of the PFs. The chronosequence effects of disturbance in Pacific Northwest forest ecosystems are still very much under scrutiny (Bond and Franklin, 2002), but decades of research have clarified that fire regimes, harvesting techniques, and disease/pests can have strong effects on the physical structure of a forest. For this reason land/natural resource use decisions should be adjudicated through a recognition of ecostructural values.

[54] The sequential (seral) set of changes in structure and composition of plant communities.

This is similar to the debate on the role of eco-services and their value (Costanza et al., 1997; Foley et al., 2005), but the concept of ecostructure differs considerably from this important area. Ecostructure is focused on determining and delineating the components that create the ecosystem services; in this sense ecostructure is closer to the natural sciences. The ecosystem services literature is often economic in nature and more along the lines of a prescriptive dialogue, whereas we are concerned with discerning, first, how the ecosystem service is supported by the ecological infrastructure. In this sense we are firstly concerned with a descriptive process.
From this knowledge we can offer resource managers prescriptive measures based on ecostructural analysis.

A temporal consideration that we touched on earlier when introducing Sidle et al.'s (2000) 'hydrogeomorphic model' of PFs (see figure 6) is the lagged response of the PF network in terms of antecedent moisture conditions. The model illustrates a spatially distributed, temporally lagged response to antecedent moisture, such that a region of hillslope may be 'disconnected' from the larger PF network until the soil moisture conditions build up from dry to wet.

These factors in and of themselves are not new insights. It is the combination of insights, together with the recognition of the role of topology (the architecture of the system or network) in making land-use decisions, that makes the ecostructural perspective unique and valuable. We can now build an ecostructural model of the case study area that provides insights to managers that are not available in the engineering approach.

4.2 Ecostructural Analysis - Hierarchical/Network Model

In urban settings, past and present development along waterways has often required significant expenditures to dredge, to recreate fish habitat, and to install green belts. This is done to protect waterways from becoming contaminated with non-point source pollutants, to replace lost natural flushing regimes, and to re-establish fish habitat when lost. A body of literature exists that deals with rectifying or preventing such problems; some of the remedial techniques involved are called 'best management practices' and they often involve utilizing engineered ecological infrastructures. These efforts are a reaction to the need for addressing 'context' when designing projects that will impact ecostructural values.

[55] For a broad treatment of Best Management Practices in this area see Schreier et al., 1997.

An ecostructural cost/benefit analysis would consider the long-term costs of a project in terms of replacing naturally occurring eco-services that would be lost during development. In the Fitz Slump situation, valuable ecological infrastructure that creates hillslope stability and moderates the Fitzsimmons Slump hydrology was reduced as a result of clear-cut harvesting. Given that this occurred long before the recreational value of Whistler could have been recognized (1950s), it is unrealistic to compare the decision then to harvest with the recognized values now that Whistler is a major development. The case study serves better as an indication of how to contrast a technique that is increasingly out of touch with modern needs with one that attempts to utilize scientific knowledge across disciplines.

Table 1 - Conventional Engineering vs. Ecostructural Approach: Fitz Slump

Engineering | Ecostructural
Highly Quantitative | Quantitative/Qualitative
Scaling Issues are Implicit | Scaling Issues are Explicit and Central
Almost Exclusively Geotechnical in Nature | Explicitly Interdisciplinary - Geotechnical, Forest Ecology, Forest Hydrology, Resource Management, etc.
Treats Hydrologic Flow as Planar/Homogeneous | Treats Hydrologic Flow as Conditioned/Heterogeneous
Tends to Rely Heavily on Extrapolated Data | Tends to Rely on 'Adaptive Management' Methodology
Recommendations and Solutions are Engineered/Replace Context | Recommendations and Solutions Try to Maintain Context - Cost Saving/Efficient
Topographical Emphasis | Topological Emphasis
Reductionist: Modelling Pore Pressure Dynamics | Hierarchical/Systems: Modelling Processes Through Time
To this point I have contrasted the engineering approach to the Fitzsimmons Slump with the ecostructural approach. Table 1 compares the major characteristics of the two approaches. The use of the term 'context' (6th row of the table) refers to an important idea relevant to, but missing in, much of the resource management literature. When we consider context we are considering the production side of the system as well as the consumption side of the system (Allen et al., 2003 and 1999). Managing for context means considering the services that are lost when an ecosystem's ecostructure is diminished or destroyed through land use decisions. If those services have to be replaced artificially we need to consider the effort and cost of engineering such services, including maintenance costs that will accrue well into the future. An example would be the significant effort and monies that are spent on fisheries programs aimed at rectifying services that are lost when a river is dammed or riparian habitat is lost to development. The Golder (1993) and EBA (2005) reports on the Fitz Slump have both recommended that the flood zone of the Fitzsimmons Creek be modified, requiring a permanent dredging, diking, and monitoring solution. Managing for context requires that the design balance these future costs with the benefits.

An ecostructural perspective allows for an explicit consideration of 'context' so that decision-makers can adjudicate costs of land-use decisions more efficiently. In the case of the Fitz Slump the issue is how to balance the needs of a growing and space-constrained village with the risk of flooding. An ecostructural perspective would adjudicate the 'costs' of development that would further reduce the ecostructural integrity of the valley hillslopes, putting additional pressure on the government to engineer solutions to the increasing risk of slope failure. For example, the development of subdivisions for suburban growth creates impervious surfaces that require extra capacity on the part of the stormwater sewage system. The costs can be prohibitive when a system is already at capacity, and many districts are utilizing 'best management practices' to create storm water reservoirs that are designed to act as urban wetlands. Likewise, the Resort Municipality of Whistler could develop the floodplain into an area that would act as a green space and wildlife corridor. This does not solve the parking issue, but it is hard to imagine that as a problem this would be classified as an intractable one.

4.3 An Ecostructural Narrative

The fine root/ectomycorrhizal activity in the rhizosphere, coupled with other fast cycle dynamics, such as the diurnal cycle's effect on evapotranspiration, forms the bottom-up mechanisms that serve as the components of the initiating conditions in the PF system. These components and their dynamics operate within the top-down constraints of the climate and the geotechnical conditions described in the engineering approach (the upper level).
In between, at the focal level of resource managers, is the interaction between precipitation events and the mechanisms that potentially determine the effect that these events have on soil pore pressure. The analysis of this complex system, a steep, forested watershed, and the complicated problem within it (the Fitz Slump), can be pragmatically attacked at the focus of land use and its effect on the forest ecostructure.

[56] See appendix for the contrast between complexity and complicatedness re: Weaver 1948.

From an ecostructural perspective the forest provides quantifiable levels of services: a significant reduction of incoming precipitation through evaporation, and a reduction of the velocity and a diffusion of the throughfall, both at the level of the canopy and through the evaporation/transpiration that a forest cover continually operates. Transpiration levels in the winter season are not generally thought to be especially significant, but this may not apply in the Pacific Northwest, where the coniferous forests are actively metabolizing (transpiring) much of the year, especially during winter (Waring and Franklin, 1979). Forest litter, combined with the mycorrhizal and fine root mats, produces a distinctly vertical flow for throughfall hydrology - this has a tangible value in terms of reducing surface erosion and as a store of system resources in the form of carbon. Finally, roots and bioturbations, acting as preferential flow pathways, provide long-term networks that have the effect of diffusing the soil pore pressure potential effectively. This is a significant source of hillslope stability during storm events.

The hydrogeomorphic modelling of PFs has an important and valuable insight that is missed in other works considering hillslope hydrology (even those with root cohesion factors). Sidle et al.'s (2000) model has focused on the temporal dimension of the PFs; specifically, they consider the role of time in antecedent moisture build-up and how that affects the synchronization of PFs across a hillslope. Recall that figure 12 is a depiction of a hydrogeomorphic dynamic that increases non-linearly with the increased antecedent wetness of the regolith. This model implicitly recognizes the explicitly heterogeneous nature of the spatial element in hillslope hydrology. Heterogeneity may arise, for example, as a result of the various ages of the stand population, stand density, species type, the character of the regolith, or historical contingencies such as fires, previous landslides, etc.

The temporal dimension of Sidle's model can be thought of as a hillslope made up of locally networked PFs that have breaks between them across the larger hillslope. As antecedent wetness builds through the wet season, the 'islands' of networked PFs link up via the influence of zero order basins and the rarer PF connections that inevitably develop over time. This synchronization of PFs across the hillslope, through time, is integral to the long-term stability and efficiency of the ecosystem.

[57] This is a very valuable insight in terms of 'scaling up' from plot level research to spatial scales that are more relevant to resource management issues. The synchronization of PF pathways over time through the build-up of antecedent wetness is a pivotal emergent property of the ecostructure that has developed.

[58] Zero order refers to the non-perennial nature of these hillslope concavities' drainage. They tend to hold antecedent wetness until the volume of water breaches the concavity, releasing stored water during very wet conditions only.

[59] Tree root distributions tend to be distributed as power laws: most roots are short and fine, with a few being very long and of greater tensile strength. (Inferred from Sakals and Sidle, 2004; see figure 2 in that paper.)
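The non-linear 'linking up' just described can be illustrated with a toy percolation model. The sketch below is emphatically not Sidle et al.'s (2000) model; it simply treats the hillslope as a grid of cells that become hydrologically active once an antecedent moisture index exceeds each cell's randomly assigned threshold, and reports what fraction of the active cells belong to the single largest connected cluster. Grid size and the uniform threshold distribution are arbitrary assumptions.

# Toy percolation sketch of PF "islands" linking as antecedent wetness builds.
import random
from collections import deque

def largest_cluster_fraction(active):
    """Fraction of active cells in the largest 4-connected cluster."""
    n = len(active)
    seen = [[False] * n for _ in range(n)]
    best = 0
    for i in range(n):
        for j in range(n):
            if active[i][j] and not seen[i][j]:
                size, queue = 0, deque([(i, j)])
                seen[i][j] = True
                while queue:
                    x, y = queue.popleft()
                    size += 1
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = x + dx, y + dy
                        if 0 <= u < n and 0 <= v < n and active[u][v] and not seen[u][v]:
                            seen[u][v] = True
                            queue.append((u, v))
                best = max(best, size)
    total = sum(map(sum, active))
    return best / total if total else 0.0

random.seed(1)
n = 50
threshold = [[random.random() for _ in range(n)] for _ in range(n)]  # per-cell wetting threshold
for wetness in (0.3, 0.5, 0.6, 0.7, 0.9):      # antecedent moisture index, 0..1
    active = [[threshold[i][j] < wetness for j in range(n)] for i in range(n)]
    print(f"wetness {wetness:.1f}: largest linked fraction "
          f"{largest_cluster_fraction(active):.2f}")

The abrupt jump in connectivity between moisture indices of roughly 0.5 and 0.7 is the kind of threshold behaviour the hydrogeomorphic model attributes to antecedent wetness build-up.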
The hydrologic flow paths in these steep, forested watersheds ultimately end up draining into first-order perennial streams. These streams play an important role in hillslope stability through the instream structures detailed above. In the event of a seasonal storm, the PF network initiates and grows as seasonal soil moisture content increases and wets (links) the PF islands into a linked and networked hydrological infrastructure. The flow ultimately swells the stream network, but in the vast majority of cases it is contained and diffused in terms of the hydrological energy potential.

In the theoretical model developed here, timing is important in terms of the system's ability to moderate and manage fluxes of precipitation. From an ecostructural perspective, the trigger of the Fitz Slump has to do with the integrity of the ecostructure and the time of year it occurred more than with the geophysical factors. The geophysical factors are important, but there is little a resource manager can do in terms of effective management here. As discussed, the Fitz Slump is just one of a family of slumps occurring in the immediate area (Fig. 2). This area is all within the thick glacial till/clear-cut zone of the lower valley. The thickness of the heterogeneous till deposits creates an inherently unstable area, one that is involved in a long-term evolution as the river undercuts and erodes the thick tills. In this scenario the value of the ecostructure increases, since it is the only manageable leverage point in the matrix of complexity that is the Fitz Slump.

It is noteworthy that this case study was chosen only as an example of a steep, forested watershed type. Only once I had begun the analysis did it become evident that this case study was a perfect fit as a hypothesis test case. If the H0 was "no difference in effect in terms of the PF model", the Fitz Slump would likely reject the hypothesis. Sidle et al.'s model of the PF mechanism stresses the role of antecedent moisture build-up as a prerequisite for the integration of the PFs across the hillslope. The storm of August 1991 was not unusually large relative to the storms typical of the fall/winter season, but it was very large relative to typical summer precipitation events (Figs. 15 - 17).

[60] It is important to note here that Sidle et al.'s work on PFs has focused on shallow forest soils, which are heavily affected by biotic factors. The influence of PFs here is greatly reduced because of the thick nature of the tills. I am arguing that the PFs are still an important interface between incoming precipitation and pore pressure build-up.
Figure 15. Precipitation data for the rain storm believed to have triggered the Fitz Slump in August 1991 (total precipitation during the event: 157.5 mm). The slope displacement data is inset for a chronological perspective. (Schematic adapted from Golder, 1993.)

In other words, the system has the capacity to handle rain events of this magnitude - it does so during the winter storm seasons. But the 1991 storm events that triggered the Slump occurred during very dry conditions. One of the storm days saw considerably more rainfall (64 mm) than was typical of the month.

[61] See footnote #41 for hydrophobic effects of desiccation.

Figure 16. Precipitation data for August (Whistler/Alta Lake, 1950-1992) across the period for which data is available. The storm of 1991 would seem very anomalous. (Schematic adapted from Golder, 1993.)

Reading the chronology of the events this way would correlate well with the Sidle hydrogeomorphic model, since antecedent moisture levels would have been low, or 'dry', and the hillslope PF network would have been fragmented as a result. The summer-conditioned soil matrix/PF capacity was greatly overwhelmed, and as a result pore pressures increased locally to unsustainable levels, releasing an unstable (at that threshold) section of the hillslope.

The ecostructure provided by the coniferous canopy and the root mat networks is the first line of defense during storm events. The clear-cutting of this line of defense is well recognized in the literature in terms of its costs to hillslope stability (Dhakal and Sidle, 2003; Schmidt et al., 2001; Sakals, 2004). The fragmented areas of PF would normally link up through the wet season via antecedent moisture build-up in the rhizosphere. The discontinuous nature of the PF network would begin to synchronize through the 'small worlds' network of linkages that exist between the subsections of PF nodes. For this reason timing is important. The small-world PF paths need time to become effective at moving subsurface moisture along to the nearest-neighbour node. The fact that the 1991 storms occurred during the dry season lends the ecostructural perspective verisimilitude.

[62] Recall the inferred highly-skewed (fat-tailed) distribution of tree root morphology. See previous footnote (34).

The picture that emerges is of a steep, forested hillslope that is potentially possessed of zero-order hollows that are networked via PF mechanisms; the efficiency of these mechanisms is sensitive to temporal and spatial constraints. The hillslopes are large and heterogeneous in terms of geotechnical characteristics and in terms of ecosystem characteristics. This creates a network of roots and bioturbations that, although long-lasting, are still dependent on time to develop. This would explain the observed tendency of clear-cut harvesting to increase the likelihood of slope failure (Schmidt et al., 2001). The probability distribution of this effect (slope failure) has temporal correlations to clear-cut harvesting techniques that decrease slowly after harvesting and only recover long after the area has re-grown (Sakals and Sidle, 2004).
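To see how root cohesion of this kind enters a slope stability calculation, consider the standard infinite-slope formulation used in distributed models such as Wu and Sidle (1995), where root cohesion is simply added to soil cohesion in the resisting term. The sketch below uses the low ends of the median cohesion ranges quoted in the next paragraph (Schmidt et al., 2001); the slope geometry, soil parameters, and water table depth are hypothetical round numbers, so the factors of safety are illustrative only.

# Infinite-slope factor of safety with a root cohesion term.
# FS = (c_soil + c_root + (gamma*z - gamma_w*h_w) * cos^2(b) * tan(phi))
#      / (gamma*z * sin(b) * cos(b))
# All parameter defaults below are assumed, not measured, values.
import math

def factor_of_safety(slope_deg, z=2.0, c_soil=2.0, c_root=6.8,
                     phi_deg=35.0, gamma=18.0, gamma_w=9.81, h_w=1.0):
    """slope_deg: slope angle; z: failure depth (m); cohesions in kPa;
    phi_deg: friction angle; gamma, gamma_w: unit weights (kN/m^3);
    h_w: water table height above the failure plane (m)."""
    b, phi = math.radians(slope_deg), math.radians(phi_deg)
    normal_eff = (gamma * z - gamma_w * h_w) * math.cos(b) ** 2
    resisting = c_soil + c_root + normal_eff * math.tan(phi)
    driving = gamma * z * math.sin(b) * math.cos(b)
    return resisting / driving

for c_root, label in ((6.8, "industrial forest (low-end median)"),
                      (25.6, "old-growth forest (low-end median)")):
    print(f"{label}: FS = {factor_of_safety(40.0, c_root=c_root):.2f}")

Under these assumed conditions the industrial-forest cohesion leaves the slope marginal (FS near 1), while the old-growth value roughly doubles the factor of safety - the qualitative contrast the quotation below reports.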
Other research, comparing root cohesion factors at different stand types (old-growth, industrial, harvested), has found that industrial forests can take more than a century to achieve the root cohesion of old-growth forests (Schmidt et al., 2001):

"We find that median lateral root cohesion of unharvested old-growth forests (25.6-94.3 kPa) exceeds that of industrial forests (6.8-23.2 kPa) up to 123 years old."

This again corresponds with the model of ecostructural PF development we have been developing here. The natural regeneration of forests, and the maturing of these forests, occurs as a self-organizing process. The regenerating forest prunes tree density and siting through natural selection-like forces; these include tree root systems 'following', and as such influencing through feedback mechanisms, the hydrological regime of the watershed.

4.4 Ecostructures as a GIS

At the beginning of section three the analogy of a complex ecosystem being modelled as a modular hierarchy was introduced. I propose that the levels in the hierarchy can be effectively modelled as overlays in a GIS rendering of an ecosystem. When overlaid, they may be interpreted as a 'map' of an ecostructure that could be used to make management and valuation decisions. In this case, the Fitz Slump, the GIS would consist of a digital elevation model (DEM) layer that contained elevation, aspect, etc. (our semi-static variables); a rudimentary hydrological model; and raster layers that outlined the lateral moraines, including rough estimates of thickness. As well, the GIS would map forest cover or an LAI rendering. A qualitative overlay could map the first and second order streams in terms of their stability. This would require an index of characteristics that pointed to well-functioning in-stream structures.

[63] A stream that showed signs of sediment movement, fine grained particles in excess of typical levels, recently eroded riparian areas, etc.

These first two layers would effectively constitute the engineering data to a large degree. The other layers would model the forestry component, showing harvested areas and stand ages (semi-static variables). A lateral-root density layer could be extrapolated given stand density and ages, species type, soil conditions, and land use. These latter two levels are not unique to the ecostructural approach; there are models that incorporate similar variables effectively to consider the contribution of root development to shear strength in the regolith (Schmidt, 2001; Wu and Sidle, 1995).

Such a scenario might involve the government using a GIS system to map valleys like the Fitzsimmons Valley that have probable value as recreational or commercial settlements. A GIS could reflect the ecostructural perspective through the choice of what mapping is used to create the GIS layers. In the case study instance this might include several geotechnical layers, with overlays of forestry data (age, type, density, harvesting history). Maps such as this could produce zones where human settlement is too risky, or zones where the geotechnical perspective flags caution and therefore an ecostructural perspective would further flag the value of maintaining any ecostructural services that are providing stability and hydrological attenuation.

[64] An ortho-photo overlaid on a DEM containing slope, aspect, and elevation data. A polygon layer detailing the till thicknesses. A hydrological model based on the DEM, etc.
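A minimal sketch of how such overlays might be combined is given below, with NumPy arrays standing in for raster layers. Every layer here is synthetic and every weight and threshold is a hypothetical placeholder; in practice the layers would be read from the DEM, moraine polygons, and forest cover data described above.

# Combining hypothetical GIS raster layers into a composite caution index.
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)                                 # one cell ~ one map pixel
slope_deg = rng.uniform(10, 45, shape)             # from the DEM layer
till_thickness_m = rng.uniform(0, 30, shape)       # from the moraine polygon layer
lai = rng.uniform(2, 12, shape)                    # LAI as a proxy for root mass
years_since_harvest = rng.uniform(0, 120, shape)   # from harvesting-history records

# Each term is scaled to 0..1; higher = more geotechnical caution,
# discounted by ecostructural integrity (LAI, stand recovery time).
hazard = (slope_deg / 45.0) * 0.5 + (till_thickness_m / 30.0) * 0.5
ecostructure = (lai / 12.0) * 0.5 + np.clip(years_since_harvest / 100.0, 0, 1) * 0.5
caution_index = hazard * (1.0 - 0.5 * ecostructure)

flagged = caution_index > 0.6                      # cells to flag for ecostructural review
print(f"{flagged.mean():.1%} of cells flagged for ecostructural review")

Because each layer is a separate array, any one module (say, an updated LAI survey) can be swapped in without touching the others - the near-decomposable quality discussed below.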
There are public spaces (primarily parking lots, but also a public transit loop) in the existing Fitzsimmons Creek flood zones that are protected by dikes and berms but are still at risk should the Slump create a debris dam large enough to breach the dikes. Both the engineering and ecostructural approaches agree that the Fitz Slump area is inherently unstable. The engineering approach is far more comfortable with a quantitative-based risk assessment, as evidenced by the EBA report's risk assessment. The report's risk assessment and recommended remedial measures have largely been embraced by the Resort Municipality of Whistler, and as a result it has chosen to maintain public services (parking lots) in the flood zone.

Where the ecostructural approach would differ is in its inclusion of the non-geotechnical GIS overlays, utilizing the results to produce a heuristic that analyses how these major biotic components affect the hydrology dynamic that is so well recognized as the trigger for slope failure. Ecostructure explicitly considers the temporal dimension in terms of the role of network dynamics through time. A GIS that models the ecostructural dynamics over a period of 50 or 100 years could be constructed based on insights from network theory. Network theory's most practical insight has been to deconstruct how networks grow and decay, creating the beginnings of an architectural perspective towards network development. In the past networks were viewed as either random or regular, but it now seems that networks are more likely to be distributed in a fat-tailed architecture (Milo et al., 2002). It may be possible to apply this knowledge to project how land use decisions concerning forest stands will affect ecostructural integrity - a GIS layer could model and project the forest root density and distribution based on network development insights. From this map one could produce various scenarios that reflected the competing land use options.

[65] See Appendix C for background on Network Theory.

[66] Milo et al. (2002) introduce the concept of 'network motifs' as network "patterns of interconnections occurring in complex networks at numbers that are significantly higher than those in randomized networks. We found such motifs in networks from biochemistry, neurobiology, ecology, and engineering. The motifs shared by ecological food webs were distinct from the motifs shared by the genetic networks of Escherichia coli and Saccharomyces cerevisiae or from those found in the World Wide Web. Similar motifs were found in networks that perform information processing, even though they describe elements as different as biomolecules within a cell and synaptic connections between neurons in Caenorhabditis elegans. Motifs may thus define universal classes of networks. This approach may uncover the basic building blocks of most networks."

It is interesting to note that the use of a GIS is potentially hierarchical, since the GIS levels are modular and each module could conceivably be updated without the need to alter the other components of the relational database - a near-decomposable quality of the system.

5.0 Discussion/Conclusion - Leveraging Knowledge

The ecostructural perspective presented here is an attempt to take a complex system and organize it into a tractable scenario that will give resource managers a pragmatic insight from which to make decisions as to the best course of action given certain parameters. Essentially, this allows the distillation of a problem into a rational choice scenario so that land use decisions may be made based on an understanding of ecosystem resiliency dynamics.
If one ignores this goal then what is left is a potentially large expenditure in terms of financing the engineered remediation that replaces the natural context that we have lost and/or removed, usually unintentionally.

From a systems theory perspective the ecostructural analysis of the Fitz Slump presents resource managers with a leverage point in the system: leverage points are places in a complex system where a small change in one area can produce significant changes in the system at large (Meadows, 1997). In a resource management context the infiltration dynamics of the hillslope hydrology are a perfect leverage point. With the knowledge provided from utilizing an integrated, scale-sensitive analysis such as the above, a resource manager can consider the cost to the ecostructure at each junction in a land use decision process. A removal of the canopy/PF root network will have tangible costs in terms of the mechanisms that determine hillslope hydrology.

Consider the Fitzsimmons Creek valley in the late 1950s. The question of whether to clear-cut the lower valley was likely viewed as being quite straightforward: a clear-cut harvesting technique would maximize timber value and would not knowingly reduce future values. But the long-term effect of a clear-cut harvesting technique is now recognized as having a destabilizing effect on steep hillslopes (Schmidt et al., 2001; Sakals and Sidle, 2004). In terms of the case study area, the effect has been to reduce the future value of the valley as a site for an international resort community. The engineering perspective logically details remediation through the implementation of further engineering: berms to constrain debris flows, monitoring for emergency conditions, and dredging to maintain capacity. This approach contributes little to the knowledge base of decision-makers (Klemes, 2000); there is no insight into what creates and maintains a relatively stable watershed setting. Armed with what we know now, it is not at all clear, or even likely, that a government would choose the short-term economic gains of harvesting over the long-term value of an intact watershed.

The ecostructural approach recognizes the fact that some of the structural integrity of the hillslope has been lost during the initial harvesting of the forest, the building of roads, and the subsequent development of ski runs. That forces the decision making into a narrower, more reactive pattern, but at the same time allows it to utilize a leverage point, the existing structure and future management of the forest, to actively incorporate ecostructural knowledge into land use planning. This would flag any plans that might further reduce the hillslope ecostructure as a result of cutting in roads, ski runs, or any other development that impacts PF mechanisms and hillslope hydrology in general.

5.1 An Ecostructural Index

An ecostructural analysis would need to develop some form of an index to firm up the analysis for decision-makers. An index could weigh the relative values of canopy structure and tree stand density (stems per hectare). A Leaf Area Index (LAI) may be useful, but this would require more research into the relationship between LAI values and underlying forest ecology values such as tree root distributions. It is unlikely that an index could be drawn up from existing ecological knowledge alone; such an index would require an approach like that of adaptive management.
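For illustration only, an index of the kind proposed might combine the variables mentioned above in a weighted sum, as sketched below. The weights and normalization constants are placeholders with no empirical standing - exactly the values an adaptive management program would need to calibrate against field observation.

# A purely illustrative stand-level ecostructural index (0..1).
# Weights and normalizations are hypothetical placeholders.
def ecostructural_index(lai, stems_per_ha, stand_age_yr,
                        w_lai=0.4, w_density=0.3, w_age=0.3):
    lai_term = min(lai / 10.0, 1.0)            # ~10:1 LAI typical of mature PNW stands
    density_term = min(stems_per_ha / 500.0, 1.0)   # assumed reference density
    age_term = min(stand_age_yr / 100.0, 1.0)  # root cohesion recovers over ~a century
    return w_lai * lai_term + w_density * density_term + w_age * age_term

print(ecostructural_index(lai=9.5, stems_per_ha=420, stand_age_yr=80))   # mature stand
print(ecostructural_index(lai=3.0, stems_per_ha=900, stand_age_yr=15))   # young plantation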
Indeed, an ecostructural perspective would be an important component of an adaptive management program since it seeks to address the need to understand ecosystem development and processes.  5.8 Climate Change Planning for climate change in terms of hillslope stability increases the direct value of an ecostructural approach. There are precious few options in a re67  source manager's tool kit for minimizing the potential disruptions of climatic fluctuations. A n ecostructural approach to management seeks to leverage and utilize the landscape level infrastructure that has successfully mitigated environmental challenges to the hillslope ecology over thousands of years. In this case the debate on species diversity would play a role in the management of ecostructural capital.  5.8 Future Research A crucial and valuable insight from the ecostructural perspective is that structure always affects function and vice versa. Resource managers and decision makers (policy makers) need the tools to be able to adjudicate competing land uses with and understanding of how different choices will effect the structure and ftmctioning of a local ecosystem. The structural changes that result from land use decisions will have functional repercussions. Through starting to understand these structural/functional trajectories in ecosystem serai settings we can better inform resource management techniques. Ecostructural analysis, though still largely a qualitative and developing concept , is an important conceptual 68  tool to better map the process dynamics of ecosystem services. The development of an ecostructural analysis could contribute greatly to ecosystem service valuation techniques.  Climate change scenarios likely reduce the value of mainstream risk analysis if climate variability is anticipated to increase - further weakening statistical evaluations. 6 7  Ecology is a discipline that is very comfortable with diffuse concepts. This reflects the complexity of the study material. Core terms such as 'ecosystem', 'watershed', and especially 'sustainability' are just a few examples of terminology that lack concrete definition. 6 8  70  This thesis began by laying out the structure in a hierarchical manner. Instream structures evolve geomorphically over time to create stable, potential-energy diffusing, mechanisms. Forests create soil retaining root networks. PF networks further create hydrological potential-energy diffusing mechanisms that attenuate the impacts of storm events. These PF networks offer differing capacities to diffuse the hydrological potential through time and space. In other words the systems grow and decay, are sensitive to contingencies and temporal characteristics such as antecedent soil moisture. It has been well recognized in the literature, for some time, that clear-cut harvesting increases the likelihood of hillslope failure but a detailed understanding of the mechanism has been unavailable. An ecostructural perspective begins to detail the process-driven mechanisms that answer this question. Further field work needs to be done to map the hydrological implications of this thesis. It may be a leap of theory to take the Sidle PF network model, detailed above, and explain the dynamics of antecedent wetness as wetting, expanding, and thus creating a 'small-world' network mechanism. But if it is correct it would impact decision making and increase the ability of land-use decisions to become more in line with the objectives of the concept of long-term sustainability. 
On a purely theoretical level the detail of the ecostructural model may be extended by using the hierarchical network model introduced in the network theory section of the appendix (recreated here as Fig. 18(C); also see Fig. 6 in the appendix). This model provides a hierarchical mapping of the principal components, the system modules, and links them, within each layer (horizontally) and layer upon layer (vertically), while retaining the ability to pull out a section or a module for clarification or the addition of data. This process leaves the overall model intact and relevant - nearly decomposable. In figure 18(C) the different modules are colour-coded for differentiation: the left-hand side is an idealized schematic arranged to illustrate topological attributes, while the right-hand side schematic is more topographical. Each colour-coded network could conceivably be one of the functional modules at that level of organization. The functional modules combine into a topological mapping that renders what we are describing as ecostructures - distributed, interacting networks of ecological entities that create ecosystem services. To be clear, this work is not asserting that the above network model provides a description of a PF hillslope hydrology model; rather, these models serve as a general template to guide consideration on the topic of ecological topologies. In this respect I think they are excellent fodder for consideration.

A.) A schematic illustration (left-most) of a scale-free network, whose degree distribution follows a power law. In such a network, a few highly connected nodes, or hubs (blue circles), play an important role in keeping the whole network together. A typical configuration (right) of a scale-free network with 256 nodes is also shown, obtained using a scale-free model, which requires the addition of a new node at each time step such that existing nodes with higher degrees of connectivity have a higher chance of being linked to the new nodes. [Preferential Attachment]

B.) Schematic illustration (left) of a manifestly modular network made of four highly interlinked modules connected to each other by a few links. This intuitive topology does not have a scale-free degree distribution, as most of its nodes have a similar number of links, and hubs are absent. A standard clustering algorithm uncovers the network's inherent modularity (right) by partitioning a modular network of N = 256 nodes into the four isolated structures built into the system.

C.) The hierarchical network (left) has a scale-free topology with embedded modularity. The hierarchical levels are represented in increasing order from blue to green to red. Standard clustering algorithms (right) are less successful in uncovering the model's underlying modularity.

Figure 18 - Hierarchical Organization of Modularity in Metabolic Networks (Adapted from Ravasz et al., 2002.)

The network may be further described using network theory statistical values. Error and attack tolerance is a measure of the number of links that can be randomly removed before the network balkanizes (breaks up into subsystems) or collapses. The degree to which the network exhibits a scale-free distribution compared to the network's clustering coefficient gives an analytical picture of the degree to which the system is self-organizing: this may reflect stand maturity and the type of stand (industrial vs. natural).

[69] This is limited by the natural growth constraints of forest physiology.
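Panel A's growth rule and the error/attack tolerance measure can both be demonstrated in a few lines. The sketch below grows a scale-free network by preferential attachment (each new node links to one existing node chosen in proportion to its degree - a deliberate simplification) and then compares how the largest connected component survives the random removal of 25 nodes versus removal of the 25 largest hubs. All sizes are arbitrary.

# Preferential attachment growth plus a crude error/attack tolerance test.
import random
from collections import defaultdict, deque

def grow_scale_free(n, seed=1):
    """Grow a network where new nodes attach to degree-weighted targets."""
    random.seed(seed)
    adj = defaultdict(set)
    adj[0].add(1); adj[1].add(0)
    pool = [0, 1]                       # each node appears once per link it holds
    for new in range(2, n):
        target = random.choice(pool)    # degree-proportional (preferential) choice
        adj[new].add(target); adj[target].add(new)
        pool += [new, target]
    return adj

def largest_component(adj, removed):
    """Size of the largest connected component after removing given nodes."""
    best, seen = 0, set(removed)
    for start in adj:
        if start in seen:
            continue
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

adj = grow_scale_free(500)
by_degree = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
for label, removed in (("random failure", random.sample(list(adj), 25)),
                       ("hub attack", by_degree[:25])):
    print(f"{label}: largest component = {largest_component(adj, removed)}")

Random failures typically leave a large connected core, while targeted hub removal fragments the network - the asymmetry that makes hub-like ecostructural components disproportionately important to protect.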
In terms of streams and instream structures, it is interesting to consider the ratio of scale-free distribution to clustering coefficient as a possible measure of the 'stability' of headwater streams. This is an area that is actively researched, but little consensus has been achieved as to how to measure low-order stream characterization. This is not the place to go into detail, but I propose a research objective that would measure the ratio of a clustering coefficient, as a measure of instream structure, to the scale-free character of instream particle or clast size. A stream exhibiting a ratio of higher instream structure to scale-free grain size might suggest a stream that has undergone the restructuring process and is now in a state of local 'stability'. That is, the stream can handle all characteristic flows; it would take an uncharacteristically high flow to reset the stream's instream structures. Streams that are in a state of flux (e.g. stream sections having undergone slope failures) exhibit sections of small grain size amongst sections of strong imbrication. The small grain size sections are entrained until another high flow, or a series of them, flushes the sections to a state where the large clasts remain indefinitely and small particles tend to move through only transiently. This can be seen in headwater streams that exhibit signs of stability such as mosses and biofilms that reflect stable sites, and stream banks/riparian areas that show signs of stability (trees growing right to the bank edges, established perennials).
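As a sketch of how the proposed ratio might be computed, the snippet below fits a power-law exponent to synthetic clast sizes using the standard maximum-likelihood estimator alpha = 1 + n / sum(ln(x_i / x_min)) and divides an assumed clustering coefficient by it. Both inputs are fabricated placeholders; the index itself remains a research proposal, not an established measure.

# Toy computation of the proposed clustering/scale-free stability ratio.
import math, random

random.seed(3)
x_min = 0.02                                  # assumed 2 cm minimum clast size (m)
# Synthetic Pareto-distributed clast sizes (true exponent ~2.8).
clasts = [x_min * (1 - random.random()) ** (-1 / 1.8) for _ in range(500)]
# Maximum-likelihood power-law exponent.
alpha = 1 + len(clasts) / sum(math.log(c / x_min) for c in clasts)

clustering = 0.45      # placeholder: measured imbrication/contact-network clustering
stability_ratio = clustering / alpha
print(f"alpha = {alpha:.2f}, stability ratio = {stability_ratio:.2f}")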
5.4 Risk Analysis

The ecostructural perspective puts less weight on quantitative risk analysis. This is not a highly successful area of research in complex systems; the focus of the assessment is far too non-linear to behave well in a normal statistical setting. There is a growing body of work that questions the risk assessment industry and recognizes the increasingly limited role of the normal curve in characterizing what often tend to be non-normal data sets (Mandelbrot and Hudson, 2004; Barabasi and Albert, 1999; Bartlett et al., 2004; Mitzenmacher, 2003; Willinger et al., 2004).

[70] Indeed, the title of the Willinger et al. (2004) paper is 'More "Normal" Than Normal: Scaling Distributions and Complex Systems'.

Risk assessments, for this reason, need to be more of a qualitative component: multiple scenarios depicting possible risk values, with explicit recognition of data limitations in the derivation of such analysis. The EBA report fulfills the former need and forgoes the latter. Consider that,

"Few of the adaptive systems that have been forged by evolution or shaped by man depend on prediction as their main means for coping with the future. Two complementary mechanisms for dealing with change in the external environment are often more effective than prediction: homeostatic mechanisms that make the system relatively insensitive to the environment and retrospective feedback adjustment to the environment's variation." (Simon, 1996)

Adaptive management was born of a recognition of the limitation of risk analysis when dealing with complex systems and recognizes this quality explicitly. Ecostructure attempts to harness the two mechanisms that Simon acknowledges as important to planning and risk:

• homeostasis (or homeorhesis, as was utilized above to differentiate distributed control), recognizing and detailing the feedback dynamics of the network architecture that creates and maintains ecostructure, and

• retrospection, through an adaptive management paradigm that seeks to constantly adjust the knowledge base, using modular modelling techniques, and recognizing the role of context in resource management decision making.

Conclusion

The concept of ecostructure, and the laying out of an ecostructural perspective, relies on a process-oriented understanding of ecosystems as being made up of modules that are networked together in a synergetic fashion. The high level of non-linear behaviour in an ecological network demands that analyses be organized as hierarchical - specifically, as a nearly-decomposable hierarchy. This creates a rationalized theoretical framework from which the system can be modelled using the values of Ockham's Razor, while at the same time explicitly acknowledging scale effects. This last quality, scale effects, should be a very sobering or humbling quality to any ecological researcher or resource manager. Scale issues are at the heart of what we call complexity.

Figure 19 illustrates well the concept of the hierarchical framework with the dynamics of networked architecture: a series of organizational levels that build and integrate functional modules into distributed large-scale organizations (modelled as networks) that exhibit unique properties at each level (emergent properties). In this work, and in ecostructure as a concept, the salient points that create a generalizable model are:

• that complexity can be managed using hierarchy theory as a framework for organizing the analysis
• that important properties are recognizable only at functional levels of organization and at similarly functional scales of organization
• that self-organizing systems tend to exhibit networked architecture
• and that this architecture would seem to have general properties that can be useful in terms of managing for ecosystem services

Figure 19 - Oltvai and Barabasi's Complexity Pyramid. Genetic and molecular form and function give rise to functional modules that build across scales to form large-scale organizations. (Adapted from Oltvai and Barabasi, 2002.)

Above, I wrote "Ecostructure explicitly considers the temporal dimension in terms of the role of network dynamics through time." This is the fulcrum of the concept of ecostructure; without this facet ecostructure is little more than adaptive management or ecosystems management. Ecostructure is an attempt to model the 'functional modules' (see Fig. 19) that create the physical capacity to conserve resources in the system. This allows the next level of organization to develop based on the quasi-stable state of primary resources available at the ecostructural level. Ultimately, we are trying to recognize and map the qualities and quantities that give a system its capacity to be resilient and productive in terms of ecosystem services. A topological awareness in self-organizing systems means an explicit rendering of the system through time. This rendering begs questions such as "What is the general level of capacity in the system at this time and what determines this capacity? How did the system build through time? How does it maintain and grow going forward? Can we determine leverage points in the system that can be utilized to manage for sustainable ecosystem services?"

Finally, this work derives from the perspective that the underlying model of functionality in modern thought is moving to that of the 'nested network' (Fig. 19), just as the underlying model that has served since the influence of Descartes and Newton was the machine.
The machine analogy is still useful, but, in a Kuhnian manner, it has reached its zenith. An emerging framework of understanding and modelling is the nested (hierarchical) network. This work is aimed at being a pragmatic example of this migration of thought.

References

Abrahams, A. D., G. Li, and J. F. Atkinson. 1995. Step-pool Streams: Adjustment to Maximum Flow Resistance. Water Resour. Res. 31: 2593-2602.

Allen, T. F. H. and T. W. Hoekstra. 1992. "Toward a Unified Ecology." New York: Columbia University.

Allen, T. F. H. and D. W. Roberts. 1998. Integrating Pattern, Process, and Scale. In "Ecological Scale Theory and Application." (Eds.) David L. Peterson and V. Thomas Parker. New York: Columbia University.

Allen, T. F. H., J. A. Tainter and T. W. Hoekstra. 1999. Supply-Side Sustainability. Syst. Res. 16: 403-427.

Allen, T. F. H., J. A. Tainter and T. W. Hoekstra. 2003. "Supply-Side Sustainability." New York: Columbia University.

Amin, M. 2003. North America's Electricity Infrastructure: Are We Ready For More Perfect Storms? Security & Privacy Magazine, IEEE Vol. 1(5): 19-25.

Barabasi, A. L. 2002. "Linked: The New Science of Networks." Cambridge, MA: Perseus.

Barabasi, A. L. 2005. Taming Complexity. Nature Physics 1: 68-70.

Barabasi, A. L. and R. Albert. 1999. Emergence of Scaling in Random Networks. Science 286: 509-512.

Barabasi, A. L., R. Albert, H. Jeong, and G. Bianconi. 2000. Power-law Distribution of the World Wide Web. Science 287: 2115b.
T. Burt, editors. "Sediment Cascades: A n Integrated Approach." Chichester: Wiley. [In Press] Church, M . 2002. Geomorphic Thresholds in Riverine Landscapes. Freshwater Biology 47: 541-557. Constance, D. H. and A. Bonanno. 2000. Regulating the global fisheries: The World Wildlife Fund, Unilever and the Marine Stewardship Council. Agriculture and Human Values 17: 125-139. Costanza, R. and R. d'Arge, R. de Groot, S. Farber, M . Grasso, B. Hannon, K. Limburg, S. Naeem, R. V. O'Neill, J . Paruelo, R. G. Raskin, P. Sutton, M . van den Belt. 1997. The Value of the World's Ecosystem Services and Natural Capital. Nature, 387: 253.  Dhakal, A. S. and R. C. Sidle. 2003. Long-Term Modelling of Landslides for Different Forest Management Practices. Earth Surf. Process. Landforms 28: 853-868.  80  Dempster, Beth. 2000. Sympoietic and. Autopoietic Systems: A New Distinction for Self-Organizing Systems. In Proceedings of the World. Congress of the Systems Sciences and. ISSS. 2000. J.K. Allen and. J . Wilby, eds. [Presented at the International Society for Systems Studies Annual Conference, Toronto, Canada, July 2000.] Depew, D. J . and B. H. Weber. 1995. "Darwinism Evolving: Systems Dynamics and the Genealogy of Natural Selection." Cambridge, Mass.: MIT Press. EBA Engineering Consultants Ltd. 2005. Fitzsimmons Creek Landslide Hazard Assessment. Whistler, British Columbia. Submitted To: Land and Water British Columbia. Faloutsos, M., P. Faloutsos, and C. Faloutsos. 1999. On Power-Law Relationships of the Internet Topology. Computer Communications Review 29: 251.  Fisher, S. G. 1997. Creativity, Idea Generation, and the Functional Morphology of Streams. J . N. Am. Benthol. Soc. 16(2): 305-318. Foley, J . A., and R. DeFries, G. P. Asner, C. Barford, G. Bonan, S. R. Carpenter, F. S. Chapin, M . T. Coe, G. C. Daily, H. K. Gibbs, J . H. Helkowski, T. Holloway, E. A. Howard, C. J . Kucharik, C. Monfreda, J . A. Patz, C. Prentice, N. Ramankutty, P. K. Snyder. 2005 .Global Consequences of Land Use. Science: Vol. 309: 570-574. Giller, P. S. and B. Malmqvist. 1998. "The Biology of Streams and Rivers." Oxford Univ. Press. Glade, T. and M. Crozier. 2006. A Review of Scale Dependency in Landslide Hazard and Risk. In "Landslide Hazard and Risk." [E-Book] (Eds.) T. Glade, M . G. Anderson, M . J . Crozier. London: Wiley &? Sons.  81  Golder Associated Ltd. 1993. Geotechnical Investigation and Stability Analysis, Fitzsimmons Creek Slide. Report to Resort Municipality of Whistler. Gomi, T., R. C. Sidle, and J . S. Richardson. 2002. Understanding Processes and Downstream Linkages of Headwater Systems. Bioscience 52: 905-916. Gomi, T., R. C. Sidle, R. D. Woodsmith, and M.D. Bryant. 2003. Characteristics of Channel Steps and Reach Morphology in Headwater Streams, Southeast Alaska. Geomorphology 51: 225-242. Griffiths, R. P., G. A. Bradshaw, B. Marks, and G. W. Lienkaemper. 1996. Spatial Distribution of Ecotomycorrhizal Mats in Coniferous Forests of the Pacific Northwest, USA. Plant and Soil 180: 147-158. Griffiths, R. P, M . A. Castellano, and B. A. Caldwell. 1991. Hyphal Mats Formed by Two Ectomycorrhizal Fungi and Their Association with Douglas-fir Seedlings: A Case-study. Plant and Soil 134: 255-259. Griffiths, R. P. and A. K. Swanson. 2001. Forest soil Characteristics in a Chronosequence of Harvested Douglas-fir Forests. Can. J . For. Res. 31: 1871-1879. Hagedorn, F. and M . Bundt. 2002. The Age of Preferential Flow Paths. Geoderma 108: 119-132. Halwas, K. L. and M . Church. 2002. 
Channel Units in Small, High Gradient Streams on Vancouver Island, British Columbia. Geomorphology 43: 243-256. Holt, R. D. 2006. Asymmetry and Stability. Nature Vol. 442 No. 7100: 223-328.  82  Holling, C. S. 1978. "Adaptive Environmental Assessment and Management." New York: John Wiley &? Sons. Jonsson, P. F. and Paul A. Bates. 2006. Global Topological Features of Cancer Proteins in the Human Interactome. Bioinformatics. Vol. 22 No. 18: 2291-2297. Jordan, F. and I. Scheuring. 2002. Searching for Keystones in Ecological Networks. Oikos 99: 607-612. Keim, R.F., Skaugset, A.E. 2003. Modelling Effects of Forest Canopies on Slope Stability. H y d r o ! Process. 17: 1457-1467. Keim, R.F., Skaugset, A.E. 2004a. A Linear System Quantification of Dynamic Throughfall Rates Beneath Forest Canopies. Water Resources Research 40, W05208, doi:10:1029/2003WR002875. Keim, R.F., A.E. Skaugset, T.E. Link, and A. Iroume. 2004b. A Stochastic Model of Throughfall for Extreme Events. Hydrology and Earth System Sciences 8, 23-34. Keim, R. F., A. E. Skaugset, and M Weiler. 2005. Temporal Persistence of Spatial Patterns in Throughfall. Journal of Hydrology 314: 263-274. Klemes, V , 2000. Common Sense and Other Heresies: Selected Papers on Hydrology and Water Resources Engineering. Canadian Water Resources Association, Cambridge.  Kolasa, J., J . A. Drake, G. R. Huxel, and C. L. Hewitt. 1996. Hierarchy Underlies Patterns of Variability in Species Inhabiting Natural Microcosms. Oikos 77: 259 266.  83  Krogstad, F. 1996. A Physiology and Ecology Based Model of Lateral Root Reinforcement of Unstable Hillslopes. Unpub. Masters Thesis. University of Washington. Lamberti, G. A., S. V. Gregory, L. R. Ashkenas, R. C. Wildman, and K. M . S. Moore. Stream Ecosystem Recovery FoUowing a Catastrophic Debris Flow. 1991. Can. J . Fish. Aqaut. Sci. 48: 196-208. Levin, S. A. 1992. The Problem of Pattern and Scale in Ecology. Ecology 73: 19431967. Ludwig, D., R. Hilborn and C. Walters. 1993. Uncertainty, resource exploitation, and conservation: Lessons from history. Science 260: 17 &? 36. Malmqvist, B. and R. Wotton. 2002. Do Tributary Streams Contribute Signincantly to the Transport of Faecal Pellets in Large Rivers? Aquat. Sci. 64: 156-162. Mandelbrot, B. and R. L. Hudson. 2004. "The (Mis)Behavior of Markets: A Fractal View of Risk, Ruin, and Reward." New York: Basic Books. Meadows, D. 1997. Places to Intervene in a System. Whole Earth 91: 78-84. Milo, R., S. S. Shen-Orr, S. Itzkovitz, N. Kashtan, and U. Alon. 2002. Network Motifs: Simple Building Blocks of Complex Networks. Science 298: 824-827. Mitzenmacher, M . 2003. A Brief History of Generative Models for Power Law and Lognormal Distributions. Internet Math 12: 226-251. Monger, J . and R. Price. 2002. The Canadian Cordillera: Geology and Tectonic Evolution. CSEG Recorder: http://www.cseg.ca/recorder/pdf/2002/02feb/feb02 02.pdf  84  Montgomery, D. R. and J . M . Buffington. 1997. Channel-reach morphology in mountain drainage basins. GSA Bulletin 109: 596-611. Montgomery, D. R., K. M . Schmidt, H. Greenberg, andW. E. Dietrich, 2000. Forest Clearing and Regional Landsh'ding. Geology 28: 311-314. Newman, M . E. J . 2003. The Structure and Function of Complex Networks. SIAM Review. Vol. 45 (2): 167-256. Nistor, C. J . and M . Church. 2005. Suspended Sediment Transport Regime in a Debris-flow Gully on Vancouver Island, British Columbia. Hydrological Processes: Vol. 19 (4): 861 - 885. Oltvai, Z. N. and A.-L. Barabasi. 2002. Life's Complexity Pyramid. Science 298: 763-764 . 
Pauly, D., V. Christensen, S. Guenette, T. R. Pitcher, U. R. Sumaila, C. J. Walters, R. Watson, and D. Zeller. 2002. Towards Sustainability in World Fisheries. Nature 418: 689-695.

Punti, S. P. 2003. Irreversibility and Criticality in the Biosphere. Unpub. PhD Thesis. University of Barcelona. ISBN 84-688-3032-1.

Ravasz, E., A. L. Somera, D. A. Mongru, Z. N. Oltvai, and A.-L. Barabasi. 2002. Hierarchical Organization of Modularity in Metabolic Networks. Science 297: 1551.

Rietkerk, M., J. van de Koppel, L. Kumar, F. van Langevelde, and H. H. T. Prins. 2002. The Ecology of Scale. Ecological Modelling 149(1-2): 1-4.

Rooney, N., K. McCann, G. Gellner, and J. C. Moore. 2006. Structural Asymmetry and the Stability of Diverse Food Webs. Nature 442: 265-269.

Ryan, M. G. and B. E. Law. 2005. Interpreting, Measuring, and Modeling Soil Respiration. Biogeochemistry 73: 3-27.

Sakals, M. E. and R. Sidle. 2004. A Spatial and Temporal Model of Root Cohesion in Forest Soils. Can. J. For. Res. 34: 950-958.

Salthe, S. 2005. The Natural Philosophy of Ecology: Developmental Systems Ecology. Ecological Complexity 2(1): 1-19.

Savenije, H. H. G. 2004. The Importance of Interception and Why We Should Delete the Term Evapotranspiration From Our Vocabulary. Hydrol. Process. 18: 1507-1511.

Schneider, E. D. and J. J. Kay. 1994. Life as a Manifestation of the Second Law of Thermodynamics. Mathematical and Computer Modelling 19(6-8): 25-48.

Schneider, E. D. and D. Sagan. 2005. "Into The Cool - Energy Flow, Thermodynamics and Life." Chicago: Univ. Chicago Press.

Schmidt, K. M., J. J. Roering, J. D. Stock, W. E. Dietrich, D. R. Montgomery, and T. Schaub. 2001. The Variability of Root Cohesion as an Influence on Shallow Landslide Susceptibility in the Oregon Coast Range. Can. Geotech. J. 38: 995-1024.

Schreier, H., K. Hall, S. Brown, W. Tamagi, and L. M. Lavkulich. 1997. "Integrated Watershed Management." An electronic multi-media textbook (700 pages) for Internet Graduate Study Courses. CD-ROM @ IRE, UBC, Distributed Learning, Continuing Studies, UBC.

Serrano, M. A. and M. Boguna. 2003. Topology of the World Trade Web. Physical Review E 68: 1-4.

Sidle, R. C. 2000. Watershed Challenges for the 21st Century: A Global Perspective for Mountainous Terrain. In "Land Stewardship in the 21st Century: The Contributions of Watershed Management." Eds. P. F. Ffolliott, M. B. Baker, Jr., C. B. Edminster, M. C. Dillon, and K. L. Mora. USDA Forest Service, Rocky Mountain Research Station, Ft. Collins, CO, RMRS P-13, pp. 45-56.

Sidle, R. C., Y. Tsuboyama, S. Noguchi, I. Hosoda, M. Fujieda, and T. Shimizu. 2000. Stormflow Generation in Steep Forested Headwaters: A Linked Hydrogeomorphic Paradigm. Hydrol. Process. 14: 369-385.

Sidle, R. C., S. Noguchi, Y. Tsuboyama, and K. Laursen. 2001. A Conceptual Model of Preferential Flow Systems in Forested Hillslopes: Evidence of Self-organization. Hydrol. Process. 15: 1675-1692.

Sidle, R. C. and H. Ochiai. 2006. "Landslides: Processes, Prediction, and Land Use." Am. Geophysical Union, Water Resour. Monogr. No. 18, AGU, Washington, D.C. 312p.

Simon, H. A. 1962. The Architecture of Complexity. Proceedings of the American Philosophical Society 106(6): 467-482.

Simon, H. A. 1973. The Organization of Complex Systems. In H. H. Pattee (ed.), "Hierarchy Theory - The Challenge of Complex Systems." 1-27. New York: George Braziller, Inc.

Simon, H. A. 1996. "The Sciences of the Artificial" (3rd ed.). Cambridge, MA: The MIT Press.

Simon, H. A. 2002. Near Decomposability and the Speed of Evolution.
Industrial and Corporate Change 11(3): 587-599.

Sole, R. V. and J. M. Montoya. 2001. Complexity and Fragility in Ecological Networks. Proc. Roy. Soc. London Ser. B 268: 2039-2045.

Spittlehouse, D. L. 1996. Rainfall Interception in Young and Mature Conifer Forests in Coastal B.C. Canadian Journal of Soil Science 76: 563.

Strogatz, S. H. 2001. Exploring Complex Networks. Nature 410: 268-276.

Swift, K., M. Jones, and S. Hagerman. 2000. An Initial Look at Mycorrhizal Fungi and Inoculum Potential in High Elevation Forests. B.C. Journal of Ecosystems and Management 1(1): 1-6.

Ulanowicz, R. E. 1997. "Ecology, the Ascendent Perspective." New York: Columbia University Press.

Vannote, R. L., G. W. Minshall, K. W. Cummins, J. R. Sedell, and C. E. Cushing. 1980. The River Continuum Concept. Canadian Journal of Fisheries and Aquatic Sciences 37: 130-137.

Vitousek, P. M., H. A. Mooney, J. Lubchenco, and J. M. Melillo. 1997. Human Domination of Earth's Ecosystems. Science 277: 494-499.

Wackernagel, M. and W. E. Rees. 1996. "Our Ecological Footprint: Reducing Human Impact on the Earth." Gabriola Island, B.C. and Philadelphia, PA: New Society.

Waldrop, M. M. 1992. "Complexity: The Emerging Science at the Edge of Order and Chaos." New York: Simon and Schuster.

Walters, C. 1986. "Adaptive Management of Renewable Resources." New York: Macmillan.

Waltho, N. and J. Kolasa. 1994. Organization of Instabilities in Multispecies Systems, A Test of Hierarchy Theory. Proc. Nat. Acad. Sci. 91: 1682-1685.

Waring, R. H. and J. F. Franklin. 1979. Evergreen Coniferous Forests of the Pacific Northwest. Science 204: 1380-1386.

Warshall, P. 1998. Modern Landscape Ecology: patterns of infrastructure, patterns of ecostructure, visions of a gentler way. Whole Earth 93: 4-9.

Watts, D. J. and S. H. Strogatz. 1998. Collective Dynamics of 'Small-World' Networks. Nature 393: 440-442.

Weaver, W. 1948. Science and Complexity. American Scientist 36: 536. (Available at http://www.ceptualinstitute.com/genre/weaver/weaver-1947b.htm)

Wiens, J. A. 1989. Spatial Scaling in Ecology. Functional Ecology 3(4).

Willinger, W., D. Alderson, J. C. Doyle, and Lun Li. 2004. More "Normal" Than Normal: Scaling Distributions and Complex Systems. In: Ingalls, R. G., M. D. Rossetti, J. S. Smith, and B. A. Peters, editors. Proceedings of the 2004 Winter Simulation Conference. Piscataway, N.J.: IEEE.

Wipfli, M. S., J. S. Richardson, and R. J. Naiman. 2006. Ecological Linkages Between Headwaters and Downstream Ecosystems: Transport of Organic Matter, Invertebrates, and Wood Down Headwater Channels. [In Press] Journal of the American Water Resources Association.

Wu, J. 1999. Hierarchy and Scaling: Extrapolating Information Along a Scaling Ladder. Canadian Journal of Remote Sensing 25(4).

Wu, J. and J. L. David. 2002. A Spatially Explicit Hierarchical Approach to Modeling Complex Ecological Systems: Theory and Applications. Ecological Modelling 153: 7-26.

Wu, W. and R. C. Sidle. 1995. A Distributed Slope Stability Model for Steep Forested Basins. Water Res. Research 31(8): 2097-2110.

Wuchty, S. 2004. Evolution and Topology in the Yeast Protein Interaction Network. Genome Res. 14: 1310-1314.

Note: This appendix has not been vetted by the thesis committee in any detailed sense. The appendix is provided in a rough draft format as a background resource to the ideas and theories put forth in the thesis.
It is written for the reader with no background in the theory and terminology it refers to.

Appendix A: Introduction to 'non-linear'

We live in an age when linear dynamics are well understood and to a large degree mastered. What lies ahead, and what began in the mid-twentieth century, is the challenge of developing a deep understanding of non-linear dynamics. What makes a system linear versus non-linear? Linear dynamics occur in systems where the components that make up the system are, to a very large extent, independent. This creates a situation where the system is the sum of its parts; effects are additive and as such are significantly predictable. Non-linear dynamics occur in systems where the constituents interact and influence each other; the degree of influence depends on the topology of the network of interactions. The outcomes are not additive and tend to be sensitive to the initial conditions; this makes detailed prediction extremely difficult. Scale and organizational architecture play a large role in terms of how these interactions play out.

Man-made systems are built on a framework of linear engineering dynamics. Natural systems have co-evolved to be what we would describe as complex non-linear systems. The fact of the matter is that most of what we find interesting and challenging in the natural sciences involves, to a very large degree, systems of non-linear interaction. To relate this to the sustainability movement, consider the value of building a sub-division of solar-powered homes on land that, in a natural state, provides ecological services like fresh water production, fish/wildlife habitat, and stable hillslopes relatively free from dangerous erosion. If we don't comprehend the role of the various parts of the ecosystem we run the risk of building our so-called sustainable sub-division on areas that, once denuded, threaten the viability of the larger ecosystem and eventually the viability of our engineered systems. We can build the most benign cars, but if we continue to pave land without understanding how it dissects ecosystems then we have only come part way to our goal of a higher standard of living. These considerations are further complicated by the spatial and temporal characteristics of non-linear systems.

As well, natural systems have evolved networks of feedback (+/-) mechanisms that provide the system with a resiliency that allows perturbations without grinding the system to a halt. Human-designed systems are usually very sensitive to even very small perturbations. Consider the effect on your car, driving along at 100 kph, if the alternator belt, the fan belt, or the timing belt breaks. Any of these small linkages breaking will render your car useless until repaired. This example could be extended to a very large number of artificial systems.[71]

[Footnote 71] Consider the effect of a computer virus, or the Great Northeast Blackout of 1965, which left 30 million people without power. Similarly the effect of the Ice Storm in Quebec in 1998, which left many without power for up to a month.

This work will consider the value of evaluating and modelling ecological systems as systems of constituent parts that interact, sometimes strongly and sometimes weakly, and as such form non-linear systems that are difficult to pigeon-hole with confidence (in terms of prediction). I will consider the difference, then, between complicated systems and those that are complex; and I will argue that tools are at hand that can aid us in teasing apart an understanding of complex systems that will increase our ability to plan and anticipate with regards to the development of these systems in the face of human intervention.
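A minimal numerical sketch can make the linear/non-linear distinction concrete. The short Python fragment below is illustrative only (the function names and the choice of the logistic map are mine, not drawn from the thesis analyses): a linear system obeys superposition, so effects simply add, while the logistic map, a standard non-linear iteration, lets two trajectories that start almost identically diverge completely.

    # Illustrative sketch: linear additivity versus non-linear sensitivity.
    # The logistic map with r = 4.0 is a textbook example of chaotic,
    # non-linear feedback; nothing here is specific to the case study.

    def linear(x):
        # A linear system: effects are additive and scale with the input.
        return 3.0 * x

    def logistic(x, r=4.0):
        # A non-linear system: the state feeds back on itself.
        return r * x * (1.0 - x)

    # Linearity: f(a + b) == f(a) + f(b)
    print(linear(0.2 + 0.3), linear(0.2) + linear(0.3))  # both ~1.5

    # Non-linearity: two nearly identical starting points diverge.
    a, b = 0.200000, 0.200001
    for _ in range(25):
        a, b = logistic(a), logistic(b)
    print(abs(a - b))  # grows to order 1 despite a 1e-6 initial difference

The practical consequence is the one described above: in the non-linear case no realistic precision of measurement rescues long-run point prediction.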
This thesis looks at a specific type of ecological system that is largely still in a 'natural' state yet of interest because human intervention in the form of resource management and/or land use has altered the system to a significant degree. In this instance the stability and resiliency of the system are of interest. These two characteristics are a result of the interactions of many parts. Some of these parts can be grouped together to form landscape structures: structures that provide ecological services that humanity can recognize out of self-interest. These structures can be thought of as a type of ecological infrastructure. The thesis will consider three rather distinct ecological processes. A synthesis of the three processes at work in a steep rainforest will form the basis of a theoretical model of how the landscape achieves ecological stability along the succession trajectory. The three processes integrate to create a recognizable ecostructure that confers local stability and resilience on a watershed in a PNW setting.

The analogy is, of course, to the concept of infrastructure in an urban or artificial setting. Infrastructures are human-engineered systems that provide and distribute goods and services. Ecological infrastructure, here called 'ecostructure', is analogous to a large degree. I add this caveat because, like all analogies, the comparison is limited. Engineered infrastructures are not self-organized, and are not self-referential - they are managed by us. Ecostructures are self-organized and are self-referential - that is to say that they possess the ability to respond to stimulus, and they are elastic in the sense that the structures are modular and the modules have some level of biotic componentry. The biotic component gives the system a 'learning' capacity. Knowledge can be stored and expressed via genetic pathways.

Analogy is a very old and very effective form of insight and teaching/modelling. The effectiveness of analogy can be lost if the limited similarity is not always kept in mind. Ecostructure is a term for a concept that groups ecological components into structures that provide recognizable goods and services. This is similar to infrastructure. But infrastructures are man-made engineered systems that do not have a capacity to evolve (learn); they are linear systems. Ecostructures are non-linear, and their ability to retain knowledge is important since this knowledge can be lost for good when the level of genetic variation in a population is reduced through culling.

As stated above, the two approaches (the engineering approach and the ecostructural approach) are complementary and serve as examples of the evolution of ecological and scientific thought. The engineering approach is still very grounded in a Newtonian model. To a large extent this model serves engineers very well. The ecostructural approach is based upon a gradual movement away from the Newtonian model in ecology. The Newtonian model works very well when the system under consideration is decomposable into independent parts - that is, when it is linear. Ecological systems are rarely so, and as such ecologists have been forced, in a very Kuhnian sense,[72] to search for more appropriate models. Ecosystems are essentially made of coupled components.
Components that in many cases have coevolved over large temporal scales. This makes for a non-linear system that is unique in terms of its temporal as well as spatial scale parameters. Many ecological problems, indeed most, require scaling at reasonably large scales. This can be true for temporal qualities as well. The explicit inclusion of a time scale is the movement from a three-dimensional to a four-dimensional model. As one increases scale, complexity increases at a faster than arithmetic rate. How can one manage this scenario? The key is to decompose the problem into modules and to recognize the relationships of the modules - the topology of the modules - over different scales and levels of organization. To organize the scale/level issues we will encounter hierarchy theory; to organize and understand the system dynamics of succession and how the different levels of the hierarchy relate architecturally we will use the nascent theory of network dynamics.

[Footnote 72] Thomas S. Kuhn published 'The Structure of Scientific Revolutions' in 1962. It set off a long-run debate about the nature of scientific knowledge and how such knowledge changes. Kuhn proposed that scientific knowledge remained in a dominant paradigm for quite some time, until that paradigm was at such odds with observations that new ideas were needed to bring together those working at the forefront. Thus Newtonian science was recognized as having limitations and Einstein's work became the dominant model for a new generation of scientists. (See: http://carbon.cudenver.edu/stc-link/bta^s/kuhn/overview.htm)

Introduction to 'Complexity'

This section has two aims: (1) to introduce the concept of complexity theory, and theories that aim to organize and manage that complexity, in a very straightforward manner, and (2) to emphasize the significant amount of time that the concepts have had for gestation and development. These are not new concepts, and they have gained significant traction due to their relevancy. As mentioned in the introduction, systems that are made up of components that influence the behaviour of each other produce non-linear behaviour.

To paraphrase a famous essay by Warren Weaver (one of the founders of communication theory), written in 1948 (Weaver, 1948), it could be said that the 17th, 18th and 19th centuries were a period when science learned and developed the knowledge of working with a few variables, most often two. Via developments in theory and engineering this gave us the radio and telephone (including the infrastructure), the car and plane, the internal combustion engine, and the hydroelectric power plant. The biological sciences, dealing with significantly more complex issues, had until the 20th century only begun to develop a preliminary stage of the scientific method. The simpler aspects of the life sciences that could have been shoe-horned into a linear framework were simply not the significant issues. As a result, the life sciences "had not yet become highly quantitative or analytical in character."

Weaver, a mathematician by training, goes on to say that at the turn of the twentieth century scientists (with mathematicians in the vanguard) went to the other extreme. Rather than be limited by work on two, three, or maybe four variables, scientists used and developed statistical mechanics so as to tackle billions of variables. Weaver labels this level of complexity 'disorganized complexity'. As an example he uses the game of billiards.
Science could well explain and predict the motions of one or two balls (with caveats), but introduce ten or fifteen balls and, although the theory still held, the mathematics and confounding effects made description and calculation impossible. Now imagine a table with millions of balls. In this scenario statistical mechanics becomes applicable. You have a very high level of homogeneity and a large sample to drive the Central Limit Theorem. The important subtlety here is in the term disorganized. Disorganized here refers to the level of randomness. The balls are now considered to be independent. Statistics allows us to make generalized statements about the behaviour of the system, such as "the system as a whole possesses certain orderly and analyzable average properties". Disorganized complexity.

So, at the risk of over-simplifying, we have a scenario where research is comfortable with considering two (or a few) variables or a very large number of variables: Newtonian mechanics or statistical mechanics. The area left out, the area where the number of variables is more than a few but much less than a very large number, has the troubling though interesting characteristic of exhibiting organization. Weaver labels this level of complexity 'organized complexity'. Many biological, economic, and sociological problems have these characteristics. The constituents of the system are of a number too small to be statistically meaningful and, more importantly, even if they are plentiful enough, they are not independent of each other. My economic decisions may affect those close to me. In a great many biochemical situations a few genes acting as switches or a few enzymes acting as catalysts do much of the work while the majority of the genes/enzymes are indifferent to a large portion of cellular dynamics.[73] The genes, the enzymes, and the economic players can influence the behaviour of each other - sometimes weakly and sometimes strongly. Some genes are switches, and upon activating are further stimulated into action until a developmental program is complete (Depew and Weber, 1995). Enzymes can have very specific roles but also act in concert with other enzymes to play relatively rare but important roles other than their principal role (Ravasz et al., 2002). This middle region, then, can prove very elusive since it is too intertwined to work well under statistics and too cumbersome to be addressed with linear mathematics. Nevertheless this region exhibits much of the complexity that we find interesting: organized complexity. Ecosystems exhibit organized complexity.

[Footnote 73] Less than 2% of human DNA is believed to be active in genetic and metabolic roles; 98.5% of DNA is labeled 'junk' as a collective moniker for genes that seem to have no active biological role.

When Weaver wrote his article for American Scientist in 1948 he was writing before there was a significant body of work to which he could point as offering viable solutions to the problem of organized complexity. Over half a century later there still remain more questions than answers, but techniques have evolved and our ability to frame a complex issue has definitely come a long way. Hierarchy theory offers a framework from which to understand the role of scale and modularity in complexity. Network theory, though very recent, offers fascinating concepts from which to model and understand the architecture of complexity and self-organization.
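Weaver's 'disorganized complexity' can also be sketched numerically. The toy Python example below is my own illustration, not Weaver's: any single randomly 'kicked' billiard ball ends up somewhere unpredictable, yet the ensemble average over many independent balls is orderly and analyzable, which is exactly the property statistical mechanics exploits.

    import random

    # Disorganized complexity: each ball is independent and random,
    # but the ensemble has stable, analyzable average properties.
    random.seed(1)

    def final_position(n_kicks=100):
        # One ball taking 100 random unit kicks; unpredictable on its own.
        return sum(random.choice((-1, 1)) for _ in range(n_kicks))

    positions = [final_position() for _ in range(10_000)]
    print(positions[0])                     # any single ball: unpredictable
    print(sum(positions) / len(positions))  # ensemble mean: reliably near 0

The independence of the balls is what makes the averaging legitimate; organized complexity, discussed next, is precisely the case where that assumption fails.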
Appendix B: Introduction to 'Hierarchy Theory'

Hierarchy theory offers an organizational framework from which to consider the relational dynamics of what we have referred to as organized complexity (Simon 1962, 1973, 2002; Salthe 1985; Allen 1992; Rietkerk et al. 2002).[74] Hierarchy theory has little in common with the colloquial usage of the term hierarchy. Hierarchy theory refers only to the relational magnitudes between the constituents and levels of a system; there are no pejorative issues of subordination or authority as the common usage of the term would connote.

[Footnote 74] Hierarchy theory is a specific dialect of General Systems theory. There are many books and papers on the subject, and some of the key, or classic, works on the topic are listed in the references of this work; the classic works in ecology are by Allen 1992 and Salthe. I have leaned heavily on Simon's contribution as I find his work to be detailed and specific to the theory yet general enough to be applied widely.

H. A. Simon was motivated to develop hierarchy theory because of his interests in human problem solving as it relates to computer systems and artificial intelligence. In the 1950s Simon was at the forefront of people asking questions like 'how do people solve basic problems?' and 'how can this knowledge be applied to computers so as to create analytical machines?' The answers turned out to be complex. Researchers such as Simon found that the answers involved step-wise, algorithm-like processes that used logical building blocks to formulate a search for solutions. If one is trying to solve a problem then you don't want to re-invent the wheel while you are at it, so the process involves using what we already know as a starting point. As one proceeds we find what works and what doesn't. A step that works is 'learnt' and incorporated into our solution heuristic. These learnt steps become part of the arsenal to attack problems without starting at the beginning each time we hit a dead-end.

Consider this in terms of writing code for a computer program. When a programmer is writing a program to accomplish a task he/she builds the program using two basic kinds of instructions: (1) primitive instructions and (2) higher-level instructions (Simon, 1973). The primitive instructions could be routines that will be used in any application of the program: e.g., if a number is entered, only consider the absolute value. The programmer would then give this routine a name, say 'absolute'. When writing any subsequent code the programmer need only use the term 'absolute' to denote the primitive instruction; it becomes short-hand for "execute this instruction". This is a rather facile example but it illustrates the concept of building information using sub-routines. We do this all the time in language; we build specialized languages that effectively create dialects.[75] The use of specialized dialects creates a short-hand that allows communication to be simplified among members of a community. The use of sub-routines, or sub-assemblies, is a very primary quality of hierarchies, and one that is pivotal in terms of the growth and evolution of complex systems. It is clarifying to note here that Simon's work with human problem solving served as a rough model for computer simulations of intelligence. Complex systems, such as neural networks, cell assemblies and ecological systems, are magnitudes more complex than these models of learning.

[Footnote 75] Language is a good example of a hierarchical organization. The English language has its alphabet, 26 letters, no more, no less. This is a low level part of a hierarchy. Letters can be formed into words - the next level up. Words can be utilized according to syntax, and subsequent levels are made up of paragraphs, chapters, etc. Ultimately a book is written. Each level reveals subsequently more meaning. A chapter may be necessary to develop a feeling or a character's role. The character's role or meaning to the story cannot be inferred at the lower levels; it must be built. This is the concept of 'emergent properties' in a hierarchy.
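Returning to the programmer's example, the idea is easy to show in running code. The Python sketch below is a deliberately facile illustration in the spirit of the 'absolute' routine above; the higher-level routine names are my own. A primitive instruction is written and tested once, then treated as a stable sub-assembly by every level built on top of it.

    # Primitive instruction: written and tested once, then reused by name.
    def absolute(x):
        return x if x >= 0 else -x

    # Higher-level instruction built from the primitive.
    def total_magnitude(values):
        # Treats 'absolute' as a stable sub-assembly; we no longer think
        # about how it works, only about what it guarantees.
        return sum(absolute(v) for v in values)

    # A still-higher level built from the level below.
    def mean_magnitude(values):
        return total_magnitude(values) / len(values)

    print(mean_magnitude([-3, 4, -5]))  # 4.0

Nothing at the top level needs to know how 'absolute' works internally; each level relies only on what the sub-assembly below it guarantees, which is the loose coupling discussed later in this appendix.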
The description above of a step-wise heuristic process is at odds with what I have been describing as non-linear. Non-linear systems are interactive and occur in parallel - so the sub-assemblies of a system are potentially interacting in parallel. The point here is the architectural tool of sub-assemblies and their role in building up stable systems at a non-intuitive rate.[76]

[Footnote 76] This is exponential growth. Populations, interest rates, and contagious disease are just a few examples of exponential growth that we can readily imagine, but we tend to over-look the potentially powerful effect of the exponent on the rate of growth. See Bartlett, A. A. 2004.

Simon realized that people use these logical or cognitive building blocks when trying to solve problems; otherwise the process would take too long. There is an implicit model of feedback and selection here. A solution or a step is attempted; if it is a failure it is discarded, and if it succeeds it is adopted. The determination of success or failure is a feedback, and the adoption or discard is a selective process. The concept has strong and persuasive implications for self-organization and evolution. Since at least the 1960s it has been debated whether life sprang from a 'primordial soup' of primary molecules. It has been argued on statistical grounds that the time needed for abiotic primordial molecules to build into biotic complexes would be longer than the time estimated since the Big Bang. Simon and others argue that this conclusion is incorrect because the calculations ignore the role of hierarchical building blocks. The parable of the watchmakers is a famous example of the concept:

Two watchmakers assemble fine watches, each watch containing ten thousand parts. Each watchmaker is interrupted frequently to answer the phone. The first has organized his total assembly operation into a sequence of subassemblies; each subassembly is a stable arrangement of 100 elements, and each watch, a stable arrangement of 100 subassemblies. The second watchmaker has developed no such organization. The average interval between phone interruptions is a time long enough to assemble about 150 elements. An interruption causes any set of elements that does not yet form a stable system to fall apart completely. By the time he has answered about eleven phone calls, the first watchmaker will usually have finished a watch. The second watchmaker will almost never succeed in assembling one - he will suffer the fate of Sisyphus: as often as he rolls the rock up the hill, it will roll down again.

It has been argued on information-theoretic grounds - or, what amounts to the same thing, on thermodynamic grounds - that organisms are highly improbable arrangements of matter; so improbable, in fact, that there has hardly been time enough, since the Earth's creation, for them to evolve. The calculation on which this argument is based does not take into account the hierarchic arrangement of stable subassemblies in the organisms that have actually evolved. It has erroneously used the analogy of the second, unsuccessful watchmaker; and when the first watchmaker is substituted for him, the times required are reduced to much more plausible magnitudes. (Simon, 1973)
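The arithmetic behind the parable is worth making explicit. The Python sketch below is my own back-of-envelope rendering of Simon's calculation; the interruption probability per part (1/150) is the value implied by an average of about 150 elements assembled between phone calls.

    # Back-of-envelope version of Simon's watchmaker arithmetic.
    # Assume each added part carries an independent chance of interruption;
    # an average of ~150 parts between calls implies p = 1/150 per part.
    p = 1.0 / 150.0

    def chance_of_finishing(n_parts):
        # Probability of placing n parts in a row with no interruption.
        return (1.0 - p) ** n_parts

    # First watchmaker: stable sub-assemblies of 100 units each.
    modular = chance_of_finishing(100)
    # Second watchmaker: one unbroken run of 10,000 parts.
    monolithic = chance_of_finishing(10_000)

    print(f"modular sub-assembly completed: {modular:.3f}")     # ~0.51
    print(f"monolithic watch completed:     {monolithic:.2e}")  # ~1e-29

The modular watchmaker merely repeats a roughly coin-flip task about twice per sub-assembly, while the monolithic strategy is effectively hopeless: the hierarchical organization changes the odds by nearly thirty orders of magnitude.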
Considering hierarchy as an organizational tool in the analysis of complex system dynamics is well received in ecology, as it is in other sciences such as computer science, the social sciences, and physics (Wu 1999; Rietkerk et al. 2002; Allen 1992). This is because the systems that we are most familiar with and interested in (societies, economies, ecologies) are easily identified as being made up of sub-systems: individual people beget various groups (religious, professional, political), which beget particular communities, which in turn beget organizations such as political parties or even the nation state/economy. These organizational sets are analogous to Chinese boxes or Russian dolls: when you open one, you find another, smaller, doll or box, and so on. Physics scales well hierarchically: elementary particles beget atoms, which beget molecules, etc.

In a complex system the constituents are organized as a system within yet a larger system (like the doll set) - but unlike the Chinese box analogy, the systems need not resemble each other.[77] The nested architecture of systems is so ubiquitous in the world that we often don't notice it. For this same reason it is puzzling that a working conceptual understanding of hierarchies is not a primary part of a modern education.

[Footnote 77] If one were considering the geometry of a physical system, say a river system, then the fractal nature of that system would give a closer impression of the self-similar analogy to a Russian doll set.

Nested and Decomposable

To this point I have argued that hierarchy theory provides a vehicle from which to organize scale and design issues into a coherent picture. At a first rendering, then, we can see that hierarchies are explicitly nested. This in itself is still rather trivial. Drilling a little deeper into the theory reveals another very important role that hierarchy theory plays: helping to understand organized complexity in terms of how the parts of the system, the modules or sub-systems, relate to each other. Hierarchy theory refers to this as the concept of decomposability.

Decomposability, the extent to which it is practical and informative to 'decompose' a system, is the nuts and bolts of hierarchy theory. Decomposability will determine where we end one level and begin another. It determines how wide the span of each level will be. This is largely a subjective process; the more rigorous the thought put into the cleaving process, the more fruitful the analysis. The subjectivity of this process should be comfortable to ecologists and resource managers, as subjective terms such as ecosystem, watershed, and forest are central to ecology. Just as these terms require common sense to be effective, hierarchy theory requires the researcher to spend time learning the nuances of the theory so as to apply them with the level of rigour that will produce insight into the problem-solving process (see Wu 1999, Kolasa 1996, and Waltho 1994 for applied examples of hierarchy theory). I have talked about the ubiquity of organized complexity and some of the limitations inherent to its analysis.
Concerning the analysis of a complex system, we can make a first-order approximation of a system's organization by using hierarchy theory. In the analysis of the case study, hierarchy theory forms the first-order approximation and network theory a second-order (more detailed, quantitative) approximation of the organizational architecture of the system at hand. Again citing a very early classic article on the topic, from 1962, H. A. Simon describes the concept of a 'hierarchy' as "one of the central structural schemes that the architect of complexity uses."

In a non-linear context the larger system is a product (an emergent product) of the smaller systems of which it is made; the larger system is made up of the smaller system(s). Hierarchies, then, are sets of nested systems.[78] The idea of 'emergence' is important to hierarchy; the point at which a system of subsets creates a new, emergent property is often the point at which one level in a hierarchy transitions to a higher level.

[Footnote 78] The Chinese box or Russian doll analogy breaks down here; the analogy only refers to the nestedness of hierarchies. There is no quality of emergence in the analogy.

Physics presents a simple example of hierarchy: elementary particles combine in specific ratios to create atoms. What species an atom is depends on the particular configuration of elementary particles. The atoms combine in, again, specific proportions to create molecules that have various levels of stability. Molecular stability is, to a large degree, a product of the atomic bonds at work. The properties of the atom are dependent on the ratios of the particles, not on any particular particle. The characteristics of the atom are not readily identifiable simply based on the characteristics of the particles involved. This is even more identifiable in the case of molecules. A water molecule cannot be readily predicted based on knowledge of hydrogen or oxygen. The properties of water 'emerge' only when the ratio of hydrogen to oxygen is 2:1. Consider the effect water has on combustion, then consider the combustible quality of either hydrogen or oxygen.

In the examples just given, the levels would be made up of particles as the primary level, atoms as the secondary level, and molecules as the tertiary level.[79] The relationship between levels is rather obvious in this example but still arbitrary. The relationships between the levels, in terms of energy states, change by orders of magnitude as you move from particles to atoms to molecules. The energy field required to keep the particles organized into atoms is much greater than the energy bonds required to keep molecules together. This is typical of the relationship between levels in any organizational hierarchy. This is a vertical organization based on relative frequency. This relationship can be seen to hold when considering horizontal organization, that is, the relative frequencies at work at a given level of the hierarchy.[80]

[Footnote 79] As an aside, it is useful to note that in discussing hierarchy and scale issues two types of classification come up often: scale and level. Scale is a metric, a measure, based on measures of space and/or time. Level refers to types of organization. So a large scale is usually one measured in hundreds or thousands of meters or kilometers and a small scale is often measured in meters or much less. It is relative to the general area of research, but the important thing is that scale is metric (measured). Level refers to classification: organism level (considers the organisms), population level, ecosystem level, etc. The term 'plot level research' designates the level at which research usually occurs (typically small scale) so that the researcher can maximize control over the situation generating data.
[Footnote 80] It is interesting to note that this postulate of hierarchy holds with population dynamics studies that have found that population variability increases, in general, with the length of time populations are sampled (see Pimm, S. L. and A. Redfern. 1988. The Variability of Population Densities. Nature 334). As one moves up the hierarchy the scale of time increases, as does the metric scale. Stability of populations becomes more tenuous as complexity builds. See also Kolasa, 1996.

Interaction among subsystems is less than interaction within a subsystem. A general postulate of hierarchy theory, then, is that intra-level interaction will be greater than inter-level interaction, whether one considers the frequency or the magnitude of interaction, vertically or horizontally. In ecology, the interaction between organisms of a similar niche is of relatively high frequency compared to the interactions between populations that make up an ecosystem. So typically the focal level of organism would create a level in the hierarchy that then gives way to a population level. In the case study we will be looking at forest ecology both at the level of a tree and at the level of a forest, so as to both aggregate effects and consider the emergent effects of aggregation. At a level that reflects the spatial scale of a tree, maybe a plot several meters by several meters, we will be at a relatively low level in a hierarchy considering a forest. From a research perspective the relative frequency of the tree's physiology in terms of intra-dynamics would outweigh the inter-dynamics between individual trees of a forest. Moving up to a larger spatial and temporal scale, the forest level, the inter-action between trees becomes more important than any one tree (intra-action) in the forest.

One can think of the hierarchy as a cone shape with the wide base made up of small-scale interactions happening at a relatively high frequency (Fig. A1). The top of the cone is representative of large-scale processes that occur at a relatively slow frequency. So the cone has two basic dimensions:

(1) Large scale to small scale, from top to bottom respectively
(2) High frequency to low frequency (or short temporal scale to long temporal scale), from bottom to top

Figure A1. The hierarchical "layer cake" for ecological criteria and scale. B is biome; E is ecosystem; P is population; Organism; Community; Landscape. (Schematic adapted from Allen and Hoekstra 1992)

The large-scale, low-frequency levels are then decomposable into nested levels of increasingly smaller-scale, higher-frequency constituent sub-systems.

Hierarchy Theory and Near-Decomposability (ND)

Consider the hierarchical organization of molecules again. Interactions between atomic particles are much stronger and at a higher frequency than the interactions between the atoms of the molecule. This is so again when comparing the rate of interaction between constituent atoms that make up a molecule and the rate and strength of interaction between molecules in, for example, a drop of water. This holds, in general, for hierarchical organizations of social interactions.
At a university, government, or corporate organization, interactions between members of a department are more numerous (higher frequency) than interactions among departments. Although the depiction of levels is arbitrary, finding cut-off points between levels is crucial to an analysis being insightful. The concept behind near-decomposability (ND) provides a logical framework for delimiting levels so as to create a focal level for analysis. In an analysis one cannot consider all the constituents in great detail; the data would simply be overwhelming and the interactions too complex to render insight. The goal here is, after all, to simplify an organized but complex situation into a manageable scenario. If we accept that the interactions between modules are significantly weaker than the interactions within them, then we can treat the modules as almost autonomous.

Another way to consider this situation is to use the term 'holon'. Holon is a term from the earliest days of General Systems Theory that identifies a sub-system as part of a hierarchy but still an entity unto itself: the organs of an organism are obviously parts or modules of the larger system, yet can be treated as whole entities, e.g. a heart, liver, or brain. We can organize sub-systems into holons because we recognize the strong interactions of the holon sub-system as being significant enough to provide a clear delineation from the context that the sub-system operates in. A human heart has no role outside of the body, but we can still describe the heart as a distinct organ relative to the surrounding tissues and the role they play in the larger system. It is ultimately a discretionary determination when considering where to stop when defining the heart; is the arterial system part of the heart? Where is the cut-off (literally) for the pulmonary vein, the superior vena cava, or the abdominal aorta? Hierarchy theory would argue that the determination would lie with the (somewhat rough) relative measure of the level of interaction. In the case of the heart we might consider the role of muscle tissue and the heart's role as a pump. Where does the abdominal aorta begin the transition to muscle tissue?

The goal in using ND as an organizing concept is the recognition that complex organized systems can be decomposed into modules that display an almost autonomous role (a nearly independent functionality) while at the same time contributing to the functionality of the larger system. H. A. Simon used the term 'loose coupling' to describe this characteristic of complex systems (Simon, 1962). In the hierarchical model, loose coupling allows for the local modification of a module while retaining the over-all function of the larger system. Think of genetic change, whether it be mutation, crossing-over, or sexual selection. The module of the cell has undergone an internal change (DNA), but the output, or the result of the change, is still within the range of tolerance of the cell. So the cell's contribution to the larger sub-system is effectively unchanged.[81] This is how mutations are maintained or discarded. If a mutation allows the cell to function (the same, better, or only slightly worse) then it will persist. If the mutation's effect is to create a characteristic that causes the cell to function less well than can be tolerated (the output of the cell is incommensurate with its context) then that cell will perish.

[Footnote 81] The effect is not strictly unchanged per se, but the change is well within the capacity of the system to absorb it or to change synchronistically. A mutation could conceivably allow an organism to produce an enzyme capable of breaking down a complex carbohydrate such as cellulose, but if the digestive system is not evolving in sync with the new enzyme then the mutation could be redundant.
Computer system software can be modified at the level of the module (e.g., a new printer driver) without affecting the neighbouring modules or the larger system in such a way that the larger system is inconvenienced.[82] Hierarchy theorists argue that this is the primary mechanism for evolution (Simon, 1973; Simon, 2002).

[Footnote 82] In my example of a printer driver, the modification allows me to change to a better, faster laser printer without changing my OS or computer.

A very important aspect of ND (especially for our purposes here) in a hierarchy setting is that when there is a disturbance to the system, what ecologists call a perturbation, the system moves from its local equilibrium. Think of a storm event that causes a landslide in a watershed, a forest fire, or even a timber-harvest clear-cut. The modules at the lowest level, those that are operating at the highest frequency (fast growth, short life), are busy finding a niche to exploit. As a forest ecosystem module, the activity rate of the so-called pioneer species (fast-growing 'weeds' and trees) is very high and change is occurring rapidly. At the next level up, perhaps the forest level or the watershed level, spatial and temporal scales are larger and change has not yet occurred in any meaningful way. Change is propagated up as the lower levels equilibrate, giving way to the next phase of forest succession. ND models this activity as the lowest modules having a high level of internal activity without disturbing the levels above: Simon's 'vertical loose-coupling' concept. At the level of pioneer succession the elements are modular, in that we can differentiate between populations of grass species, tree species, invertebrates, etc. The internal dynamics of each are high relative to the dynamics between modules and eventually will average out to create an ecosystem that is amenable to the next phase of succession.

Simon likes to use an analogy of a building when discussing ND. The building is separated into floors and on each floor are rooms. Within the rooms are cubicles utilized by graduate students. The building is old and the heating system is terrible. Outside it is freezing; inside, each student has his or her own little electrical heater under the desk to keep from freezing. The exterior walls of the building are, in this analogy, perfect insulators. The interior walls are not. The walls of the rooms are adequate insulators. The cubicle separators provide minimal insulation. As long as the electrical heaters under each desk function, the temperature in each cubicle is whatever the student likes. But each cubicle is different, and each room is a different ambient temperature than the next one. One day the power dies. What will happen? With no heaters working, the system (the building in this case) will move to an equilibrium temperature. Due to the differing ability of the walls to insulate, this will happen first among cubicles, then within rooms, floors, and finally the building itself.
The cubicles, our lowest level in this hierarchy, will equilibrate first, as the cubicle separators are very poor insulators. Once all the cubicles in a room are the same temperature, the room will be at one even temperature, in distinction to when each cubicle was at a slightly different temperature. This will continue between rooms until each floor is at an even temperature, and on again until the whole building is one temperature. This mind experiment is possible because we said that the exterior walls were perfect insulators. If we were asked what the temperature in each cubicle is, the answer would depend on how much time has passed after the power went out. Before the power went out we would have to measure the temperature in each cubicle. Shortly after, we could probably get away with just measuring the temperature in each room. After a longer period we could just install one thermometer on each floor, and eventually we could just have one thermometer in the whole building.

The point is that the extent and grain of scale will determine the level of detail we will need to study a system. But we can generalize to say that a complex system made up of hierarchically arranged modules will be decomposable into various levels reflecting the temporal and spatial scale at each level. At the lowest levels we will need detailed analysis. As we move up the hierarchy we can assume that in the short run the activity within modules will be greater than between modules, but that in the long run this will average out, and we can gather relevant information that will generate valuable insight into an increasingly complex system (as one moves up in scale) by studying the equilibrium at any given level without much concern for the slower dynamics of the levels above. The same holds for the faster dynamics of the lower levels, which are busy averaging out to create the level we are focusing on at the time. If this sounds confusing, consider trying to help a friend who is suffering from anxiety. You can ignore the effects happening at the cellular level, and similarly you can ignore the effects that geo-politics are causing. The most effective solution is to focus on the immediate environment and try to create a calming influence. This is roughly how a psychologist might approach the situation. At the cellular level there is too much information; it is too complex to be helpful. Similarly, the state of the world is too complex. In most cases, understanding family dynamics, or some other dynamic occurring at the focal level, will be the best focus. That is not to say that we would ignore the details at levels above or below. Hierarchy theory can help to recognize which details are relevant and which can be assumed away. Of course, if the problem is physiological at the biochemical level, then this approach is not the correct one. In that case we would move down levels until we felt we had a handle on the problem at hand. But the hierarchical approach can provide a context for the direction that research needs to take.

So complex systems are organized hierarchically (nested), and the sub-systems and levels can be decomposed according to their relative frequencies. This allows a first approximation of a complex situation. A second-order approximation would likely try to understand the dynamics between the levels and the components of a level.
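Before moving on, it is worth noting that Simon's building can be simulated directly. The Python sketch below is a toy of my own construction; the coupling constants are arbitrary choices meant to mimic the analogy (leaky cubicle partitions, much tighter room walls). The within-room temperature spread collapses within a few steps, while the between-room spread decays far more slowly - the near-decomposable signature.

    # Toy near-decomposability: two rooms, two cubicles each.
    # Strong coupling inside a room, weak coupling across the room wall.
    def step(t, k_in=0.4, k_between=0.02):
        a0, a1, b0, b1 = t
        # Fast exchange between cubicles within each room.
        a0, a1 = a0 + k_in * (a1 - a0), a1 + k_in * (a0 - a1)
        b0, b1 = b0 + k_in * (b1 - b0), b1 + k_in * (b0 - b1)
        # Slow exchange between room averages through the wall.
        d = k_between * ((b0 + b1) / 2 - (a0 + a1) / 2)
        return (a0 + d, a1 + d, b0 - d, b1 - d)

    temps = (20.0, 24.0, 10.0, 14.0)  # cubicles (a0, a1) and (b0, b1)
    for n in range(1, 101):
        temps = step(temps)
        if n in (3, 100):
            within = abs(temps[0] - temps[1])
            between = abs((temps[0] + temps[1]) / 2
                          - (temps[2] + temps[3]) / 2)
            print(f"step {n}: within-room spread {within:.3f}, "
                  f"between-room spread {between:.3f}")

After a few steps one thermometer per room suffices; after many steps one thermometer for the whole building does - precisely the measurement argument made above.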
Assuming that we are comfortable with our model of how complex systems are roughly organized, we can then ask: how are these systems linked? What are the interactive dynamics at work in a complex system? Simon (1973) made the point, in his essay titled "The Organization of Complex Systems", that computer programs and computers in general make excellent experimental systems since they are complex but very amenable to quantification. In terms of building a detailed theory of the subject matter of his essay, Simon compared the analytical value of digital computer systems to that of Drosophila to genetics! Writing this in 1973, the author was discussing a topic still very esoteric. It has proved to be incredibly prescient when one considers the role of the digital World Wide Web (WWW) as a vehicle to study the dynamics of network theory (Barabasi, 2002); this in turn has contributed substantially to the development of knowledge about complex adaptive systems (ibid., 2005).[83]

[Footnote 83] www.nd.edu/~networks/Publication%20Categories/publications.htm#ajichorpub 2005

With the hierarchical model in mind, let's move to introducing modern network theory. This will allow the consideration of the way a complex system builds, as well as what its vulnerabilities are. This last point is central to issues of stability. I have talked about how ND systems facilitate the speed with which complex systems assemble and the likelihood that they persist.[84] The modular architecture of complexity allows for a quick response to a system perturbation without requiring the whole system to adjust. Network theory will provide insight into how a system builds resiliency; and resiliency is a precursor to system stability.

[Footnote 84] Recall Simon's parable of the watchmakers.

Appendix C: Introduction to 'Network Theory'

It is difficult not to resort to hyperbole when describing the potential for network theory to remodel the way we think about a good many aspects of our world. Since the Industrial Revolution the machine has served as the dominant model for modern humankind to hang its mental models on. The centuries to come will likely turn to the network as the model of choice. One could counter that the digital domain is a machine domain, but that holds only so long as you are thinking locally; from an organizational/topological perspective the domain is that of a network. The World Wide Web (WWW) is probably the most famous network, and research on it has been fruitful in terms of discovering just what it means to be organized along networked lines (Barabasi et al., 2000). The network architecture of the WWW is largely the product of a process of self-organization, and its measures are fractal in the language of geometry and 'scale-free' in the language of statistics. Network theory describes organizations that are highly contingent yet have significant degrees of freedom - a good description of 'ecosystem' if I ever heard one.

Network theory has its origins in a truly interdisciplinary setting. Mathematically its roots would be traced to modern graph theory and communications theory. Social scientists in the 1960s and 1970s, Stanley Milgram in particular, were asking questions about social networks that led to the coining of the phrase "six degrees of separation". By the late 1990s researchers with backgrounds in physics, math, sociology and molecular biology had cobbled together a body of work that would form the early groundbreaking works in network theory.
These works described quantitatively how various networks (the WWW, biological networks, trophic webs) are distributed, how they evolve, and what the strengths and weaknesses of such organizations are (Watts and Strogatz, 1998; Faloutsos et al., 1999; Barabasi et al., 2000; Jordan and Scheuring, 2002).

What constitutes a network? Like the term ecosystem, network is a general term that refers more productively to a concept as opposed to a well-defined object. Many networks are well defined: power grid networks, transportation networks, even river networks. But equally many are difficult to define precisely: social networks, trophic networks, and genetic networks. At its simplest, a network is an organization of entities that interact in a coordinated manner through space/time. Network theory, then, is the theory and study of how networks build, decay, and maintain themselves. The study of networks looks for common architectures that will inform our understanding of network dynamics.

When we think of networks we are implicitly thinking 'maps', since many networks are made up of invisible interactions (e.g., social nets, communication nets) and our consideration of them means forming mental models or maps. When we are 'mapping' how parts of a network interact we are talking about topology. Topology, as a word and as a concept, is one of those terms that I referred to above as making the journey from the lexicon of several disciplines to the more common lexicon of the vernacular. I would go so far as to say that we are living through a period that could be described as a movement from the topographical to the topological in terms of mental models. Topographical refers to traditional style mapping: Euclidean space. Topological mapping refers to how things relate. It is about distributed relationships. Topographical mapping puts the emphasis on 'nearest neighbour' influences. This means that the relationship between two points or objects diminishes, in general, with distance. This is of course very true in many cases, and it is also an important proviso in hierarchy theory: the distance between levels, in general, will be inversely related to the influence one level has upon another. As a subtext to this section on networks, let's take a minute to introduce topology; it is an important concept.

Topology

Topology, strictly speaking, is a branch of mathematics that has its roots in the study of geometry. In topology the study becomes qualitative in nature, as opposed to strictly quantitative. Mathematical topology asks "Is it connected, or can it be broken up?" A classic example of a topological map is the London Underground (subway) map. In any subway the maps show the stops as they are linked on a particular line, and where the different lines meet. The distance between the stops, on the map, is not indicative of the actual distance between the stops. These common bus and subway maps are not scaled to distance.

In 1736 Leonhard Euler wrote a famous paper, The Seven Bridges of Konigsberg, that described the problem of trying to cross the seven bridges without using any bridge more than once (Fig. A2). In this situation the distance between the bridges or the lengths of the bridges is not relevant. The only important information is how they are connected.

Figure A2. The Seven Bridges of Konigsberg - Euler wrote a proof stating that it was impossible to make a walk crossing each bridge exactly once. (Source: Wikipedia.org)

In an introduction to topology, this paper would be described as a problem that does not depend on measurement. Euler's contribution to math was to frame the question as a graph (network), with the land masses as the nodes and the bridges between them as the edges of the graph. Thus was born graph theory, the root of modern network theory, in that graph theory provides the mathematical foundation for the quantitative and topological study of networks.
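Euler's argument ultimately reduces to counting. The Python sketch below is my own rendering (the land-mass labels A-D are conventional, not Euler's); it encodes the seven bridges as edges of a multigraph and applies the parity rule that follows from Euler's proof: a connected graph admits a walk crossing every edge exactly once only if zero or two of its vertices have an odd number of incident edges.

    from collections import Counter

    # The classical Konigsberg multigraph: 4 land masses (A, B, C, D)
    # and 7 bridges, each listed as an edge between two land masses.
    bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
               ("A", "D"), ("B", "D"), ("C", "D")]

    degree = Counter()
    for u, v in bridges:
        degree[u] += 1
        degree[v] += 1

    odd = [node for node, d in degree.items() if d % 2 == 1]
    print(degree)  # A: 5, B: 3, C: 3, D: 3
    # An Eulerian walk exists only with 0 or 2 odd-degree vertices.
    print(f"odd-degree land masses: {len(odd)} -> "
          f"walk possible: {len(odd) in (0, 2)}")

Konigsberg has four odd-degree land masses, so the walk is impossible - and note that no distances or coordinates appear anywhere in the computation, which is the topological point.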
Although topology's roots are mathematical, its conceptual insights are used widely. In architecture it refers to spatial qualities that cannot be described through topographical (coordinate space) references: for example, the interactions of social and economic characteristics. Topology has allowed the digital Geographical Information System (GIS) to leap ahead of Computer-Aided Design (CAD) programs in its power of modelling. CAD programs are what architects and engineers use to draw and model their work: a digital version of a blueprint. A GIS program is very similar to a CAD program, but it has a property that allows it to go farther in the modelling process. Using the concept of topology, a GIS allows the modeller to record how parts of the model relate to each other.

In GIS, topology refers to the logical relationships between spatial objects. A GIS uses topology to understand networks (e.g., the lines in a GIS describing streets are related and form networks), and similar logic allows a GIS to code connectivity, conductivity, and associativity. These properties greatly enhance spatial analysis. The road example above is an exemplar of connectivity; directional flow (e.g., a river, or urban water-works infrastructure) is an example of conductivity; and the ability to organize data according to groups that have logical relationships, such as green-spaces that form riparian areas within a modelled setting, is a form of associativity. In this last example a GIS map allows a user with very little experience to query a map for riparian areas. The GIS database might link any green space to any riverine space to create a riparian category.

Consider the WWW (the Internet). In the web, distance is essentially meaningless. One of the most influential researchers in network theory is physicist Albert-Laszlo Barabasi. In the late 1990s Barabasi realized that although the web was fast becoming a very important and primary part of modern communication, no one knew much about what the web looked (mapped) like or how it ebbed or flowed in terms of size and organization. With his grad students he began a research program to study the World Wide Web network.[85] Their work, along with that of others such as Watts and Strogatz (1998) and Faloutsos et al. (1999), began to define the modern theory of networks. I say 'modern' because network theory (as graph theory) had existed at least since the time of Euler and his work on the bridges. Networks were resurrected, after a long period of inactivity, in the post-war period (the 1950s). But all of this work focused on networks that were essentially static; it could not account for very large networks, or for the dynamics of networks gaining and losing nodes as the network developed. It would take very modern research methods to open networks up to rigorous scrutiny. As I mentioned before, H. A. Simon was prescient when he wrote about the value of the digital realm for studying complexity.

[Footnote 85] See Albert, R., H. Jeong, and A.-L. Barabasi. 1999. Diameter of the World Wide Web. Nature 401: 130-131.
Consider the WWW. On the web, distance is essentially meaningless. One of the most influential researchers in network theory is the physicist Albert-Laszlo Barabasi. In the late 1990s Barabasi realized that although the web was fast becoming a very important and primary part of modern communication, no one knew much about what the web looked (mapped) like or how it ebbed and flowed in terms of size and organization. With his grad students he began a research program to study the World Wide Web network. Their work [85], along with that of others such as Watts and Strogatz (1998) and Faloutsos (1999), began to define the modern theory of networks. I say 'modern' because network theory (as graph theory) had existed at least since the time of Euler and his work on the bridges. Networks were taken up again, after a long period of inactivity, in the post-war period (1950s). But all of this work treated networks as essentially static; it could not account for very large networks, or for the dynamics of networks gaining and losing nodes as they develop. It would take very modern research methods to open networks up to rigorous scrutiny. As I mentioned before, H. A. Simon was prescient when he wrote about the value of the digital realm for studying complexity. The work of Barabasi and others on the WWW and on intercellular networks, as well as on ecological food webs, has opened a vast area of research that will likely have a large effect on research concerning organized complexity (Barabasi, 2005).

85. See Albert, R., H. Jeong, and A.-L. Barabasi. 1999. Diameter of the World Wide Web. Nature 401: 130-131.

Networks - A Nascent Theory

Networks have long been recognized in many areas: networks of business/economy, trophic networks in ecosystems, and, of course, neurological networks (and biological networks in general). But until the late 1990s these networks were modelled as largely static: networks that already exist and neither grow nor atrophy. The change that came about in the late 1990s was a result of propitious circumstances. Computational power was widespread, essentially on every desktop in any university, and large data sets were becoming the norm in some areas: computer science, molecular biology, and among ecologists studying trophic structures. Researchers began to ask questions that essentially required understanding how these networks functioned, how they came to be, how they grew and senesced, and most importantly, how they were topologically mapped. One of these early researchers, Reka Albert, asks her students to consider these questions when beginning a study of networks:

Why study networks?
- It is increasingly recognized that complex systems cannot be described in a reductionist view. Understanding the behavior of such systems starts with understanding the topology of the corresponding network.
- Topological information is fundamental in constructing realistic models for the function of the network.
- Network topology related questions:
  - How can we quantitatively describe large networks?
  - How did networks get to be the way they are?
  - What are the consequences of a specific network organization?

(Reka Albert: http://www.phys.psu.edu/~ralbert/phys597 06/introduction.pdf)

By 2000 it was becoming apparent that complex networks had general properties that could be construed as a network architecture. Networks are made up of nodes and links or, more technically, vertices and edges. The vertices are the sites (the population) of a network. In a trophic net, the vertices would be the organisms that make up the network. In the WWW the networks are made up of a hierarchy of routers, sub-routers, and the computers or terminals sited in individual homes or institutions, all connected (the edges of the graph) via the communication infrastructure of telephone wires, satellites, and cable or fibre-optic lines. The linkages (how resources move through a trophic net, how data files move through the WWW) are the edges. How many edges come into or out of a node describes a node's 'degree' [86]. A node with a high degree is very well connected; that is, it has a large number of edges.

86. Technically speaking, degree has an 'in-ness' and an 'out-ness': indegree counts the links coming into a node, and outdegree counts the links leading out of it. Think of hyperlinks on a web-based document: hyperlinks leading to a site are its indegree, and the hyperlinks on the site leading to other sites are its outdegree. In and out denote direction in the topological sense discussed above.
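The distinction in the footnote is easy to make concrete. A minimal sketch (Python, networkx; the page names are invented):

    import networkx as nx

    web = nx.DiGraph()
    web.add_edges_from([
        ("home", "hub"), ("blog", "hub"), ("news", "hub"),  # links into 'hub'
        ("hub", "faq"),                                     # a link out of 'hub'
    ])

    print(web.in_degree("hub"))    # 3: incoming hyperlinks (indegree)
    print(web.out_degree("hub"))   # 1: outgoing hyperlink (outdegree)
    print(web.degree("hub"))       # 4: total degree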
A network can be organized along a continuum, from a completely randomized structure to a completely regular (homogeneous) structure. Prior to this resurgence in network studies, networks would have been conceived of as homogeneously laid out.

Figure A3. Networks wired along a continuum, from a regular lattice to a completely randomized lattice. Probability (p) is the likelihood of a vertex being connected to another vertex randomly. (Schematic adapted from Watts and Strogatz 1998)

Small Worlds

In Fig. A3, the graph on the left is wired regularly: a node is connected to its nearest neighbour and that neighbour's nearest neighbour. The graph on the right is wired randomly, and the graph in the middle is wired somewhere in between. To get from the vertex at 12 o'clock to the vertex at 6 o'clock on the left-hand graph would take a minimum of 5 steps and a maximum of 10 steps. Using the right-hand graph, it could be accomplished in 2 or 3 steps. The middle graph would need 4 steps. This example is from a 1998 paper by Watts and Strogatz that is now considered a classic on the topic of 'small-world' networks. The authors took the term from the idea that a graph (a network of connections) with many vertices arranged regularly can be very inefficient if your aim is to get anywhere that is not roughly a nearest neighbour: it will require many steps. This is a trivial point if the network is small and the number of nodes few, but many of the networks that are interesting are large, with many (millions, billions of) nodes. On the other hand, in the random graph setting it might be inefficient to move to a nearest neighbour. Network theorists refer to this as path length: the number of edges that must be traversed to get from one point to another on the graph. The other common metric is the clustering coefficient: the ratio of actual links or edges relative to all the possible links. A regular graph (network) exhibits high clustering (nearest neighbours are very important) and high path length. A random graph has a low path length and low clustering; this reflects its randomized structure.

Think of the diagram above (Fig. A3) as a continuum from regularly to randomly wired. Now imagine that we can 'tune' the network with a dial, altering how the nodes are linked. A small turn changes a few links from regular to random. The surprising outcome is the effect this has on path length (it drops precipitously) and clustering (it drops much more slowly). A small-world organization tends to reduce overall path length while at the same time maintaining clustering. It accomplishes this by wiring some of the nodes to distant nodes: connections that are non-existent in a regular graph setting. In a completely random graph these 'small world' connections arise through randomized connecting. The problem is that this can lead to a network that is inefficient at connecting nearest neighbours; the graph ends up with some areas dense with connections while others are barren. Watts and Strogatz, considering social networks, realized that social connections tend to have qualities of both these graphs, regular and random. In terms of regular qualities, social connections are strongly influenced by proximity: nearest neighbours tend to be acquaintances if not close friends. But in related sociological research, many interviewees reported that they had used 'weak ties', casual acquaintances, as bridges to jobs. This is the 'six degrees of separation' phenomenon.
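The 'tuning dial' is simple to simulate. A sketch (Python, networkx; the network size, neighbourhood size, and rewiring probabilities are arbitrary choices):

    import networkx as nx

    n, k = 1000, 10   # 1000 nodes, each starting with 10 nearest neighbours
    for p in [0.0, 0.01, 0.1, 1.0]:
        G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
        L = nx.average_shortest_path_length(G)
        C = nx.average_clustering(G)
        print(f"p={p:<5} path length={L:6.2f} clustering={C:.3f}")

Even at p = 0.01, with only about one per cent of edges rewired, the average path length collapses while clustering barely moves; this asymmetry is Watts and Strogatz's central observation.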
Clustering in a network tends to form around strong associations, but weak associations are important for occasionally connecting disparate parts of the network. To quantify this clustering effect, Watts and Strogatz used the ratio of actual node connections to possible node connections: a clustering coefficient. An example of a friendship network works well for insight. Suppose you have five friends; if they all knew one another, there would be the maximum of 10 friendship links among them. More realistically, some of your friends will know each other and some will not, so the number of links among your friends might be 6. In this case the clustering coefficient of your friendship circle would be 6/10 = 0.60. A clustering coefficient measures the relative number of connections in a network.

The popular idea of "six degrees of separation" reflects the way society tends to be organized as a large network where most people know relatively few others well. But if we include casual friends and acquaintances, then the network grows, with your close friends forming a cluster around you (since some of your friends will be friends of each other as well), and those clusters being connected by weak links among acquaintances. The idea behind 'six degrees of separation' is that you may know someone (either well or only casually) with family in northern India, and so you could conceivably pass a note from yourself (say, in Moose Jaw, Sask.) to the Dalai Lama in northern India via a friend or acquaintance (i.e., hopefully your friend of a friend in India knows a monk at Dharamsala!).

Watts and Strogatz (1998) coined the term 'small worlds' to describe the effect on a network of being somewhere between a regular network (homogeneous) and a random network. Their paper considers small worlds as a possible model for contagious disease vectors. The more highly connected a network, the faster a disease can potentially spread (of course, the degree and manner of contagiousness will matter here). When considering contagion, nearest neighbours are very relevant; but equally important, especially in terms of the potential to spread, will be the non-regular, non-random, small-world links that connect disparate regions of the network.

Barabasi et al. (2002) look at the network of co-authorship amongst scientists. The science community tends to be highly clustered due to another concept in network theory called preferential attachment, which creates a power law distribution in the publishing world of science. Most papers are cited infrequently, with a relative few being cited a great deal. This occurs because important papers tend to spawn further research in their area, and the original work is then cited over and over again: anyone writing in the area will cite the classic papers. This further reinforces the clustering topology of scientific publications. Time plays a role here [87], but the small-world architecture, combined with characteristic clustering and preferential attachment, creates a network that exhibits the typical traits of a dynamic network.

87. As time goes on, a paper's relevancy declines in general.

Early research looking into the layout of the WWW discovered, to many people's surprise (mostly, I assume, because no one had really thought about it), that the WWW was best mapped as a topological network rather than a topographical one. Equally surprising was the realization that it was distributed as a power law: power law distributions are part of a family of distributions that include, for example, the Zipf and Pareto distributions [88].

88. Other terms are fat-tailed distributions, long-tailed distributions, and heavy-tailed distributions.
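The friendship arithmetic can be verified directly. A sketch (Python, networkx; the names are placeholders):

    import networkx as nx

    G = nx.Graph()
    friends = ["ana", "ben", "cal", "dee", "eli"]
    G.add_edges_from(("you", f) for f in friends)       # you know all five
    G.add_edges_from([("ana", "ben"), ("ana", "cal"),   # 6 of the 10 possible
                      ("ben", "cal"), ("cal", "dee"),   # friend-to-friend links
                      ("dee", "eli"), ("ben", "eli")])

    print(nx.clustering(G, "you"))   # 6 / 10 = 0.6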
These last two distributions approximate the so-called 80:20 Rule. The economist Vilfredo Pareto, working in the late 1800s, found that in most societies the wealth distribution was characterized by 20% of the population owning 80% of the national wealth. Zipf found that a similar distribution held for words in a language: 20% of the words in a language made up 80% of the word use in a given text that he sampled. More generally, power law distributions describe a situation where many small events dominate, with a few very large events occurring occasionally.
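Pareto's observation is easy to reproduce by simulation. A sketch (Python with numpy; the tail exponent of roughly 1.16 is the value commonly quoted as reproducing the 80:20 split, and the population size is arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    alpha = 1.16                               # tail exponent
    wealth = rng.pareto(alpha, 100_000) + 1.0  # Pareto samples, minimum 1.0
    wealth.sort()

    top20 = wealth[int(0.8 * wealth.size):]    # the richest fifth
    print(top20.sum() / wealth.sum())          # roughly 0.8 of total wealth

A heavier tail (smaller alpha) concentrates wealth further; a lighter tail spreads it out.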
In the case of the WWW, most sites have few edges, or links, while a relative few have a vast number of links. Consider someone's personal web page compared to Amazon.com or Google.com:

    Systems as diverse as genetic networks or the world wide web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature is found to be a consequence of the two generic mechanisms that networks expand continuously by the addition of new vertices, and new vertices attach preferentially to already well connected sites. A model based on these two ingredients reproduces the observed stationary scale-free distributions, indicating that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems. (Barabasi and Albert, 1999)

Figure A4. Networks and Distributions. (Schematic adapted from Barabasi, 2002)

In Fig. A4, a normal distribution describes the map of national highways in America. Highways connect nearest neighbours: we don't build highways that go from Seattle to Miami without connections all along the way. The economics and the practical needs of drivers dictate that a national highway network be laid out regularly. The map on the right depicts the routes of flights in America. Airports tend to be laid out as regional hubs. Big population centres draw airports, and those hubs then become stop-over sites for smaller flights. As well, flights can easily justify routes that connect Seattle to Miami if the demand is there. So airports tend to exist as a few very large airports and many more smaller ones, and flights happen to connect disparate parts of the continent according to population - or preferential attachment. Airline flights are distributed as a power law distribution.

Barabasi, and others, use the term 'scale-free' to describe a power law distribution. Power laws are scalar distributions. They describe sample sets or populations of data that are extremely variable: so variable, in fact, that they have no meaningful mean or average variance; therefore the data has no particular scale - it is scale invariant. These terms have physical constraints, of course. Flights, although power law distributed, cannot be of an outlandishly long nature, extending the tail of the distribution; likewise the shortest flights will never connect, say, bus stops. Power laws take the form P(X > x) ~ cx^(-a), where c and a are non-negative. Roughly speaking, the tail of the power law falls asymptotically according to the exponent a. Considering a network of nodes and links, the exponent in the power law measures how fast a node acquires new links. Power laws are characterized by the property that they look the same regardless of scale. In this sense power laws reflect the fractal nature of many complex systems (Fig. A5).

Figure A5. The graph on the left is a random graph; each node has an equal probability of connecting to another node, and its distribution is Poisson or normal. The graph on the right is 'scale-free'; its distribution is a power law, influenced by weighted probabilities that reflect the preferential attachment environment.

Again, to use the WWW as a model: the web can naturally be thought of as a graph. Pages on the web are the vertices of the graph and the links within a page, the hyperlinks, are the directed edges. Links have direction: they can lead out of a page, and links can lead to the page from other pages. Consider a page (a research paper in pdf format) that has been created and causes quite a stir in the science community. The page's outgoing links are finite (unless the page is edited over time) and represent the hyperlinks to citations or some such role. The incoming links, the number of other pages referring to that page, are undetermined and can potentially grow exponentially over time if the paper is cited often.
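Growing a graph under preferential attachment makes these hubs appear on cue. A sketch (Python, networkx; the graph size and seed are arbitrary):

    import networkx as nx

    G = nx.barabasi_albert_graph(n=10_000, m=2, seed=7)
    degrees = sorted((d for _, d in G.degree()), reverse=True)

    print(degrees[:5])                  # a handful of heavily linked hubs
    print(sum(degrees) / len(degrees))  # the mean degree stays near 4 (= 2m)

Most nodes sit at or near the minimum degree while a few hubs accumulate hundreds of links: the personal-web-page-versus-Google pattern.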
Preferential Attachment

In 2003, Michael Mitzenmacher wrote a review of the debate in computer science about the distribution of file sizes on the Net: are they power law distributed or lognormal? The paper is a wealth of insight, mostly because it offers up a very clear historical perspective on the nuances of these two distributions.

    Most [power law] models are variations of the following theme. Let us start with a single page, with a link to itself. At each time step, a new page appears, with outdegree 1. With probability a < 1, the link for the new page points to a page chosen uniformly at random. With probability 1 - a, the new page points to [a] page chosen proportionally to the indegree of the page. This model exemplifies what is often called preferential attachment [my bold]; new objects tend to attach to popular objects. In the case of the Web graph, new links tend to go to pages that already have links. (Mitzenmacher, 2003)
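The model Mitzenmacher describes takes only a few lines to implement. A sketch (plain Python; the value of a and the number of steps are arbitrary):

    import random

    random.seed(0)
    a = 0.2
    targets = [0]        # one entry per existing link; page 0 links to itself
    indegree = {0: 1}

    for page in range(1, 100_000):
        if random.random() < a:
            choice = random.randrange(page)   # uniform over existing pages
        else:
            choice = random.choice(targets)   # proportional to indegree
        targets.append(choice)
        indegree[choice] = indegree.get(choice, 0) + 1
        indegree.setdefault(page, 0)

    print(max(indegree.values()))                  # one enormous hub
    print(sum(v == 0 for v in indegree.values()))  # most pages: no links in

Sampling uniformly from the list of existing link targets is what makes the attachment proportional to indegree: a page appears in that list once for every link it has already received.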
This mechanism, articulated in models of the Web, biochemistry, etc., can be generalized to apply to any kind of preferential attachment. (Mitzenmacher, 2003). Recall in the preceding discussion of Hierarchy Theory that the presence of stable subassemblies (the Blind Watchmaker example) greatly increased the likelihood of a complex assembly occurring - this is the hierarchical version of preferential attachment. Hierarchy Theory and Network Theory, then, have many over-lapping characteristics and indeed recent research has led to the use of a hierarchical model being wedded to a scale-free network model (Ravasz, 2002).  Complex (Hierarchical) Network Models At this point the 'introduction to ...' part of the thesis can be concluded. M y goal has been to clarify what I think is a good working definition of complexity. I have used a very early definition articulated by one of the founders of modern information theory, Warren Weaver - organized complexity. Hierarchy Theory, also having its roots about mid-point in the 20th century, provides a first-order model of how to organize an insightful study of a complex system. The area of interest becomes the focal point, or the focal level, in the hierarchy and we consider the level above and below in terms of setting the parameters for study. In a hierarchical perspective we explicitly recognize the modular nature of a complex system and we utilize the near-decomposable mechanics of modules in a hierarchy to determine what information is central to our study. The principles of neardecomposability encourage us not to over analyze the study. This is the central mechanism we use to decomplexify the complex system.  124  Once we have the hierarchical model of the case study in place we can draw on Network Theory to make informed assumptions about the dynamics at work that are the key determinants of hillslope stability in the watershed above the Resort Municipality of Whistler. Figure A6 is a model of cellular metabolism networks. Researchers found that these very complex networks displayed both high levels of clustering and scalefree distribution. Normally these two characteristics would be antithetical - clustering supposes lower levels of variability and scale-free distributions supposes low levels of clustering. A. ) A schematic illustration (left most) of a scale-free network, whose degree distribution follows a power law. In such a network, a few highly connected nodes, or hubs (blue circles), play an important role in keeping the whole network together. A typical configuration (right) of a scale-free network with 256 nodes is also shown, obtained using a scale-free model, which requires the addition of a new node at each time step such that existing nodes with higher degrees of connectivity have a higher chance of being linked to the new nodes. [Preferential Attachment] B. ) Schematic illustration (left) of a manifestly modular network made of four highly interlinked modules connected to each other by a few links. This intuitive topology does not have a scale-free degree distribution, as most of its nodes have a similar number of links, and hubs are absent. A standard clustering algorithm uncovers the networks inherent modularity (right) by partitioning a modular network of N= 256 nodes into the four isolated structures built into the system. C. ) The hierarchical network (left) has a scale-free topology with embedded modularity. The hierarchical levels are represented in increasing order from blue to green to red. 
Complex (Hierarchical) Network Models

At this point the 'introduction to ...' part of the thesis can be concluded. My goal has been to clarify what I think is a good working definition of complexity. I have used a very early definition articulated by one of the founders of modern information theory, Warren Weaver: organized complexity. Hierarchy Theory, also having its roots around the mid-point of the 20th century, provides a first-order model of how to organize an insightful study of a complex system. The area of interest becomes the focal point, or the focal level, in the hierarchy, and we consider the level above and below in terms of setting the parameters for study. In a hierarchical perspective we explicitly recognize the modular nature of a complex system, and we utilize the near-decomposable mechanics of modules in a hierarchy to determine what information is central to our study. The principles of near-decomposability encourage us not to over-analyze the study. This is the central mechanism we use to decomplexify the complex system.

Once we have the hierarchical model of the case study in place, we can draw on Network Theory to make informed assumptions about the dynamics at work that are the key determinants of hillslope stability in the watershed above the Resort Municipality of Whistler.

Figure A6 is a model of cellular metabolism networks. Researchers found that these very complex networks displayed both high levels of clustering and a scale-free distribution. Normally these two characteristics would be antithetical: clustering supposes lower levels of variability, and scale-free distributions suppose low levels of clustering.

A.) A schematic illustration (left-most) of a scale-free network, whose degree distribution follows a power law. In such a network, a few highly connected nodes, or hubs (blue circles), play an important role in keeping the whole network together. A typical configuration (right) of a scale-free network with 256 nodes is also shown, obtained using a scale-free model, which requires the addition of a new node at each time step such that existing nodes with higher degrees of connectivity have a higher chance of being linked to the new nodes. [Preferential Attachment]

B.) Schematic illustration (left) of a manifestly modular network made of four highly interlinked modules connected to each other by a few links. This intuitive topology does not have a scale-free degree distribution, as most of its nodes have a similar number of links, and hubs are absent. A standard clustering algorithm uncovers the network's inherent modularity (right) by partitioning a modular network of N = 256 nodes into the four isolated structures built into the system.

C.) The hierarchical network (left) has a scale-free topology with embedded modularity. The hierarchical levels are represented in increasing order from blue to green to red. Standard clustering algorithms (right) are less successful in uncovering the model's underlying modularity.

Figure A6. Hierarchical Modularity (Adapted from Ravasz et al., 2002)

A hierarchical model that treats the clustering as levels in the hierarchy and a scale-free distribution as the overall network configuration worked very well to align the statistical parameters the researchers were getting. I propose a similar model to explain the emergence of stable ecological entities that we will call 'ecostructures': in this case, a mesh of ecological structures modelled as a Hebbian network in a four-dimensional space. Hebb, a Canadian researcher at McGill during the mid-twentieth century, studied the neurological basis of learning. He proposed a model of elastic synaptic dynamics. As a child learns, certain paths of synaptic activity fire over and over again (such as at the sight of a mother, of food, or at the feel of a wet diaper) and are reinforced, much as a trail is created by walking through the grass over and over again. This model of networked paths strengthening is the basis of neural network mathematical modelling.

Figure A6, then, is the primary model I propose for the topological model of ecostructure. The various levels of the network are the modules of the hierarchy/ecosystem. They have much higher relative rates of interaction within the module or level than amongst levels or modules. The levels do, however, interact at a lower frequency, and this is modelled as the edges linking the various levels. Hebb theorized that memories, dreams, and even ideas were the emergent property of neural paths forming dense, overlapping areas of topologically related events [90]. Similarly, ecostructures build (evolve) over time, and their emergent properties are the goods and services we so highly value but may fail to recognize due, in part, to their distributed quality.

90. That is why, the theory goes, in a dream you may experience feelings and images that are linked disparately in the brain (smells, temperature, fear) but occur together in terms of the event that originally created the signals within the brain.
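Hebb's reinforcement idea has a standard minimal formalization: the weight between two units grows in proportion to their co-activity. A sketch (Python with numpy; the patterns, sizes, and learning rate are invented for illustration):

    import numpy as np

    w = np.zeros((2, 3))   # weights from 3 input units to 2 output units
    eta = 0.1              # learning rate

    x = np.array([1.0, 0.0, 1.0])   # an input pattern that keeps recurring
    y = np.array([1.0, 0.0])        # the response that fires along with it

    for _ in range(50):             # the experience repeats
        w += eta * np.outer(y, x)   # Hebb's rule: co-active units strengthen

    print(w)   # repeatedly used connections have grown; the rest remain 0

The repeatedly exercised connections accumulate weight just as the trail through the grass deepens with use.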
