ENERGY PERFORMANCE ASSESSMENT FOR EXISTING MULTI UNIT RESIDENTIAL BUILDINGS

by

Isuru Madhushan Gamalath
B.Sc. Eng. (Hons), University of Moratuwa, 2013
M.Sc., University of Moratuwa, 2015

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE COLLEGE OF GRADUATE STUDIES (Civil Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Okanagan)

June 2017

© Isuru Madhushan Gamalath, 2017

The undersigned certify that they have read, and recommend to the College of Graduate Studies for acceptance, a thesis entitled "Energy performance assessment for existing multi unit residential buildings", submitted by Isuru Madhushan Gamalath in partial fulfilment of the requirements of the degree of Master of Applied Science in Civil Engineering.

Dr. Kasun Hewage, School of Engineering (Okanagan Campus), Supervisor, Professor
Dr. Rehan Sadiq, School of Engineering (Okanagan Campus), Supervisory Committee Member, Professor
Dr. Zheng Liu, School of Engineering (Okanagan Campus), Supervisory Committee Member, Professor
Dr. Yang Cao, School of Engineering (Okanagan Campus), University Examiner, Professor

09-June-2017

Abstract

Climate change is a major challenge in today's world. Energy use is directly correlated with greenhouse gas emissions, which drive climate change. As the residential sector is a major energy consumer, improving the energy performance of the residential building stock is imperative in mitigating this issue. Evaluating building energy performance, life cycle impacts, and the economic burdens of building energy use can support better decision making in the operation of the existing building stock. Hence, as the primary objective of this study, a life cycle thinking-based energy assessment tool was developed for multi-unit residential buildings (MURBs).

A comprehensive review of popular building energy rating systems revealed the need to incorporate life cycle thinking in evaluating building energy performance. The review also showed that current rating systems do not consider the uncertainty and vagueness associated with the data used for performance assessments. Most existing energy rating systems focus only on energy consumption when assigning the rating; they rarely consider the factors affecting energy use or the impacts of energy use in assigning their score/rating for the building. An assessment tool with indicators representing the impacts of energy use and the factors affecting operational energy use of buildings was developed to address these issues.

A questionnaire survey was conducted to obtain expert views on the proposed assessment tool from professionals associated with MURBs. MURB owners, managers, designers, engineers, researchers, and government and other external stakeholders were the target audience of this survey. Feedback from the survey was used to refine the proposed tool and determine weights for indicators.

In the proposed method, fuzzy set theory was used to account for the uncertainties and vagueness associated with qualitative and quantitative assessments of the identified indicator data. Fuzzy synthetic evaluation was used to aggregate the indicator values. The proposed approach extends the current body of knowledge on building energy ratings by integrating asset performance and operational performance through life cycle thinking. A case study was conducted to demonstrate the application of the energy assessment tool.
A Java-based web tool was developed to assist the proposed assessment process.

Preface

A paper titled "Development of a life cycle thinking based energy assessment tool for existing multi-unit residential buildings" is in development, to be published in the Journal of Clean Technologies and Environmental Policy under the supervision of Dr. Kasun Hewage. That paper incorporates the contents of Chapters 2, 3, and 4 of this thesis.

Table of Contents

Abstract
Preface
Table of Contents
List of Figures
List of Tables
List of Equations
Abbreviations
Acknowledgements
Chapter 1: Introduction
  1.1 Background
  1.2 Problem Statement
  1.3 Definitions and Scope of the Study
  1.4 Research Objectives
  1.5 Research Methodology Overview
  1.6 Thesis Organization
Chapter 2: Literature Review
  2.1 Overview
  2.2 Building Energy Rating Systems
    2.2.1 Energy ratings in Canada
    2.2.2 Energy ratings in US
    2.2.3 Energy ratings in Australia
    2.2.4 Energy ratings in Europe
  2.3 Sustainable Building Rating Systems
    2.3.1 Sustainable building rating systems in Canada
    2.3.2 Sustainable building rating systems in US
    2.3.3 Sustainable building rating systems in Australia
    2.3.4 Sustainable building rating systems in Europe and Asia
  2.4 Challenges and Limitations of Existing Rating Systems
    2.4.1 Monitoring energy performance
    2.4.2 Benchmarks
    2.4.3 Life cycle perspective
  2.5 Life Cycle Assessment (LCA)
  2.6 Fuzzy Sets
  2.7 Fuzzy Synthetic Evaluation (FSE)
  2.8 Modified Digital Logic (MDL)
Chapter 3: Performance Indicator Identification and Prioritization
  3.1 Overview
  3.2 Methodology
    3.2.1 Performance indicator identification
    3.2.2 Expert consultation
  3.3 Performance Indicator Identification
    3.3.2 Operational rating
    3.3.3 Asset rating
  3.4 Performance Indicator Prioritization
    3.4.1 Importance of identified performance indicators
    3.4.2 Weights for performance indicator categories
  3.5 Discussion
  3.6 Summary
Chapter 4: Building Energy Performance Assessment Framework
  4.1 Background
    4.1.1 Expected information from an energy rating tool
    4.1.2 Need for a new tool
  4.2 Overview of the Proposed Assessment Tool
  4.3 Scope of the Proposed Energy Performance Assessment Tool
  4.4 Energy Performance Assessment Methodology
  4.5 Life Cycle Assessment in the Proposed Assessment Tool
  4.6 Benchmarks
  4.7 Aggregate Indicators
    4.7.1 Indicator assessment
    4.7.2 Category weights
    4.7.3 Fuzzy rules
  4.8 Summary
Chapter 5: Web Tool and Case Study
  5.1 Web Tool
    5.1.1 Web tool development
    5.1.2 Web tool implementation
  5.2 Case Study
    5.2.1 Project information
    5.2.2 Energy consumption and LCA simulation
    5.2.3 Energy performance assessment
    5.2.4 Discussion
  5.3 Summary
Chapter 6: Conclusions and Recommendations
  6.1 Conclusions
  6.2 Originality and Contributions
  6.3 Limitations
  6.4 Future Research
References
Appendix A: Energy Consumption Data
Appendix B: Questionnaire
Appendix C: Ethics Approval and TCPS 2 Certification
Appendix D: Modified Digital Logic Analysis
Appendix E: Energy Simulations
Appendix F: LCA Analysis

List of Figures

Figure 1:1: GHG Emission Projections Canada
Figure 1:2: Research Methodology
Figure 1:3: Thesis Organization
Figure 2:1: Life Cycle Assessment (ISO 14040, 2006)
Figure 2:2: Triangular Fuzzy Number
Figure 2:3: Hierarchical Process
Figure 3:1: Overview of the energy rating method
Figure 3:2: Respondents' Experience
Figure 4:1: Methodology of the energy rating system framework
Figure 4:2: LCA system boundary
Figure 4:3: Fuzzy Membership Functions
Figure 5:1: Welcome Screen
Figure 5:2: Additional information
Figure 5:3: Sign up/ Sign in
Figure 5:4: Building Information
Figure 5:5: Assessment Data
Figure 5:6: Purcell Residence UBC Okanagan
Figure 5:7: Design Builder Model of the Purcell Building

List of Tables

Table 2:1: Review of Energy Rating Systems
Table 2:2: Sustainable Building Rating Systems
Table 2:3: Modified Digital Logic
Table 3:1: Performance Indicators
Table 3:2: Rating Criteria for Indicators
Table 3:3: Prioritizing Indicators
Table 3:4: Weights Operational and Asset Rating
Table 3:5: Weights for Categories of Operational Rating
Table 3:6: Weights for Categories of Asset Rating
Table 4:1: Use of Energy Rating
Table 4:2: Expected Information from an Energy Rating Tool
Table 4:3: Energy Conversion Factors
Table 4:4: Average Energy Consumption
Table 4:5: Percentage Energy Consumption
Table 4:6: Performance Ratio and Performance Score
Table 4:7: Reference Tables from Energy Code 2011
Table 4:8: Benchmarks
Table 4:9: Weights of Performance Indicators
Table 4:10: Scenario based weighting for operational performance
Table 4:11: Fuzzy rules for building energy rating
Table 5:1: Energy Performance Assessment of the Purcell Residence
Table 5:2: Fuzzy Vector for Sub Category
Table 5:3: Scenario Analysis (Fuzzy Vector for Operational Rating)
Table 5:4: Defuzzification
Table 5:5: Purcell Residence Ratings

List of Equations

Equation 2:1: Fuzzy Set
Equation 2:2: Fuzzy Vector
Equation 2:3: Defuzzification
Equation 3:1: Sample Size Determination
Equation 4:1: Performance Ratio
Equation 4:2: Fuzzy Membership Function Very Poor
Equation 4:3: Fuzzy Membership Function Poor
Equation 4:4: Fuzzy Membership Function Average
Equation 4:5: Fuzzy Membership Function Good
Equation 4:6: Fuzzy Membership Function Very Good

Abbreviations

BEAM - Building Environmental Assessment Method
BOMA - Building Owners and Managers Association
BOMA BEST - Building Owners and Managers Association Building Environmental Standards
BRE - Building Research Establishment
BREEAM - Building Research Establishment Environmental Assessment Method
CASBEE - Comprehensive Assessment System for Built Environment Efficiency
CIRS - Centre for Interactive Research on Sustainability
DGNB - German Sustainable Building Council
EPC - Energy Performance Certificate
EPI - Energy Performance Indicator
EU - European Union
GHG - Greenhouse Gas
HERS - Home Energy Rating System
ISO - International Standards Organisation
LCA - Life Cycle Assessment
LCC - Life Cycle Costs
LCI - Life Cycle Inventory
LCIA - Life Cycle Impact Assessment
LEED - Leadership in Energy and Environmental Design
MDL - Modified Digital Logic
MURB - Multi-Unit Residential Buildings
NABERS - National Australian Built Environment Rating System
PDF - Portable Document Format
PI - Performance Indicators
PICS - Pacific Institute for Climate Solutions
REAP - Residential Environmental Assessment Program
UBC - University of British Columbia

Acknowledgements

I would like to acknowledge many who supported me to complete this master's research successfully. First of all, I would like to thank my supervisor, Dr. Kasun Hewage, for his unfailing support, supervision, and motivation throughout my studies at University of British Columbia (UBC) Okanagan. His supervision and advice were immensely helpful to the successful completion of my studies. I really appreciate his understanding and patience during the times I faced difficulties during my studies.

Further, I would like to thank Dr. Rehan Sadiq for his comments to improve the research, and for providing guidance every time I needed it. My sincere appreciation goes to Mr. Rajeev Ruparathna for his continuous support throughout this study by providing guidance and feedback to improve this study and the thesis. Further, I would like to acknowledge Mr. Piyaruwan Perera and Ms. Hirushie Karunathilake for their support throughout this period. Their help and encouragement were invaluable in the successful completion of my studies. Further, I would like to thank Mr. Gyan Kumar Shrestha, Mr. Haibo Feng, and other colleagues of the Project Lifecycle Management Laboratory for their support and company.

I sincerely appreciate the financial support given by the Pacific Institute for Climate Solutions (PICS) for this study. Further, I acknowledge the support given by Dr. Raymond Cole and Ms.
Angelique Pilon of the Centre for Interactive Research on Sustainability (CIRS) in coordinating this study under PICS' Energy Efficiency in BC's Built Environment project and providing feedback to improve this study. I appreciate the support given by the UBC Okanagan Energy Systems Manager, Mr. Colin Richardson, during various stages of this study. Thanks go to Natural Resources Canada, Mr. Jordan Fisher from FRESCo - Building Efficiency, and the Manager, Energy Management of BC Housing, Mr. Bill MacKinnon, for their support in providing data related to this study. Further, I would like to thank all the professionals related to Multi Unit Residential Buildings (MURBs) who supported this study by participating in the survey, giving feedback on the proposed assessment tool, and offering encouragement.

Thanks go to the Graduate Studies Administrative Assistant of the School of Engineering, Ms. Shannon Hohl, for all her support that made my studies progress smoothly. This acknowledgement extends to Ms. Teija Wakeman and Ms. Angela Perry. I would like to thank Dr. Lukas Bichler and all the staff and faculty of the College of Graduate Studies for their support. Further, I would like to thank Ms. Jenica Frisque and Mr. Sam Carroll for their support during my stay in Kelowna in various capacities and for helping me through the difficulties I faced during this time.

Last but not least, I would like to thank my parents and my brother for their unending support throughout my journey. They are the foundation and strength of my achievements.

To My Family

Chapter 1: Introduction

1.1 Background

The Paris Agreement (COP21), where 194 countries committed to actions and investments for a low carbon and sustainable future, is a significant achievement in addressing climate change (European Commission, 2016; United Nations Framework Convention on Climate Change, 2016). Canada is committed to a 17% reduction of Greenhouse Gas (GHG) emissions by 2020 and a 30% reduction by 2030, from the 2005 emission levels (Canada's Action on Climate Change, 2013; Environment and Climate Change Canada, 2016a). Figure 1:1 shows past GHG emissions and GHG projections for Canada. Based on all the scenarios in Figure 1:1, GHG emissions are expected to rise over the next few years. From these projections, it is evident that more aggressive approaches are needed for Canada to achieve its GHG targets.

Energy use is a main contributor to GHG emissions and negative environmental impacts (Chung, Tohno, & Shim, 2009; El-Fadel, Chedid, Zeinati, & Hmaidan, 2003; Hillman & Ramaswami, 2010; Liu et al., 2012). Therefore, energy consumption should be considered in addressing environmental challenges such as climate change. The residential sector accounts for 17% of domestic energy use and 14% of the GHG emissions in Canada (Natural Resources Canada, 2016h). Energy efficient improvements in homes have saved $12 billion in energy costs and 27.9 Mt of GHG emissions between 1990 and 2013 (Natural Resources Canada, 2016h).

[Figure 1:1: GHG Emission Projections Canada (Environment and Climate Change Canada, 2016b): chart of emissions in Mt CO2 eq from 2005 to 2030 showing historical emissions, the reference scenario, high and low emissions scenarios, and Canada's targets]

However, the residential sector is expected to have a 6% increase in GHG emissions from 2012 to 2020 (Environment Canada, 2014).
This increase in GHG emissions will be a hurdle in achieving Canada's GHG emission targets. Therefore, energy consumption in the residential sector must be closely scrutinized in assessing possible avenues of reducing GHG emissions.

Between 1990 and 2010, Canada saw a 6% increase in residential energy use despite improvements in energy efficient technologies, mainly because of the 35% increase in the number of households during this period (Natural Resources Canada, 2013). The increase in the number of households was due to population growth (23%) and fewer occupants per household: 2.8 in 1986 and 2.5 in 2011 (Natural Resources Canada, 2013; Statistics Canada, 1994, 2013). In Canada, the urban population grew from 69% to 82% of the total population between 1960 and 2015 (The World Bank, 2016). Further, in Canada the urban population growth (7%) was higher than the overall population growth (5.8%) from 2006 to 2011 (Statistics Canada, 2011). This urban population growth has led to urban densification, creating a great demand for housing in urban areas.

High demand for housing in urban areas has made Multi-Unit Residential Buildings (MURBs) popular due to limited land availability and increasing housing prices (Statistics Canada, 2016). In 2012, MURB construction exceeded single family detached house construction, based on the number of building permits issued (Statistics Canada, 2016). More than 50% of the total residential construction planned in Canada's three largest metropolitan areas is MURBs (Statistics Canada, 2016). This growth in MURBs highlights the importance of assessing and improving the overall energy performance of MURBs, as energy and building codes focus only on energy efficiency and not on the environmental impacts of energy use. Further, performance monitoring is critical to ensure that the building is performing as expected (Grussing, 2013).

Different approaches such as new technologies, energy performance standards, and energy certifications/ratings have been used to enhance the energy performance of buildings (Natural Resources Canada, 2016g). The use of energy rating systems has been voluntary in North America, though mandatory energy performance standards exist for new buildings. The main purpose of an energy rating system is to provide building owners with the information needed to improve the energy performance of a building (Natural Resources Canada, 2016g). Canada has well-established energy rating systems, such as EnerGuide, R-2000, and Energy Star. These rating systems have helped to increase the energy performance of the residential sector by introducing improvements such as retrofits, resulting in significant energy savings (Natural Resources Canada, 2016g).

1.2 Problem Statement

A critical review of popular building energy rating systems such as Energy Star and EnerGuide, and sustainable building rating systems such as LEED and BOMA BEST, was conducted to identify the limitations of existing practices. Major criticisms of rating systems are their complexity, high cost, and the time taken for assessment (Indian Green Building Council, 2015; Namini, Preece, Tahmasebi, & Shakouri, 2014). Energy rating systems focus mainly on energy consumption (e.g. Energy Star, R-2000, HERS, NatHERS). These rating systems do not assess the wide range of impacts caused by residential energy use. Even though the amount of energy used is directly correlated with environmental burden, it is not the only factor affecting the environmental burden caused by energy use (Mosteiro-Romero et al., 2014). Some energy rating systems (e.g. EnerGuide) estimate GHG emissions, but these estimates are not used to determine the rating; emissions are given only as additional information. Therefore, the rating does not reflect the impacts of energy use.

Several rating systems assess the energy consumption of existing buildings based on standard operating assumptions, such as the number of occupants and thermostat settings (e.g. HERS, NatHERS, and EnerGuide). Evaluating performance based on standard conditions is ideal for comparing different buildings and for assessing the performance of new buildings. However, total building energy consumption depends on user behaviour, deterioration of assets, use of appliances, etc., and these factors should be noted for a better evaluation, especially in the case of existing buildings. Climatic conditions should be considered in developing benchmarks as they have a significant effect on heating and cooling energy demand (Eto, 1988; Li, Yang, & Lam, 2012; Pérez-Lombard, Ortiz, & Pout, 2008; Sarak & Satman, 2003). Some rating systems, such as BOMA BEST, compare total energy use with pre-defined benchmark levels. However, this rating system adopts the same benchmarks for every location in Canada. Therefore, the impact of climate (which depends on location) on energy consumption is not properly assessed in this energy rating system.

Most of the rating systems focus only on either an operational rating or an asset rating and neglect information that could be generated from the other method. Since operational ratings focus on overall energy consumption and asset ratings focus on the performance of energy related assets, both can be used to generate different information that is useful for making recommendations to improve energy performance. Further, the review of energy rating systems revealed that they overlook the condition of individual assets in the rating.

Energy assessments are associated with uncertainties in the data used for assessment (Corrado & Mechri, 2009; de Wit & Augenbroe, 2002); however, none of the popular rating systems consider these uncertainties in their assessment. Further, rating systems overlook important qualitative data that could help generate recommendations to improve energy performance, and instead focus mainly on quantitative data. The issues identified above raised the following research questions:

- What performance indicators should be considered to determine the building energy performance?
- What are the ways to make the energy performance assessment process more efficient?
- How can the uncertainty associated with energy performance assessments be incorporated in the assessment process?

1.3 Definitions and Scope of the Study

Terminology related to building energy certifications has been used with vague and inconsistent meanings (Perez-Lombard, Ortiz, Gonzelez, & Maestre, 2009). In this study, building energy performance assessment refers to assessing performance against the benchmarks for performance indicators.

This study focuses only on energy consumption in the operational phase of a MURB. Therefore, all the performance indicators used in the study, including life cycle impact indicators, are assessed based on operational energy. Performance indicators in the asset rating focus on the assets' impact on operational energy consumption.

1.4 Research Objectives

The overall objective of this research is to develop a life cycle thinking based energy performance assessment method for MURBs.
This tool assesses the energy performance of a building based on energy consumption, environmental impacts, economic impacts, and asset condition. The approach proposed in this study is expected to support MURB owners and managers in operational decision making with regard to maintenance, repair, and renovation planning. The following are the sub-objectives of this research:

1. Identify limitations and challenges in popular energy rating systems.
2. Develop a life cycle thinking based framework to assess the energy performance of existing MURBs.
3. Develop a web-based tool to facilitate the assessment process of the proposed assessment method.
4. Demonstrate the proposed energy performance assessment method through a case study.

The proposed framework and web-based tool are the main outcomes of this study. A comprehensive energy assessment tool focusing on energy, environmental, and economic performance will help in assessing the overall impact of building energy use. Hence, it will help building owners and managers make informed decisions to improve energy performance.

1.5 Research Methodology Overview

This section gives an overview of the methodologies adopted to achieve the research objectives. The detailed methodology of each step is discussed within the relevant chapter. Figure 1:2 shows the four-phase methodology adopted to achieve the objectives of this research.

An extensive literature review was carried out as the first phase of the study. Popular energy rating systems such as Energy Star, EnerGuide, and HERS were reviewed to identify current practice in energy ratings. Sustainable building rating systems were reviewed to identify their focus on energy consumption and the scope of energy performance assessment in building rating systems. A comprehensive literature review was carried out to identify other research in this area, using the Compendex Engineering Village database. Based on this review, the first sub-objective of identifying the limitations of and challenges for existing rating systems was achieved.

In the second phase, a framework for the new tool was developed to overcome the limitations and challenges identified in the first phase. The proposed method incorporates an operational energy rating and an asset management approach to provide a complete understanding of the energy performance of a building. As the initial step of the second phase, performance indicators for the new assessment tool were identified and refined. The indicators selected for the operational energy rating focus on energy consumption, economic impacts, and operational and maintenance practices, while the asset rating focuses on the performance of individual assets. Incorporating both the operational rating and asset management allows the assessment tool user to identify different interventions (retrofits and operational changes) to enhance the energy performance of a building.

Methodologies to analyse and aggregate data were also selected during the second phase. Fuzzy set theory was used to incorporate data uncertainty and vagueness in qualitative assessments of building energy performance. Fuzzy membership functions were developed to categorise performance into very good, good, average, poor, and very poor. Fuzzy synthetic evaluation was used to aggregate indicators and different criteria, and fuzzy rules were developed to combine the operational and asset ratings.
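The fuzzification and aggregation steps described above can be illustrated with a minimal sketch in Java (the language of the web tool developed in this study). The triangular membership breakpoints, the 0-100 score scale, the indicator scores, and the weights below are hypothetical placeholders rather than the values defined later in the thesis; the sketch only shows the general mechanics of mapping a crisp score to the five linguistic levels and aggregating weighted indicator vectors by fuzzy synthetic evaluation.

```java
import java.util.Arrays;

/** Minimal sketch of fuzzification and fuzzy synthetic evaluation (FSE).
 *  Membership breakpoints, weights, and scores are illustrative only. */
public class FuzzyRatingSketch {

    // Triangular membership: degree of membership of x in the triangle (a, b, c).
    static double tri(double x, double a, double b, double c) {
        if (x <= a || x >= c) return 0.0;
        return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    // Fuzzify a 0-100 score into {very poor, poor, average, good, very good}.
    // The breakpoints 0, 25, 50, 75, 100 are assumed for illustration.
    static double[] fuzzify(double score) {
        return new double[] {
            tri(score, -25, 0, 25),   // very poor
            tri(score, 0, 25, 50),    // poor
            tri(score, 25, 50, 75),   // average
            tri(score, 50, 75, 100),  // good
            tri(score, 75, 100, 125)  // very good
        };
    }

    // FSE aggregation: weighted sum of indicator membership vectors.
    static double[] aggregate(double[][] indicatorVectors, double[] weights) {
        double[] category = new double[5];
        for (int i = 0; i < indicatorVectors.length; i++)
            for (int j = 0; j < 5; j++)
                category[j] += weights[i] * indicatorVectors[i][j];
        return category;
    }

    public static void main(String[] args) {
        // Two hypothetical indicators scored 0-100, with hypothetical weights.
        double[][] vectors = { fuzzify(68.0), fuzzify(42.0) };
        double[] weights = { 0.6, 0.4 };
        System.out.println(Arrays.toString(aggregate(vectors, weights)));
    }
}
```

A defuzzification step (Equation 2:3 in Chapter 2) would then convert the aggregated membership vector into a single rating value.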
The scope of the LCA for the proposed tool was defined to recommend more environmentally conscious improvements to existing buildings. LCA software (GaBi and Athena) was selected to assess the life cycle impacts of energy use and retrofits. In the second phase, stakeholders related to MURBs were also consulted through a questionnaire survey and interviews. MURB owners and managers, designers, engineers, researchers, and government and other stakeholders were the target audience of this survey. Residents were not included in this consultation as the tool focuses on building-level energy use rather than individual apartments. These stakeholder consultations focused on the information stakeholders expect from an energy assessment tool and their views on the identified performance indicators. The questionnaire was designed to obtain weights for the indicators and categories used in the proposed tool. The modified digital logic (MDL) method was used to determine the weights based on stakeholder responses.

The third phase of the study focused on developing the energy performance assessment tool. In this phase, data collection was carried out to define benchmarks for the identified performance indicators. Data to develop the benchmarks for energy consumption focused on location parameters. BC-specific energy consumption data were collected with the support of Natural Resources Canada, BC Housing, and FRESCo - Building Efficiency. Published literature and energy codes were used to define the benchmarks for the other performance indicators. Further, in this phase a Java-based web tool was developed to assist the assessment process.

In the fourth phase, a case study was conducted to demonstrate the process of the proposed energy assessment tool. The Purcell residence on the UBC Okanagan campus was selected for this case study, and data for the study were collected with the support of campus Facilities Management. Results were discussed with the Energy Systems Manager of the campus.

Detailed methodologies are presented in Chapters 3, 4, and 5.
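The modified digital logic (MDL) weighting mentioned in the second phase can also be sketched briefly. In MDL, indicators are compared pairwise and each comparison is scored; the 1/2/3 scoring convention, the three indicator names, and the comparison values below are illustrative assumptions, not the survey results reported later in the thesis.

```java
/** Minimal sketch of modified digital logic (MDL) weighting.
 *  Pairwise scores and indicator names are hypothetical examples. */
public class MdlWeightSketch {

    // Row i holds the MDL scores indicator i received against every other
    // indicator (1 = less important, 2 = equally important, 3 = more important).
    static double[] mdlWeights(int[][] pairwiseScores) {
        int n = pairwiseScores.length;
        double[] totals = new double[n];
        double grandTotal = 0.0;
        for (int i = 0; i < n; i++) {
            for (int score : pairwiseScores[i]) totals[i] += score;
            grandTotal += totals[i];
        }
        double[] weights = new double[n];
        for (int i = 0; i < n; i++) weights[i] = totals[i] / grandTotal; // normalize
        return weights;
    }

    public static void main(String[] args) {
        // Hypothetical comparisons among three indicators:
        // energy consumption, GHG emissions, energy cost.
        int[][] scores = {
            {3, 2},  // energy consumption vs. the other two
            {1, 2},  // GHG emissions
            {2, 2}   // energy cost
        };
        double[] w = mdlWeights(scores);
        System.out.printf("Weights: %.2f, %.2f, %.2f%n", w[0], w[1], w[2]);
    }
}
```

Each respondent's comparisons would yield one such weight vector, and responses can then be averaged across the survey sample.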
[Figure 1:2: Research Methodology: flowchart of the four research phases. Phase 1, literature review (critical review of popular energy rating systems and published literature to identify limitations and challenges); Phase 2, framework development (identify performance indicators for the operational and asset ratings, stakeholder consultation through an online survey and interviews, selection of fuzzy sets and fuzzy synthetic evaluation, definition of the LCA scope covering energy use and retrofits); Phase 3, data collection for benchmarks (Energy Star Portfolio Manager, BC Housing data, FRESCo - Building Efficiency) and development of the web-based tool; Phase 4, data collection for the case study (UBC Okanagan energy consumption data) and demonstration of the proposed energy assessment tool]

1.6 Thesis Organization

This thesis consists of six chapters, the contents of which are shown in Figure 1:3. Chapter 1 provides the introduction to the research. This chapter focuses on identifying the research gaps in energy assessment tools and the importance of focusing on MURBs. Based on the identified research gaps, the objectives of this research are formulated and discussed in this chapter. An overview of the research methodology to achieve these objectives is presented in the last section of the chapter, along with the thesis organization.

Chapter 2 provides a comprehensive literature review, which was conducted to identify the limitations and challenges of existing energy assessment tools. This chapter discusses the existing energy rating systems and sustainable building rating systems to evaluate current practices and limitations. Challenges of energy performance assessments are discussed by reviewing other research carried out in this area.

Chapter 3 discusses indicator identification and prioritization. It presents the identified performance indicators and the consultation conducted to obtain expert opinions. Questionnaire design, sample size determination, and the results of the questionnaire survey and interviews are also discussed in this chapter. Inputs from this consultation were used to identify stakeholder concerns and define weights for the selected performance indicators.

Chapter 4 discusses the proposed framework. This includes the scope of the proposed tool and the analysis techniques used for the rating system. The data flow of the proposed rating system is also discussed in this chapter, along with the benchmarks for performance indicators.

Chapter 5 discusses the development of the web-based tool for the proposed energy assessment method and the case study, including the technology used and the layout of the rating web pages. The case study discussed in this chapter is based on the Purcell Residence at UBC Okanagan, which was used to demonstrate the proposed building energy performance assessment methodology.
Chapter 6 discusses the conclusions, contributions, and limitations of the proposed tool, and future research.

[Figure 1:3: Thesis Organization: diagram mapping each chapter to its contents. Chapter 1 (background, problem statement, research objectives, methodology overview), Chapter 2 (literature review: limitations of existing rating systems, challenges of energy performance assessments, analysis methods), Chapter 3 (indicator identification and prioritization: indicators, questionnaire survey, analysis of responses), Chapter 4 (assessment framework: scope, methodology, benchmarks, aggregation of indicators), Chapter 5 (web-based tool and case study: project information, energy simulation, LCA analysis, energy performance assessment), and Chapter 6 (conclusion, contributions, and limitations)]

Chapter 2: Literature Review

2.1 Overview

Building energy ratings/certifications were introduced as a result of increased oil prices in the early 1990s, as buildings were consuming 40% of all energy used (Berardi, 2015; Pérez-Lombard, Ortiz, González, & Maestre, 2009; Pérez-Lombard et al., 2008). Since then, energy certifications have become a successful tool for promoting energy efficient products and comparing competitive products (Casals, 2006). Energy rating/certification schemes provide information to buyers, occupants, and property managers to improve the energy performance of buildings (International Energy Agency, 2010; Sustainable Energy Authority of Ireland, 2014). Further, these rating systems help in achieving emission reduction targets and making policy decisions related to building management (International Energy Agency, 2010).

EN 15603 (British Standards Institution, 2008) identifies two types of energy rating systems based on the method of data collection: the measured/operational rating and the calculated rating. The measured or operational rating is based on metered data, which represent actual energy use, while the calculated rating is based on energy simulations. The British Standards Institution (2008) identifies two types of calculated energy rating systems: the asset rating and the operational rating.

Energy related assets of a residential building, such as the building envelope, HVAC system components, and the lighting system, have a significant impact on the operational energy efficiency and performance of a building (U.S. Department of Energy, 2016). The asset rating focuses on those assets' impact on energy demand for major functions, such as heating and cooling, based on standardized indoor and occupancy conditions (Hernandez & Kenny, 2011a; Perez-Lombard et al., 2009). In the asset rating, energy demand is assessed using energy simulations based on the structure and assets such as the HVAC and lighting systems of the building (Hernandez & Kenny, 2011a; International Energy Agency, 2010; Perez-Lombard et al., 2009; US Department of Energy, 2016).
Hence, the asset rating helps to identify the changes needed to improve the energy performance of a building based on the condition of assets, such as the condition of building components and the efficiency of installed equipment. Further, since the asset rating is based on standard conditions, it is useful for comparing performance of buildings. However, this may not be representative of the actual operational energy 12  due to the assumptions made in energy simulations, such as user behaviour and degradation of assets.  The operational rating is based on the total operational energy, which is affected by factors such as occupant behaviour, operating schedule, electric appliances, and HVAC system performance (Hernandez & Kenny, 2011a; Perez-Lombard et al., 2009). Unlike the asset rating, the operational rating of a building captures the energy based on actual performance (International Energy Agency, 2010). The operational rating takes into account the total energy use affected by occupancy patterns of the residents, maintenance procedures, etc. (Lewry et al., 2013). This approach delivers a comprehensive view of energy use and can be used to assess impacts of a MURB’s energy use in the operational phase. However, a rating based on total operational energy is not the ideal rating to compare relative performances of buildings, as occupant behaviour and maintenance procedures can be different from building to building.  2.2 Building Energy Rating Systems  Building energy rating systems focus only on the energy performance of a building. These rating systems have different purposes for their rating and adopt different methodologies to evaluate the performance of a building. Hence popular rating systems were reviewed to identify their scope of assessment and adopted methodologies. Building energy rating systems have different specifications for different types of buildings. Since the focus of this study is existing MURBs, specifications for MURBs, homes, and existing buildings were given priority.   2.2.1 Energy ratings in Canada EnerGuide for Homes is the official rating for energy efficiency in Canada (Natural Resources Canada, 2016a). This energy rating system is available to both new and existing buildings. EnerGuide uses the HOT 2000 software package for energy simulations. HOT 2000 compares the actual design of a house with a house of minimum requirements of the energy code under standard operating conditions. The main purpose of EnerGuide is to compare houses rather than to assess total energy use. Though EnerGuide is a useful tool for comparing a building’s performance, actual energy use can be significantly different due to behavioural and other unaccounted-for factors. In addition to energy performance, EnerGuide estimates GHG emissions generated by energy use; however, it is not used to rate a building. This rating system can be used to evaluate different types 13  of housing—single family housing, multi-family housing etc.—but it is limited to low-rise buildings, mainly due to software limitations. Currently the rating system is being updated to a Gigajoules per year scale from 0-100 scale (Natural Resources Canada, 2016b).  Energy Star, operated by Natural Resources Canada, is a home energy evaluation system. It focuses solely on new homes. A label can be achieved either by energy simulation or by meeting the requirements in the prescribed method. 
This rating system only issues a label stating that the house satisfies the requirements of the Energy Star label; it does not rate on a scale as in EnerGuide. Energy Star portfolio manager is another program that is being operated by Natural Resources Canada as a benchmarking program (Natural Resources Canada, 2015). Even though portfolio manager estimates GHG emissions, it does not currently have established energy use intensities for multi-family housing or single family homes (Natural Resources Canada, 2016d).   R-2000 home is another Canadian label that focuses on energy use as well as clean air and environmental features (Natural Resources Canada, 2016i). In addition to meeting energy efficiency requirements, the house should be built by a licensed builder to obtain the R-2000 label. This is to ensure that construction of the house meets the specified standards. Further, R-2000 homeowners receive an EnerGuide rating for their homes.  2.2.2 Energy ratings in US Energy Star for Multifamily Housing in the United States is significantly different than the Energy Star label in Canada. It gives an overall score on a 0-100 scale and uses the total measured energy for analysis. Instead of using energy simulation models as in many rating systems, Energy Star US uses a regression equation. This regression equation was developed with measured data in residential buildings. The key factors considered in the regression analysis are number of bedrooms per unit, high-rise/low-rise building types, number of units per 1000 square feet, total heating degree days, and total cooling degree days. Further, this assessment considers the source energy use rather than the site energy use. Source energy accounts for losses that occur during transmission of energy to the site. The score for the building is assigned based on the cumulative distribution of performance of the existing housing stock (Energy Star, 2014a, 2014b). The main criticism of using total energy use is that it cannot be used to compare buildings, as total energy is affected by human behaviour. However, in assigning the score this rating system considers the 14  performance of the existing building stock. This is a preferable rating system to assess the impact on natural resources and environment, as it accounts for total measured energy use. At present, this rating system only focuses on energy use and does not assess other impacts of energy use such as GHG emissions.  Home Energy Rating System (HERS) and ASHRAE Building Energy Quotient are two popular building energy-rating systems in North America. These energy-rating systems are used in various green building rating systems to assess the energy performance. HERS compares the actual design performance with a default building, which is built according to the 2004 International Energy Conservation Code using energy simulation (The Residential Energy Services Network, 2013). Therefore, this rating system also has limitations inherent to calculated ratings. ASHRAE Building Energy Quotient uses simulated energy for new designs and measured energy for buildings in operation (ASHRE, 2015). This rating system uses the benchmarks used by Energy Star USA.  2.2.3 Energy ratings in Australia Nationwide House Energy Rating Scheme (NatHERS) is a rating system that focuses mainly on energy performance of residential buildings in Australia. It also uses the technique of comparing the design with a building that meets minimum building code requirements by modeling the building using accredited software. 
NatHERS focuses only on the asset rating of the building, as the main purpose of this rating system is to compare buildings based on energy performance (Department of Industry Innovation and Science, 2016).

2.2.4 Energy ratings in Europe

In the European Union (EU), the energy performance certificate (EPC) is a mandatory requirement for designated building types. Even though European countries have their own assessment methodologies, they are commonly governed by directive 2010/31/EU and directive 2012/27/EU of the European Parliament and Council. These directives outline the basic methodology to be adopted by each member state, such as the basic framework, the use of asset rating or measured energy, the building types to be excluded from the mandatory requirement of an EPC, the maximum validity period of an EPC, metering requirements, details of the distribution requirements, etc. (EU, 2010, 2012). Based on these directives, member states try to adopt the A-G (seven colour) rating scale in their EPCs.

Following the directives of the EU, England and Wales use an EPC based on energy simulation where a building is operated under standard operating conditions. This simulation mainly focuses on heating, lighting, and hot water. This EPC rates the building on a scale of A to G. Though it is not used for the rating of the building, the EPC provides information on the GHG emissions generated by energy use (Department for Communities and Local Government, 2014). In England and Wales, EPCs are valid for a maximum of ten years, provided that there has been no significant change in the building within this period, and EPCs should be produced when selling or renting the home (Department for Communities and Local Government, 2014).

Table 2:1 summarizes the building energy rating systems reviewed in this study. The table highlights the focus and scope of each rating system, the methods used to estimate energy use, the impacts considered, and the rating/labelling method used.

Table 2:1: Review of Energy Rating Systems

EnerGuide (for homes), Canada (Natural Resources Canada, 2016b, 2016c, 2016f)
- Focus areas: Energy use
- Operational energy focus: Building envelope, heating and cooling systems and equipment, and renewable energy are considered
- Estimation method: Simulated energy
- Embodied energy: No
- Emissions: Greenhouse gas emissions are calculated, but the rating is based on energy consumption only
- Rating/labelling method: Gigajoules-per-year scale (updated from a 0-100 scale); compares the design to a house built to the existing standard
- Other: If on-site renewable energy is generated, it is deducted from the annual energy consumption

ENERGY STAR for new homes, Canada (Natural Resources Canada, 2016e)
- Focus areas: Energy use
- Operational energy focus: Heating and cooling systems, windows, patio doors and skylights, walls and ceilings, airtightness, electrical savings
- Estimation method: Simulated energy for the performance approach; meeting the stated requirements for the prescriptive approach
- Embodied energy: No
- Emissions: No
- Rating/labelling method: Certifies that the building meets ENERGY STAR labelling requirements

R-2000, Canada (Natural Resources Canada, 2012)
- Focus areas: Building envelope requirements, mechanical systems, energy performance targets, indoor air quality, water conservation, and environmental features
- Operational energy focus: Space heating and domestic hot water
- Estimation method: Simulated energy
- Embodied energy: No
- Emissions: No
- Rating/labelling method: Certifies that the building meets the R-2000 standard

ENERGY STAR for Multifamily Housing, USA (Energy Star, 2014a, 2014b)
- Focus areas: Energy use
- Operational energy focus: Total energy use
- Estimation method: Measured energy
- Embodied energy: No
- Emissions: Not considered
- Rating/labelling method: Assigns the score based on the cumulative distribution of the performance of existing houses

HERS, USA (The Residential Energy Services Network, 2013)
- Focus areas: Energy use
- Operational energy focus: Exterior walls, floors, ceilings and roofs, attics, foundations and crawlspaces, windows and doors, vents and ductwork, HVAC system, water heating system, thermostat, air leakage, leakage in the heating and cooling distribution system
- Estimation method: Simulated energy
- Embodied energy: No
- Emissions: Not considered
- Rating/labelling method: Assigns a score based on the percentage deviation from the reference building

ASHRAE Building Energy Quotient (in operation), USA (ASHRE, 2015)
- Focus areas: Energy use
- Operational energy focus: Total energy use
- Estimation method: Based on metered energy (simulated energy for the as-designed rating)
- Embodied energy: No
- Emissions: No
- Rating/labelling method: Rates based on the energy intensities from the Energy Star Portfolio Manager

NatHERS, Australia (Department of Industry Innovation and Science, 2016)
- Focus areas: Energy use
- Operational energy focus: Heating and cooling based on standard occupancy assumptions
- Estimation method: Simulated energy
- Embodied energy: No
- Emissions: No
- Rating/labelling method: Rated on a 0-10 star scale (stars are assigned based on the energy demand of the building)

Energy performance certificate, England and Wales (Department for Communities and Local Government, 2014)
- Focus areas: Energy use
- Operational energy focus: Lighting, heating, and hot water based on standard occupancy assumptions
- Estimation method: Simulated energy
- Embodied energy: No
- Emissions: CO2 emissions are estimated and rated on another A-G scale
- Rating/labelling method: Rated on an A-G scale

2.3 Sustainable Building Rating Systems

Unlike building energy rating systems, which focus only on energy performance, sustainable building rating systems address different aspects of the built environment, such as water use, indoor air quality, and energy use. Sustainable building rating systems were reviewed to study the scope of energy rating in these assessments and the methods they have adopted. A summary of the review of sustainable building rating systems is provided in Table 2:2.

2.3.1 Sustainable building rating systems in Canada

Leadership in Energy and Environmental Design (LEED) Canada for Homes is a green building rating system that focuses on energy use while also taking into consideration many other aspects, as shown in Table 2:2. In the LEED rating, either EnerGuide, HERS, or a prescriptive path can be used to achieve the credits assigned for energy (Canada Green Building Council, 2012). Additional clauses are given to consider air conditioning refrigerant management, which is not considered in those energy rating systems. Refrigerant management is considered to minimize the contribution to ozone depletion and global warming (Canada Green Building Council, 2012). However, LEED does not assess the environmental impacts caused by energy use.
Building Owners and Managers Association Building Environmental Standards (BOMA BEST) is an environmental certification that is developed for existing buildings. This certification uses metred energy instead of simulated energy (BOMA BEST, 2016). Unlike in many other rating systems, BOMA BEST uses total building area for energy intensity calculations, instead of heated floor area. This rating system uses one set of benchmarks for Canada to assign the scores irrespective of climate of the location (BOMA BEST, 2016). This approach could be unfair for buildings located in unfavourable climatic conditions. Most of the other rating systems account for location in their assessment. For MURBs, BOMA BEST’s weight on energy demand is 35%. Therefore, use of a common benchmark might not have a significant impact on the overall score. On the positive side, BOMA BEST represents the impact of total energy use by considering all factors, such as building properties and behavioural factors. BOMA BEST is identified by Green Globes as the standard for evaluating the performance of existing buildings.   20  2.3.2 Sustainable building rating systems in US Green Globes is a green building assessment tool with different paths to assess energy use. Different paths in Green Globes describe the use of different energy rating systems, such as Energy Star, ASHRAE, ANSI/GBI, and ASHRAE Building Energy Quotient to achieve scores assigned for energy performance. One path described in Green Globes for new construction Version 1.4 assess the GHG emissions, and have a score for emission criteria (Green Building Initiative, 2015). This path uses benchmark data based on the US Energy Star portfolio manager, and hence it can only be applied to building types that are covered by Energy Star. Estimating the operational energy separately and giving additional points to some features affecting energy use, such as building envelop and HVAC system, will lead to double counting effects from those features. 2.3.3 Sustainable building rating systems in Australia National Australian Built Environment Rating System (NABERS) is an Australian Green Building rating system that focuses on energy efficiency, water usage, waste management, and indoor environment quality of a building, and compares them with peers and rates on a six-star rating scale (Office of Environment and Heritage, n.d.). For energy efficacy, NABERS focuses on total energy and hence uses measured energy for existing buildings. In addition to energy efficiency assessment, this rating system assesses GHG emissions.  Green Star, developed by the Green Building Council of Australia, is another widely used green building rating system. Green Star doesn’t use operational energy consumption to assess energy performance, but it does determine the impacts of energy use such as GHG emissions. Further, peak electricity demand, which has significant impact on the electricity supply system, is assessed in this rating system (GBCA, 2011).  2.3.4 Sustainable building rating systems in Europe and Asia Building Research Establishment Environmental Assessment Method (BREEAM) is a green building rating system developed by the Building Research Establishment (BRE) UK. It pays attention to various aspects such as health and wellbeing, transport, water, and materials in addition to energy use, as shown in Table 2:2. The BREEAM rating system allocates credits based on measured energy use and analyses GHG emissions from energy use (BRE, 2016).   
21  German Sustainable Building Council (DGNB) System considers 50 sustainability criteria in its assessment, focusing on quality, ecology, economy, socio-cultural aspects, technology, process work flows, and site (German Sustainable Building Council, 2016). The DGNB System uses non-renewable primary energy demand, total primary energy demand, and proportion of renewable primary energy to assess life cycle assessment of primary energy use. Further, this rating system uses comprehensive LCA to assess environmental impacts of building. Building Environmental Assessment Method (BEAM) Plus for existing buildings in Hong Kong analyses the energy use for buildings based on the measured energy for rating sustainable buildings (HKGBC, 2016). Even though emissions from energy use are not analysed, credits are given based on the peak energy demand, use of renewable energy, and for self-improvement, considering the previous years’ energy consumption. Environmental quality of the building and Environmental load reduction of the building are the main criteria of Comprehensive Assessment System for Built Environment Efficiency (CASBEE) for new buildings in Japan (JSBC & IBEC, 2014). CASBEE analyses the energy use based on simulated energy, and assesses the GHG potential by comparing with a standard building and assigning credits for energy use. In addition to energy simulations, operating conditions are considered in the assessment of CASBEE.  Table 2:2 summarizes the sustainable building energy rating systems reviewed in this study. This table highlights the focus and scope of the rating system, methods used to estimate the energy use, impacts considered, and rating/labelling method used.  Sustainable Building Rating Systems 22  Table 2:2: Sustainable Building Rating Systems Country    Rating system Focus areas Energy use and its impacts Rating/ labeling method Other Operational Energy Embodied energy Emissions Focus Estimation method  Canada                  LEED Canada for homes 2009 (Canada Green Building Council, 2012) Innovation and design process, Location and linkages, Sustainable sites, Water efficiency, Energy and atmosphere, Material and resources, Indoor environmental quality, Awareness and education Building envelope, heating and cooling systems and equipment and renewable energy are considered Based on EnerGuide, HERS or prescriptive path Material types are specified in material and resources  Not Considered Assign credits based on meeting the stated targets Refrigerant is considered under energy and atmosphere. Various other green buildings features are considered under the other sections mentioned in focus areas  BOMA BEST (Multi-Unit Residential Buildings) (BOMA BEST, 2016) Energy, Water, Waste & Site, Emissions & Effluents, Indoor Environment, and Environmental Management System Total energy use Measured energy No Ozone depletion from substances (not energy) Assigning scores based meeting the stated targets Ozone depletion from substances is assed based on qualitative questions USA Green globes for new construction  Version 1.4 (Green Building Initiative, 2015) Energy, Water, Resources, Emissions, Indoor Environment, Environmental Management Energy Performance, Energy Demand, Metering, Measurement and Verification, Building Envelope, Lighting, HVAC Based on ENERGY STAR, ASHRAE,  ANSI/GBI, ASHRAE Analysis using a LCA tool or prescriptive materials can be used.  
CO2 included in path C, as an addition to ANSI/GBI 01-2010 Assigning scores based meeting the stated targets  Sustainable Building Rating Systems 23  Systems and Controls, Energy Efficient Equipment and Measures, Renewable Sources of Energy, Transportation Building Energy Quotient  Australia NABERS (Office of Environment and Heritage, n.d.) Energy efficiency, water usage, waste management and indoor environment quality Total energy use Measured energy No GHG Based on the compared benchmark building and final rating on a six-star scale  Green Star (GBCA, 2011) Energy Use, Management, Indoor environment quality, Energy, Transport, Water, Materials, Land use and ecology, emissions, innovations  Impacts of the operational energy is considered but not the total energy consumed (GHG emission, Peak electricity demand) Measured energy No (when rating the performance) GHG    UK BREEAM in use (BRE, 2016) Asset Performance, Building Management, Occupier Management are considered under following sections;   Health and Wellbeing, Energy, Transport, Water, Materials, Waste, Land Use and Ecology, Pollution,  Total Energy Measured Energy  Yes CO2 emissions are estimated Credits are awarded by comparing the total CO2 produced from energy consumption.    Sustainable Building Rating Systems 24  Germany  DGNB System (German Sustainable Building Council, 2016) Environmental Quality, Economic Quality, Sociocultural and Functional Quality, Technical Quality, Process Quality, Site Quality Non-renewable primary energy demand  Total primary energy demand Proportion of renewable primary energy    Labeled Bronze, Silver, Gold Platinum according to the performance   Hong Kong   BEAM Plus Existing Building (HKGBC, 2016) Management, Site Aspects, Materials and Waste Aspects, Energy Use, Water Use, Indoor Environmental Quality, Innovations and Additions Total energy Measured energy Credits are granted for specified materials  No Credit grated based on the position of the cumulative distribution of similar type of buildings Credits are given for benchmarking, self-improvement and peak energy. Bonus credits given for using renewable energy  Japan CASBEE for Building (New Construction) (JSBC & IBEC, 2014) Environmental quality of the building, Environmental load reduction of the building  Total Energy Simulated energy Levels are assigned based on the materials used Same ratio used to estimate energy efficiency is considered to estimate GHG at efficiency of operation stage  Levels are assigned based on the ratio of the designed building and standard building    25  2.4 Challenges and Limitations of Existing Rating Systems Based on the above review, it is evident that in many energy rating systems, the main focus is on comparing the energy performance of buildings (Department for Communities and Local Government, 2014; Department of Industry Innovation and Science, 2016; Natural Resources Canada, 2016b; The Residential Energy Services Network, 2013). Asset rating, which focuses on energy demand based on standard operational and occupancy conditions, is useful in comparing different buildings. Further, asset rating is the only method available to assess energy performance of new construction where operational data is not obtainable.  Measured energy is the most reliable estimate of actual energy use and accounts for all factors affecting energy use. 
However, energy rating systems such as EnerGuide use energy simulations to rate existing houses, while Energy Star for multifamily housing in USA and the ASHRAE building energy quotient use measured energy. Measured energy incorporates factors such as human behaviour, which has a significant impact on energy use but is difficult to model accurately in energy simulations. Hence, measured energy is more useful to assess the energy performance and impacts of total energy use. Using measured energy for existing buildings reduces discrepancies in simulations, which enhances the reliability of the rating tool. Further, measured energy is important if more comprehensive approaches, such as operational changes, are to be considered for reducing energy use, not limiting to energy efficient technologies. 2.4.1 Monitoring energy performance The building and its components related to thermal performance are highly vulnerable to degradation, due to continuous use and external factors such as environment. Therefore, continuous monitoring is essential to maintain performance levels (Li, Han, & Xu, 2014).  Attention should be paid to asset degradation in determining the validity period of a building rating, even when significant changes have not been made in a building.  If continuous monitoring is to be practised, measured energy use should be used. Total annual measured energy could be easily compared with benchmark energy performances, rather than performing energy simulations. Using regression models to estimate the reference/benchmark energy consumption or to have established benchmarks for different conditions will facilitate the  26  continuous monitoring process. Unlike energy simulation, using regression models/established benchmarks is less resource intensive once they are established. However, extensive resources are needed to develop benchmarks/regression models. 2.4.2 Benchmarks  Heating and cooling energy is the most significant contributor for energy use in a building (Balaras et al., 2007; Balaras, Droutsa, Dascalaki, & Kontoyiannidis, 2005; Pérez-Lombard et al., 2008). Energy demand for cooling and heating is linked to the climate condition of the location of the building. Therefore, in a rating system the reference/benchmarking energy consumption should consider the climate condition to fairly assess the energy consumption of a building. Most of the reviewed rating systems, irrespective of their focus areas, pay attention to the climatic condition of the region. If simulation software is used, it should be developed for a specific location while regression equations can have variables such as cooling and heating degree days to account for climate condition. If pre-established benchmarks are to be used, they should be developed for each climatic zone/location rather than having a single benchmark for a wide range of climatic conditions, as in BOMA BEST.  2.4.3 Life cycle perspective  From the review of popular energy rating systems, it is evident that energy-rating systems tend to focus only on energy performance to rate a building. However, the literature identifies different impacts of energy use, such as environmental and economic, which should be considered for a more comprehensive evaluation (Al-Ghamdi & Bilec, 2014, 2015; Lützkendorf, Foliente, Balouktsi, & Houlihan Wiberg, 2014; Mosteiro-Romero et al., 2014; Thiers & Peuportier, 2012; Zmeureanu, Fazio, DePani, & Calla, 1999). 
The review indicates that sustainable building rating systems should try to incorporate some of these impacts in their assessment. Factors that should be used for a comprehensive energy performance assessment are discussed in detail under indicator identification of chapter 3. It is observed that energy rating systems do not consider the life cycle impacts of energy use. To assess the impacts of energy use, and to meet sustainable targets, life cycle thinking can be extremely helpful. Therefore, this is an important aspect that should be covered in the energy  27  performance assessment of buildings (Balaras et al., 2005; Hernandez & Kenny, 2011b; Rossi, Marique, Glaumann, & Reiter, 2012).  Most of the detailed LCA that have been carried out for buildings focus on low energy buildings. LCA are rarely conducted for traditional buildings, which constitute a vast majority of the building stock (Cabeza, Rincón, Vilariño, Pérez, & Castell, 2014). One reason for this limited use of LCA is that it is a complicated process. However, if simplifications such as considering renewable energy resources as zero emissions technologies, and not considering significantly small energy demands are adopted in LCA analysis, reasonably accurate results can be achieved with less effort (Kellenberger & Althaus, 2009; Malmqvist et al., 2011; Zabalza Bribián, Aranda Usón, & Scarpellini, 2009). For energy rating systems, reasonably accurate results can be obtained by focusing on the operational energy and identifying the sources of energy (Mosteiro-Romero et al., 2014).  Studies show that embodied energy is comparatively less than the operational energy usage in a building (Ramesh, Prakash, & Shukla, 2010). However, embodied energy is an important part of buildings’ life cycle energy use and cannot be ignored, especially with nearly zero energy buildings where operational energy use is significantly less (Lützkendorf et al., 2014; Ramesh et al., 2010; Verbeeck & Hens, 2010). Therefore, several studies show that embodied energy should be given more attention in rating systems. Otherwise, zero-energy buildings will have an unfair advantage despite low-energy buildings having less impact on the environment from a life cycle perspective (Giordano, Serra, Tortalla, Valentini, & Aghemo, 2015; Ramesh et al., 2010). Energy rating systems are used as a tool to identify potential improvements to a building. Hence, use of embodied energy in a rating system for existing buildings can be questioned, as the embodied energy of a building cannot be reduced after construction. 2.5 Life Cycle Assessment (LCA) From construction to demolition, buildings consume a large amount of resources including construction material, energy, and water (Crawford, 2011). Use of these resources results in various discharges such as GHG, heavy metal, dust, and radioactive waste from the built environment. Having a life cycle perspective of a product/process is important to have a better understanding of its impacts (de Bruijn, van Duin, & Huijbregts, 2002; Klöpffer, 1997) and LCA  28  is an important tool to have a understanding of  impacts of energy use (Mosteiro-Romero et al., 2014; Spath, Mann, & Kerr, 1999).  Life cycle assessment is a methodology used to analyse inputs and outputs from the system and their potential impacts during life cycle stages of a product/process (USEPA, 2014). Life cycle stages of a product include stages from material extraction to demolition of the product. 
However, the life cycle stages to be considered in an assessment will vary based on the scope of the assessment.

LCA has gained popularity over the last several decades due to its ability to analyse environmental impacts of products and processes (AL-Nassar et al., 2016; ISO 14040, 2006). The International Standards Organisation (ISO) has published a guideline to standardise LCA, as it is commonly used with different scopes and objectives (ISO 14040, 2006). ISO Standard 14040 defines the four basic steps of an LCA study: goal and scope definition, life cycle inventory (LCI), life cycle impact assessment (LCIA), and interpretation.

The first step, goal and scope definition, focuses on identifying the objectives of the study and the scope that determines the life cycle stages to be included in the assessment. Based on the scope of an LCA, it can be a cradle to grave, cradle to cradle, or gate to gate study (ISO 14040, 2006).

Figure 2:1: Life Cycle Assessment (ISO 14040, 2006); the figure shows the four steps: goal and scope definition, life cycle inventory, life cycle impact assessment, and interpretation.

During the LCI step, data related to the inputs and outputs of the product/process, such as materials and energy, are collected. Based on the data collected during the LCI, environmental impact indicators are quantified in the LCIA. These indicators include global warming potential, human health impacts, and resource depletion. Different LCA software (Athena, SimaPro, etc.) and databases (GaBi, Ecoinvent, etc.) are now commonly used to facilitate LCIA (Frischknecht & Rebitzer, 2005; Gong, Nie, Wang, & Zuo, 2006; Gu, Wennersten, & Assefa, 2006; Mosteiro-Romero et al., 2014; Narita, Nakahara, Morimoto, Aoki, & Suda, 2004). Interpretation is the step where experts make recommendations about the improvements/changes needed to the product/process based on its environmental performance.

For the proposed tool, a cradle to gate LCA boundary is established for energy retrofits and operational energy. Therefore, impacts from the generation of energy to its delivery to the building site are considered.

2.6 Fuzzy Sets

Fuzzy sets are one of the most widely used methods to handle uncertainty and vagueness associated with data (Reza, Sadiq, & Hewage, 2013; Sadiq & Rodriguez, 2004; Zimmermann, 2010). This theory is used in many fields, ranging from artificial intelligence, medicine, and robotics to management and civil and environmental engineering (Blockley, 1979; Brown & Yao, 1983; Reza et al., 2013; Sadiq & Rodriguez, 2004; Zimmermann, 2010).

Fuzzy sets were introduced by Zadeh (1965), who described them as "a class of objects with a continuum of grades of membership." A membership value can range from 0 to 1. The fuzzy set concept facilitates the use of linguistic terms with uncertainties. Partial membership can be used to assign linguistic terms partially, where the boundaries of such terms are not crisp. According to Pedrycz, Ekel, & Parreiras (2010), this can be explained with the following situation: a temperature of 20°C can be identified as comfortable, whereas 30°C and 10°C can be identified as warm and cold, respectively. In this situation, how can a temperature of 12°C, 15°C, or 17°C be categorised as cold or comfortable? Similarly, how can we categorise temperatures of 22°C, 25°C, or 27°C? In these cases, the fuzzy partial membership concept can be used to associate each temperature with a category.

Equation 2:1 presents the general definition of a fuzzy set.
The fuzzy set Ã is denoted as a set of ordered pairs in a universe X, where x denotes the objects of X. The membership function μ_Ã(x) maps each x to a membership grade in the interval 0 to 1.

\tilde{A} = \{ (x, \mu_{\tilde{A}}(x)) \mid x \in X \}

Equation 2:1: Fuzzy Set (Zadeh, 1965)

Fuzzy numbers are named based on the shape of the membership function. The triangular fuzzy number (TFN), trapezoidal fuzzy number, S-fuzzy number, Gaussian fuzzy number, and exponential-like fuzzy number are some examples of fuzzy numbers. A TFN, which is the simplest form of a fuzzy number, is explained by Figure 2:2 and the function below. The TFN shown in Figure 2:2 has a membership (μ) of 0 if x ≤ X1 or x ≥ X3. Membership equals one when x = X2. Membership between X1 and X2, and between X2 and X3, follows the linear functions shown in Figure 2:2.

Figure 2:2: Triangular Fuzzy Number

\mu(x) =
\begin{cases}
0 & \text{if } x \le X_1 \\
\dfrac{x - X_1}{X_2 - X_1} & \text{if } X_1 < x \le X_2 \\
\dfrac{X_3 - x}{X_3 - X_2} & \text{if } X_2 < x < X_3 \\
0 & \text{if } x \ge X_3
\end{cases}

The ability to transform qualitative information to quantitative information using fuzzy sets allows categorical information, such as poor, satisfactory, good, and average performance of assets, to be used in the rating system (Pedrycz et al., 2010). However, sometimes it is not possible to assign the evaluation of performance directly into one category, as it could fall between two categories. The partial membership concept in fuzzy set theory helps in such cases (Pedrycz et al., 2010; Zadeh, 1965). A short illustrative sketch of a triangular membership function is given below.
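The following is a minimal sketch, not part of the thesis tooling, of how the triangular membership function above and the partial-membership idea from the temperature example could be evaluated. The break points (0, 10, 20), (10, 20, 30), and (20, 30, 40) for the cold, comfortable, and warm categories are assumptions chosen to match the 10°C, 20°C, and 30°C anchors mentioned above.

```python
def triangular_membership(x, x1, x2, x3):
    """Membership of x in a triangular fuzzy number defined by (x1, x2, x3)."""
    if x <= x1 or x >= x3:
        return 0.0
    if x <= x2:
        return (x - x1) / (x2 - x1)   # rising edge between X1 and X2
    return (x3 - x) / (x3 - x2)       # falling edge between X2 and X3

# Assumed temperature categories for illustration only
categories = {
    "cold": (0, 10, 20),
    "comfortable": (10, 20, 30),
    "warm": (20, 30, 40),
}

for t in (12, 17, 25):
    memberships = {name: round(triangular_membership(t, *tfn), 2)
                   for name, tfn in categories.items()}
    print(t, memberships)
# e.g. 17 deg C is partially "cold" (0.3) and partially "comfortable" (0.7)
```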
2.7 Fuzzy Synthetic Evaluation (FSE)

Assessments are vague and ambiguous, especially when linguistic terms are used, as described in Section 2.6. Hence, the evaluation of these objects and processes is uncertain, and fuzzy sets are used to incorporate these ambiguities in the assessment. This concept of fuzzy sets is used in fuzzy synthetic evaluation (FSE) to analyse data where multiple objectives are involved (Lu, Lo, & Hu, 1999; Mu, Cheng, Chohr, & Peng, 2014; Sadiq & Rodriguez, 2004; Zhao, Hwang, & Gao, 2016).

For the proposed assessment tool, indicators are assessed using linguistic terms or numerical values, depending on the nature of the indicator. Linguistic assessment is more effective where indicators are too complex to assess numerically (Ross, 2005). Since the proposed assessment tool has multiple objectives, FSE was selected to aggregate its performance indicators. FSE involves four main steps: defining fuzzy membership functions, defining weights for indicators and criteria, aggregation, and defuzzification (Lu et al., 1999; Sadiq & Rodriguez, 2004; Y. Wang, Kuckelkorn, Zhao, Mu, & Li, 2016).

As the first step, fuzzy membership functions are defined as discussed in Section 2.6. Then weights are defined for the indicators used in the assessment; for the proposed tool, weights are defined based on the survey carried out among the stakeholders of MURBs. The next step is the aggregation of indicators. This is explained below by considering the three main criteria (energy performance, environmental performance, and economic performance) identified under the operational rating of the proposed assessment tool. The performance level of the building is measured using a linguistic scale (i.e. excellent, good, fair, poor). After data collection, fuzzy numbers can be calculated for each identified indicator. "R" represents the fuzzy relationship between the performance factors and the performance levels:

R =
                              Very Good   Good   Average   Poor   Very Poor
  Energy performance             0.8       0.2     0.0      0.0      0.0
  Environmental performance      0.0       0.3     0.7      0.0      0.0
  Economic performance           0.4       0.6     0.0      0.0      0.0

Matrix "w" represents the weight matrix for the three criteria:

w = [w1  w2  w3]

The fuzzy vector (e) is defined as the product of matrix "w" and matrix "R".

e = w × R
Equation 2:2: Fuzzy Vector

In a hierarchical process, calculation starts at the lowest level. For Figure 2:3, calculations start at level 4. Performance indicators (PIs) corresponding to a subcategory of level 3 are aggregated using the process described above. This is done for all PIs to aggregate them to their corresponding sub-category. Then Equation 2:2 is repeated to aggregate the sub-categories (Figure 2:3, level 3) to their respective categories (Figure 2:3, level 2). Similarly, this process is repeated to aggregate the categories of level 2 to obtain the overall performance indicated in level 1 of Figure 2:3.

Figure 2:3: Hierarchical Process (level 1: overall performance; level 2: categories; level 3: subcategories; level 4: performance indicators)

Once the overall fuzzy vector is obtained, several methods can be used for defuzzification. The simplest method is to select the highest value in the overall vector and designate the corresponding linguistic term as the overall performance level. For example, consider the overall vector shown in Equation 2:3; in this case the outcome will be considered as "Average".

e = [0.1  0.4  0.7  0.2  0.0]  (Very Good, Good, Average, Poor, Very Poor)
Equation 2:3: Defuzzification

Another method used for defuzzification is to assign numerical values to the linguistic terms, such as very good = 4, good = 3, average = 2, poor = 1, and very poor = 0. Each numerical value is multiplied by the respective score in the overall fuzzy vector, and the products are added to obtain an overall score. Based on this score and the numerical values assigned to the linguistic ratings, the overall rating is selected. A short numerical sketch of this aggregation and defuzzification is given at the end of this section.

Weights of the indicators are a significant factor affecting the result of the analysis. In this study, weights for indicators were calculated based on a survey carried out among stakeholders of MURBs. Modified Digital Logic (MDL), proposed by Dehghan-Manshadi et al. (2007), was used for the analysis of survey responses to determine weights.
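The sketch below is a minimal illustration, not code used in the thesis, of the aggregation in Equation 2:2 and the two defuzzification methods described above. It reuses the example matrix R and, purely for illustration, takes the criterion weights from the overall values later reported in Table 3:5.

```python
# Minimal FSE sketch: aggregate criterion-level fuzzy vectors with weights,
# then defuzzify by maximum membership and by weighted score.
levels = ["Very Good", "Good", "Average", "Poor", "Very Poor"]
level_scores = {"Very Good": 4, "Good": 3, "Average": 2, "Poor": 1, "Very Poor": 0}

# Fuzzy relationship matrix R from the example above
# (rows: energy, environmental, economic performance; columns follow `levels`)
R = [
    [0.8, 0.2, 0.0, 0.0, 0.0],
    [0.0, 0.3, 0.7, 0.0, 0.0],
    [0.4, 0.6, 0.0, 0.0, 0.0],
]
w = [0.439, 0.245, 0.316]  # illustrative criterion weights (overall column of Table 3:5)

# e = w x R (Equation 2:2): weighted sum across the criteria for each level
e = [sum(w[i] * R[i][j] for i in range(len(w))) for j in range(len(levels))]

# Defuzzification 1: take the level with the highest membership
best_level = levels[e.index(max(e))]

# Defuzzification 2: weighted score using the numerical values of the levels
score = sum(e[j] * level_scores[levels[j]] for j in range(len(levels)))

print([round(v, 3) for v in e], best_level, round(score, 2))
```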
2.8 Modified Digital Logic (MDL)

There is a wide range of operational research methods to assist in determining weights in multi-criteria decision making. These include the Analytical Hierarchy Process (AHP), the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), Elimination and Choice Expressing Reality (ELECTRE), and Digital Logic (Dehghan-Manshadi et al., 2007; Hwang & Yoon, 1981; R. W. Saaty, 1987; Song & Kang, 2016).

The digital logic (DL) method was selected for this study as it is simpler than other techniques such as the analytical hierarchy process (AHP). In DL, the only decision the responder has to make is a binary one: which indicator is relatively more important in a pairwise comparison. In AHP, this comparison is made on a wider scale (e.g. 1-9) (R. W. Saaty, 1987; T. L. Saaty, 2008; Song & Kang, 2016). Even though using such a scale gives more accurate weights, it reduces the precision of the survey, as each responder will indicate a different score on the scale. Therefore, DL is selected to determine the weights from survey responses.

In traditional DL, the only decision to be made is whether criterion 1 or criterion 2 is more important compared to the other (Dehghan-Manshadi et al., 2007; Findik & Turan, 2012). MDL provides another option, where the responder can indicate that both criteria are equally important (Dehghan-Manshadi et al., 2007). Therefore, MDL is helpful for gaining a better understanding of the relative importance of these criteria and determining weights.

As the first step in MDL, a pairwise comparison of all indicators is performed, as shown in Table 2:3. In this pairwise comparison, the preferred indicator is assigned a value of 3 while the other indicator is assigned a value of 1 (e.g. in the comparison between indicator 1 and indicator 2, indicator 2 is preferred over indicator 1). If both indicators are of equal preference, a value of 2 is assigned to both indicators (e.g. the comparison of indicator 2 and indicator 3). After all comparisons are made, row-wise addition is performed to calculate the number of positive decisions for each indicator. Then, the number of positive decisions for each indicator is divided by the total number of positive decisions to determine the weight of the indicator.

Table 2:3: Modified Digital Logic (scores received by each indicator in its pairwise comparisons)
1. Remaining service life: 1-2 = 1, 1-3 = 1, 1-4 = 2, 1-5 = 2; positive decisions = 6; weight = 0.15
2. Condition of the thermal characteristics of the building including the air permeability: 1-2 = 3, 2-3 = 2, 2-4 = 2, 2-5 = 3; positive decisions = 10; weight = 0.25
3. Efficiency of heating installation and hot water supply, including their insulation method: 1-3 = 3, 2-3 = 2, 3-4 = 3, 3-5 = 3; positive decisions = 11; weight = 0.275
4. Efficiency of the air-conditioning installation where installed: 1-4 = 2, 2-4 = 2, 3-4 = 1, 4-5 = 3; positive decisions = 8; weight = 0.2
5. Efficiency of artificial built-in lighting: 1-5 = 2, 2-5 = 1, 3-5 = 1, 4-5 = 1; positive decisions = 5; weight = 0.125
Total positive decisions = 40; total weight = 1

Chapter 3: Performance Indicator Identification and Prioritization

3.1 Overview

A framework for an energy assessment tool was developed to address the limitations and challenges identified during the comprehensive review of popular rating systems and other published literature. Based on the review of existing energy rating systems, it was identified that most of them pay attention only to energy consumption. To overcome this issue and consider a wide range of energy-use related aspects, an indicator-based assessment tool was proposed in this study. This indicator-based approach is used in sustainable building rating systems to consider different aspects of building management and their impacts.

Figure 3:1 shows the overview of the proposed energy assessment tool. Building energy performance is assessed by considering the condition of the main energy components (asset rating) and the operational performance (operational rating) of a building. The combined outcome of the asset rating and operational rating is used to define the building's energy performance.

Figure 3:1: Overview of the energy rating method (building energy performance assessment, comprising an operational rating with operational energy, environmental, and economic performance, and an asset rating with asset performance)

The asset rating focuses on the operational condition of individual energy components. Hence, indicators used for the asset rating will help to identify whether any improvements are needed. The operational rating focuses on energy performance, environmental performance, and economic performance.
Indicators identified (Section 3.2) in the sub-categories of the operational rating focus on operational energy use, operational and maintenance practices, ecological and human health impacts, and economic impacts of energy use.  A combination of both the asset rating and the operational rating enables the building owners/managers to adopt different approaches to improve the energy performance of the building. A poor asset rating indicates that retrofits should be considered to improve the energy performance. A building with a poor operational rating but a good asset rating indicates that operational changes are the most important factor to improve energy performance.  It is important to have expert opinions and feedback on the proposed framework to make necessary adjustments. Therefore, as the next step experts in the field were consulted to prioritise the indicators identified during the literature review.    A questionnaire survey was conducted to obtain expert opinion, as survey research is a commonly used systematic method to help decision making (Lavrakas, 2008). Surveys have been widely used to identify people’s opinion for several decades (Clifford, Cope, Gillespie, & French, 2016; Trochim, William, M & Donnelly, James, 2008). Perception of risk, environmental concerns, energy use, and access to employment are some of the areas in which survey research has been used (Andersen, Toftum, Andersen, & Olesen, 2009; Clifford et al., 2016). 3.2 Methodology This section discusses the methodology used to identify indicators and determine their weights for the assessment.  3.2.1 Performance indicator identification A comprehensive literature review was conducted to identify the performance indicators (PI) for assessing energy performance and impacts of energy use. The purpose of the review was to identify limitations and challenges of existing rating systems, identify PIs, identify assessment processes, and identify suitable benchmarks. The literature contained three components: i. Review popular energy rating systems ii. Review sustainable building rating systems  iii. Review published literature on building energy rating  37  Assessment procedures, challenges, and limitations of existing energy rating systems are discussed in Chapter 2. This chapter focuses on the PIs used to assess energy performance.  3.2.2 Expert consultation Advances in the internet have encouraged its use for contacting survey respondents over traditional methods, such as postal mail surveys (Couper, Traugott, & Lamias, 2001; Hoonakker & Carayon, 2009; Kaplowitz, Hadlock, & Levine, 2004; Wright, 2006). Access to distant responders, low cost, speed, ease of administration, higher flexibility, and higher response quality are advantages of internet-based surveys (Hoonakker & Carayon, 2009; Wright, 2006). However, disadvantages of internet-based surveys include sampling issues, low response rates, computer security issues, and lack of anonymity (Hoonakker & Carayon, 2009; Wright, 2006).   Considering the merits of internet-based surveys, email was selected as the primary method to contact potential respondents. Email addresses of potential respondents were found via web search of relevant institutions, contacts of researchers and professionals, and contact information of publications. The email indicated that respondents had three options for responding if they chose to: i. Complete the web-based survey ii. Complete the editable portable document format (PDF) of the questionnaire iii. 
Be interviewed by the researchers

All three methods used the same questions on different platforms. The web survey was developed using the UBC-hosted version of FluidSurveys to ensure the security of data. The web platform is a user-friendly, easily accessible platform, and respondents could respond anonymously. More than 85% of respondents used the web survey to respond to the questionnaire. Another option was to complete the PDF of the questionnaire and email it to the researchers. This option was given as some responders feel more comfortable working in an offline environment; this was the second most preferred method. The interview option was offered if any responders preferred a discussion with the researchers, and it was the least selected option. Only respondents in the Okanagan were contacted for interviews.

3.2.2.1 Target audience

The survey targeted four main groups: building owners and managers, building and energy system designers/engineers, researchers, and government and external stakeholders. The potential respondents for this survey were selected based on a web search of relevant institutes and company web sites. Building owners/managers, designers/engineers, and government and external stakeholders were selected from BC only. However, due to the limited number of researchers in this area, potential respondents from academia were selected from across Canada.

3.2.2.2 Sample size determination

Sample size plays a vital role in survey research, as results from the survey are usually generalised to the population (Bartlett, Kotrlik, & Higgins, 2001; Welkowitz, Cohen, & Lea, 2012). An appropriate sample size is important to reduce bias in responses (Aday & Cornelius, 2006; Bartlett et al., 2001; Welkowitz et al., 2012). Aday & Cornelius (2006) discuss two methods for determining sample size, based on the design of the survey: descriptive design based on sample distribution, and analytical or experimental design based on power analysis. For this study, the method based on sample distribution (Equation 3:1) was selected to determine the sample size.

n = \frac{Z_{1-\alpha/2}^{2} \, P(1-P)}{d^{2}}
Equation 3:1: Sample Size Determination

where n = sample size; Z_{1-\alpha/2} = Z score for the selected confidence level; P = estimated proportion; d = desired precision.

For a 95% confidence level with 0.2 desired precision and an equal chance for each response (P = 0.5), the sample size determined from Equation 3:1 is 25 (a numerical check of this value is given at the end of this subsection).

The response rate for questionnaires varies based on the subject and the method of communication, postal mail or e-mail (Hoonakker & Carayon, 2009). A review of response rates reveals that the response rate for email surveys can vary from 5% to 70% (Hoonakker & Carayon, 2009; Kaplowitz et al., 2004). However, the response rate generally lies between 10% and 15% for external surveys (Fryrear, 2015). Therefore, 250 emails were sent assuming a response rate of 10%.
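As a quick check of the sample size reported above, the following minimal sketch (an illustration rather than code from the thesis) evaluates Equation 3:1 for a 95% confidence level, P = 0.5, and d = 0.2; the function name and rounding choice are assumptions.

```python
from math import ceil

def sample_size(z, p, d):
    """Equation 3:1: n = z^2 * p * (1 - p) / d^2, rounded up to a whole respondent."""
    return ceil(z ** 2 * p * (1 - p) / d ** 2)

# z = 1.96 is the standard normal score for a 95% confidence level
print(sample_size(z=1.96, p=0.5, d=0.2))  # prints 25
```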
3.2.2.3 Questionnaire design

Questionnaire design is one of the most important parts of the survey process, as it plays a vital role in getting accurate responses and increasing response rates (Leung, 2001; Pew Research Center, 2017). The wording of the questions, the length of the questionnaire, and the arrangement of questions are critical in this regard (Leung, 2001; Pew Research Center, 2017). The objective of the study determines the structure of the questionnaire (Brace, 2004). The survey in this study focused on understanding the current practice of energy rating in MURBs and stakeholder expectations of a rating tool, and on obtaining comments on the proposed rating tool. Further, questions were designed to obtain the weights for the indicators of the proposed tool. Appendix B shows the questionnaire used for this survey. It is a six-page questionnaire in print format and a five-page questionnaire on the web platform. The questionnaire adopted the question arrangement suggested by Leung (2001), progressing from simple, closed questions to difficult, open questions.

Section 1: Responder experience, expected information, and current practices
Section 2: Rating the performance indicators and the relative importance of indicators and indicator categories
Section 3: Open-ended questions on suggestions for energy rating systems

The type of questions asked determines the possible themes that can be discussed in the questionnaire, as well as the time needed to answer the questionnaire and to analyse the results (Brace, 2004; Leung, 2001; Pew Research Center, 2017). As shown in Appendix B, this questionnaire uses a wide range of question types to obtain responders' opinions. Closed questions (choice of options, differential scales, checklists) and open questions (open-ended questions) were used to obtain the information needed.

Piloting the questionnaire is needed, as questionnaires are rarely in their best form at the beginning (Brace, 2004; Leung, 2001; Pew Research Center, 2017). The initial questionnaire was sent to several external people from technical and non-technical backgrounds to test face validity. Based on their comments, the questionnaire was modified to make it easier to comprehend. Changes included rewording, adding descriptions, and changing the order of questions.

Researchers have a responsibility to protect research subjects from potential threats (Jong, Hibben, & Pennell, 2016). Therefore, ethics guidelines are developed to ensure that participants are treated with respect and consideration (Jong et al., 2016; UBC Office of Research Ethics, 2015). The tested questionnaire was submitted for approval to the UBC Okanagan behavioural research ethics board to ensure conformity with ethical standards (see Appendix C). After obtaining ethics approval, the questionnaire was sent to potential respondents via email with the recruitment letter. If they wished to participate, respondents had the option to answer the questionnaire directly via the UBC-hosted Fluid Survey web tool, via the editable PDF document, or via an interview. Responses received via the editable PDF and interviews were entered as web entries for data handling purposes.

3.3 Performance Indicator Identification

The proposed energy assessment tool focuses only on the operational phase, as it targets existing MURBs. The proposed tool has two major components: an operational rating and an asset rating (Figure 3:1). The indicator-based approach was selected for this new energy assessment tool as it takes into account multiple criteria. Table 3:1 lists the key performance indicators (PIs) identified from a comprehensive review of popular energy rating and sustainable building rating systems and white literature.
Performance Indicators 41  Table 3:1: Performance Indicators Category  Performance Indicator Reference Operational Rating Energy performance EN-1 Annual energy consumption (BRE Global, 2012; El shenawy & Zmeureanu, 2013; Energy star, 2015; Green Building Council Denmark, n.d.; HKGBC, 2010; Institute for Building Efficiency, 2013; Srinivasan, Ingwersen, Trucco, Ries, & Campbell, 2014; Vijayan & Kumar, 2005; Vučićević, Jovanović, Afgan, & Turanjanin, 2014) EN-2 Renewable energy consumption  (Canada Green Building Council, 2009; Green Building Council Denmark, n.d.; Green Building Initiative, 2014; HKGBC, 2010; Institute for Building Efficiency, 2013; Namini et al., 2014) EN-3 Peak demand (Green Building Council of Australia, 2015; HKGBC, 2010) EN-4 Availability of sub-metering  (Fischer, 2008; Green Building Initiative, 2014; Namini et al., 2014) EN-5 Energy recovery ventilation system (Green Building Initiative, 2014) EN-6 Availability of combined heat and ventilation (Green Building Initiative, 2014) EN-7 Energy-efficient operating procedures (Canada Green Building Council, 2009) EN-8 Availability of energy monitoring (Green Building Initiative, 2014) EN-9 Trained staff for building management (Green Building Initiative, 2014) EN-10 Availability of maintenance schedules (Green Building Initiative, 2014) Environmental Performance EV-1 Water depletion  (Al-Ghamdi & Bilec, 2015)  EV-2 Global warming potential  (Al-Ghamdi & Bilec, 2014, 2015; German Sustainable Building Council, 2016; Hossaini, Reza, Akhtar, Sadiq, & Hewage, 2015; Hu, Shiue, Chuang, & Xu, 2013; Kim & Todorovic, 2013; Mosteiro-Romero et al., 2014; Mwasha, Williams, & Iwaro, 2011) EV-3 Ozone depletion potential  (Al-Ghamdi & Bilec, 2014; German Sustainable Building Council, 2016;  Performance Indicators 42  Hossaini et al., 2015; Hu et al., 2013; Mosteiro-Romero et al., 2014) EV-4 Nutrification/ eutrophication potential  (Al-Ghamdi & Bilec, 2014; German Sustainable Building Council, 2016; Hossaini et al., 2015; Hu et al., 2013; Mosteiro-Romero et al., 2014) EV-5 Heavy metal  (Hu et al., 2013) EV-6 Smog potential  (Al-Ghamdi & Bilec, 2014; German Sustainable Building Council, 2016; Hossaini et al., 2015; Hu et al., 2013) EV-7 Acidification potential  (Al-Ghamdi & Bilec, 2014; German Sustainable Building Council, 2016; Hossaini et al., 2015; Hu et al., 2013; Mosteiro-Romero et al., 2014) EV-8 Radioactive waste/Eco-toxicity  (Al-Ghamdi & Bilec, 2014; Hossaini et al., 2015) EV-9 Habitat Alteration (Hossaini et al., 2015) EV-10 Human health respiratory effects potential  (Al-Ghamdi & Bilec, 2015; Hossaini et al., 2015) EV-11 Carcinogens  (Al-Ghamdi & Bilec, 2014; Reza, Sadiq, & Hewage, 2014) Economic performance EC-1 Operational costs (Borchers, Duke, & Parsons, 2007; Fischer, 2008; Kamali & Hewage, 2015) EC-2 Maintenance costs of energy system and retrofits (Kamali & Hewage, 2015) EC-3 Savings generated from retrofits (Kamali & Hewage, 2015)(Gram-Hanssen, 2014) Asset Rating     Asset Performance AS-1 Remaining service life of HVAC system (Canada Green Building Council, 2012; CIBSE Certification, 2016; Green Building Initiative, 2015) AS-2 Condition of the thermal characteristics of the building including the air permeability. (Canada Green Building Council, 2012; CIBSE Certification, 2016; German Sustainable Building Council, 2016; Green Building Initiative, 2015) AS-3 Efficiency of the heating installation and hot water supply, including their insulation method. 
(Canada Green Building Council, 2012; CIBSE Certification, 2016) AS-4 Efficiency of the air-conditioning installation where installed. (Canada Green Building Council, 2012; CIBSE Certification, 2016; Green Building Initiative, 2015) AS-5 Efficiency of the artificial built-in lighting (Canada Green Building Council, 2012; CIBSE Certification, 2016; Green Building Initiative, 2015)

3.3.2 Operational rating

Energy performance, environmental performance, and economic performance are the sub-categories under the operational rating. These sub-categories are assessed based on operational conditions, and hence are categorised under the operational rating.

3.3.2.1 Energy performance

The energy performance category focuses on annual energy consumption, renewable energy consumption, etc., as shown in Table 3:1. These are based on measured energy use in MURBs. In this rating system, total energy consumption is considered to determine energy intensity, which is used to assess annual energy consumption (EN-1). EnerGuide deducts the renewable energy consumption from the total energy consumption in assessing annual energy performance (Natural Resources Canada, 2016b). This overlooks potential impacts caused by renewable energy sources. Further, deducting the energy from renewable energy sources may give an unfair advantage to renewable energy users over low energy consuming buildings. Therefore, two separate indicators were proposed to assess these aspects separately (EN-1 and EN-2). EN-3 focuses on the time of day with the highest demand for energy, and the demand at that time. This becomes an important factor in high energy-demand environments, as distributing the energy demand throughout the day aids in the effective use of the energy supply infrastructure, such as the electricity grid. Several sustainable rating systems use this indicator for this reason (Green Building Council of Australia, 2015; HKGBC, 2010).

EN-4, EN-7, EN-8, EN-9, and EN-10 assess the operational and maintenance practices of the building. These PIs were selected because operational and maintenance practices affect how energy is consumed and the awareness of the building's energy use (Green Building Initiative, 2014). EN-5 and EN-6 focus on the availability of energy-efficient, state-of-the-art technology in the building. These energy-efficient technologies help to minimize the energy use of the building without compromising indoor environmental quality.
The proposed energy assessment tool tries to incorporate a wide range of environmental impacts, and they are listed under environmental performance in Table 3:1.

3.3.2.3 Economic performance

Economic performance indicators reflect the financial impacts of the building's energy system. These indicators were selected to evaluate the economic burden of the choice of different energy sources and retrofit options. Life cycle costs (LCC) of retrofits and operational energy are assessed by the proposed economic performance indicators (EC-1 to EC-3). Operational cost (EC-1) depends on the source of energy, as utility bills are the primary operational cost. This is an important factor, as energy bills are a major concern of residents (Borchers et al., 2007). EC-2 pays attention to the maintenance cost of the energy system and energy retrofits. However, studies show that it is not only the cost of energy retrofits but also what happens after the retrofits are installed that is important to consumers (Gram-Hanssen, 2014). Therefore, EC-3 was selected to assess the savings generated from retrofits. The combination of these economic PIs gives a life cycle perspective of the energy-related retrofits.

3.3.3 Asset rating

The indicators related to the asset rating assess the performance of energy-related assets. Assessing these indicators helps to identify whether any improvements are needed in energy-related assets. Improving the performance of energy-related assets will ensure efficient use of the assets and will improve energy performance. Table 3:1 lists the five indicators selected under this category (AS-1 to AS-5). Heating and cooling are the predominant energy consumers in residential buildings, especially in cold climates (Balaras et al., 2007, 2005; Pérez-Lombard et al., 2008). Therefore, heating and cooling should be given attention to improve energy performance. AS-1, AS-2, AS-3, and AS-4 assess the condition of the HVAC system and other assets, such as the building envelope and the water heating system, which affect the heating and cooling energy demand of the building. AS-5 assesses the efficiency of the lighting, as lighting technology has improved rapidly in efficiency.

3.4 Performance Indicator Prioritization

A total of 32 completed surveys were received from the three modes that were available for participating in the survey: the Fluid Survey tool, the editable PDF, and interviews. Twelve of these completed surveys are from designers and engineers, eight are from researchers, six are from government and other stakeholders, and six are from building owners and managers. At the initial stage, the lowest response was from the building owners and managers; hence, more interviews were organised targeting building owners and managers. Experts with considerable experience participated in this questionnaire survey. Figure 3:2 shows the distribution of expertise of the respondents.

Figure 3:2: Respondents' Experience (1-3 years: 7%; 4-5 years: 19%; 6-10 years: 29%; 10+ years: 45%)

3.4.1 Importance of identified performance indicators

The questionnaire survey was designed to obtain responders' opinions on the importance of indicators. Table 3:2 shows the rating criteria used to assess indicators. Table 3:3 summarises the results of the survey based on stakeholder groups. The overall indicator assessment is calculated based on the average of the different stakeholder groups.
Table 3:2: Rating Criteria for Indicators
Very high (5): Must be included; the indicator is highly relevant and highly important for energy consumption or impacts of energy use
High (4): Highly relevant and of average importance for energy consumption or impacts of energy use
Average (3): Average relevance and average importance for energy consumption or impacts of energy use
Low (2): The indicator has low relevance and importance for energy consumption or impacts of energy use
Very low (1): Seems to be irrelevant for energy consumption or impacts of energy use

Table 3:3 shows that the importance of each indicator differs with the stakeholder group. However, within a stakeholder group, the ratings tend to cluster closely on the rating scale. For example, EN-1 (annual energy consumption) is recognised as an indicator with very high importance by 100% of designers and government and external stakeholders. Of MURB owners and managers, 67% state that it is an indicator with very high importance, 17% state it is an indicator with high importance, and another 17% state it is an indicator with average importance. However, none of the owners or managers state that EN-1 is an indicator with low importance. This consistency suggests that the results of the survey are reliable.

Based on the results (Table 3:3), annual energy consumption (EN-1) is the most important indicator for energy performance, followed by energy-efficient operating procedures (EN-7). Global warming potential (EV-2), human health respiratory effects potential (EV-10), and carcinogens (EV-11) are the most important environmental performance indicators. All economic and asset performance indicators have equal importance based on the overall assessment.
Prioritizing Indicators 48  Table 3:3: Prioritizing Indicators  Category    Indicator Rating of Indicators* (% of responses)  Researchers Designer/ Engineer MURB Owner/Manager Government / External stakeholder Overall Final  Operational Rating 5 4 3 2 1 5 4 3 2 1 5 4 3 2 1 5 4 3 2 1 5 4 3 2  1  Energy performance EN-1 38 50 0 13 0 100 0 0 0 0 67 17 17 0 0 100 0 0 0 0 76 17 4 3 0 5 EN-2 63 38 0 0 0 40 0 40 20 0 17 33 33 17 0 25 25 25 0 25 36 24 25 9 6 4 EN-3 25 50 25 0 0 60 20 20 0 0 0 17 67 17 0 25 50 25 0 0 28 34 34 4 0 4 EN-4 38 50 13 0 0 10 40 20 30 0 0 17 67 17 0 0 75 25 0 0 12 45 31 12 0 4 EN-5 0 25 63 13 0 30 50 10 10 0 50 17 0 33 0 50 50 0 0 0 33 35 18 14 0 4 EN-6 50 38 13 0 0 0 60 20 10 10 0 17 17 67 0 0 50 25 25 0 13 41 19 25 3 3 EN-7 50 50 0 0 0 20 50 20 10 0 0 67 33 0 0 100 0 0 0 0 43 42 13 3 0 4 EN-8 38 25 38 0 0 40 30 30 0 0 0 67 33 0 0 75 25 0 0 0 38 37 25 0 0 4 EN-9 38 13 38 13 0 40 20 30 10 0 17 50 17 17 0 50 50 0 0 0 36 33 21 10 0 4 EN-10 0 63 25 13 0 10 30 40 10 10 0 50 50 0 0 50 50 0 0 0 15 48 29 6 3 4 Environmental Performance EV-1 25 25 50 0 0 20 10 50 20 0 0 83 17 0 0 75 0 25 0 0 30 30 35 5 0 4 EV-2 13 0 75 13 0 30 20 30 20 0 33 50 0 0 17 75 25 0 0 0 38 24 26 8 4 4 EV-3 0 13 63 13 13 0 20 40 40 0 0 83 0 0 17 50 0 25 25 0 13 29 32 19 7 3 EV-4 0 0 63 25 13 0 0 20 60 20 0 67 17 0 17 50 25 0 0 25 13 23 25 21 19 3 EV-5 0 38 38 13 13 0 0 20 50 30 0 67 17 0 17 25 50 0 25 0 6 39 19 22 15 3 EV-6 13 0 50 25 13 0 0 30 60 10 0 67 17 0 17 50 25 25 0 0 16 23 30 21 10 3 EV-7 0 0 63 25 13 0 0 30 60 10 0 67 17 0 17 25 0 50 0 25 6 17 40 21 16 3 EV-8 0 0 63 25 13 0 0 30 60 10 0 67 0 17 17 25 25 0 25 25 6 23 23 32 16 3 EV-9 25 25 38 0 13 0 0 30 60 10 0 67 0 17 17 25 25 0 25 25 13 29 17 25 16 3 Prioritizing Indicators 49  EV-10 13 25 38 25 0 10 50 20 20 0 33 50 0 0 17 75 25 0 0 0 33 38 14 11 4 4 EV-11 50 50 0 0 0 20 10 10 50 10 33 50 0 0 17 75 25 0 0 0 45 34 3 13 7 4 Economic performance EC-1 38 38 25 0 0 60 30 10 0 0 67 0 33 0 0 100 0 0 0 0 66 17 17 0 0 4 EC-2 38 13 38 13 0 50 30 20 0 0 67 0 33 0 0 100 0 0 0 0 64 11 23 3 0 4 EC-3 38 13 38 13 0 60 30 10 0 0 83 0 17 0 0 100 0 0 0 0 70 11 16 3 0 4  Asset rating                          Asset Performance AS-1 50 50 0 0 0 50 30 10 10 0 17 83 0 0 0 100 0 0 0 0 54 41 3 3 0 4 AS-2 50 50 0 0 0 40 50 10 0 0 17 67 17 0 0 75 25 0 0 0 45 48 7 0 0 4 AS-3 50 50 0 0 0 60 40 0 0 0 17 67 17 0 0 100 0 0 0 0 57 39 4 0 0 5 AS-4 38 38 13 0 13 50 50 0 0 0 0 83 17 0 0 75 25 0 0 0 41 49 7 0 3 4 AS-5 38 38 13 0 13 40 40 20 0 0 17 67 17 0 0 75 25 0 0 0 42 42 12 0 3 4  * Refer Table 3.2 to rating criteria 50  3.4.2 Weights for performance indicator categories The weights for indicators were calculated based on the modified digital logic (MDL) proposed by Dehghan-Manshadi et al. (2007). The digital logic (DL) method was selected for this study as it uses a simple method, compared to other techniques such as analytical hierarchical process (AHP). In DL, the only decision the responder has to make is which indicator is relatively important compared to another in a pairwise comparison. In AHP this comparison is made on a scale of 1-9, 1-7 etc. (T. L. Saaty, 2008; Song & Kang, 2016). Using such a scale would reduce the precision of the survey as each responder will indicate a different score in the scale. Therefore, DL is more appropriate for this study.  In traditional DL the only decision to be made is whether criterion 1 is more important or criterion two is more important compared to the other (Dehghan-Manshadi et al., 2007; Findik & Turan, 2012). 
MDL provides an additional option where the responder can indicate that both criteria are equally important (Dehghan-Manshadi et al., 2007). Therefore, MDL gives a better understanding of the relative importance of these criteria. The questionnaire did not attempt to determine weights for individual operational rating indicators, as the experts consulted in this survey did not have the specialised knowledge needed to judge the relative importance of those indicators. However, a question was included to identify the importance of the indicators, which gave respondents an overview of each category.

In this study, each responder was first analysed individually to determine the weights they gave to each category. Weights were then calculated within each stakeholder group, treating every responder equally. Overall weights were determined by giving all stakeholder groups the same weight.

Table 3:4 shows the weights for the two major categories of the performance assessment: the operational rating and the asset rating. All stakeholder groups in the survey identify the operational rating as more important than the asset rating. Researchers, MURB owners, and government and external stakeholders gave roughly a 3:1 importance to the operational rating over the asset rating. Designers and engineers gave less importance to the operational rating than the other stakeholders did. The indicators identified under the operational rating of the proposed rating system cover energy performance, environmental performance, and economic performance. Since the operational rating covers such a wide range of issues, a higher weight from stakeholders is expected.

Table 3:4: Weights for Operational and Asset Rating
Category: Researchers / Designer-Engineer / MURB Owner-Manager / Government-External stakeholder / Overall
Operational rating: 0.719 / 0.583 / 0.708 / 0.750 / 0.690
Asset rating: 0.281 / 0.417 / 0.292 / 0.250 / 0.310

Table 3:5 presents the weights for the main categories considered under the operational rating. All stakeholder groups except designers and engineers identify energy performance as the most important performance category under the operational rating. Economic performance is the most important category for designers and engineers. Environmental performance receives the lowest weight from all stakeholder groups.

Table 3:5: Weights for Categories of Operational Rating
Category: Researchers / Designer-Engineer / MURB Owner-Manager / Government-External stakeholder / Overall
Energy performance: 0.449 / 0.371 / 0.438 / 0.500 / 0.439
Environmental performance: 0.269 / 0.207 / 0.281 / 0.222 / 0.245
Economic performance: 0.282 / 0.422 / 0.281 / 0.278 / 0.316

Indicators under energy performance cover aspects ranging from annual energy consumption to operational practices. Since annual energy consumption is the main concern and affects a wide range of issues, stakeholders assigned the highest weight to energy performance. Every energy-related decision also has financial implications, which explains the second-highest weight, given to economic performance. Because designers are particularly concerned with the financial implications of their designs, they gave a higher weight to economic performance than to the other two categories. Even though environmental performance received the lowest weight, that weight is not negligible compared to the other two categories, which implies that stakeholders are aware of the possible environmental implications of energy use.
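The weights in Tables 3:4 and 3:5 follow from the MDL comparisons and the two-level averaging described at the start of Section 3.4.2. The sketch below illustrates that calculation. The thesis does not list the numerical scores attached to each pairwise decision, so the commonly used MDL scheme (3 for more important, 2 for equally important, 1 for less important) is assumed here; only the averaging logic follows the text, and the example responses are hypothetical.

# A minimal sketch of MDL-based weighting with per-responder, per-group and
# overall averaging. Scores of 3/2/1 per pairwise decision are an assumption.
from itertools import combinations

SCORE = {"more": 3, "equal": 2, "less": 1}
FLIP = {"more": "less", "less": "more", "equal": "equal"}

def mdl_weights(categories, decisions):
    """Weights for one responder; decisions maps (a, b) to 'more'/'equal'/'less',
    read as 'a is more/equally/less important than b'."""
    totals = {c: 0 for c in categories}
    for a, b in combinations(categories, 2):
        d = decisions[(a, b)]
        totals[a] += SCORE[d]
        totals[b] += SCORE[FLIP[d]]
    grand = sum(totals.values())
    return {c: totals[c] / grand for c in categories}

def average(weight_dicts):
    """Equal-weight average of a list of weight dictionaries."""
    cats = weight_dicts[0].keys()
    return {c: sum(w[c] for w in weight_dicts) / len(weight_dicts) for c in cats}

# Hypothetical responses: one responder per group shown for brevity.
cats = ["operational", "asset"]
researcher = mdl_weights(cats, {("operational", "asset"): "more"})
designer = mdl_weights(cats, {("operational", "asset"): "equal"})
group_weights = {"researchers": average([researcher]),
                 "designers": average([designer])}
overall = average(list(group_weights.values()))   # all groups weighted equally
print(overall)   # {'operational': 0.625, 'asset': 0.375}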
Table 3:6 shows that all indicators considered under the asset rating received relatively equal importance from all stakeholder groups. However, remaining service life received the lowest weight from most stakeholder groups. The relative importance of the other indicators differs between stakeholder groups.

Table 3:6: Weights for Categories of Asset Rating
Category: Researchers / Designer-Engineer / MURB Owner-Manager / Government-External stakeholder / Overall
Remaining service life: 0.147 / 0.163 / 0.196 / 0.117 / 0.155
Condition of the thermal characteristics of the building, including air permeability: 0.225 / 0.225 / 0.238 / 0.200 / 0.222
Efficiency of the heating installation and hot water supply, including their insulation method: 0.238 / 0.233 / 0.225 / 0.242 / 0.234
Efficiency of the air-conditioning installation, where installed: 0.219 / 0.198 / 0.133 / 0.175 / 0.181
Efficiency of artificial built-in lighting: 0.172 / 0.183 / 0.208 / 0.267 / 0.207

3.5 Discussion
The literature review indicated that a wide range of factors affect energy use and that energy use has a variety of impacts. The performance indicators identified in Table 3:1 summarise a wide range of concerns related to energy performance.

The survey revealed that the relative importance given to each factor varies with the stakeholder group. However, all stakeholder groups identified annual energy consumption as an important performance indicator (Table 3:3). Further, the use of this indicator in many energy rating and sustainable building rating systems underscores its importance. Even though the other indicators identified under energy performance are less important than annual energy consumption, more than 50% of the survey respondents rated those indicators as of very high or high importance. Water depletion, global warming potential, human health respiratory effects potential, and carcinogens have the highest overall importance among the environmental performance indicators.

All the other environmental performance indicators had less than 50% of survey respondents rating them as of very high or high importance. However, none of the indicators had more than 50% of survey respondents rating them as of low or very low importance.

The majority of survey respondents identified all economic performance indicators and asset performance indicators as having very high or high importance. More than 90% of all stakeholder groups identified all asset performance indicators as having very high or high importance.

With regard to the two main criteria of the performance assessment, the operational rating and the asset rating, the operational rating obtained an overall weight of 0.69 and the asset rating 0.31 based on the survey responses (Table 3:4). The two criteria were identified based on the information that can be generated to improve the energy performance of a building. However, since asset condition affects operational performance, the relatively low weight given to the asset rating is justifiable.

Energy performance obtained the highest weight (0.439) under the operational rating (Table 3:5), followed by economic performance (0.316) and environmental performance (0.245). Even though the survey did not attempt to estimate a weight for each indicator under these sub-categories, in order to limit the complexity of the questionnaire, the importance ratings given by the survey respondents (Table 3:3) and the literature were used to determine the relative importance of the indicators. AS-3 (efficiency of the heating installation and hot water supply, including their insulation method) obtained the highest weight among the asset performance indicators (0.234).
However, all asset performance indicators obtained relatively equal weights based on the survey responses (Table 3:6).

3.6 Summary
Based on the literature review, two criteria were identified to assess the energy performance of a building: operational rating and asset rating. Energy performance, environmental performance, and economic performance were identified as sub-categories of the operational rating in order to consider a wide range of impacts of energy use. Asset condition was assessed under the asset rating. A questionnaire survey was conducted among different stakeholder groups to prioritise these indicators. Based on the survey results, the operational rating obtained an overall weight of 0.69, while the asset rating attained 0.31. The sub-categories under the operational rating (energy performance, environmental performance, and economic performance) received weights of 0.439, 0.245, and 0.316 respectively. Each asset performance indicator achieved a relatively equal weight based on the survey: AS-1 (0.155), AS-2 (0.222), AS-3 (0.234), AS-4 (0.181), and AS-5 (0.207). The stated preferences of stakeholders and the weights calculated from the relative importance of the different indicators and indicator categories were used to aggregate the indicators and assess overall building energy performance, as discussed in Section 4.7.

Chapter 4: Building Energy Performance Assessment Framework
4.1 Background
Energy rating systems are a popular tool for promoting energy-efficient buildings and increasing awareness of energy efficiency (Natural Resources Canada, 2016g, 2016h; Pérez-Lombard et al., 2009). The questionnaire survey targeting experts in MURBs (discussed in Section 3.2.2) sought to identify the current use of different rating systems and the information expected from a rating system. Based on the results of the survey, Energy Star was the most popular energy rating system among the stakeholders, followed by EnerGuide and R-2000 (Table 4:1). A total of 32.5% of the survey respondents indicated using "other" rating systems; however, that does not necessarily mean that they use different rating systems. Rather, these respondents described the tools they use to assess energy performance. Designers and engineers use in-house tools to assess energy performance based on the National Energy Code of Canada for Buildings and ASHRAE 90.1. Further, they use tools such as eQUEST, IESVE, EnergyPlus, and the Passive House Planning Package for energy simulations. In addition to energy rating systems, green building rating systems such as LEED, Built Green, and the Residential Environmental Assessment Program (REAP) are used to assess building performance.

Table 4:1: Use of Energy Ratings
Energy Rating: % that use or recommend the rating system*
EnerGuide: 15.5
R-2000: 12.5
Energy Star: 31.5
Other: 31.5
None: 15.5
Don't know: 3
* Percentages sum to more than 100% as some responders use more than one rating system

4.1.1 Expected information from an energy rating tool
Table 4:2 summarises the information respondents expect from an energy rating tool, by stakeholder category. Based on the responses, comparing energy performance with other buildings is the most important factor across all stakeholder groups except MURB owners and managers. MURB owners and managers are more interested in the cost savings that can be generated from improvements: 100% of MURB owners identify the cost-saving criterion as very important or important.
More than 85% of researchers and government stakeholders agree that environmental impacts are very important or important. Even though this is less for designers and building owners, more than 65% state that environmental impacts are very important or important.  Table 4:2:Expected Information from an Energy Rating Tool  Very Important Important Not Important Researchers    Energy performance compared to similar buildings 88% 13% 0% Environmental impacts from energy use 50% 50% 0% Total energy consumed to construct the building 13% 50% 38% Potential energy saving strategies (operational phase) 63% 38% 0% Cost savings that can be generated from energy savings 38% 63% 0% Designer/ Engineer    Energy performance compared to similar buildings 100% 0% 0% Environmental impacts from energy use 25% 42% 33% Total energy consumed to construct the building 8% 67% 25% Potential energy saving strategies (operational phase) 42% 58% 0% Cost savings that can be generated from energy savings 67% 25% 8% MURB Owner/Manager    Energy performance compared to similar buildings 50% 33% 17% Environmental impacts from energy use 50% 17% 33% Total energy consumed to construct the building 0% 50% 50% Potential energy saving strategies (operational phase) 67% 17% 17% Cost savings that can be generated from energy savings 83% 17% 0% Government / External stakeholder    Energy performance compared to similar buildings 83% 17% 0%  57  Environmental impacts from energy use 50% 33% 17% Total energy consumed to construct the building 17% 83% 0% Potential energy saving strategies (operational phase) 100% 0% 0% Cost savings that can be generated from energy savings 83% 17% 0%  Total energy consumed to construct a building is the least important information expected from a rating tool by all stakeholder groups. MURB owners/managers showed the least interest in this criterion and none indicted this as very important. More than 80% of all stakeholder groups stated than energy saving strategies are important, while 100% of government/external stakeholders indicated that it is very important.   Operational cost (non-energy), detailed energy use breakdown, and capital cost for improvements are other pieces of information that responders stated they expect from an energy rating tool. In addition to the above information, researchers are also interested in the detailed methodology used by the rating system and the adjustability of the rating system for different conditions such as weather.  4.1.2 Need for a new tool Review of the existing energy rating systems revealed that they do not fully satisfy the needs of the stakeholder groups discussed in Section 4.1.1. Therefore, a new tool is needed to address the identified needs of stakeholders. The literature review identified that they do not incorporate life cycle thinking or uncertainties of data associated with assessment. Furthermore, resource intensiveness and complicated evaluations are major shortcomings of existing rating systems.  A new energy assessment tool was developed to address the above gaps in existing practise. The literature review and the results from the survey were used to determine the scope of the assessment and methodologies for the assessment process. The rest of this chapter discusses the proposed energy performance assessment tool.    
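As an illustration of how the proposed tool treats the uncertainty and vagueness just mentioned, a qualitative judgement can be carried into the assessment as a fuzzy vector over the five performance levels used later in Chapter 4, rather than being forced into a single crisp category. The sketch below is a hypothetical example of that encoding; the function name and values are illustrative only, and the split memberships mirror the style of the case-study inputs in Chapter 5 (for example, an indicator judged "mostly good, partly very good" recorded as 0.2/0.8).

# Hypothetical sketch: encoding a vague qualitative judgement as a fuzzy
# membership vector over the five performance levels of the proposed tool.
LEVELS = ("very good", "good", "average", "poor", "very poor")

def assessment(**membership):
    """Build a normalised membership vector over the five levels."""
    vec = [float(membership.get(level.replace(" ", "_"), 0.0)) for level in LEVELS]
    total = sum(vec)
    if total == 0:
        raise ValueError("at least one membership value is required")
    return [v / total for v in vec]

# An assessor who is fairly sure maintenance procedures are "good" but sees
# some "very good" practice records the judgement with its uncertainty intact.
maintenance = assessment(very_good=0.2, good=0.8)
print(dict(zip(LEVELS, maintenance)))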
58  4.2 Overview of the Proposed Assessment Tool Based on the comprehensive literature review it was identified that existing energy rating systems do not provide a comprehensive assessment of energy use and its related impacts. A new assessment tool was proposed to address stakeholder concerns and assess energy related impacts identified from literature review and the expert consultation.   Most of the energy rating systems provide a single indicator-based assessment (Section 2.4). However, studies show that performance assessment should move towards a more comprehensive approach rather than a single indicator assessment (Binder, Feola, & Steinberger, 2010; Gasparatos, El-Haram, & Horner, 2008; Kaufmann & Cleveland, 1995). Therefore, a multi-indicator based approach was selected for this new energy assessment tool. Indicator-based assessment tools are increasingly being used as they take into account multiple criteria (Binder et al., 2010; Gasparatos et al., 2008; Hemphill, Berry, & McGreal, 2004; Juwana, Muttil, & Perera, 2012). This is mainly because an indicator based approach helps to aggregate different concerns and allows further analysis (where necessary) of concerns by introducing sub-categories (Juwana et al., 2012).   Table 3:1 lists the key performance indicators (PI) identified based on the literature review. These indicators are categorised into operational rating and asset rating (Figure 3.1). Operational rating is sub-categorised into energy performance and environmental performance, while asset rating focuses solely on asset performance (Figure 3.1). A combination of both asset rating and operational rating enables the building owners/managers to adopt different approaches to improving the energy performance of the building. Poor asset rating indicates that retrofits should be considered to improve energy performance. A building with a poor operational rating and a good asset rating indicates that operational changes are the most important factor to improving energy performance.  4.3 Scope of the Proposed Energy Performance Assessment Tool This framework is defined to assess energy performance of MURBs at the building level. The building was selected as the unit of assessment because of the nature of energy consumption in MURBs. Energy consumption of common spaces such as corridors, laundry, elevators, etc. will  59  be ignored if the individual apartment’s energy consumption is considered. Hence it will not reflect the total energy consumption. Further, there exists a wide range of energy metering in these buildings. Some MURBs have energy meters for both electricity and natural gas at the individual apartment level, while some have electricity meters at the apartment level and natural gas meters at the building level. Some buildings have both meters at the apartment level.  This assessment tool focuses only on energy consumption during operation of the building as it targets existing MURBs. Energy consumption during initial stages of the building life cycle, such as at construction, were not considered in the assessment as they cannot be changed during the operational stage. Further, energy consumption related to the demolition stage of the building is not considered in this assessment as it depends on the method of initial construction and cannot be changed during the operational stage. However, when developing an assessment tool for new construction these life cycle stages should be considered. 
If they are not, net-zero and low-energy buildings will have an unfair advantage over traditional buildings, as net-zero and low-energy buildings consume less energy during the operational stage but consume more energy for construction, and their advanced materials consume more energy in manufacturing (Giordano et al., 2015; Lützkendorf et al., 2014; Ramesh et al., 2010; Verbeeck & Hens, 2010).

The following are the unique features of the proposed energy assessment tool:
- Current energy rating systems fall into the categories of either asset rating or operational rating. The proposed approach combines both approaches.
- Current energy rating systems overlook the life cycle impacts of energy use. The proposed approach provides a more comprehensive review of building energy performance by incorporating life cycle thinking.
- The use of fuzzy set theory allows qualitative data to be used and incorporates the uncertainties associated with data into the assessment process. Existing energy rating systems can consider only quantitative data. This approach allows a more wide-ranging framework for assessing energy performance and its impacts.
- The inclusion of cost criteria will help to address stakeholder concerns and make the assessment more inclusive.
- The indicator-based approach used in the proposed tool will facilitate modifying the tool for use with other building types.

4.4 Energy Performance Assessment Methodology
The methodology of the energy assessment tool proposed in this study is shown in Figure 4:1. Data for the operational rating will be obtained from utility bills, estimates, contracts, and BOQs by consulting the building owners/managers. For the asset rating, the condition of key building components will be assessed by direct observation and through expert consultation.

PIs for the operational rating and the asset rating will be calculated by comparing the observed values against benchmarks. These benchmarks were determined through a literature review and by collecting data from organisations such as Natural Resources Canada and BC Housing. PIs are aggregated using fuzzy synthetic evaluation (FSE), as discussed in Section 2.6.2, to determine the performance of each category (i.e. environmental, economic, and energy performance). These categories are then aggregated again using FSE to determine the overall operational rating. The operational and asset ratings are combined using fuzzy rules to determine the overall energy performance of the building (Section 4.7). An unsatisfactory score in the energy rating system flags the need for retrofits or operational changes. If the asset rating score is unsatisfactory while the operational rating is adequate, investing in energy-related assets is needed to improve energy performance. On the other hand, an unsatisfactory operational score with adequate asset conditions may reflect the need for better management policies and improved occupancy patterns.

Acceptable benchmarks were defined through expert consultation, a literature review, and secondary data from Natural Resources Canada, adjusted to the local conditions of the region and to building characteristics. Benchmarks for PIs were defined for five performance categories (i.e. very good, good, average, poor, very poor). The developed benchmarks are discussed in Section 4.6.
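The retrofit-versus-operations flagging described above can be sketched as a simple rule, shown below. The threshold for an "adequate" rating is an illustrative assumption; in the actual tool the two ratings are combined through the fuzzy rules of Section 4.7.3 rather than through this simplified check.

# A minimal sketch (not the thesis implementation) of the Section 4.4 flagging
# logic: a weak asset rating points towards retrofits, while a weak operational
# rating with an adequate asset rating points towards management or behavioural
# changes. The "adequate" threshold is assumed for illustration.
LEVELS = ["very poor", "poor", "average", "good", "very good"]

def recommendation(operational: str, asset: str, adequate: str = "average") -> str:
    """Return a coarse improvement recommendation from the two ratings."""
    ok = LEVELS.index(adequate)
    op_ok = LEVELS.index(operational) >= ok
    as_ok = LEVELS.index(asset) >= ok
    if op_ok and as_ok:
        return "maintain current practices; look for incremental savings"
    if not as_ok and op_ok:
        return "consider energy retrofits (asset condition is the weak point)"
    if as_ok and not op_ok:
        return "review management policies and occupancy patterns"
    return "combine retrofits with operational changes"

print(recommendation("good", "poor"))       # retrofit-oriented advice
print(recommendation("poor", "very good"))  # operations-oriented advice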
61   Figure 4: 1: Methodology of the energy rating system framework 4.5 Life Cycle Assessment in the Proposed Assessment Tool The goal of this LCA is to assess the environmental impacts of energy use during the operation of the building. Construction and demolition stages of the building are not considered in this assessment tool as the focus in this study is on existing buildings. Construction and demolition stages are not assessed because assessing the construction stage will not help in making recommendations to enhance the energy performance of existing buildings. Moreover, it is difficult to collect accurate data for the construction stage as many of the related documents are not available. Since the demolition stage directly depends on the construction materials, it is not useful to evaluate that stage in the LCA.  62  Figure 4:2 illustrates the LCA system boundary of the proposed assessment methodology. Impacts from operational energy use and energy retrofits are considered in determining impacts. Energy retrofits are introduced to reduce energy consumption. Reduction in energy consumption will reduce impacts, but life cycle burdens of retrofits may exceed the apparent benefit achieved by reducing energy consumption. Hence it is important to consider both energy use and introduced retrofits in assessing the total environmental burden. Life cycle impacts of energy sources such as electricity and natural gas are considered from their point of generation to the arrival on site (building). Impacts of energy retrofits are considered from material extraction to transport to the site in this LCA.   Figure 4: 2: LCA system boundary Impacts of operational energy sources are not limited to energy consumed within the building, but also due to energy loss during transmission from point of generation. Therefore, to assess these impacts, source energy consumption is considered rather than site energy consumption. Site energy is the energy measured at the location of the end user (building) and source energy is the energy produced at the point of generation. Published source energy conversion factors were used to calculate source energy consumption to consider the losses in transmission.  Source energy factors for this study were found from the following sources: The Energy Consumption and Conservation in Mid and High Rise Residential Buildings in British Columbia  63  (RDHBuilding Engineering Ltd, 2012) and Energy Star Portfolio Manager (Energy Star, 2013). Table 4:3 below shows the source energy factors used in this study.  Similarly, the material extraction, manufacturing and assembly processes, installation, operation, and disposal of energy retrofits lead to life cycle impacts. These life cycle stages are considered in the assessment tool to have a comprehensive understanding of the environmental burden caused by the retrofits of a building.  Table 4:3: Energy Conversion Factors Source Site to source conversion factors Electricity 1.11 Natural gas 1.03 Oil 1.03 Wood and wood pellets 1 Propane 1 Solar 1 Geothermal 1 Bio Mass 1 4.6 Benchmarks Energy consumption data were collected via communications with Natural Resources Canada Energy Star Portfolio, BC Housing, and Fisher Resource Efficiency Solutions Company to establish energy consumption benchmarks.  Table 4:4 shows the benchmark data used for energy intensity. Table 5:5 shows the percentage of energy consumption based on the energy sources.  
Table 4:4: Average Energy Consumption
City: Mean Electricity Use (GJ/m2)
Burnaby: 0.77
Vancouver: 0.88
Victoria: 0.54
All British Columbia: 0.77

Table 4:5: Percentage Energy Consumption
City: Electricity Usage % / Natural Gas Usage %
Burnaby: 37 / 63
Vancouver: 34.5 / 65.5
Victoria: 60 / 40
All British Columbia: 41.5 / 58.5

For environmental PIs, observed values were determined using the actual energy sources and the amount of energy drawn from those sources. The provincial/city distribution (Table 4:5) was used to estimate the benchmark values for the environmental PIs. The total actual energy consumed by the building was assumed to have the same distribution as the provincial/city averages. The energy from the different sources, calculated using the provincial/city distributions, was then used to establish the environmental PI benchmarks for the building under consideration.

Based on the above benchmark values, the performance ratio for each quantitative indicator is calculated as:

Performance ratio_i = Observed performance of the building for indicator i / Benchmark for performance indicator i
Equation 4:1: Performance Ratio

Scores for each indicator were assigned based on the ratio of the actual building's performance to the benchmark (Equation 4:1). Table 4:6 shows the performance score corresponding to each performance ratio. These scores were then used in the calculation process discussed in Section 4.7.

Table 4:6: Performance Ratio and Performance Score (Performance Ratio: Score, Linguistic Term)
>= 2.0: 0, Very poor
1.9: 5, Very poor
1.8: 10, Very poor
1.7: 15, Very poor-Poor
1.6: 20, Very poor-Poor
1.5: 25, Very poor-Poor
1.4: 30, Poor
1.3: 35, Poor-Average
1.2: 40, Poor-Average
1.1: 45, Poor-Average
1.0: 50, Average
0.9: 55, Average-Good
0.8: 60, Average-Good
0.7: 65, Average-Good
0.6: 70, Good
0.5: 75, Good-Very good
0.4: 80, Good-Very good
0.3: 85, Good-Very good
0.2: 90, Very good
0.1: 95, Very good
0: 100, Very good

Asset rating benchmarks (for AS-1 to AS-5) were established based on the National Energy Code of Canada for Buildings: 2011 (Canadian Commission on Building and Fire Codes, 2011). Table 4:7 lists the main code tables used as benchmarks to assess the condition of assets. The values stipulated in the energy code are compared with those observed in the building, and a performance score is assigned: if the observed value equals the value stipulated in the code, a score of 50 is assigned; if the observed value is better than the stipulated value, a score greater than 50 is assigned, and vice versa (Table 4:8).

Table 4:7: Reference Tables from the 2011 Energy Code
Subject: Energy Code Division / Part / Table
Lighting power allowances for general building exterior applications: B / Part 4 / 4.2.3.1.D
Unitary and packaged HVAC equipment performance requirements: B / Part 5 / 5.2.12.1
Service water heating equipment performance standard: B / Part 6 / 6.2.2.1

Qualitative indicators, such as the availability of sub-meters, the availability of maintenance procedures, and trained staff, were categorised as very good, good, average, poor, or very poor based on expert knowledge (Table 4:8). Performance under quantitative indicators was also converted to these categories based on the defined fuzzy membership functions (Section 4.7.1).
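The benchmark comparison of Equation 4:1 and the ratio-to-score conversion of Table 4:6 can be expressed compactly, as in the sketch below. At the tabulated ratios the table follows score = 100 - 50 x ratio (clamped to 0-100); treating this as a continuous formula between the 0.1 steps of the table, and the coarse linguistic banding in the sketch, are assumptions made for illustration.

# Sketch of Equation 4:1 and the Table 4:6 ratio-to-score conversion.
def performance_ratio(observed: float, benchmark: float) -> float:
    """Equation 4:1: observed performance divided by the benchmark value."""
    return observed / benchmark

def performance_score(ratio: float) -> float:
    """Map a performance ratio to the 0-100 score of Table 4:6."""
    return max(0.0, min(100.0, 100.0 - 50.0 * ratio))

def linguistic_band(score: float) -> str:
    """Rough label; Table 4:6 gives the exact term at each tabulated ratio."""
    if score >= 90: return "Very good"
    if score >= 70: return "Good to Very good"
    if score >= 50: return "Average to Good"
    if score >= 30: return "Poor to Average"
    if score >= 10: return "Very poor to Poor"
    return "Very poor"

# Example: an energy use intensity of 2.21 GJ/m2 (the EN-1 value reported for
# the Chapter 5 case study) against the 0.77 GJ/m2 provincial average of
# Table 4:4 gives a ratio of about 2.9, hence a score of 0 ("Very poor").
r = performance_ratio(2.21, 0.77)
print(round(r, 2), performance_score(r), linguistic_band(performance_score(r)))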
66  Table 4:8: Benchmarks Category Indicator Very Good Performance Good Performance Average Performance  Poor Performance  Very Poor Performance  Energy performance EN-1 100-70 90-50 70-30 50-10 30-0 EN-2 100-70 90-50 70-30 50-10 30-0 EN-3 Low Moderate High EN-4 Yes No EN-5 Yes No EN-6 Yes No EN-7 Yes No EN-8 Yes No EN-9 In house and contracted maintenance crew In house maintenance team In-house technician Only building manager No EN-10 Yes No Environmental performance EV-1 100-70 90-50 70-30 50-10 30-0 EV-2 100-70 90-50 70-30 50-10 30-0 EV-3 100-70 90-50 70-30 50-10 30-0 EV-4 100-70 90-50 70-30 50-10 30-0 EV-5 100-70 90-50 70-30 50-10 30-0 EV-6 100-70 90-50 70-30 50-10 30-0 EV-7 100-70 90-50 70-30 50-10 30-0 EV-8 100-70 90-50 70-30 50-10 30-0 EV-9 100-70 90-50 70-30 50-10 30-0 EV-10 100-70 90-50 70-30 50-10 30-0 EV-11 100-70 90-50 70-30 50-10 30-0 Economic performance EC-1 Very low Low Medium High  Very high EC-2 Very low Low Medium High  Very high EC-3 Very high High Medium Low  Very low Asset rating  AS-1 Based on the % remaining life compared to design life (Figure 4:3 to convert % linguistic terms) AS-2 Insulation and air permeability is 50% better than the building code Insulation and air permeability is 20% better than the building code Building code defined insulation and air permeability Moderate air permeability and heat loss High air permeability and heat loss AS-3 Efficiency and insulation is 50% better than the energy code Efficiency and insulation is 20% better than the energy code Energy code defined efficiency and insulation Efficiency and insulation is moderate but lower than energy code definition  Low efficiency and insulation  67  AS-4 Efficiency is 50% better than the energy code Efficiency is 20% better than the energy code Energy code defined efficiency  Efficiency is moderate but lower than energy code definition  Low efficiency AS-5 Efficiency is 50% better than the energy code Efficiency is 20% better than the energy code Energy code defined efficiency Efficiency is moderate but lower than energy code definition  Low efficiency 4.7 Aggregate Indicators 4.7.1 Indicator assessment As discussed in Section 2.6.1, fuzzy sets were used in determining the value for each indicator. Five categories (very good, good, average, poor, very poor) were used in the performance assessment of each indicator (Bates & Young, 2003; Kawamura & Miyamoto, 2003; L.-X. Wang & Mendel, 1992). Fuzzy membership functions were developed for each category to be used in the assessment. These membership functions are shown in Figure 4: 3 and Equations 4:2-4:6.   
Figure 4:3: Fuzzy Membership Functions (membership value versus performance rating, 0 to 100, for the Very Poor, Poor, Average, Good, and Very Good sets)

For very poor performance:
Very Poor(x) = 1 for x <= 10; = -0.05x + 1.5 for 10 < x <= 30; = 0 for x >= 30
Equation 4:2: Fuzzy Membership Function, Very Poor

For poor performance:
Poor(x) = 0 for x <= 10; = 0.05x - 0.5 for 10 < x <= 30; = -0.05x + 2.5 for 30 < x < 50; = 0 for x >= 50
Equation 4:3: Fuzzy Membership Function, Poor

For average performance:
Average(x) = 0 for x <= 30; = 0.05x - 1.5 for 30 < x <= 50; = -0.05x + 3.5 for 50 < x < 70; = 0 for x >= 70
Equation 4:4: Fuzzy Membership Function, Average

For good performance:
Good(x) = 0 for x <= 50; = 0.05x - 2.5 for 50 < x <= 70; = -0.05x + 4.5 for 70 < x < 90; = 0 for x >= 90
Equation 4:5: Fuzzy Membership Function, Good

For very good performance:
Very Good(x) = 0 for x <= 70; = 0.05x - 3.5 for 70 < x < 90; = 1 for x >= 90
Equation 4:6: Fuzzy Membership Function, Very Good

Three scenarios (neutral, environmental, and stakeholder view) were considered in aggregating sub-categories of energy rating, as discussed in Section 4.7.2. Weights for individual PIs remained constant in all three scenarios. Based on the literature review and expert consultation (Section 3.4) it was identified that annual energy consumption (EN-1) is the most important PI in energy performance. Therefore, it was assigned a weight of 50% within the energy performance category while all other PIs in the energy performance category were treated equally (See Table 4:9). Further, it was identified from expert consultation (Section 3.4) and literature review that global warming potential (EV-2) is significantly important compared to other PIs. Therefore, EV-2 was assigned a weight of 50% while others were assigned equal weights (See Table 4:9). Weights of the indicators for PIs in asset rating were determined based on expert consultation (See Table 4:9).

Table 4:9: Weights of Performance Indicators
Energy performance: EN-1 50.00%; EN-2 through EN-10 5.56% each
Environmental performance: EV-2 50.00%; EV-1 and EV-3 through EV-11 5.00% each
Economic performance: EC-1, EC-2, and EC-3 33.33% each
Asset rating: AS-1 16%, AS-2 22%, AS-3 23%, AS-4 18%, AS-5 21%

4.7.2 Category weights
Fuzzy synthetic evaluation adopts a weighted aggregation method. This study adopts three scenario-based weighting schemes through published literature and weights determined through expert consultation. The eco-centric scenario gives more emphasis to the environmental performance compared to the neutral scenario, while the neutral scenario gives equal importance to all three performance categories (Table 4:10). The weights of the third scenario are based on the expert consultation discussed in 3.4.3. Overall energy performance was determined by aggregating operational rating and asset rating using the fuzzy rules described in Section 4.7.3.

Table 4:10: Scenario-Based Weighting for Operational Performance
Category: Neutral scenario / Eco-centric scenario / Stakeholder view scenario
Environmental performance: 33.33% / 50% / 24.5%
Energy performance: 33.33% / 25% / 43.9%
Economic performance: 33.33% / 25% / 31.6%

4.7.3 Fuzzy rules
Fuzzy rules developed through literature and expert opinion are presented in Table 4:11. These rules are used to determine the overall rating based on the asset rating and operational rating. Expert opinion was used in developing the fuzzy rules for the rating system.
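A compact sketch of the fuzzification step (Equations 4:2 to 4:6) and a weighted FSE aggregation is shown below; the aggregated vectors for the operational and asset ratings are then combined with the rule base in Table 4:11 below. The weighted sum used here is an assumption consistent with the description of FSE as a weighted aggregation (the thesis refers to Equation 2:2 for the exact operator), so treat it as an illustration rather than the tool's implementation.

# Sketch of Equations 4:2-4:6 and a weighted aggregation of indicator vectors.
def memberships(x: float):
    """Membership degrees (Very Poor, Poor, Average, Good, Very Good) of a
    0-100 performance score x, following Equations 4:2-4:6."""
    def tri(a, b, c):
        # Triangular set rising on [a, b] and falling on [b, c].
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    very_poor = 1.0 if x <= 10 else (0.0 if x >= 30 else (30 - x) / 20)
    very_good = 1.0 if x >= 90 else (0.0 if x <= 70 else (x - 70) / 20)
    return [very_poor, tri(10, 30, 50), tri(30, 50, 70), tri(50, 70, 90), very_good]

def fse_aggregate(scores, weights):
    """Weighted aggregation of the membership vectors of several indicators
    (assumed weighted-sum form of the FSE operator)."""
    vectors = [memberships(s) for s in scores]
    return [sum(w * v[k] for w, v in zip(weights, vectors)) for k in range(5)]

# Example: three indicators scoring 20, 55 and 95 with weights 0.5/0.3/0.2.
vector = fse_aggregate([20, 55, 95], [0.5, 0.3, 0.2])
print([round(m, 3) for m in vector])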
  71  Table 4:11:  Fuzzy rules for building energy rating Operational Rating Asset Rating  Building Energy rating Very Good Very Good Very Good Very Good Good Good Very Good Average Good Very Good Poor  Poor Very Good Very Poor Poor Good Very Good Good Good Good Good Good Average Average Good Poor  Poor Good Very Poor Very Poor Average  Very Good Good Average Good Average Average Average Average Average Poor  Poor Average Very Poor Very Poor Poor Very Good Poor Poor Good Poor Poor Average Poor Poor Poor  Poor Poor Very Poor Very Poor Very Poor Very Good Poor Very Poor Good Very Poor Very Poor Average Very Poor Very Poor Poor  Very Poor Very Poor Very Poor Very Poor  4.8 Summary The literature review identified the need for a comprehensive energy rating system that incorporates life cycle thinking and is not limited to energy performance. The questionnaire survey highlighted that stakeholders are interested in economic and environmental impacts. Further, stakeholders recognised the importance of asset performance.  Based on the energy consumption patterns and availability of metering in MURBs, the proposed tool assesses energy performance at the building level rather than at the individual apartment level. Since the focus of this study is existing buildings, this tool pays attention only to the operational stage of the building.   72  The indicator based assessment tool proposed in this study includes a comprehensive energy performance evaluation that covers energy performance, environmental performance, economic performance, and asset performance. Fuzzy sets were used to assess performance under each indicator proposed in the new tool. Use of fuzzy sets allows incorporation of uncertainties associated with the performance assessment. Fuzzy synthetic evaluation was used to aggregate indicators and sub-categories under the main performance categories: operational rating and asset rating. The main performance categories were aggregated using fuzzy rules to determine the overall building energy performance.  Benchmarks for the performance indicators were developed based on the data collected from Natural Resources Canada, BC Housing, Fisher Resource Efficiency Solutions Company, and the National Energy Code of Canada for Buildings: 2011. In this study, attention was primarily paid to conditions in British Columbia when developing indicators.   73  Chapter 5: Web Tool and Case Study 5.1 Web Tool Internet has played a significant role in making effective tools to help in data collection, processing, and analysing (Ho et al., 2004). Further , web based tools have provided a user friendly and easy access, platforms which is helpful in promoting different concepts (Russell, Torralba, Murphy, & Freeman, 2008).  The assessment process of the proposed building energy performance assessment includes several steps: determining the performance rating for indicators, fuzzifying the performance rating, and using FSE to aggregate indicators and determine the final rating. A web-based tool was developed, using Java, to facilitate this building energy performance assessment process. Standard LCA databases such as Athena, and Ecoinvent were used to determine the LCA database of the web tool. This web tool will make the building energy performance assessment process more user-friendly and efficient. This tool was developed as a flexible tool allowing the user to adjust weights for performance indicators, as the main purpose of the tool is to promote the methodology. 
The tool is expected to be hosted on either UBC or PICS servers.   5.1.1 Web tool development Java programming language was used to develop the web tool for the proposed building energy performance assessment. Java Development Kit 8 (JDK 8) was used in this study to support the use of the Java platform. NetBeans (Version 8.0.2) was used as an Integrated Development Environment (IDE), which helped in coding the software programming to develop the web tool. MySQL (version 5.6.17) was used as the database to store user data and benchmarks, and to perform necessary calculations to determine the overall building energy performance. Hibernate 4.3.1 framework was used to communicate with databases and the web tool. The developed tool was tested on Apache Tomcat 8,9 and Glassfish 4.1 to ensure the server compatibility. Therefore, this web tool can be hosted on servers compatible with Apache Tomcat or Glassfish. Bootstrap (version 3.3.1) framework was used for responsive design of the web tool, which facilitates the use of both desktop and mobile devices.   74  5.1.2 Web tool implementation Figure 5:1 shows the welcome screen of the tool. This will provide the tool user the option of rating a building after signing up, or continuing directly without signing up. Signing up will allow the user to save the data and access it later. Direct assessment without signing up will not have this advantage.   Figure 5:1: Welcome Screen Detailed information of the rating tool can be seen when scrolling down this page (Figure 5:2). This will allow the user to understand the evaluation methodology used in this tool. Figure 5:3 shows the sign up/sign in screen that will be available if the user selects the option. Once the user selects to proceed with the assessment (either by sign-up or without sign-up), he/she will be prompted to enter the building information as shown in Figure 5:4. The next steps will guide the user to enter information needed to assess the PIs for the building as shown in Figure 5:5. The top of Figure 5:5 shows where they are in the assessment process and how many steps are left to complete the assessment. The final tab of Figure 5:5 will display the result of the rating.   75   Figure 5:2: Additional information  76    Figure 5:3: Sign up/ Sign in    Figure 5:4: Building Information  77   Figure 5:5: Assessment Data   78  5.2 Case Study 5.2.1 Project information  The proposed assessment tool was demonstrated through a case study to assess the Purcell building at UBC Okanagan. This is a student residence at UBC Okanagan. The Purcell building (Figure 5:6) is a five-storey, wooden frame building that was completed in August 2011. It is a 68,213 sq. ft. residence with 212 beds. In addition to using different energy efficiency technologies to reduce energy use, this building also uses geothermal heating/cooling. However, if the geothermal heating system is not adequate to maintain the indoor temperature levels, natural gas is used to meet the additional energy requirement.  Figure 5:6: Purcell Residence UBC Okanagan The case study was conducted based on two different energy simulations: one for the actual design and another for the building using BC Building Code of 2012 for minimum requirements for energy performance. Though the actual electricity and natural gas consumption data was available, data was not available for the geothermal energy used. Since geothermal energy is the main source for heating and cooling, analysis based on the actual energy use could not be performed. 
Therefore, the case study was performed by treating the Purcell building as a new construction. Two energy simulations (one for the actual design and one for a design meeting the minimum requirements of the 2012 BC Building Code) were conducted to predict the total energy demand of the building.

5.2.2 Energy consumption and LCA simulation
DesignBuilder software was used to perform the energy simulations for the actual design of the building and for the building meeting the minimum requirements of the 2012 BC Building Code. The model initially developed for the Purcell Residence by Feng (2013) was adjusted, and two models were developed to reflect the actual design and the 2011 energy code requirements. As per the simulation, the reference building consumed 555241.42 kBtu of electricity, 62758.56 kBtu for district cooling, and 11782031.31 kBtu for district heating as site energy. The respective values for the actual design were 555241.42 kBtu, 64375.10 kBtu, and 11738454.44 kBtu. Summary tables of the energy consumption from the models are provided in Appendix E. Total energy use of the two models is approximately equal. This could be because the building was designed to the building code in force prior to 2012. Therefore, even though the actual design performs only slightly better than the 2012 building code, it would have performed significantly better than the previous (2006) building code requirements.

The provincial average use of different energy sources was obtained from the Energy Star Portfolio of Natural Resources Canada. The unit costs for the different sources were obtained from BC Hydro, Natural Resources Canada, and UBC Okanagan's Facilities Management. The LCA analysis was conducted using Athena IE. Appendix F provides the results of the Athena LCA simulation.

Figure 5:7: DesignBuilder Model of the Purcell Building

5.2.3 Energy performance assessment
Table 5:1 details the performance level for each indicator of the proposed energy performance assessment tool.
Table 5:1:Energy Performance Assessment of the Purcell Residence* Indicator Weight Monitored indicator values Unit Very Good Performance Good Performance Average Performance Poor Performance Very Poor Performance Energy performance           EN-1 50.00% 2.21 GJ/m2/year 0 0 0 0 1 EN-2 5.56% 50% % 0 0 1 0 0 EN-3 5.56% Moderate KVA 0 0 1 0 0 EN-4 5.56% No Qualitative 0 0 0 0.6 0.4 EN-5 5.56% Yes Qualitative 1 0 0 0 0 EN-6 5.56% Yes Qualitative 1 0 0 0 0 EN-7 5.56% Yes Qualitative 0.2 0.8 0 0 0 EN-8 5.56% Yes Qualitative 0 0.8 0.2 0 0 EN-9 5.56% In house maintenance team and contracted Qualitative 0 0.6 0.4 0 0 EN-10 5.56% Yes Qualitative 1 0 0 0 0 Environmental performance            EV-1 10% 1.40E+02 kg 0 0 0 0 1 EV-2 50.00% 6.36E+02 kg CO2eq 0 0.3 0.7 0 0 EV-3 10% 6.47E-09 kg CFC-11eq 0.2 0.8 0 0 0 EV-4 10% 1.79E-02 kg PO4-eq 1 0 0 0 0 EV-5 10% 4.22E-01 moles of N or S eq 0.8 0.2 0 0 0 EV-6 10% 3.37E-02 kg PM2eq 0 0 0 0 1 Economic performance             EC-1 33.30% Low Qualitative 0.6 0.4 0 0 0 EC-2 33.30% Low Qualitative 0.8 0.2 0 0 0 EC-3 33.30% No retrofits Qualitative 1 0 0 0 0 Asset Rating            AS-1 20% 90% % 1 0 0 0 0 AS-2 20% Compared to energy code 2011 Qualitative 1 0 0 0 0  81  AS-3 20% Compared to energy code 2011 Qualitative 0 1 0 0 0 AS-4 20% Compared to energy code 2011 Qualitative 0.6 0.4 0 0 0 AS-5 20% Compared to energy code 2011 Qualitative 0 0.5 0.5 0 0  *Weights used for individual indicators were assigned based on the emphasis for these performance indicators in existing rating systems Based on equation 2:2 (Section 2.6.2) fuzzy vectors were calculated for each sub-category level. Table 5:2 shows the results for each category.  Table 5:2: Fuzzy Vector for Sub Category  Very Good Performance Good Performance Average Performance  Poor Performance  Very Poor Performance  Energy performance 0.18 0.12 0.14 0.03 0.52 Environmental performance  0.25 0.23 0.35 0.00 0.17 Economic performance   0.80 0.20 0.00 0.00 0.00 Asset Performance  0.49 0.41 0.11 0.00 0.00  Equation 2:2 was applied again to assess the three scenarios discussed in Section 4.7.2.  Table 5:3 summarises the results.  Table 5:3:Scenario Analysis (Fuzzy Vector for Operational Rating)  Very Good Performance Good Performance Average Performance  Poor Performance  Very Poor Performance  Operational Rating (Neutral Scenario) 0.41 0.18 0.16 0.01 0.23 Operational Rating (Eco-centric Scenario) 0.37 0.20 0.21 0.01 0.21 Operational Rating (Stakeholder View) 0.39 0.17 0.15 0.01 0.27 Following numerical values (Table 5:4) were assigned to linguistic assessment levels for defuzzification.    82   Table 5:4:Defuzzification  Very Good Performance Good Performance Average Performance  Poor Performance  Very Poor Performance  Numerical value 1 0.75 0.5 0.25 0  Using these values and fuzzy rules defined in Table 4:8, the following overall ratings (Table 5:5) were obtained.  Table 5:5:Purcell Residence Ratings Scenario Operational Rating Asset Rating Overall Rating Neutral Scenario  0.63 Good 0.64 Good Good Eco-centric Scenario 0.62 Good Good Stakeholder View 0.60 Good Good  5.2.4 Discussion Purcell Residence is a building constructed in 2001 with energy efficient and green features, and this the main reason for its good/very good performance for most of the performance indicators (Table 5:1). The Purcell building has good performance in indicators related to its assets and use of renewable energy; however, the energy performance of this building is poor. 
This may be because this is a university residence, and student residents are not concerned about their energy bill as it is included in the rent. Since the building uses renewable energy sources, it was able to perform well in environmental performance despite the poor energy performance. Even though the individual assets of the Purcell building perform well, significant improvement is needed with respect to metering. Currently there are no energy meters available to measure the geothermal energy used or the electricity generated by the solar panels. As a result, energy simulations are used in this case study in place of metered data.

In all three scenarios the operational rating was good; however, the building performs slightly better in the neutral scenario. Even though the overall performance level is good, when analysing each sub-category of the proposed assessment tool (Table 5:2) it can be seen that this building is weak in energy performance and environmental performance compared to its economic performance and asset rating. Hence attention should be paid to improving performance in these criteria. Since the asset rating is strong relative to the lagging categories, attention should be paid to behavioural changes to reduce energy consumption, establishing best management practices, and using more renewable sources to reduce environmental impacts.

5.3 Summary
A web-based tool was developed to increase the user-friendliness of the proposed method and to reduce the expertise needed to perform the assessment. The Java platform was used to develop this web-based tool. A user can create an account or directly perform the assessment for a building. Creating an account allows the user to save data, access it again at a later time, and make changes where necessary. Building properties and the performance levels for the PIs are the inputs for this web tool. The tool generates the final assessment using the fuzzy sets and FSE described in Chapter 4. The case study of the Purcell building was performed to demonstrate how the assessment is carried out using the indicators and methodologies discussed in Chapters 3 and 4. The Purcell Residence was completed in 2011 and uses recent energy-efficient technologies. The case study assessment was performed under three scenarios to assess the sensitivity of the results, based on weights determined from the literature and expert opinion. In all three scenarios the Purcell building received the same overall result, indicating that the weights and the methodology used are robust and give a consistent assessment for the building.

Chapter 6: Conclusions and Recommendations
This chapter presents the conclusions and contributions of this study. Further, the originality and limitations of the proposed building energy performance assessment and future research on this topic are discussed.

6.1 Conclusions
Building energy ratings have been successfully implemented to promote energy efficiency in the building sector. With increased attention to climate change and other environmental impacts, it is important that rating systems address these impacts rather than being limited to energy performance alone. Incorporating environmental impacts in rating systems will help to raise awareness of how energy use contributes to potential environmental impacts. This awareness will, in turn, help to explain the benefits of renewable energy sources. The objective of this study is to develop a life cycle thinking-based energy performance assessment method for MURBs to help achieve the above-mentioned goals.
Four sub-objectives were identified to achieve this broad objective: conducting a comprehensive review to identify the limitations and challenges of popular energy rating systems; developing a life cycle thinking-based framework to assess the energy performance of existing MURBs; developing a web-based tool based on the proposed methodology; and demonstrating the proposed energy performance assessment methodology through a case study.

Based on the literature review and the survey conducted in this study, it was identified that energy consumption and environmental impacts are not the only concerns of stakeholders; they are also interested in the economic impacts of energy use. Therefore, paying attention to economic impacts will help to make energy ratings more relevant to stakeholders. Based on the survey, it was identified that building owners and managers, building designers, and government stakeholders pay more attention to economic aspects than to environmental impacts. However, all stakeholder groups give priority to energy consumption.

Even though energy performance is the main focus of existing rating systems, they pay attention only to either total energy consumption or the asset rating. Total energy consumption is used in energy rating systems whose main purpose is to improve actual energy performance. Asset rating is used when the focus is on comparing different buildings without the effect of operational practices. Considering only one aspect overlooks the benefits of a combined approach.

The proposed method is developed as an internal management tool to help building owners and managers make informed decisions to improve the energy performance of a MURB. The proposed building energy performance assessment method combines both the operational rating and asset rating approaches to provide a comprehensive assessment that considers energy use, environmental impacts, and economic impacts. The combination of the operational rating and asset rating concepts will help building managers and owners consider both operational changes and retrofits in enhancing building energy performance. Further, the use of fuzzy concepts in the proposed method helps to address data uncertainty in assessing the performance indicators.

The developed web-based tool provides an interface that increases the user-friendliness of the proposed method and reduces the expertise needed to perform the assessment. Hence, the web tool will help to promote the proposed assessment method among building owners/managers. The case study conducted on the Purcell Residence showed that the proposed energy performance method is easy to apply once the basic building data and energy consumption data are available. Further, under the three scenarios considered in the case study, the building achieved the same overall performance, supporting the robustness of the proposed assessment tool.

6.2 Originality and Contributions
Current energy rating systems fall into the categories of either asset rating or operational rating. The proposed approach combines both these aspects in the same rating system. Moreover, current energy rating systems focus mainly on energy consumption alone and overlook the life cycle impacts of energy use. The proposed approach provides a more comprehensive review of building energy performance by considering both the asset rating and the operational rating.
This combined approach facilitates the consideration of both operational changes and retrofit options to improve the energy performance of a building. The incorporation of the triple bottom line sustainability concept into building energy rating and improvement will deliver long-term benefits at the multi-stakeholder level. The inclusion of economic concerns will assist decision makers in developing affordable MURBs by optimising the economic factors while improving the energy performance. This will help decision makers  86  such as building managers in selecting the most cost-effective performance enhancement strategies.  By considering the life cycle environmental and economic costs and benefits of energy use in MURBs, it is possible to obtain a more accurate view of the impacts, some of which may be hidden in traditional non-life cycle based analysis. Disregarding the life cycle impacts and focusing only on the directly perceived energy use related outcomes can accumulate hidden adverse impacts in the building’s energy life cycle. The proposed rating system applies this thinking in assessing both the operational energy use and the retrofits used for mitigating energy performance issues. Therefore, the proposed approach will be more suitable in achieving the climate action targets than conventional strategies.  Existing rating systems mainly focus on quantitative data. Even though quantitative data is useful to reduce the subjective judgment in assessment processes, using qualitative data for some indicators is useful to simplify the assessment process, especially where large numbers of indicators are used in the assessment. The proposed method incorporates both quantitative and qualitative data through the use of fuzzy set theory. Further, the use of fuzzy set theory allows the evaluators to reflect the ambiguity of available data in the assessment, rather than making a forceful judgment of these data.  The developed web-based tool is a flexible tool based on the proposed methodology to facilitate the assessment process by providing access at location and reducing the technical expertise needed to perform an assessment. Hence, this web tool will be extremely useful in promoting this new assessment framework.  6.3 Limitations Following are the main limitations of this study:  Model Limitations  This framework is designed to assess energy performance of MURBs at the building level and does not focus on the energy performance of individual apartments.  This assessment tool focuses only on the energy consumption during operation of the building as it targets existing MURBs.  87   Impacts of energy use are calculated based on production only. End use of energy is not considered in this assessment.   Data Limitations  The sample size for the expert consultations was determined based on assumptions about expected responses. As a result, the statistical significance of the results can vary.   Most of the building owners and managers participating in this study were from Kelowna. An assumption is made that this is generalizable to all of British Columbia.  Data for the building energy performance benchmarking were mainly based on the lower mainland and Victoria. Data for other areas of the province were not readily available.  LCA analysis was conducted based on the available databases.  More location specific LCA data will enhance the accuracy of the assessment.  
6.4 Future Research

The developed building energy performance assessment tool is an initial step toward promoting energy efficiency in the built environment. The tool helps to identify the existing condition of buildings; however, further research is needed to identify potential interventions and best management practices for improving a building's performance from its existing level. The effectiveness of those interventions in enhancing a building's energy performance also needs to be explored, as stakeholders are interested in the payback of their investments. Combining knowledge of such interventions and best management practices would give tool users a better idea of potential improvements and help attract more users to the tool. Further, since this is an indicator-based rating system, future studies may allow the tool to be modified to assess the energy performance of building types other than MURBs.

Based on the survey, 53% of respondents indicated that they would like to see mandatory energy rating tools, while 31% preferred voluntary rating tools and 16% had no opinion. Respondents also indicated that more incentives are needed to attract building owners to spend time and money on energy performance assessments. Further studies are needed to identify effective incentives that will attract more building owners and managers to energy performance assessments; greater participation will ultimately help reduce environmental impacts such as greenhouse gas emissions and achieve the environmental targets set out by governments.
Appendix A: Energy Consumption Data

Table A 1: Sample Energy Consumption Data — BEPI Comparison (ekWh/ft2)

Site Code   BEPI 2005   BEPI 2014   BEPI 2015   % Change 2015 vs. 2005   % Change 2015 vs. 2014
101              51           6          20                     -61%                      232%
102              23          14          13                     -44%                      -11%
103              35          20          18                     -48%                       -9%
104              31          19          18                     -41%                       -5%
105              30          18          18                     -40%                       -3%
106              42          27          26                     -38%                       -4%
107              25          20          17                     -30%                      -12%
108              16          11          10                     -36%                       -6%
109              29          22          23                     -22%                        1%
110              17           3           3                     -82%                      -13%
111              32          19          16                     -48%                      -14%
112              22          14          13                     -43%                      -13%
113              26          20          19                     -28%                       -4%
114              23          13          11                     -50%                      -11%
115              20           5           5                     -77%                       -1%
116              20          15          13                     -37%                      -14%
117               9           8           7                     -24%                      -16%
118              25          17          14                     -46%                      -18%

Appendix B: Questionnaire

Energy Assessment Tool for Multi-Unit Residential Buildings in British Columbia

Thank you for agreeing to participate in this survey. The purpose of the survey is to study the use of energy assessment tools and rating systems for Multi-Unit Residential Buildings (MURBs). The survey should take approximately 20 minutes to complete. All information received will be treated in the strictest confidence and in accordance with the UBCO code of conduct.

The terms energy assessment tool and energy rating system mentioned here refer to tools and rating systems used only to evaluate or assess the energy performance of a building. We are not focusing on the construction, site condition, or water use of a building.

Please select the most appropriate category applicable to you.
- MURB Owner/Manager
- Designer/Engineer (Building/Energy System)
- Government/External stakeholder
- Researcher/Academia

Years of experience in the building/energy sector: ………………………. Years

Section 1

1.1 Please rate the importance of the different information you expect from an energy assessment tool. (Please tick the appropriate rating: Very Important / Important / Not Important)
- Energy performance compared to similar buildings
- Environmental impacts from energy use
- Total energy consumed to construct the building
- Potential energy saving strategies (operational phase)
- Cost savings that can be generated from energy savings
- Other (Specify)
- Other (Specify)

1.2 What energy rating systems do you/your organization use or recommend to rate the energy performance of MURBs?
(Please select all that are applicable)
- EnerGuide
- R2000
- Energy Star (Canada)
- Other (Specify)
- None
- Don’t know

Section 2

Table 1 (for reference only) shows the criteria to be used to rate the indicators in Table 2 [very high (5), high (4), average (3), low (2), very low (1)].

Table 1: Rating Criteria for Indicators (for reference only)
- Very high (5): must be included; the indicator is highly relevant and highly important for energy consumption or the impacts of energy use
- High (4): highly relevant and of average importance for energy consumption or the impacts of energy use
- Average (3): average relevance and average importance for energy consumption or the impacts of energy use
- Low (2): the indicator has low relevance and importance for energy consumption or the impacts of energy use
- Very low (1): seems to be irrelevant for energy consumption or the impacts of energy use

2.1 Table 2 shows the proposed indicators for an energy assessment tool for Multi-Unit Residential Buildings. Based on the ratings provided in Table 1, please rate the indicators in Table 2. Please tick the applicable rating for each indicator [very high (5), high (4), average (3), low (2), very low (1)].

Table 2: Indicators of the rating system (each indicator is rated 5, 4, 3, 2, or 1)

Operational Rating — Energy performance
- Annual energy consumption: total energy consumption of the building per annum
- Renewable energy consumption: amount of energy used from renewable sources
- Peak demand: the time of day with the highest demand for energy, and the level of that demand
- Availability of sub-metering: meters available to measure the energy use of each apartment and from different sources
- Availability of an energy recovery ventilation system: a system that reclaims waste energy from the exhaust air stream and uses it to treat the incoming air
- Availability of combined heat and power: a system that simultaneously produces heat and electrical or mechanical power by capturing the rejected heat from an electricity generation process in the building
- Energy-efficient operating procedures: procedures to guide users toward more energy-efficient practices
- Availability of energy monitoring: a system to review current energy use and take necessary actions
- Trained staff for building management: trained staff to manage the building functions and equipment
- Availability of maintenance schedules: a regular mechanical and electrical systems maintenance schedule

Operational Rating — Environmental performance
- Water depletion: measures the impact on water sources
- Global warming potential: measures the impact on global warming through greenhouse gas emissions
- Ozone depletion potential: measures the impact on the depletion of the ozone layer
- Nutrification/eutrophication potential: measures the contribution to excess nutrients in water, which leads to dense plant growth and the death of fauna due to lack of oxygen
- Heavy metal: measures the contribution to heavy metal waste
- Smog potential: measures the contribution to smog formation
- Acidification potential: measures the contribution to acidic gas emissions
- Radioactive waste/eco-toxicity: measures the potential to generate radioactive waste
- Habitat alteration: measures the impact on changing the habitats of different species
- Human health respiratory effects potential: measures the contribution to pollutants causing respiratory issues
- Carcinogens: measures the contribution to pollutants that could cause cancer

Operational Rating — Economic performance
- Operational costs: operational cost of the energy system, including the cost of purchased power
- Maintenance costs of the energy system: maintenance cost of the energy systems
- Return on investment (ROI) from retrofits: savings generated in operation and maintenance costs due to energy retrofits

Asset Rating — Asset performance
- Remaining service life: expected remaining life of assets
- Condition of the thermal characteristics of the building: includes characteristics that affect the heat exchange between the indoor and outdoor environments
- Efficiency of the heating installation and hot water supply, including their insulation method: efficiency of the heating and hot water systems
- Efficiency of the air-conditioning installation, where installed: efficiency of the air conditioning system
- Efficiency of the artificial built-in lighting: efficiency of the lighting system operated with external power

2.2 The indicators in the above table (Question 2.1) are categorised into two main criteria: operational rating and asset rating. Please indicate the relative importance of these two criteria by ticking the appropriate column (Criterion 1 is more important / Criterion 1 and 2 are equally important / Criterion 2 is more important).
- Operational rating vs. Asset rating

2.3 The indicators under the operational rating (Table 2 of question 2.1) are categorised into energy performance, environmental performance, and economic performance. Please indicate the relative importance of these three criteria by ticking the appropriate column (Criterion 1 is more important / Criterion 1 and 2 are equally important / Criterion 2 is more important).
- Energy performance vs. Environmental performance
- Energy performance vs. Economic performance
- Environmental performance vs. Economic performance

2.4 Following are the indicators for the asset rating (Table 2 of question 2.1). Please indicate the relative importance of these indicators.
(Please indicate the relative importance by ticking the appropriate column: Criterion 1 is more important / Criterion 1 and 2 are equally important / Criterion 2 is more important.)
- Remaining service life vs. Condition of the thermal characteristics of the building, including the air permeability
- Remaining service life vs. Efficiency of the heating installation and hot water supply, including their insulation method
- Remaining service life vs. Efficiency of the air-conditioning installation, where installed
- Remaining service life vs. Efficiency of the artificial built-in lighting
- Condition of the thermal characteristics of the building, including the air permeability vs. Efficiency of the heating installation and hot water supply, including their insulation method
- Condition of the thermal characteristics of the building, including the air permeability vs. Efficiency of the air-conditioning installation, where installed
- Condition of the thermal characteristics of the building, including the air permeability vs. Efficiency of the artificial built-in lighting
- Efficiency of the heating installation and hot water supply, including their insulation method vs. Efficiency of the air-conditioning installation, where installed
- Efficiency of the heating installation and hot water supply, including their insulation method vs. Efficiency of the artificial built-in lighting
- Efficiency of the air-conditioning installation, where installed vs. Efficiency of the artificial built-in lighting

Section 3

3.1 What are the potential barriers to implementing the proposed energy assessment tool? If you have any suggestions to overcome those barriers, please indicate them. (E.g., lack of incentives for energy efficiency enhancements)

3.2 Please provide any other comments on the proposed energy assessment tool. (E.g., include additional indicators to measure ……)

3.3 Do you think energy rating systems (e.g., EnerGuide, Energy Star) or energy assessment tools should be mandatory for assessing the energy performance of residential buildings? (Please select the applicable response: Yes / No, it should be voluntary)

3.4 Please provide any other comments, if any.

Thank you for participating in this study and providing your valuable inputs.
Appendix C: Ethics Approval and TCPS 2 Certification

Appendix D: Modified Digital Logic Analysis

Modified Digital Logic to Determine Weights Based on Survey Responses

Operational Rating

Table D1: MDL analysis for operational rating (pairwise scores, row total, and weight = row total / group total for each respondent group)

Researchers
  Energy performance:        Energy-Env 15, Energy-Econ 20; total 35; weight 0.448718
  Environmental performance: Energy-Env 5,  Env-Econ 16;    total 21; weight 0.269231
  Economic performance:      Energy-Econ 12, Env-Econ 10;   total 22; weight 0.282051
  Group total: 78

Designers
  Energy performance:        Energy-Env 26, Energy-Econ 17; total 43; weight 0.370690
  Environmental performance: Energy-Env 10, Env-Econ 14;    total 24; weight 0.206897
  Economic performance:      Energy-Econ 23, Env-Econ 26;   total 49; weight 0.422414
  Group total: 116

Owners
  Energy performance:        Energy-Env 17, Energy-Econ 11; total 28; weight 0.437500
  Environmental performance: Energy-Env 7,  Env-Econ 11;    total 18; weight 0.281250
  Economic performance:      Energy-Econ 9,  Env-Econ 9;    total 18; weight 0.281250
  Group total: 64

Government
  Energy performance:        Energy-Env 9, Energy-Econ 9;   total 18; weight 0.500000
  Environmental performance: Energy-Env 3, Env-Econ 5;      total 8;  weight 0.222222
  Economic performance:      Energy-Econ 3, Env-Econ 7;     total 10; weight 0.277778
  Group total: 36

Asset Rating

Table D2: MDL analysis for asset rating (pairwise scores, row total, and weight for each respondent group; criteria numbered 1–5: 1 Remaining service life, 2 Condition of the thermal characteristics of the building including the air permeability, 3 Efficiency of heating installation and hot water supply including their insulation method, 4 Efficiency of the air-conditioning installation where installed, 5 Efficiency of artificial built-in lighting)

Researchers
  1: 1-2 13, 1-3 11, 1-4 11, 1-5 12; total 47; weight 0.146875
  2: 1-2 19, 2-3 16, 2-4 16, 2-5 21; total 72; weight 0.225000
  3: 1-3 21, 2-3 16, 3-4 18, 3-5 21; total 76; weight 0.237500
  4: 1-4 21, 2-4 16, 3-4 14, 4-5 19; total 70; weight 0.218750
  5: 1-5 20, 2-5 11, 3-5 11, 4-5 13; total 55; weight 0.171875
  Group total: 320

Designers
  1: 1-2 17, 1-3 14, 1-4 16, 1-5 18; total 65; weight 0.162500
  2: 1-2 23, 2-3 22, 2-4 21, 2-5 24; total 90; weight 0.225000
  3: 1-3 26, 2-3 18, 3-4 23, 3-5 26; total 93; weight 0.232500
  4: 1-4 24, 2-4 19, 3-4 17, 4-5 19; total 79; weight 0.197500
  5: 1-5 22, 2-5 16, 3-5 14, 4-5 21; total 73; weight 0.182500
  Group total: 400

Owners
  1: 1-2 12, 1-3 11, 1-4 16, 1-5 8;  total 47; weight 0.195833
  2: 1-2 12, 2-3 15, 2-4 16, 2-5 14; total 57; weight 0.237500
  3: 1-3 13, 2-3 9,  3-4 17, 3-5 15; total 54; weight 0.225000
  4: 1-4 8,  2-4 8,  3-4 7,  4-5 9;  total 32; weight 0.133333
  5: 1-5 16, 2-5 10, 3-5 9,  4-5 15; total 50; weight 0.208333
  Group total: 240

Government
  1: 1-2 4, 1-3 3, 1-4 4, 1-5 3; total 14; weight 0.116667
  2: 1-2 8, 2-3 4, 2-4 7, 2-5 5; total 24; weight 0.200000
  3: 1-3 9, 2-3 8, 3-4 7, 3-5 5; total 29; weight 0.241667
  4: 1-4 8, 2-4 5, 3-4 5, 4-5 3; total 21; weight 0.175000
  5: 1-5 9, 2-5 7, 3-5 7, 4-5 9; total 32; weight 0.266667
  Group total: 120
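The weights in Tables D1 and D2 follow directly from the pairwise scores: each criterion's scores are summed and divided by the grand total for the respondent group. A minimal Python sketch of this calculation, reproducing the researcher group's operational-rating weights from Table D1, is shown below (the dictionary keys are illustrative labels, not identifiers from the thesis tool).

```python
# Pairwise scores for the researcher group's operational-rating criteria (Table D1):
# each entry lists that criterion's score from its two pairwise comparisons.
pairwise_scores = {
    "energy_performance":        [15, 20],  # vs. environmental, vs. economic
    "environmental_performance": [5, 16],   # vs. energy, vs. economic
    "economic_performance":      [12, 10],  # vs. energy, vs. environmental
}

totals = {k: sum(v) for k, v in pairwise_scores.items()}
grand_total = sum(totals.values())                     # 78 for this group
weights = {k: round(t / grand_total, 3) for k, t in totals.items()}
print(grand_total, weights)
# 78 {'energy_performance': 0.449, 'environmental_performance': 0.269, 'economic_performance': 0.282}
```

The same summation and normalisation applies to the asset-rating criteria in Table D2, with four pairwise scores per criterion instead of two.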
Appendix E: Energy Simulations

Table E1: Results for the building according to actual design

Appendix F: LCA Analysis

Table F1: Summary of LCA results for the case study
