A systematic investigation of thermal comfort compliance criteria (Li, Peixian, 2020)

A SYSTEMATIC INVESTIGATION OF THERMAL COMFORT COMPLIANCE CRITERIA

by

Peixian Li

M.A.Sc., The University of British Columbia, 2016
B.E., Tongji University, 2014

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Civil Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

February 2020

© Peixian Li, 2020

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the dissertation entitled "A systematic investigation of thermal comfort compliance criteria," submitted by Peixian Li in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Civil Engineering.

Examining Committee:
Sheryl Staub-French, Civil Engineering, The University of British Columbia (Supervisor)
Thomas M. Froese, Civil Engineering, The University of Victoria (Supervisory Committee Member)
Karen Bartlett, Population & Public Health, The University of British Columbia (University Examiner)
Steven Rogak, Mechanical Engineering, The University of British Columbia (University Examiner)

Additional Supervisory Committee Members:
Adam Rysanek, Architecture, The University of British Columbia (Supervisory Committee Member)
Richard de Dear, Architecture, The University of Sydney (Supervisory Committee Member)

Abstract

Thermal comfort impacts occupant health and productivity and is responsible for a significant portion of total building energy consumption. However, the compliance criteria in current standards for assessing thermal comfort in a building's operation phase are mostly based on a theoretical thermal comfort model (predicted mean vote, PMV) derived from laboratory studies and lack validation against data from real buildings. The research objective is to use field-based data to systematically investigate whether current point-in-time and long-term compliance criteria can reliably predict subjective evaluations of thermal environments. First, an extensive bibliometric analysis of 146 post-occupancy evaluation (POE) projects introduces the state-of-the-art and state-of-the-practice of approaches for field data collection. Then, an analysis of the ASHRAE Global Thermal Comfort Database II demonstrates that tiered PMV classes are not appropriate for thermal comfort assessment at a point of time, and we propose a new approach to derive acceptable temperature ranges as the point-in-time compliance criteria. The acceptable temperature ranges derived for real buildings using this new method are wider than those the current standards mandate. Last, using continuous thermal measurements and occupant feedback in four air-conditioned office buildings in Sydney, an assessment of 23 existing and 36 new long-term thermal comfort indices (as the long-term compliance criteria) against the occupants' long-term thermal satisfaction found that the majority of existing indices, especially those based on the PMV index, had a weak correlation with thermal satisfaction. The percentage of time outside specified temperature ranges was the best-performing index from the standards (𝑟 = −0.63). A newly proposed index based on the percentage of time that the daily temperature range exceeds a threshold reported the strongest correlation with thermal satisfaction (𝑟 = −0.8) for the dataset used.
The results suggest that occupants in real buildings can accept a wider temperature range at a point of time than expected, and that their long-term thermal satisfaction with a space is dominated by the frequency and severity of temperature excursions outside an acceptable range and beyond a daily variability threshold. This research informs future amendments of the point-in-time and long-term compliance criteria in international standards to reduce energy consumption in building operations.

Lay Summary

Thermal comfort impacts occupant health and productivity and is responsible for a significant portion of total building energy consumption. The compliance criteria that define thermal comfort in current international standards lack validation from data in real buildings, and this research used field-based data to systematically investigate the validity of these thermal comfort compliance criteria. The dissertation provided a review of approaches for field data collection, presented evidence indicating the invalidity of current compliance criteria, and proposed new compliance criteria. This research corrects previous misunderstandings regarding thermal comfort, and the suggested amendments to the thermal comfort standards can help minimize unnecessary energy use.

Preface

A version of Chapter 2 has been published [P. Li, T.M. Froese, G. Brager, Post-occupancy evaluation: State-of-the-art analysis and state-of-the-practice review, Build. Environ. 133 (2018) 187–202]. I was the lead investigator, responsible for all major areas of concept formation, data collection and analysis, as well as manuscript composition. Thomas Froese was the supervisory author on this project and was involved throughout the project in concept formation and manuscript composition. Gail Brager was involved in the later stages of concept formation and contributed to manuscript edits.

A version of Chapter 3 has been published [P. Li, T. Parkinson, G. Brager, S. Schiavon, T.C.T. Cheung, T. Froese, A data-driven approach to defining acceptable temperature ranges in buildings, Build. Environ. 153 (2019) 302–312]. I was the lead investigator, responsible for all major areas of concept formation, data collection and analysis, as well as the majority of manuscript composition. Thomas Parkinson and Gail Brager were involved throughout the project in concept formation and manuscript edits. Stefano Schiavon provided ideas to increase the robustness of the data analysis. Toby C. T. Cheung and Thomas Froese were involved in manuscript edits.

A version of Chapter 4 has been submitted to a journal [P. Li, T. Parkinson, S. Schiavon, T. Froese, R. de Dear, A. Rysanek, S. Staub-French, Improved long-term thermal comfort indices for continuous monitoring]. I was the lead investigator, responsible for all major areas of concept formation, data collection and analysis, as well as the majority of manuscript composition. Thomas Parkinson was involved throughout the project in concept formation and manuscript edits. Stefano Schiavon was involved in the later stages to ensure the correctness and robustness of the data analysis. Thomas Froese, Richard de Dear, Adam Rysanek, and Sheryl Staub-French were the supervisory authors on this project and were involved throughout the project in concept formation and manuscript edits. Richard de Dear offered access to the data used in this project.

Data used in Chapters 3 and 4 are covered by ethics approvals obtained by their original researchers.
I obtained UBC Ethics Certificate number H19-003 initially for work in Chapter 4 but it was not used as Chapter 4 did not involve any new surveys or interviews.   vi  Table of Contents  Abstract --------------------------------------------------------------------------------------------------------------- iii Lay Summary -------------------------------------------------------------------------------------------------------- iv Preface ----------------------------------------------------------------------------------------------------------------- v Table of Contents --------------------------------------------------------------------------------------------------- vi List of Tables --------------------------------------------------------------------------------------------------------- ix List of Figures --------------------------------------------------------------------------------------------------------- x List of Symbols ----------------------------------------------------------------------------------------------------- xii List of Abbreviations ---------------------------------------------------------------------------------------------- xiii Acknowledgements ----------------------------------------------------------------------------------------------- xiv Dedication ------------------------------------------------------------------------------------------------------------ xv Chapter 1: Introduction ----------------------------------------------------------------------------------------------------- 1 1.1 Topic --------------------------------------------------------------------------------------------------------------- 1 1.2 Problem ----------------------------------------------------------------------------------------------------------- 4 1.3 Research questions -------------------------------------------------------------------------------------------- 5 1.4 Dissertation structure and methodology ----------------------------------------------------------------- 6 Chapter 2: Paper 1—Review of Post-Occupancy Evaluation ----------------------------------------------------- 9 2.1 Introduction ------------------------------------------------------------------------------------------------------ 9 2.2 Methods ---------------------------------------------------------------------------------------------------------- 9 2.3 Results ------------------------------------------------------------------------------------------------------------ 10 2.3.1 Relationship between POE and green building certification systems ------------------------------------- 10 2.3.2 History, definition and benefits of POE ---------------------------------------------------------------------------- 12 2.3.3 Analysis of POE Projects ----------------------------------------------------------------------------------------------- 14 2.3.3.1 Buildings assessed: types and countries ------------------------------------------------------------------ 14 2.3.3.2 Purposes ----------------------------------------------------------------------------------------------------------- 17 2.3.3.3 Methods ----------------------------------------------------------------------------------------------------------- 21 2.3.3.4 POE protocols ---------------------------------------------------------------------------------------------------- 24 2.4 Discussion ------------------------------------------------------------------------------------------------------- 36 2.4.1 Emerging POE research topics 
--------------------------------------------------------------------------------------- 36   vii  2.4.2 Status and future research -------------------------------------------------------------------------------------------- 39 2.5 Conclusion ------------------------------------------------------------------------------------------------------- 42 Chapter 3: Paper 2—Investigation of Point-in-Time Thermal Comfort Compliance Criteria ----------- 44 3.1 Introduction ----------------------------------------------------------------------------------------------------- 44 3.2 Methods --------------------------------------------------------------------------------------------------------- 47 3.2.1 Analysis of current point-in-time compliance criteria --------------------------------------------------------- 48 3.2.2 Methods to derive new point-in-time compliance criteria --------------------------------------------------- 49 3.2.3 Results compared to standards -------------------------------------------------------------------------------------- 52 3.3 Results ------------------------------------------------------------------------------------------------------------ 52 3.3.1 Analysis of current point-in-time compliance criteria --------------------------------------------------------- 52 3.3.2 Methods to derive new point-in-time compliance criteria --------------------------------------------------- 55 3.3.3 Results compared to standards -------------------------------------------------------------------------------------- 58 3.4 Discussion ------------------------------------------------------------------------------------------------------- 63 3.5 Conclusion ------------------------------------------------------------------------------------------------------- 67 Chapter 4: Paper 3—Investigation of Long-term Thermal Comfort Compliance Criteria ---------------- 69 4.1 Introduction ----------------------------------------------------------------------------------------------------- 69 4.2 Methods --------------------------------------------------------------------------------------------------------- 71 4.2.1 BOSSA survey database and subjective index ------------------------------------------------------------------- 72 4.2.2 SAMBA IEQ monitoring database ----------------------------------------------------------------------------------- 73 4.2.3 Data preparation -------------------------------------------------------------------------------------------------------- 74 4.2.4 Long-term physical indices -------------------------------------------------------------------------------------------- 78 4.2.4.1 Existing long-term physical indices ------------------------------------------------------------------------- 79 4.2.4.2 New long-term physical indices ----------------------------------------------------------------------------- 81 4.2.5 Correlation analysis ----------------------------------------------------------------------------------------------------- 83 4.3 Results ------------------------------------------------------------------------------------------------------------ 84 4.4 Discussion ------------------------------------------------------------------------------------------------------- 89 4.4.1 Preparing time series Data -------------------------------------------------------------------------------------------- 89 4.4.2 Use of air or operative temperature ------------------------------------------------------------------------------- 89 4.4.3 Specifying index thresholds 
------------------------------------------------------------------------------------------- 91 4.4.4 Occupant adaptation and sensitivity to variation -------------------------------------------------------------- 91 4.4.5 Proposed use of new index in standards -------------------------------------------------------------------------- 93 4.4.6 Study limitations --------------------------------------------------------------------------------------------------------- 94 4.5 Conclusion ------------------------------------------------------------------------------------------------------- 95   viii  Chapter 5: Conclusion ------------------------------------------------------------------------------------------------------ 97 5.1 Summary of results ------------------------------------------------------------------------------------------- 97 5.2 Contributions --------------------------------------------------------------------------------------------------- 99 5.3 Practical implications --------------------------------------------------------------------------------------- 100 5.4 Limitations and future research ------------------------------------------------------------------------- 102 Bibliography ------------------------------------------------------------------------------------------------------- 104     ix  List of Tables Table 2-1 Top 20 frequent words in the purposes of POE projects ------------------------------------ 18 Table 2-2 Classification of POE Purposes --------------------------------------------------------------------- 18 Table 2-3 A comparison of current POE protocols --------------------------------------------------------- 25 Table 3-1 Three classes of indoor thermal environment in ISO 7730 ---------------------------------- 45 Table 3-2 Summary of basic parameters in ASHRAE database ------------------------------------------ 47 Table 3-3 Summary of subjective answers in ASHRAE database. Thermal sensation is a continuous scale from -3 (cold) to 3 (hot) with 0 being neutral. Thermal comfort is a continuous numeric scale from 1 (very uncomfortable) to 6 (very comfortable). ----------- 48 Table 3-4 Observed percentage of satisfaction in three PMV classes --------------------------------- 53 Table 4-1 Measurement accuracy of the SAMBA IEQ Monitoring device compared to the “desired” performance level specified in ISO 7726 [261] ------------------------------------------ 74 Table 4-2 Matched BOSSA survey campaigns and SAMBA measurements included in our analyses -------------------------------------------------------------------------------------------------------- 76 Table 4-3 Existing and new physical indices tested in this study for long-term thermal comfort evaluation ----------------------------------------------------------------------------------------------------- 78 Table 4-4 Comfort classifications based on operative temperature ranges for office buildings in standards ------------------------------------------------------------------------------------------------------ 79 Table 4-5 Temperature ranges derived from time series data for the calculation of the new comfort indices ---------------------------------------------------------------------------------------------- 82 Table 4-6 Percentile thresholds tested for the daily range outlier indices. Stable conditions in the monitored offices result in a daily variance of less than 2.5 °C in most cases. 
----------- 82 Table 4-7 Results of the simple linear regression and cross-validated linear regression of long-term comfort indices and thermal satisfaction measure. Underlined is the best performing existing index as the baseline. Bold is better performance than baseline.--------------------- 86    x  List of Figures Figure 1-1 Conceptual relationships between the body of work ----------------------------------------- 7 Figure 2-1 Number of POE publications and POE projects ----------------------------------------------- 10 Figure 2-2 Building Performance Evaluation (BPE) process model (Source: Preiser 2005 [57]) - 13 Figure 2-3 Number of POE projects and number of buildings per type of building ----------------- 15 Figure 2-4 Number of POE projects per country ------------------------------------------------------------ 16 Figure 2-5 Number of buildings assessed per country ---------------------------------------------------- 17 Figure 2-6 Venn Diagram of POE Direct Purposes ---------------------------------------------------------- 20 Figure 2-7 Percentage of projects that used certain method (note that most projects used more than one method) ------------------------------------------------------------------------------------------- 22 Figure 2-8 Trends of the usage of POE methods ------------------------------------------------------------ 23 Figure 2-9 Evolution of CBE carts (Sources: [147,169–172]) --------------------------------------------- 33 Figure 2-10 Examples of other IEQ carts (Sources: [32,49,153]) ---------------------------------------- 33 Figure 2-11 Examples of long-term IEQ monitoring framework (Sources: [31,174]) --------------- 35 Figure 2-12 Technology Adoption Lifecycle (Source: Wikipedia Commons) ------------------------- 40 Figure 3-1 Illustrations of the two statistical methods used for deriving acceptable temperature ranges ---------------------------------------------------------------------------------------------------------- 50 Figure 3-2 Observed percentage of satisfaction in PMV ranges (e.g. 0.1 means |PMV|≤ 0.1) - 53 Figure 3-3 Observed vs. predicted percentage of dissatisfied ------------------------------------------- 54 Figure 3-4 Acceptable air temperature range derived by (a) method 1 and (b) method 2 ------- 56 Figure 3-5 Acceptable temperature range for each “building” using (a) method 1 and (b) method 2 ------------------------------------------------------------------------------------------------------ 57 Figure 3-6 Acceptable air temperature ranges by method 2 compared to the standards. ISO 7730 does not specify temperature ranges for homes. n pubs = number of publications. 59 Figure 3-7 Clothing level (a) and air velocity (b) in different spaces and seasons. Pink dot is the mean. Boxplot shows 25th percentile, median, and 75th percentile. Violin plot shows the density of records. ------------------------------------------------------------------------------------------ 60   xi  Figure 3-8 Acceptable air temperature ranges for Asian (top) and European (bottom) datasets compared to the standards. ISO 7730 does not specify temperature ranges for homes. n pubs = number of publications. -------------------------------------------------------------------------- 62 Figure 4-1 Methodology Diagram. The image on the right depicts the SAMBA IEQ Monitoring device. 
--------------------------------------------------------------------------------------------------------- 71 Figure 4-2 Distribution of the temp year scores in the 33 datasets included in our analysis. One boxplot contains the calculated temp year scores in one dataset (a certain floor of a building). The middle thick bar shows the median. The lower and upper hinges correspond to the first and third quartiles (the 25th and 75th percentiles). The red diamond point shows the mean, i.e. the subjective index value for each dataset. ------------------------------ 73 Figure 4-3 An example time series plot of the air temperature in a one-year dataset on Level 30 of Building D after data cleaning ------------------------------------------------------------------------- 78 Figure 4-4 The statistical relationships between the long-term comfort physical indices and reported thermal satisfaction subjective index for 23 existing indices on the left (purple) and 36 new indices on the right (green). Indices are grouped by type, and the Pearson correlation coefficients for each index are given. Darker shading denotes statistical significance of the correlation (p<0.05). --------------------------------------------------------------- 85 Figure 4-5 Scatterplots of the best-performing indices for each type (N = 33) ---------------------- 88 Figure 4-6 Comparison of air temperature and operative temperature in the studied SAMBA datasets. Red dashed line is zero. The lower and upper hinges of the boxplots correspond to the 25th and 75th percentiles. The middle black bars are the medians. ------------------- 90    xii  List of Symbols 𝑇𝑎  Air Temperature 𝑇𝑜  Operative Temperature 𝑇𝑟  Mean Radiant Temperature 𝑇𝑔  Globe Temperature    xiii  List of Abbreviations ASHRAE American Society of Heating, Refrigerating, and Air-Conditioning Engineers BOSSA Building Occupants Survey System Australia CO Carbon Monoxide CO2 Carbon Dioxide HVAC Heating, Ventilation and Air Conditioning IAQ Indoor Air Quality IEQ Indoor Environmental Quality PM Particulate Matter (PM2.5 is with a diameter of 2.5 micrometers or less) PMV Predicted Mean Vote POE Post-Occupancy Evaluation PPD Predicted Percentage Dissatisfied RH Relative Humidity SAMBA Sentient Ambient Monitoring of Buildings in Australia TSV Thermal Sensation Vote TVOC Total Volatile Organic Compounds    xiv  Acknowledgements I offer my enduring gratitude to China Scholarship Council and The University of British Columbia (UBC) for the funding support and to the faculty, staff and my fellow students at UBC, who have inspired me to continue my work in this field.  Firstly, I would like to express my sincere gratitude to my supervisor Prof. Thomas M. Froese for the continuous support of my Ph.D. study and related research, for his patience, motivation, and immense knowledge. His guidance helped me in all the time of research and writing of this thesis. Besides, I would like to thank the rest of my supervisory committee: Prof. Sheryl Staub-French who has taken the official supervisor role after Dr. Froese moved to University of Victoria, Prof. Adam Rysanek who enlarged my vision of building science, and Prof. Richard de Dear who generously offered data for the third part of this research. They have provided insightful comments and encouragement, as well as questions which incented me to widen my research from various perspectives. My sincere thanks also go to Prof. 
Gail Brager, who provided me a precious opportunity to join the Center of the Built Environment (CBE) at UC Berkeley as a visiting researcher. The inclusive atmosphere there exposed me completely to the built environment research and set the key for my dissertation. Prof. Edward Arens, Prof. Stefano Schiavon, Dr. Hui Zhang, and all other colleagues at CBE offered endless, selfless help and inspired me a lot through their passion for research. Special thanks are owed to Dr. Thomas Parkinson, who has been not only an unparalleled research collaborator but also an inspirational friend who always cheered me up from research to life. Without his support it would not be possible to conduct this research. Last but not the least, I would like to thank my family, Jun, Caiting, and Pengfei, and friends for supporting me spiritually throughout my years of education and my life in general. Without their warm love, I would not be able to go through the ups and downs along the journey.   xv  Dedication         To the planet    1  Chapter 1: Introduction 1.1 Topic There is an increasing interest in building performance due to an increasing concern for two important factors: energy consumption and human comfort. Building operations and building construction combined account for 36% of global final energy consumption and nearly 40% of total direct and indirect CO2 emissions [1], 80-90% of which comes from the operating phase over the buildings’ life cycle [2]. This, together with the increasing ratio of existing buildings to new constructions, invites particular scrutiny of the operation of existing buildings. In addition to energy efficiency concerns, there is growing emphasis on buildings that better serve people, the inhabitants, who spend up to 90% of their time indoors [3]. There is mounting evidence that indoor environmental quality (IEQ) impacts the occupants’ health, well-being and productivity [4]. This has driven increasing academic and industry efforts to investigate the relationship between buildings’ actual performance and human health and comfort.  One of the key methodological tools used to investigate the quality of indoor environments for building occupants is post-occupancy evaluation (POE), which is defined as “any activity that originates out of an interest in learning how a building performs once it is built (if and how it has met expectations) and how satisfied building users are with the environment that has been created” [5]. IEQ is one of the main evaluation aspects of a POE not only because health and well-being is relevant to every individual but also because discomfort is likely to cause lower productivity. In a typical business case, staff costs, including salaries and benefits, account for about 90% of a business’ operating costs, while energy costs account for 1% and rental costs account for 9% [4]. This determines that anything that impacts the employees’ ability to be productive will be a major concern for the managers in commercial buildings.  As one of the main components of IEQ, thermal comfort, defined as “the condition of mind that expresses satisfaction with the thermal environment” [6], is known to be a key determinant in the overall evaluation of indoor environments. While its ranked importance relative to other IEQ factors is debated, it is usually considered as one of the most important [7]. Yet 40% of occupants in office buildings in the US are dissatisfied with their thermal environment   2  [8]. 
Thermal comfort has been classified as a “basic” requirement for occupants, meaning it contributes overwhelmingly to dissatisfaction when it is lacking but little to positive satisfaction when it is acceptable [9]. Thermal comfort is also known to interact with other IEQ factors such as indoor air quality [10,11]. The provision of thermal comfort by the heating, ventilation and air conditioning (HVAC) system accounts for a significant proportion of the total building energy consumption, e.g., 39% in Australian office buildings [12], 44% in U.S. commercial buildings [13], 48% in U.S. homes [14], and 61% in Canadian residential and commercial buildings [15]. The demand for air conditioning is predicted to increase as a result of global warming, economic growth, and swelling populations in emerging economies in hot climates such as India—the International Energy Agency predicts the number of air-conditioners worldwide to total 5.6 billion units by 2050 from 1.6 billion units today [16]. In light of all the evidence above, it is clear that thermal comfort plays an important role in efforts to improve IEQ and reduce energy consumption. To achieve thermal comfort, buildings are designed with envelope systems (roof, walls, windows, and doors) that aim to provide relatively stable indoor climates compared to the outdoor environments. The principle of Passive House is to use continuous highly-insulated envelopes to minimize heat losses and achieve energy-efficiency [17]. While the Passive House approach is gaining interest, not many buildings in Canada have been certified as Passive House. Most of current Canadian buildings continue to rely heavily on the HVAC systems to regulate indoor temperature, humidity, and air quality. In practice, building managers control the HVAC systems to provide an indoor environment that is predicted to satisfy the majority of the occupants. Thermal comfort standards, such as ASHRAE 55 [6], ISO 7730 [18], and EN 16798 [19], have set compliance criteria to assess an acceptable thermal environment based on physical parameters as a proxy for the subjective comfort, which in general can be classified as two types—point-in-time compliance criteria and long-term compliance criteria. The majority of these compliance criteria are based on the dominant steady-state heat balance thermal comfort model—the predicted mean vote (PMV) proposed by Fanger in 1970 [20].   3  The PMV model considers six input variables—air temperature, mean radiant temperature, air velocity, relative humidity, clothing insulation, and metabolic rate—to determine the predicted mean thermal sensation vote (TSV) of a group of people on a seven-point scale (-3 = Cold, -2 = Cool, -1 = Slightly cool, 0 = Neutral, 1 = Slightly warm, 2 = Warm, 3 = Hot). Most standards bodies consider occupants voting cold, cool, warm or hot as thermally dissatisfied. This relationship between sensation and satisfaction is captured by the predicted percentage of people dissatisfied (PPD), which is calculated using PMV. The PMV model was developed using data collected in climate chamber studies where human subjects were surveyed under controlled thermal environments. 
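The PMV-to-PPD mapping referenced above is a closed-form expression given in ISO 7730 and ASHRAE 55 and is simple to evaluate once PMV is known; the full Fanger heat-balance calculation of PMV itself is not shown here. The short Python sketch below assumes PMV values are already available (e.g., from a thermal comfort library or computed from the six input variables):

import math

def ppd_from_pmv(pmv: float) -> float:
    # Predicted Percentage of Dissatisfied (%) as a function of PMV,
    # per the closed-form relationship in ISO 7730 / ASHRAE 55.
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

# Seven-point ASHRAE thermal sensation scale used for PMV and TSV.
SENSATION = {-3: "Cold", -2: "Cool", -1: "Slightly cool", 0: "Neutral",
             1: "Slightly warm", 2: "Warm", 3: "Hot"}

for pmv in (0.0, 0.2, 0.5, 0.7):
    print(f"|PMV| = {pmv:.1f} -> PPD = {ppd_from_pmv(pmv):.1f}%")
# PMV = 0 gives the model's floor of 5% dissatisfied occupants;
# |PMV| = 0.5 corresponds to roughly 10% whole-body dissatisfaction.

The roughly 10% dissatisfaction at |PMV| = 0.5 is the whole-body figure that ASHRAE 55 pairs with an assumed additional 10% local dissatisfaction, as discussed below.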
However, difficulties in estimating the personal factors (metabolic rate and clothing insulation), the variances in environmental factors (air temperature, mean radiant temperature, air velocity, and relative humidity), and the fact that steady state rarely occurs in daily life have led to observed inaccuracies of the PMV model in predicting people's thermal sensation in real buildings over the past decades [21,22]. The discrepancy between the predicted and the actual TSV is particularly large in non-air-conditioned buildings, i.e., naturally ventilated buildings. This observation led to an extension of the PMV model by including an expectancy factor for use in non-air-conditioned buildings in warm climates [23] and the proposal of the adaptive comfort model [24–26] for use in naturally ventilated buildings. The adaptive comfort model suggests a range of acceptable indoor operative temperatures based on prevailing mean outdoor temperatures, on the theory that occupants can adapt to their environment. Current thermal comfort standards generally suggest using the PMV model for air-conditioned building design and the adaptive comfort model for naturally ventilated building design. For mixed-mode buildings with mechanical heating and cooling systems and operable windows, there is no explicit rule in current standards, but in practice the PMV model is often used for their HVAC design.

The point-in-time compliance criteria are used to assess the level of thermal comfort at a single point of time in the post-occupancy phase. The Analytical Comfort Zone Method in ASHRAE 55:2017 [6] sets the comfort range as -0.5 < PMV < 0.5, corresponding to 80% acceptability based on 10% whole-body dissatisfaction from PPD plus an assumed additional 10% local dissatisfaction. ISO standard 7730:2005 [18] prescribes three classes of thermal comfort as the compliance criteria: Class A (PMV ±0.2), Class B (PMV ±0.5), and Class C (PMV ±0.7). EN 16798:2019 [19] adopts the same three classes (named Class I, II, and III respectively) for mechanically conditioned buildings.

The long-term compliance criteria are a number of long-term comfort evaluation indices that can be calculated from either physical measurements or simulated conditions to assess a thermal environment over time (e.g., a year) and estimate occupant satisfaction. Most of these indices are based on the PMV model, e.g., the percentage of time that PMV is outside the specified range, weighted PPD, average PPD, etc. Current thermal comfort standards do not mandate continuous monitoring of existing buildings and suggest that the long-term comfort indices be informative only. However, given the aforementioned concerns about energy consumption and human comfort, evaluation in the operation phase, specifically physical long-term monitoring of built environments, has become imperative.

1.2 Problem

A general problem with these compliance criteria is that they are based on the theoretical PMV model derived from laboratory studies and lack validation from data in real buildings. Inappropriate compliance criteria may lead to profligate operating energy use without necessarily ensuring occupant satisfaction. Scrutinizing how we define acceptable thermal environments is as important as, if not more important than, improving technology efficiencies.

The tiered PMV classes imply that a narrower PMV range ensures higher thermal satisfaction. Yet the comprehensive analysis of three databases of field studies by Arens et al.
[27] showed that Class A (I) does not ensure any satisfaction benefit in office buildings. In fact, pursuing narrower PMV ranges in offices promotes the widespread use of air-conditioning, leading to a higher chance of sick building syndrome and increased energy costs and greenhouse gas emissions [28]. d’Ambrosio Alfano et al. identified one of the challenges in operationalizing the tiered PMV classification to be that the widths of PMV ranges required in ISO 7730 and EN 16798 are close to the measurement uncertainty of common sensors, making classification a   5  random operation in many instances [29]. Despite these concerns and evidence, the tiered PMV classes remain in use for point-in-time assessment. The long-term compliance criteria in standards have gained less attention compared to the point-in-time compliance criteria. One of the main reasons is that the prohibitive cost of installing and maintaining environmental sensors for continuous monitoring has limited the main application of long-term indices to the outputs of building performance simulations performed during the design phase. Existing long-term indices were proposed in line with the PMV model to increase their robustness and usefulness, but surprisingly, these long-term indices have never been validated using continuous monitoring data from real buildings nor occupant feedback. If the long-term indices in standards are not effective predictors of actual long-term thermal comfort, they will not be able to assess the true long-term thermal performance of a building. There is a need to validate their usefulness to ensure they do not promote wasteful HVAC operation strategies without meaningfully improving occupant comfort. 1.3 Research questions The question of the validity of current thermal comfort compliance criteria echoes the fundamental question in thermal comfort research: what thermal conditions are acceptable or desirable to occupants in real buildings? The following three research questions are needed to answer the fundamental question. 1. How do we collect data in post-occupancy phase to assess thermal comfort? a. What is the place of thermal comfort measurements in POE? b. What are the current practices for thermal comfort measurements? 2. How should we assess thermal comfort at a point of time in post-occupancy phase based on collected physical measurements? a. Are PMV classes appropriate point-in-time compliance criteria?  b. If not, what are other potential point-in-time compliance criteria?   6  3. How should we assess the general thermal comfort over a long period in post-occupancy phase based on collected physical measurements? a. Are the existing long-term indices able to reliably predict long-term subjective evaluations of thermal environments?  b. If not, are there other indices that could be proposed to improve long-term performance metrics? Although studies exist that assess thermal comfort only, in more cases, thermal comfort is assessed as part of IEQ, which is also one aspect of POE. Therefore, to answer the first research question, a review of POE was conducted to provide a more thorough background for thermal comfort assessment. To answer the other two research questions, secondary data analyses were conducted using the thermal measurements in real buildings to assess the relevant compliance criteria. 1.4 Dissertation structure and methodology This dissertation follows a manuscript format: the body of the thesis is comprised of three published or submitted peer-reviewed journal papers. 
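The long-term indices questioned in Section 1.2 are straightforward to compute once a continuous record of occupied hours exists. The Python sketch below illustrates simplified forms of the index families named there (percentage of time outside a PMV or temperature range, average PPD, and a PPD-weighted exceedance); the column names, comfort limits, and weighting scheme are illustrative assumptions, not the exact formulations in ISO 7730 or EN 16798:

import numpy as np
import pandas as pd

def long_term_indices(df: pd.DataFrame,
                      pmv_limit: float = 0.5,
                      temp_range: tuple = (21.0, 25.0)) -> dict:
    # df holds occupied hours with columns 'pmv', 'ppd' (%) and 't_air' (deg C).
    outside_pmv = df["pmv"].abs() > pmv_limit
    outside_temp = (df["t_air"] < temp_range[0]) | (df["t_air"] > temp_range[1])
    # PPD at the chosen PMV limit, used as the weighting denominator below.
    ppd_limit = 100.0 - 95.0 * np.exp(-0.03353 * pmv_limit**4
                                      - 0.2179 * pmv_limit**2)
    return {
        "pct_time_pmv_outside": 100.0 * outside_pmv.mean(),
        "pct_time_temp_outside": 100.0 * outside_temp.mean(),
        "average_ppd": df["ppd"].mean(),
        # One common weighting idea: count hours outside the comfort range in
        # proportion to how far their PPD exceeds the value at the limit.
        "ppd_weighted_hours": (df.loc[outside_pmv, "ppd"] / ppd_limit).sum(),
    }

# Validating an index (as in Chapter 4): compute it per building or floor and
# correlate the values with the mean thermal satisfaction from the survey, e.g.
#   from scipy.stats import pearsonr
#   r, p = pearsonr(index_values, satisfaction_scores)   # Pearson r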
Each paper forms a stand-alone research activity towards the aims of the larger research project. The introduction chapter of the dissertation briefly outlines the research topic, problem, and objective, and provides an overview of the dissertation. Each subsequent chapter describes the details of the stand-alone research activity following the structure of introduction, methods, results, discussion, and conclusion; these are slight modifications from the published versions. Finally, the conclusion chapter summarizes the findings from the three papers and clarifies the contributions and impact of this doctoral research. The three main chapters of this dissertation (i.e. the three journal papers) are thematically related by their shared objective of investigating thermal comfort compliance criteria (Figure 1-1). Compliance criteria define whether the thermal environment of an indoor space is acceptable based on physical measurements. Post-occupancy evaluations involve collecting the requisite physical and subjective data to conduct this assessment of thermal comfort in buildings.   7  Therefore, POE datasets can be used to determine the accuracy of thermal comfort compliance criteria found in international standards. Paper 1, the review of POE, sets up the background for Paper 2 and Paper 3.   Figure 1-1 Conceptual relationships between the body of work  Paper 1 starts with a qualitative literature review and a quantitative bibliometric analysis of POE to provide a thorough background of the research activity during the post-occupancy phase of buildings. This review includes the history, geographic distribution, purpose, and tools used in previous POE projects, as well as a summary of emerging POE topics and a discussion of future trends. In particular, this paper introduces the methods used to measure thermal comfort in real buildings which helps understand the data collection process in Paper 2 and Paper 3. In most IEQ-related POE projects, researchers place measurement instruments in real buildings for a few minutes (point-in-time measurement) and simultaneously ask occupants at the measurement location about their right-now satisfaction with the indoor environment (snapshot survey). Another emerging practice is to develop integrated IEQ sensors and place them in various locations for at least one year. The sensors measure IEQ factors including thermal parameters at a-few-minute intervals and transmit the data to a cloud database. There are also standardized occupant survey to ask occupants’ general satisfaction with IEQ factors.   8  The matched point-in-time physical measurements and subjective evaluations can be used to investigate the validity of thermal comfort point-in-time compliance criteria. Paper 2 applies data analysis techniques to the open-source ASHRAE Global Thermal Comfort Database II [30] of over 100,000 field measurements in 23 countries and demonstrates that the tiered PMV classes are not appropriate point-in-time compliance criteria. Then, it recommends a new data-driven method for deriving new point-in-time compliance criteria—acceptable temperature ranges—after comparing two methods from a methodological perspective. Finally, the newly derived acceptable temperature ranges are compared with the comfort temperature ranges recommended by ISO 7730 and EN 16798 standards. Beyond the investigation of point-in-time compliance criteria, this work attempted to understand what environments in real buildings are acceptable to occupants at a single point of time. 
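Operationally, one simple data-driven reading of an "acceptable temperature range" is the span of temperatures over which the observed proportion of satisfied respondents in field data stays at or above a target such as 80%. The Python sketch below only illustrates that idea applied to records like those in the ASHRAE Global Thermal Comfort Database II; the bin width, the 80% target, the minimum vote count, and how "satisfied" is coded from the survey responses are assumptions made here, not the two methods compared in Chapter 3:

import numpy as np
import pandas as pd

def acceptable_temperature_range(t_air: pd.Series, satisfied: pd.Series,
                                 bin_width: float = 0.5,
                                 target: float = 0.80,
                                 min_votes: int = 30):
    # t_air: measured air temperature (deg C); satisfied: 1/0 per survey vote.
    edges = np.arange(np.floor(t_air.min()),
                      np.ceil(t_air.max()) + bin_width, bin_width)
    bins = pd.cut(t_air, edges)
    stats = satisfied.groupby(bins, observed=True).agg(["mean", "count"])
    ok = stats[(stats["count"] >= min_votes) & (stats["mean"] >= target)]
    if ok.empty:
        return None
    # Simplification: take the outer edges of the qualifying bins and ignore
    # any gaps between them.
    return float(ok.index[0].left), float(ok.index[-1].right)

However "satisfied" is operationalized (e.g., thermal acceptability votes, or thermal sensation within a band), the same routine can be run per building, climate zone, or season and the resulting ranges compared with the standards.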
Any investigation of long-term compliance criteria requires continuous long-term thermal measurements and retrospective surveys. Paper 3 conducted correlation analyses between the calculated long-term thermal comfort indices (23 existing indices and 36 newly proposed indices) from continuous IEQ data and the subjective metrics calculated from a POE survey to test the validity of existing long-term compliance criteria and propose new long-term compliance criteria. This analysis was done in collaboration with the IEQ laboratory at The University of Sydney who provided access to their Sentient Ambient Monitoring of Buildings in Australia (SAMBA) database [31] and Building Occupants Survey System Australia (BOSSA) database [32], both of which will be introduced in detail in a later chapter. Issues observed during the data analysis and the implication of the results are discussed as well. This work not only informs standards’ amendments but also helps to understand what conditions leave lasting impressions on occupants’ sense of general thermal comfort over time. The three papers answer the three research questions respectively and together answer that what thermal conditions are acceptable or desirable to occupants in real buildings.   9  Chapter 2: Paper 1—Review of Post-Occupancy Evaluation 2.1 Introduction  Energy consumed in the building sector accounts for 20.1% of the total delivered energy consumed worldwide and is expected to increase by an average of 1.5% per year from 2012 to 2040 [33]. This impact is much higher in the U.S., where the building sector is estimated to account for approximately 40% of total U.S. energy consumption [34]. In addition, people spend almost 90% of their time indoors [3], and there is overwhelming evidence which demonstrates that the indoor environment impacts the health, wellbeing and productivity of the occupants (summarized in [4]). Post-Occupancy Evaluation (POE) is a general approach of obtaining feedback about a building’s performance in use, including energy performance, indoor environment quality (IEQ), occupants’ satisfaction, productivity, etc. Previous work has introduced and reviewed POE [35–40] but has lacked a quantitative analysis of POE characteristics, applications, and trends. This chapter presents a comprehensive and critical review to provide both a qualitative and a quantitative assessment of the state-of-the-art of POE projects and methodologies, including an evaluation of current research and potential future trends.  2.2 Methods The methods involved in this chapter include a traditional literature review to understand the background of POE and a bibliometric analysis of 146 POE projects, including a descriptive statistical analysis of the types and countries of the buildings assessed to show the focuses of POE research, a content analysis of the purposes of the projects to develop a taxonomy, and a descriptive statistical analysis of the methods used to show the trends. During the bibliometric analysis phase, searching “post-occupancy evaluation” in the topic field (which includes title, abstract, and key words) in the Web of Science index generated 382 results as of September 2017. The blue line in Figure 2-1 shows that the number of POE-related publications increased dramatically around 2010. Therefore, we decided to review the 269 publications from 2010 to 2017, which represent the majority and more recent POE studies. 
Out   10  of the 269 publications, we identified 146 POE projects (the orange line in Figure 2-1) and recorded the key information of those POE projects in Excel for later statistical analysis in Tableau. A “POE project” refers to a research project or practitioner investigation of building(s) in the real world using POE methods. We excluded publications that were reviews rather than original research, not in English, not available online, or not for buildings (e.g., landscape, park, garden, a single system, etc.). A “POE project” can investigate more than one building and can be described in more than one publication. As long as the publications described the same POE results of the same building(s) (but perhaps from different perspectives), we counted those publications as one project. In any project where questionnaires were distributed to a large set of people without specifying how many buildings were assessed, we attributed only one building to this project.   Figure 2-1 Number of POE publications and POE projects  2.3 Results 2.3.1 Relationship between POE and green building certification systems To a large degree, the building industry’s transition towards energy savings and better indoor environments has been driven by the increasing adoption of green building certification systems, or at the very least, by the way in which these systems impact design conversations   11  (whether or not certification is actually pursued). Worldwide, at least 150 tools (i.e., green building rating systems) and methodologies for building assessment and benchmarking have been reported to date [41]. However, the questions of whether certified buildings save energy or not [42–45] and whether certified buildings provide better IEQ or not [46–51] are being debated at length in the literature. Towards this end, POE has taken on increasing importance in the context of studying buildings that have pursued various green building certification. For the most part, certification systems are primarily used in the design phase, even though some certifications require on-going measurement and verification during the operation phase. However, the most important indicators of whether a building is green or not should be its actual performance, not simply design intent, but this is only addressed in a few of the rating systems. POE is therefore an essential tool to help verify whether these buildings are performing as intended. There are several examples of certification systems based on actual performance, rather than modeled or anticipated performance, where some aspects of POE play a role in their process. The Living Building Challenge is particularly noteworthy for this; projects must be operational for at least 12 consecutive months prior to the final audit for certification [52]. The WELL Building Standard also has large parts based on in-use building conditions. An authorized WELL Assessor will usually spend one to three days in the building to validate the project’s design documentation and to complete a series of performance tests, spot-checks and measurements spanning all WELL Concepts [53]. BOMA BEST, a voluntary program designed by industry for industry, is Canada’s largest environmental assessment and certification program for existing commercial real estate [54]. 
After the applicant completes a self-assessment questionnaire online, a third-party will conduct an on-site visit to verify the answers and review the energy and water data, as well as other documents; however, they are not required to conduct any on-site measurements. The Sustainability Tracking, Assessment & Rating System™ (STARS®) is a transparent, self-reporting framework for colleges and universities to measure their sustainability performance, created by Higher Education for Higher Education in the US beginning in 2010 [55]. An institution completes the STARS report online and submits it to the Association   12  for the Advancement of Sustainability in Higher Education (AASHE), and then an AASHE staff reviews portions of each report for accuracy and consistency. No further third-party verification or on-site visit is required. The National Australian Built Environment Rating Scheme (NABERS) is a national initiative managed by the Australian government that addresses the in-use energy efficiency, water usage, waste management and indoor environment quality of a building or tenancy and its impact on the environment [56]. The users can either use the free online calculator to get an idea of how well their building is performing or seek an accredited NABERS rating by finding a NABERS Accredited Assessor, who will collect and verify all the data for a rating according to the NABERS rules or validation protocols. As noted, only a few rating systems involve the measurement of actual performance and the requirement of third-party verification is varied and limited. Thus, POE is important and necessary to capture the actual performance of the buildings. While POE can be a part of a green building certification system or can be used in conjunction with certification systems, it is a distinct approach and set of techniques and it can be used for non-green buildings as well. 2.3.2 History, definition and benefits of POE An extensive literature review [57] stated that the history of modern-day POE methods dates back to the 1960s, although not all the studies conducted then were called POE. In the 1960s, Sim Van der Ryn of the University of California, Berkeley, and Victor Hsia of the University of Utah conducted a systematic assessment of university dormitories from the occupants’ point of view. Around the same time in England, Peter Manning of the University of Liverpool conducted a study of the physical environment and emotional sensations experienced by people within office buildings [58]. The first publication with the term “POE” in the title was authored by Herb McLaughlin of KMD Architecture in San Francisco in the AIA Journal issue of January 1975. Other pioneers in the 1960s-1980s include Thomas A. Markus [59] of the University of Strathclyde, the UK, David Kernohan and his colleagues [60] at the Architecture Research Group, Victoria University of Wellington, New Zealand, and Gerald Davis [61] at the International Centre for Facilities in Ottawa, Canada. The concept and terminology became more mainstream when, in 1988, Preiser, Rabinowitz and White wrote a POE textbook, where POE was defined as “the   13  process of evaluating buildings in a systematic and rigorous manner after they have been built and occupied for some time” [62]. Book “Building Evaluation Techniques” by George Baird et. al. in 1996 introduced 120 evaluation concepts, techniques and tools in terms of “how to do” POEs [63]. 
As POEs become broader in scope and purpose, in 2002 an industry-accepted definition of POE was stated as “any activity that originates out of an interest in learning how a building performs once it is built (if and how it has met expectations) and how satisfied building users are with the environment that has been created” [5].  In 1997, the concept of POE was expanded upon, when Preiser and Schramm proposed an integrated framework of building performance evaluation (BPE). In this framework, POE represents only one of the six internal review loops (Figure 2-2). BPE focuses on the entire life of the building, from planning, programming, design, construction, occupancy, to adaptive re-use or recycling. Although there is a trend to regard BPE as a new name for POE (and in some literature the definition of BPE is exactly the definition of POE), we acknowledge the difference between POE and BPE and will only focus on POE in this paper (i.e., the occupancy phase of a building’s life cycle).  Figure 2-2 Building Performance Evaluation (BPE) process model (Source: Preiser 2005 [57])   14  POE plays an important role in the life-cycle of a building: feedback. It offers a wide range of activities and benefits, including: assessment of building performance, exploration of relationships between inhabitant behavior and building resource use, optimization of the indoor environment for inhabitants, more informed decisions about future building design, and opportunities to enhance the dialogue within design teams and their partners [64,65]. 2.3.3 Analysis of POE Projects This section describes the quantitative and qualitative analysis of the 146 POE projects.  2.3.3.1 Buildings assessed: types and countries Starting with the list of building types on Wikipedia, we modified the categories slightly (e.g., separated offices from the more general category of commercial buildings) with the aim of showing the research focuses clearly. In this paper, building types are defined as follows: • Commercial building: including supermarkets, clubs, convention centers, etc.  • Office: including office buildings and those mixed-use commercial buildings where the focus of the POE research was the office area. • University building: including sport center, canteen, cafeteria, and others with multiple functions in universities. • Educational building: including kindergarten, school, preschool, childcare, library, and gallery. • Medical building: including hospital, healthcare, and cancer support center. • Residential building: including house, apartment, flat, dwelling unit, villa, and dormitory. • Transport building: including railway station and airport terminal building.  • Government building: including court, museum, and post office. In many cases, a POE project evaluated more than one building. Thus, Figure 2-3 shows both the number of projects and number of buildings, organized by building type. We found that residential buildings were the most popular research targets, followed by office, university   15  buildings and educational buildings. This is not a surprise since these buildings are where people spend most of their time: living, working, and studying.    Figure 2-3 Number of POE projects and number of buildings per type of building  POEs for different types of building are often very different in terms of both their purpose and methodology. 
POEs of residential buildings often focus on occupants’ experience and use of facilities, and therefore, almost every project would use an occupant survey or interview as the research method. POEs of office buildings are typically interested in occupants’ comfort and productivity, and the more sophisticated of these would utilize both a survey and physical measurements of IEQ. POEs of university building are variable but, depending on the objective, could be similar to the POEs of either office or residential buildings. POEs of kindergartens and   16  schools usually focus on the efficiency of teaching activities, sometime including the analysis of children’s behaviors, and thus, observation is the key component of the methodology. POEs of medical buildings are typically quite distinct from other POEs: on one hand, they use variable methods to evaluate the general user experience (e.g., accessibility and wayfinding); on the other hand, medical buildings have strict requirements on IEQ (e.g., sound insulation and indoor air quality (IAQ) of wards, which would require in-situ physical measurement of IEQ). We also recorded the country in which the buildings were assessed, again in terms of both number of projects and number of buildings. Figure 2-4 and Figure 2-5 show that the UK, the US, China, Australia, Canada and Malaysia are more active in POE research.    Figure 2-4 Number of POE projects per country   17   Figure 2-5 Number of buildings assessed per country  2.3.3.2 Purposes POE projects are conducted for numerous purposes. Preiser, in 1995, classified three levels of POE: indicative, investigative and diagnostic [66]. In 2008, Hadjri and Crozier stated that “the overarching notion of the purpose of POE is to facilitate the accumulation of information/knowledge that can be subsequently utilized to improve the procurement of buildings to the benefit of all the stakeholders involved” [38]. Although much has been written about POE, the literature still lacks a systematic review and nuanced typology of the purposes. We used voyant-tools.org to analyze the word frequency of the recorded purposes of the 146 projects (Table 2-1). The word frequency test shows that POE projects aim to evaluate, assess, or investigate the buildings’ performance, for the purpose of learning about occupant response, energy use, physical IEQ, performance of specific design features, etc. Some of these words were simply describing the process, while others were more focused on what the   18  investigators were trying to learn. In making this distinction, we propose a hierarchy to categorize the POE purposes as summarized in Table 2-2, and described further below. Table 2-1 Top 20 frequent words in the purposes of POE projects Frequency Rank Word Count Frequency Rank Word Count 1 building(s) 46 11 use 14 2 performance 27 12 design 13 3 occupant(s) 24 13 investigate 13 4 evaluate 20 14 comfort 12 5 energy 19 15 quality 12 6 environment(al) 19 16 green 10 7 user(s) 19 17 IEQ 10 8 satisfaction 18 18 post 10 9 assess 14 19 occupancy 10 10 indoor 14 20 thermal 9  Table 2-2 Classification of POE Purposes Purposes Description with examples from the literature Level 1: Direct Purposes Evaluate Design To examine design innovations [67], design features for certain group of occupants [68–70], or the design process of a project [71]. 
Evaluate Occupants To evaluate occupants’ comfort, satisfaction, well-being, or health [72–76]; investigate the factors that affect their satisfaction [77–80]; understand their opinions or experiences of a space [81–84]; assess their productivity [85,86]; understand occupant behavior [87–91]; assess occupant opinions of green building rating tools [92,93]; or evaluate the sociality of occupants [94]. Evaluate Energy Performance To understand the energy use, usually in a case with energy retrofit, renovation, or energy-saving strategy [95–97]; or for benchmarking [98]. Evaluate IEQ To measure one or more physical characteristics of IEQ: thermal condition [99,100], lighting [101,102], indoor air quality [103], acoustics [104]. Evaluate Facility To assess the quality and functionality of facilities [105–108], safety performance [109], or to inform the maintenance management [110,111].   19  Purposes Description with examples from the literature Level 2: Indirect Purposes Identify issues To find functional failures or defects [112,113], investigate overheating risk [114–116], expose issues related to occupant control [117], etc. Inform future projects To provide suggestions for future refurbishment/retrofitting projects [118] or design [119]. Improve POE method To inform the development of a POE methodology/software [120,121] or the development of a component of POE such as questionnaire [122,123] Impact standard/criteria To provide basis for guidelines/standards for IEQ such as lighting design [124–127] or to test the existing green building standards [128,129] Evaluate Technology To assess the effectiveness of certain technology, i.e., mixed-mode air conditioning [130], ‘passive downdraught evaporative cooling’ [131], mechanical ventilation systems with heat recovery [132], natural ventilation [133], an integrated façade [134], etc.  Validate models Use actual data to validate thermal comfort model [135], glare probability model [136], energy model [137], etc.   Level 1, “Direct Purpose”, includes the direct evaluation, measurement, or assessment of the topic in question, including the design, occupants, energy performance, IEQ and facilities. A POE project could, and usually does, have several level 1 purposes, e.g., investigate effects of IEQ on occupants’ comfort [138,139], reveal relationships between human factors and IEQ satisfaction [140], evaluate the impact of design features on well-being outcomes [141], etc. Figure 2-6 shows the concurrence of level 1 purposes, where the area of the circles and intersections indicate the relative number of projects found in this literature review. The most common focus of a POE evaluation is on the occupant, followed by IEQ, energy, design and facility.   20   Figure 2-6 Venn Diagram of POE Direct Purposes  Level 2, "Indirect purpose”, relates to the question “why POE”, or what ultimate impact is one trying to have. A project could have more than one indirect purposes as well. By evaluating one or more aspects of a building, a POE project usually aims to contribute to a body of knowledge about the individual building or, ideally, to generalize lessons learned for a broader application. These could include identifying issues, design strategies, or problems that affect building performance, influencing future projects by helping design teams and owners make more informed decisions, improving future POE methods, impacting building standards or green rating systems, evaluating the effectiveness of technologies, or validating predictive models.  
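The purpose texts were analyzed with voyant-tools.org; for readers who prefer a scriptable route, a minimal Python sketch of an equivalent top-20 word tally is shown below. The file name poe_projects.csv, the "purpose" column, and the small stop-word list are illustrative assumptions, not artifacts of the actual review, and the sketch does not merge singular/plural forms the way Table 2-1 does.

```python
# Minimal sketch: approximate the word-frequency analysis behind Table 2-1.
# Assumes a hypothetical CSV "poe_projects.csv" with one free-text "purpose"
# field per project; the actual study used voyant-tools.org.
import csv
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "to", "a", "in", "for", "on", "with", "an", "is", "or"}

counts = Counter()
with open("poe_projects.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        words = re.findall(r"[a-zA-Z]+", row["purpose"].lower())
        counts.update(w for w in words if w not in STOPWORDS)

for word, n in counts.most_common(20):
    print(f"{word}: {n}")
```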
Of the projects we investigated, nearly 30% were intended for some sort of comparison, although we did not categorize comparison as its own separate purpose. Of these: 12 projects compared the actual performance to predicted performance from models, simulation or targets; 16 projects involved comparison between green building performance and non-green building performance (including specific features such as natural ventilation); six projects compared occupants’ satisfaction to benchmark results; and five projects compared IEQ measurements to standard requirements. Other comparisons included pre- vs. post-retrofit, and new vs. old homes or facilities.   21  2.3.3.3 Methods POE methods can broadly include energy and water assessment, IEQ physical measurements, occupant survey questionnaires, focus group meetings, structured interviews, visual records, walkthroughs, and technical measurement of building structure, services and systems [35,37]. A few projects used window opening sensors or GPS-enabled mobility tracking to study occupant behaviors. For the 146 projects studied, we used the following categories to track the use of the most common methods: • Subjective methods a. Occupant survey: including standardized occupant satisfaction survey, thermal comfort survey, visual comfort survey, and in most of the cases, customized surveys. These can include questions that inquire about “how do you feel right now” or “general satisfaction”.  b. Interview: including structured or semi-structured interviews and focus group meetings, usually with occupants, sometimes with experts.  c. Walkthrough: including expert tours meant to identify issues, usually along with photo/video recording, design/condition checklists, and observation forms. • Physical measurements a. IEQ in-situ measurements: i. Thermal condition (infrared thermal imaging, sensors/meters for temperature, relative humidity, air velocity, etc.) ii. Lighting (illuminance and luminance meters, high dynamic range (HDR) imaging cameras) iii. IAQ (sensors for CO2, TVOC, formaldehyde, CO, respirable particles, etc.) iv. Acoustics (sound level meters, reverberation test) b. Energy: assessed via audit, sensors, meters or bills. Water: assessed via meters or bills.   22   Figure 2-7 Percentage of projects that used certain method (note that most projects used more than one method)  As Figure 2-7 shows, occupant survey is the most widely used method (81.51% of projects), probably because it could help quantify subjective opinions through the use of questions with scaled responses, and then benchmark the results. In general, subjective methods like walkthrough, interviews and surveys (which might include qualitative, open-ended questions) are more commonly used because they are inexpensive (no need for equipment associated with physical measurements) and they can help identify problems quickly. When the researchers walk through the building, they can better relate and understand the occupants’ perspective to the context for which they are responding to a survey. But challenges do exist for these subjective methods, e.g. incomplete surveys, participants’ misunderstanding of the questions, potential recall biases in answers, etc. In terms of IEQ in-situ measurements, 42.47% of the projects measured thermal conditions, while only 13.70% of the projects measured acoustic condition, which substantiates a common understanding in the industry that acoustics is a relatively “ignored” area of IEQ. 
However, even fewer projects (less than 10%) measured water consumption, and there is not much attention paid to energy (only 26% of projects calculated energy consumption). This is most likely simply a result of the literature review methods, where we used “post-occupancy evaluation” as the key word, and many research   23  projects that evaluate energy performance separate from occupant issues will not likely use that phrase. In order to explore the trends in the use of these methods, we plotted the percentage of projects that used each of the methods every year (Figure 2-8). No explicit trends were recognized except for energy measurement and survey. As Figure 2-8 shows, energy is fading from POE research, again perhaps just representing a trend in the term “POE” being used for such research. In contrast, the occupant survey is gradually becoming a must for a POE project, confirming that there is increasing attention being paid in the building industry on issues of occupant health and wellbeing. In addition, while water and acoustics measurements are the least used methods, it seems that the attention on acoustics is increasing in recent years while that on water is decreasing. Nevertheless, these trends are not statistically significant, and this variability may relate largely to the sets of keywords selected for this literature review.    Figure 2-8 Trends of the usage of POE methods    24  Ethics and privacy are serious concerns in POEs, especially with the subjective POE methods that are human participants-related. An ethical review is necessary before any human-related research. The ethical review processes differ across countries and institutions, but a book by Wagner et al. [142] provided general guidance for ethical review applications including issues in recruitment, risk identifications, data storage, informed consent, etc. Anonymous questionnaire is common practice to protect participants’ privacy but anonymous data collection can be difficult in research that needs to identify participants, e.g. pre and post comparisons, effects on or of gender and age, etc. 2.3.3.4 POE protocols Some of the papers we reviewed examined the overall performance of the building—including energy, occupants, IEQ and more—with a systematic methodology. For these studies, we call the package of POE methods a “protocol”, and Table 2-3 provides a brief comparison of 16 existing POE protocols we found from the literature and in practice. The “year” either means the year in which the protocol was first developed or the year of the first related publication (if the year of development is unknown).    25  Table 2-3 A comparison of current POE protocols POE protocol Ref. Year Developer Country Building Type Aspects Evaluated Notes Post-Occupancy Review of Building Engineering (PROBE) [143–145] 1995 Energy for Sustainable Development, William Bordass Associates, Building Use Studies, Target Energy Services  UK Office, University, Educational, Medical, Government • Energy audit by Office Assessment Method (OAM) • BUS occupant survey • Design and Construction • Maintainability • Control Issues • Review of performance • Methods used may be different for the 23 case studies.  • One report for one case study. 
CBE Building Performance Evaluation (BPE) toolkit [146], [147] 2000 Center for the Built Environment (CBE) at UC Berkeley US Office, University, Government • Occupant IEQ satisfaction survey about thermal comfort, air quality, acoustics, lighting, cleanliness, spatial layout, and office furnishings • Indoor Climate Monitor (ICM): device with sensors for CO2, illuminance, globe temperature, air velocity, dry bulb temperature, and RH • Portable UFAD Commissioning Cart (PUCC): measure T at many levels for a space that is conditioned by a stratified system such as an underfloor air distribution system (UFAD) • Sound level pressure meter • Web-based survey with online reporting • Software and hardware to support PMP-based IEQ analysis • GIS-enabled floor plan maps • Scorecard and report generation tool Cost- effective Open-Plan Environments (COPE)  [148,149] 2000 National Research Council Canada Canada Office • A cart-and-chair system to measure sound level, T, RH, air movement, CO2, CO, total • 779 workstations in nine buildings • Followed by many reports and analyses   26  POE protocol Ref. Year Developer Country Building Type Aspects Evaluated Notes hydrocarbons, methane, and illuminance for about 10 minutes • Night measurement of illuminance and Speech Intelligibility Index • A 27-item occupant satisfaction survey Health Optimization Protocol for Energy-efficient buildings (HOPE) [150,151] 2002 14 organizations in nine European countries  Europe Office, Residential • Inspection checklist  • Interviews with building managers • Occupant IEQ satisfaction survey • Measurements of chemical, biological and physical parameters • The database of 164 buildings, conclusions and guidelines are available on its website. NEAT [127,152,153] 2003 Center for Building Performance and Diagnostics at Carnegie Mellon University US Office • Electricity and gas bills • IEQ snap-shot: NEAT cart to measure T, RH, CO2, CO, PM and TVOC; hand-held sensors to measure light levels, radiant temperature, air velocity, and noise level • Occupant: Cost-effective Open-Plan Environments (COPE) environmental satisfaction questionnaire, a long-term survey, and interviews • Technical Attributes of Building Systems documenting • Thermal envelope evaluated by thermographic camera  • NEAT cart provides automatic data logging. • NEAT interface has been developed.   27  POE protocol Ref. Year Developer Country Building Type Aspects Evaluated Notes Whole Building Cost and Performance Measurement [154] 2005 Pacific Northwest National Laboratory US Office (federal buildings) • Water • Energy • Maintenance & Operations • Waste Generation and Recycling • IEQ • Transportation • For each aspect above, metrics are identified as required and optional. • Building and Site Characteristics are collected first to filter buildings, allowing a valid comparison. • Published the second version in 2009 • Recommend data visualization charts • Disclose the selection criteria of the metrics EcoSmart [155] 2005 Stantec Consulting Ltd. 
(formerly Keen Engineering) Canada Office • 14 Components: Kick-off discussion with Owner, Kick-off discussion with design team, Energy Consumption, Water Consumption, Information from building operator, Information from Occupants (web-based survey), Information from Occupants (Qualitative), Washroom and Washroom Fixture Evaluation, Indoor Air Quality Measurements, Lighting Measurements, Acoustic Measurements, Thermal Comfort Measurements, Wrap-up discussion with design team and occupant representatives, Process Reflection and Conclusions. • Available in Excel format • For each component, the protocol defines whether it is required or optional, explains the purpose, the evaluator skills, participants in discussion, time required, and cost.  • CBE occupant satisfaction survey and CBE operator survey are recommended.   28  POE protocol Ref. Year Developer Country Building Type Aspects Evaluated Notes Performance Measurement Protocol (PMP) [156] 2010 ASHRAE, USGBC, CIBSE US Office, Commercial • Energy and water use • IEQ measurements: thermal comfort, acoustics, IAQ, lighting/daylight • Occupant surveys: CBE survey is recommended. • Three levels—Basic (indicative), Intermediate (diagnostic), and Advanced (investigative). Each level measures the six aspects to different details. • Articulates what should be measured, measurement methods and their cost, recommended indicators, industry standards, and benchmarks. Creative Energy Homes (CEH) [121,157] 2010 University of Nottingham UK Residential • Electricity (individual power circuits and appliance meters), water use, energy and heat meters • IEQ monitoring: sensors for T, RH, air quality • Occupancy patterns and space use monitoring using a real-time location tracking system (ultra-wideband radio frequency) • Every CEH house has smart monitor system and display screens installed. • Developed software to process data Building Occupants Survey System Australia (BOSSA) [32] 2011 University of Sydney, University of Technology Sydney Australia Office • IEQ snap-shot: BOSSA Nova cart to measure T, air velocity, RH, CO, CO2, TVOC, formaldehyde, ambient sound, and illuminance • Occupant: BOSSA Time-lapse survey, BOSSA Snap-shot surveys, about 9 IEQ dimensions • Online survey results populated the BOSSA database. NRC [49,158] 2012 National Research Council Canada Canada Office • Energy: whole building utility bills (sub-systems & water, if available) • IEQ snap-shot: HDR photography and the NICE (National Research Council Indoor Climate Evaluator) cart to measure T, air • 12 pairs of green and non-green buildings were matched based on building characteristics.   29  POE protocol Ref. 
Year Developer Country Building Type Aspects Evaluated Notes velocity, RH, formaldehyde, CO2, CO, PM, illuminance, luminance and sound pressure • IEQ monitoring: the “Pyramids” to collect a subset of IEQ parameters collected by the cart for several days • On-line questionnaire about environmental satisfaction, job satisfaction, health, absenteeism, environmental attitudes, commuting patterns • Interview with building manager Tsinghua protocol [159,160] 2013 Key Laboratory of Eco Planning & Green Building, Tsinghua University China Office • Energy metering • IEQ monitoring: T, RH, CO2 • IEQ snap-shot: illuminance, sound intensity • Occupant: IEQ satisfaction survey • Measured energy consumption of 31 green and 481 non- green buildings • IEQ measurement in 10 green buildings International Institute for a Sustainable Built Environment (iiSBE) protocol [161,162] 2014 Ryerson University, University of British Columbia, University of Manitoba Canada Office, University, Educational • Energy and water bills or meters • IEQ snap-shot of lighting, thermal, acoustics and air quality • Occupant survey based on the survey of NRC • Interviews with owners/managers • Structured walkthrough with building managers • Design documents review • For each aspect, three types of performance data were collected: Actual, Predicted, and Reference values or benchmarks for typical buildings of similar use in the region.   30  POE protocol Ref. Year Developer Country Building Type Aspects Evaluated Notes A Diagnostic POE Model for an Emergency Department [120] 2014 Guinther, Lindsey; Carll-White, Allison; Real, Kevin US Medical • IEQ snap-shot: sound and lighting level meters • Occupant: Behavioral Mapping, Occupancy counts, Staff Questionnaire, Patient and Visitor Questionnaire, Focus groups, Interviews • Use of space: Walkability Studies, Waiting Times, the frequency of using equipment, etc. • Phase 1 and Phase 2 data collection, each with a set of methods. POE framework for higher education residence halls [163] 2015 Alborz, Nakisa; Berardi, Umberto US, Canada Residential • Electricity, water, gas consumption via meters/bills • IEQ: building automation controls reading of T and RH, student survey of indoor sound insulation • Occupant satisfaction survey about the controllability of IAQ parameters and about building controls ease of use • Commissioning, maintenance program, use of building automation control systems or Building Energy Management Systems, and end-user education efforts were evaluated by documentation and survey of FM personnel. • Concluded 12 POE indicators with data collection methods Post-Occupancy Evaluation for Multi-Unit Residential Buildings [164] 2016 Open Green Building Society Canada Residential • Required steps: Kick-off meeting and basic information gathering, Building Manager Survey, Occupant survey, Energy and water use (ENERGY STAR® Portfolio Manager is recommended) • Optional: Interviews with residents • Informative guide for administrators, without specifying tools/devices   31  A landmark protocol is the PROBE (Post Occupancy Review of Building Engineering) study, in which three series of 23 case studies in total were evaluated using standardized methodology (evolving slightly from the first to the last case study), from 1995 to 2002 in the UK. The BUS (Building Use Studies) survey used in the PROBE studies has been used to evaluate over 700 buildings worldwide to date [165]. 
The BUS survey provides both a domestic (housing) version and a non-domestic version, and has become an integral part of many programs such as NABERS in Australia and Arup Appraise. The Center for the Built Environment (CBE) at UC Berkeley is another pioneer in POE studies. The CBE occupant IEQ satisfaction survey is a web-based questionnaire and reporting tool developed in 2000 that has been implemented in over 1,200 buildings, with over 100,000 individual occupant responses (as of November 2017) [166]. Some protocols such as EcoSmart and ASHRAE’s Performance Measurement Protocol (PMP) (described below) recommend using the CBE survey. CBE was also the first to use an IEQ mobile measurement cart (in the 1980s), which will be introduced with more details later. After the release of the PMP protocol, CBE adjusted their toolkit to better support the PMP protocol with both hardware and software [147]. Another well-developed protocol is NEAT (National Environmental Assessment Toolkit), developed in 2003 by a research team at Carnegie Mellon University in partnership with the U.S. General Services Administration for POE of 20 commercial office buildings. The core components of the NEAT toolkit include both hardware and software, including the IEQ cart, occupant survey, and Technical Attributes of Building Systems documenting. Later, NEAT was used for many other projects and in some cases included energy and thermal envelope evaluation.  Another remarkable protocol is the PMP (Performance Measurement Protocols for Commercial Buildings), jointly developed by ASHRAE (American Society of Heating, Refrigerating, and Air-Conditioning Engineers), USGBC (The U.S. Green Building Council) and CIBSE (The Chartered Institution of Building Services Engineers, U.K.). PMP is the only protocol that classifies three levels of measurement and has the most published details: it specifies what should be measured, measurement methods and their cost, recommended indicators, industry standards, and benchmarks. An application of the basic level of PMP can be found in [167].   32  The protocols in Canada are correlated with each other. As part of the COPE project sponsored by National Research Council Canada (NRC), a field study of IEQ was conducted in 2000-2002 to examine the relationships between measured physical conditions and occupant satisfaction. The methodology evolved into the NRC protocol in 2012. In the industry, Keen Engineering (now Stantec) developed the EcoSmart protocol in 2005 and tested it in six Canadian buildings in 2006. Some of the researchers involved in the development of the EcoSmart protocol later created the International Initiative for a Sustainable Built Environment (iiSBE) protocol. The iiSBE project was funded by National Sciences and Engineering Research Council of Canada, iiSBE Canada, Stantec, and Ryerson University. Nine Canadian buildings were evaluated using the iiSBE protocol in 2014. The occupant survey used in the iiSBE project was based on the survey of NRC protocol in 2012. Recently, some researchers involved in the iiSBE project helped develop an open-source POE template for multi-unit residential buildings in British Columbia, Canada.  As noted from Figure 2-4, POE projects in Asia are booming, but few protocols have been proposed. This could be because Asian projects are highly case-dependent and often use subjective methods only, especially in Malaysia and Turkey. In China, projects use variant methods and some protocols are emerging. 
For example, Tsinghua University’s protocol has been applied to many green and non-green buildings. Table 2-3 indicates that the current protocols are mostly aimed at residential buildings and office buildings. POE of medical buildings is more complicated. Rather than developing a protocol, the government of Alberta, Canada proposed a 10-step methodology to guide how to develop a BPE scorecard for healthcare facilities [168]. So far, academia has been the main practitioners of POE while the industry has shown increasing interests and efforts in either applying existing, or developing their own, POE protocols. Most of the POE protocols summarized here include IEQ measurement, which generally includes numerous sensors and equipment. UC Berkeley created the first mobile instrumented cart in 1985 (described in [169]), which then evolved to a more sophisticated wireless cart, first used as a portable wireless monitoring system (PWMS) to support commissioning of underfloor air distribution systems [170] (Figure 2-9). The idea of an IEQ cart has been widely adopted later   33  in many studies and protocols (Figure 2-10). In addition to saving measurement time, a major benefit of an IEQ cart is to enable consistent measurement of different levels of temperature. The mobile cart is usually placed in a location for several minutes as a snapshot of IEQ, i.e. the short-term/point-in-time measurements of IEQ.  Figure 2-9 Evolution of CBE carts (Sources: [147,169–172])  Figure 2-10 Examples of other IEQ carts (Sources: [32,49,153])   34  As opposed to point-in-time measurements, there is increasing interest in long-term measurements of IEQ. Although there is no explicit, widely-acknowledged definition of the length of “long-term”, in practice, it usually means at least three months. In building certification systems such as LEED and WELL, “long-term” mostly refers to one year. Long-term monitoring of IEQ requires sensors installed on the walls or devices placed on desks for months and years, which has historically been expensive but recently been affordable as sensor technologies developed (over 60% drop in sensor costs from 2004 to 2016 [173]).  The University of Sydney has developed SAMBA [31] which is an integrated IEQ sensor device with a low-cost suite of sensors and modest data-processing capabilities to autonomously measure key IEQ indicators: air temperature, globe temperature, relative humidity, air speed, sound pressure level, illuminance, carbon dioxide (CO2), carbon monoxide (CO), Formaldehyde, and total volatile organic compounds (TVOC). SAMBA data are autonomously transmitted to a cloud database, and a web service called IEQAnalytics provides a dashboard of real-time visualization of all measured IEQ parameters and calculated indices. The SAMBA project was initiated in 2012 and collected data in real buildings starting 2016. Over 200 SAMBA devices have been installed in 46 office buildings in Australia as of July 2019. The SAMBA database was used and introduced further in Chapter 4. About the same time of SAMBA project, Tsinghua University also developed an integrated IEQ sensor device called IBEM together with cloud server, database, website platform and an application on mobile phone that enable measurements and visualization of five IEQ parameters: air temperature, relative humidity, CO2 concentration, PM2.5 concentration and illuminance data. The research team have put 198 IBEM devices in 41 green office buildings to collect continuous IEQ data from June 2017 to August 2018 [174]. 
SAMBA and IBEM were created by academic groups to continuously monitor IEQ for research purposes, but it is recognized that there has emerged large-scale long-term monitoring initiative in the industry using commercialized IEQ sensors that are usually with lower measurement accuracy.   35   Figure 2-11 Examples of long-term IEQ monitoring framework (Sources: [31,174])  Another critical component of a POE protocol is an occupant survey. BUS and CBE IEQ survey are the most widely used standardized surveys. They ask the respondents to rate various aspects of performance on a 7-point satisfaction scale to quantify occupants’ general satisfaction. Their huge databases enable benchmarking, comparison and further analysis of buildings. Other standardized questionnaires include DQI (Design Quality Indicators), OLS (Overall Liking Score), REF (Ratings of Environmental Features), SCATs (Smart Controls and Thermal Comfort), COPE (Cost-effective Open-Plan Environments), HOPE (Health Optimization Protocol for Energy-efficient Building), BASE (Building Assessment Survey and Evaluation), PWESQ (Physical Work Environment Questionnaire), NEP (New ecological paradigm), etc. Previous research has   36  provided reviews of the questionnaire-based methods [175–177]. Gupta and Chandiwala summarized short-term and long-term techniques to collect occupant feedback, with a focus on applications for housing [178]. A review from CBE discussed the subjective and objective IEQ measurement methods together [11].  2.4 Discussion 2.4.1 Emerging POE research topics In addition to evaluating the ways in which POE is used to evaluate a building’s performance, this review also identified some broader, emerging research topics related to this field.  1. Visualization of POE POE is a critical investigative methodology for understanding building performance. But it could be vastly more effective with improvements in how the results are analyzed, presented, and interpreted. POE results are often shown by charts in a report. To enhance the feedback to owner and occupants, BIM (Building Information Modelling) and GIS (Geographic Information Systems) are sometimes used to show the spatial mapping of occupant satisfaction and IEQ [176,179]. EnViz is a 3D-model-based software application that was developed to visualize IEQ data [180,181]. CBE at UC Berkeley also developed two interactive tools that allow users to explore the large sets of thermal comfort field data that combine both surveys and physical measurements [182]. 2. Analyses of Occupant Survey Databases In the last 5-10 years, researchers have been statistically analyzing POE databases to address novel questions about the performance of buildings from the perspective of human response, particularly in light of changes we are seeing in workplace design, such as a greater attention to green building strategies, and the prevalence of open plan offices. Examples of these studies, using subsets of BUS, CBE or other occupant survey databases include:   37  • Performance of green vs. conventional buildings [46,183–185]; • The advantages and disadvantages of a variety of forms of benchmarks for IEQ satisfaction [186]; • The effect of spatial configuration (open-plan office vs. 
enclosed office) on IEQ satisfaction [187]; • Gender differences in office occupant perception of IEQ [188,189]; • The relationships between occupant satisfaction and indoor environmental parameters and building feature [190];  • The relationships between individual IEQ factors and overall workspace satisfaction [9];  • Influence of non-IEQ factors (office type, spatial layout, distance from window, building size, gender, age, type of work, time at workspace, and weekly working hours) on occupant satisfaction [191].  3. Measurement of Occupancy Measurement and verification of energy savings is an important component of green building certification as well as of energy retrofit projects. In many cases, however, the energy simulation during the design phase does not reflect the actual use patterns of the building, resulting in large gaps between the predicted and the actual energy use [162]. To solve this gap problem, the energy model should be revised to better match occupant density patterns. Yang et. al. considered the variability of daily occupancy and the additional occupancy due to visitors in institutional buildings to better predict the energy performance [192]. Using simplified baseline models to illustrate the effect, Liang et. al. incorporated an occupancy variable to a simple regression model developed by Lawrence Berkeley National Laboratory (LBNL) model that used just outdoor air temperature and time [193]. Niu et. al. developed a virtual reality (VR) integrated approach to help building designers collect occupancy information, and then used that to identify design strategies that could guide occupants to behave in the most energy-efficient way [194]. This is a unique approach for integrating occupancy information more effectively into the building design process.   38  As we move towards evaluating actual performance, as opposed to predicted performance, it becomes increasingly important to use actual occupancy data. There have been ongoing conversations in the building industry about whether the conventional metric of Energy Use Intensity (EUI, kBtu/sq.ft) should be expressed in terms of energy used per person. Vale and Vale [195] took this idea even further, and said that future residential POE should connect the performance of the building with the inhabitants’ lifestyle by linking the overall building/site consumption to the number of occupants, so that we could measure the resource use per person, waste production per person, transport, income, etc.  A simple way to calculate the actual occupancy is based on records of human resources, class enrolment numbers, class schedules, and recreational schedules [161], but this is not necessarily accurate. Techniques are available to measure the actual occupancy data [196], or proxies for occupancy (i.e., using questionnaire and interviews, radio frequency, infrared, ultrasound, video cameras, CO2, GPS (global positioning system), cellular data, WLAN (wireless local area network), Bluetooth, etc.). Researchers at University of Nottingham [197] tested three unobtrusive occupancy measuring technologies (i.e., Passive Infra-Red (PIR), Carbon Dioxide (CO2), and Device-free Localization (DfL)), and found that windows and occupants’ metabolic rates had significant impacts on the reliability of the PIR and CO2 data. DfL estimates the location and the activity of a person by analyzing its shadowing effect on surrounding wireless links. By applying a deep learning approach, Wang et. 
al proved that the DfL system could achieve 85% or higher accuracy based on experiments in laboratory and experiments in an apartment [198]. In addition, Sensible Building Science, a start-up company from the University of British Columbia, is engaged in one of the early efforts to leverage existing real-time Wi-Fi activity data to produce occupancy data for building automation system optimization [199]. Aftab et. al. [200] recently developed an occupancy-recognition algorithm to count the number of people crossing a virtual reference line (near the entrance) in the video captured by a fisheye camera. The real-time video processing can provide 80-90% accuracy of occupancy recognition.    39  2.4.2 Status and future research Although POE has not become a norm in the building industry, it has developed rapidly over the past decade and will continue growing as more people realize the importance of evaluating actual real-time performance and the important role of occupant feedback. The methodology of POE has been sufficiently well developed that many POE protocols are in widespread use in the UK, the US, Canada, Australia and other countries. But no standardized POE protocol has gained worldwide or nation-wide dominance. It might be that the inherent nature of POE—i.e., that its purpose and associated methods are highly case-dependent—makes it difficult to have a dominant standardized protocol for all the POE projects. Notwithstanding, one prevailing protocol for one type of buildings is highly possible, especially for residential buildings and office/commercial buildings where most of the research efforts have been devoted.  From a closer perspective, it is inspiring that occupant feedback has become the major focus of POE studies, beyond the domain of social scientists. An occupant survey has become an essential piece of most POE methodologies, even by studies within the building sciences, which have traditionally focused on the physical performance of the building. This reflects that a wider range of researchers now acknowledge that it is the people who occupy the spaces that have the power to determine the success or failure of a building. However, researchers should be aware of the nuanced challenges to assessing users’ experience of the built environment, including “defining users, agreeing on the meaning of experience, and organizing if not delimiting what is included in the notion of built environment” [201]. Current POE studies also have limitations. Despite the large number of POE studies that have been conducted, because POE results are largely context-based, the knowledge gained can be difficult to generalize and then feed back to the whole building industry. Moreover, because of the frequent lack of integration between the design, construction and operation phases of a building, many POE projects are limited in terms of linking their evaluation back to the phases that were most responsible for the relative successes and failures. Pati and Pati [202] argue that designers have not fully benefitted from POEs. Integrated Project Delivery (IPD) might help avoid   40  this, by bringing POE experts to the table where designers could pre-identify design decisions that need to be supported by POE.  
If we regard POE as a “technology” and refer to the technology adoption lifecycle proposed by Rogers [203] (Figure 2-12), where the adopters are categorized into innovators, early adopters, early majority, late majority, and laggards, we might argue that POE is just at the first stage–only innovators adopt POE. Some of the barriers to more widespread adoption of POE include the ambiguity of who pays for POE, defending professional territory, split incentives within the procurement and operation processes, lack of agreed-upon and reliable indicators, potential liability issues, exclusion from current delivery expectations, and exclusion from professional curricula, etc. [204,205]. Moore [206] states that there is a chasm between the early adopters (Figure 2-12) where many technologies fail to be adopted by the mainstream. Rating systems for green building design (e.g., LEED, etc.) have already crossed the chasm–they are in the early majority stage. If we want POE to cross the chasm, we need to create a bandwagon effect in which enough momentum builds, and then POE becomes a de facto standard. The momentum can be internally driven (i.e., from building owners, operators, and occupants), or externally driven (i.e., from regulations, policies, LEED requirements, etc.).    Figure 2-12 Technology Adoption Lifecycle (Source: Wikipedia Commons)    41  In our opinion, POE can be more useful if the following transitions are made:  1. From one-off to continuing Most of the POEs are one-off studies. However, in many cases, the studies found some problems that could not be fully explained, or on the contrary, no problems were identified. In some cases, this could be because the scope and methodology were not well defined. Thus, a more effective strategy would be to have a continuing POE with a phased approach to the level-of-detail in the methods; i.e., use relatively inexpensive and easy methods to evaluate broad aspects in the first phase, and then use those findings to decide which areas of the building or performance issues require further in-depth study in subsequent phases. Vischer [204] also mentioned the need for a few, carefully selected indicators of environmental quality and, considering the cost of instrument measurements, she suggested to “use the analysis of user responses to indicate where and when follow-up instrument measures might clarify the nature of the problems identified and indicate possible solutions”. 2. From high-level to detailed Some high-level POE methods are standardized, while the more nuanced details of POE methods are less so and may need to be standardized as well to render more reliable interpretation of the results. For example, high-level whole-building energy performance is easily measured via bills and meters. But we need more standardized methods to understand detailed end-use patterns, or to collect more accurate occupancy data to recalibrate the energy model and, thus, to enable a more fair and accurate comparison between the predicted and the actual performance. 3. From researchers-oriented to owners/occupants-oriented The POE results are often compiled in a report or a paper with all the technical figures and charts. However, non-professionals like the owners and occupants also need to understand the buildings’ performance. Research is needed on how to provide more vivid visualization of POE results.   42  4. From academia to industry Right now, academic researchers are the main developers and users of POE. 
Learning from the success of green building certifications worldwide, industry should play a stronger role in driving the development and implementation of POE.

5. From independent to integrated
POE is often a discrete activity, independent from the ongoing building management. But to exploit the effectiveness of the evaluation, it is better to regard POE as an integrated part of building management. For example, one might continuously feed the results of occupant satisfaction surveys to the building automation control system, or feed the assessed facility conditions to the facility management system, etc. Although it was not a continuous process, an attempt was made by Cao et al., who used a survey to quantify occupant satisfaction and then developed an agent-based model to prioritize maintenance work to achieve maximum occupant satisfaction [207].

2.5 Conclusion
The evaluation of building performance and occupant satisfaction in the post-occupancy phase is relatively under-developed compared to evaluation methods applied during a building's design phase. Yet POE has continued to attract increasing research attention over the past decade. The analysis of 146 POE projects since 2010 shows that residential buildings and office buildings are the most popular research targets, occupant satisfaction is the most common focus, and occupant surveys are the most frequently used method. Many POE protocols have been proposed in the UK, the US, Canada, and other countries, but no single POE protocol has gained worldwide or nation-wide dominance. Some emerging research topics related to POE include visualization of POE results, analyses of occupant survey databases, and measurement of occupancy patterns. Based on the literature review, we suggest five directions for future POE development and applications: from one-off to continuing, from high-level to detailed, from researchers-oriented to owners/occupants-oriented, from academia to industry, and from independent to integrated. This chapter provides a thorough introduction to POE for beginners in this area, as well as informing more seasoned investigators about the trends, gaps, and potential future directions in POE research.

Chapter 3: Paper 2—Investigation of Point-in-Time Thermal Comfort Compliance Criteria

3.1 Introduction
Building upon the knowledge from the previous chapter about how data is collected in POEs, this chapter uses data collected in real buildings to investigate the point-in-time thermal comfort compliance criteria (Table 3-1), which are used to assess the thermal comfort level of a space at a single point in time. ISO standard 7730:2005 [18] prescribes three classes of thermal comfort: Class A (PMV ±0.2), Class B (PMV ±0.5), and Class C (PMV ±0.7). EN 16798:2019 [19] adopts the same three classes (named Class I, II, and III, respectively) for mechanically conditioned buildings. An implicit assumption of the tiered compliance criteria is that a narrower PMV range ensures higher thermal acceptability among occupants, i.e., a lower predicted percentage of dissatisfied (PPD) in Table 3-1. However, pursuing narrower PMV ranges promotes increased energy consumption and costs and may increase the risk of sick building syndrome [28], without actually ensuring any satisfaction benefit in office buildings [27].
In addition to concerns around the energy costs and comfort implications, there are significant challenges in operationalizing the tiered PMV classification because the narrow range of environmental conditions is close to the measurement uncertainty of common sensors [29].  In an acknowledgment of the effect of measurement accuracy of the PMV input variables, both ISO 7730 and EN 16798 also recommend operative temperature ranges based on the PMV model with assumptions of the activity level (met) and clothing (clo) in different building types. For a typical office in summer, the recommended temperature ranges for three classes are 2 °C, 3 °C, and 5 °C (Table 3-1), assuming air temperature is equal to operative temperature, 0.5 clo (thermal insulation for a typical combination of garments in summer), 1.2 met (sedentary office activity), and 60% relative humidity (moderate environment). Although these assumptions are likely to differ from what is experienced in most office buildings, expressing the compliance criteria as a temperature range has the advantage of being more readily understood and implemented by practitioners. Whether these recommended temperature ranges, derived from   45  their equivalent PMV ranges, actually represent the comfortable range of conditions for building occupants is still unclear.   Table 3-1 Three classes of indoor thermal environment in ISO 7730 Class PMV range Temperature range (°C) for a typical office in summer PPD (%) A –0.2 < PMV < +0.2 24.5 ± 1 <6 B –0.5 < PMV < +0.5 24.5 ± 1.5 <10 C –0.7 < PMV < +0.7 24.5 ± 2.5 <15  Increasing criticism of the accuracy of the PMV model to predict comfort in real buildings served as a backdrop for the development of alternative methods to derive comfort temperature ranges from field studies rather than laboratory studies. Most notable are the adaptive comfort models [24–26] which regress neutral operative temperatures with the prevailing mean outdoor temperature. The development of the adaptive models involved the use of occupant survey responses to determine the neutral temperature (either at the building or individual level), and the resulting tool is a predictive model requiring outdoor air temperature as the sole input variable. There is no doubt that much of the success of the adaptive model can be attributed to its simplicity, as well as that it’s based on field data rather than the artificial laboratory conditions. As such, we thought it would be valuable to explore the field data even further, to determine acceptable indoor temperature ranges using occupant responses directly, rather than a predictive model. A large dataset of subjective evaluations of the indoor environment with contemporaneous physical measurements across different contexts (e.g. climate, culture, building types, etc.) is required to comprehensively explore the psychophysical relationship between thermal acceptability and temperature. Such a resource is now available with the release of the ASHRAE Global Thermal Comfort Database II [30]. Combining the original ASHRAE RP-884 database [208] with newly compiled data from field studies around the world, it is the largest global database of thermal comfort field studies to date: 107,583 records contributed   46  from 66 publications from 1982 to 2016 covering 98 cities in 28 countries across 16 Köppen climate types.  
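For reference, the PPD limits cited in Table 3-1 follow directly from Fanger's PMV-PPD relationship as standardized in ISO 7730. The short sketch below is illustrative rather than code from this thesis: the formula is the ISO 7730 expression, and evaluating it at the class boundaries recovers the basis of the <6%, <10% and <15% limits.

```python
# Minimal sketch: Fanger's PMV-PPD relationship (ISO 7730), evaluated at the
# PMV boundaries of Classes A, B and C in Table 3-1.
import math

def ppd(pmv: float) -> float:
    """Predicted percentage of dissatisfied (%) for a given PMV."""
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

for boundary in (0.2, 0.5, 0.7):  # Class A, B, C PMV limits
    print(f"|PMV| = {boundary}: PPD = {ppd(boundary):.1f}%")
# Yields roughly 5.8%, 10.2% and 15.3%, the origin of the <6/<10/<15% class limits.
```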
One of the challenges of using a large database to define acceptable temperature ranges is that different statistical methods, including the selection of input and output variables and algorithms, will produce different outcomes from the same data. Such methodological considerations have not been widely discussed in the research literature and are even less well understood for occupant survey data than for instrumental measurements. Arens et al. [27] used different psychometric scales from three distinct databases to determine the percentage of acceptability in binned operative temperatures. Zhang et al. [209] analyzed the percentage of thermal acceptability votes compared to binned operative temperature measurements. Ryu et al. [210] determined the comfort zone (indoor temperature and relative humidity range) on the psychrometric chart using the criteria -1 < thermal sensation vote < 1, -3 < comfort sensation vote < 0, and 0% < percent dissatisfied < 20%. These slight differences between studies highlight the hitherto unexplored implications of such decisions and the sensitivity of the derived acceptable comfort temperature ranges to the selected analytical procedure. As the number of thermal comfort field studies continually increases, a comparison of data-driven approaches is necessary to determine the most appropriate method for deriving acceptable temperature ranges from psychometric data.

The aim of this chapter is to perform a meta-analysis of the ASHRAE database to investigate the current point-in-time compliance criteria and propose a new method of deriving acceptable temperature ranges based on occupant responses. The specific objectives are:
1. Test the validity of tiered PMV classes as compliance criteria by repeating the analysis by Arens et al. [27] on a large, contemporary thermal comfort database;
2. Recommend a method for deriving acceptable temperature ranges from occupant survey data and discuss the advantages of such an approach from a methodological perspective;
3. Compare the comfort temperature ranges recommended in the ISO 7730 and EN 16798 standards to the newly derived acceptable temperature ranges.

3.2 Methods
To achieve the three objectives above, we applied data analysis techniques to the ASHRAE Global Thermal Comfort Database II, referred to hereafter as the "ASHRAE database", which includes matched point-in-time thermal measurements and occupant responses about their right-now experiences across a diversity of seasons, building types, building-level cooling strategies, ages, and genders (Table 3-2). Physical thermal measurements include air temperature, operative temperature, relative humidity, air speed, PMV and PPD (provided by the original researchers). Occupant responses are recorded with one or more of the four common psychometric scales: thermal acceptability, thermal sensation, thermal preference, and thermal comfort. Table 3-3 summarizes responses to these questions from the database. No personally identifiable information is attached to any record in the ASHRAE database. Because the ASHRAE database collates measurements collected in different research projects by different researchers around the world, not every record contains values for every variable (the complete list of variables is given in [30]). The "Records" counts in Table 3-2 and Table 3-3 are the total numbers of records excluding missing values (NAs).
Table 3-2 Summary of basic parameters in ASHRAE database Season Building type Building-level Cooling strategy Age Gender Autumn: 17,161  Spring: 12,680  Summer: 40,876  Winter: 36,625  Records: 107,342  Classroom: 17,852  Multifamily housing: 10,401 Office: 67,755  Others: 6,555 Senior center: 821  Records: 103,384 Air Conditioned: 32,372  Mechanically Ventilated: 180  Mixed Mode: 26,519  Naturally Ventilated: 47,285  Records: 106,356 Min.: 6  Median: 29 Mean: 32  Max.: 95  Records: 43,576  Female: 30,895 Male: 36,140  Records: 67,035    48  Table 3-3 Summary of subjective answers in ASHRAE database. Thermal sensation is a continuous scale from -3 (cold) to 3 (hot) with 0 being neutral. Thermal comfort is a continuous numeric scale from 1 (very uncomfortable) to 6 (very comfortable). Thermal acceptability Thermal sensation Thermal preference Thermal comfort 0 (unacceptable): 14,045 1 (acceptable): 48,399 Records: 62,444  Min.: -3 Median: 0 Mean: 0.1679 Max.: 3 Records: 104,454 Cooler: 27,725 No change: 43,256 Warmer: 14,518 Records: 85,499 Min.: 1 Median: 5 Mean: 4.31 Max.: 6 Records: 34,481  3.2.1 Analysis of current point-in-time compliance criteria We first used the ASHRAE database to test the validity of the tiered PMV classes. Since thermal comfort standards, as well as the PPD index, refer to “satisfaction”, but the above four metrics don’t speak to this directly, we had to make the following assumptions to equate each of the four different scales to “satisfaction”: • “Acceptable” votes. • “Thermal sensation” votes (TSV) between -1 and 1 (sometimes referred to as the central points). • “Thermal preference” votes of “no change”. • “Thermal comfort” votes equal to or greater than 3.5 (between neutral and very comfortable).  These assumptions are widely used in thermal comfort research, and whilst their statistical validity may be challenged it is beyond the scope of this chapter to do so. For this analysis, we dropped records without one of these four scale responses or a corresponding PMV and used the resulting data to calculate the observed percentage of satisfaction in each PMV class using the assumptions noted above. To further investigate the effect of the PMV class thresholds themselves, we calculated the percentage of satisfaction for ten PMV ranges (from ±0.1 to ±1.0).    49  3.2.2 Methods to derive new point-in-time compliance criteria The analysis of the PMV classes found that they are unhelpful in defining an acceptable thermal environment and suggested that a new approach is required. Practically speaking, methods to derive comfort ranges should limit the necessary input parameters to simplify the measurement and implementation for practitioners utilising them in building operations. For specifying comfort ranges, operative temperature has the advantage of combining the radiant and convective heat exchanges that characterise non-uniform exposures in buildings. However, the number of operative temperature measurements in the ASHRAE database (n = 37,963) is much smaller than that of air temperature (n = 99,911). Moreover, preliminary comparative analysis showed that using air temperature vs. operative temperature resulted in a difference of less than 1 °C for the derived acceptable temperature range. For these reasons, the present analysis expresses comfort ranges using air temperature, which has the additional advantage of being the most commonly controlled parameter in buildings and routinely measured by building management systems.  
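Before describing the two candidate methods in detail, the sketch below illustrates the Section 3.2.1 calculation: the observed percentage of "satisfied" votes within each PMV class, using the four scale-to-satisfaction mappings listed above. It assumes a hypothetical local CSV export of the ASHRAE database with placeholder file and column names (ashrae_db2.csv, thermal_acceptability, thermal_sensation, thermal_preference, thermal_comfort, pmv); the real database uses its own field names, so this is a sketch rather than the analysis code used in the thesis.

```python
# Minimal sketch of the Section 3.2.1 analysis: observed % satisfied per PMV class.
# File and column names are hypothetical placeholders for ASHRAE Database II fields.
import pandas as pd

db = pd.read_csv("ashrae_db2.csv")

# Scale-to-"satisfaction" mappings (the assumptions listed in Section 3.2.1).
satisfied = {
    "acceptability": db["thermal_acceptability"] == 1,
    "sensation": db["thermal_sensation"].between(-1, 1),
    "preference": db["thermal_preference"] == "no change",
    "comfort": db["thermal_comfort"] >= 3.5,
}
answered = {name: db[col].notna() for name, col in {
    "acceptability": "thermal_acceptability",
    "sensation": "thermal_sensation",
    "preference": "thermal_preference",
    "comfort": "thermal_comfort",
}.items()}

for label, limit in {"A": 0.2, "B": 0.5, "C": 0.7}.items():
    in_class = db["pmv"].notna() & (db["pmv"].abs() <= limit)
    for name, is_satisfied in satisfied.items():
        valid = in_class & answered[name]        # drop records missing PMV or this vote
        pct = 100 * is_satisfied[valid].mean()   # observed % "satisfied" in the class
        print(f"Class {label} | {name:<13} n={valid.sum():>6} satisfied={pct:.1f}%")
```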
There are different ways to statistically define which temperature ranges are "acceptable", and these different statistical methods could result in very different temperature ranges. No previous research has discussed this issue from a methodological perspective. We therefore compared two statistical methods for deriving an acceptable temperature range from occupant survey data: one used in previous studies, which rendered unrealistic results for the ASHRAE database, and a new method, which rendered more realistic and stable results. Figure 3-1 illustrates the two methods.

Figure 3-1 Illustrations of the two statistical methods used for deriving acceptable temperature ranges

Method 1: percentage of acceptability in temperature bins
Used in earlier studies [27,209], method 1 involves binning temperature into intervals of 1 °C and using the "Acceptability" scale directly to calculate the percentage of acceptability within each of those temperature bins. If more than 80% of occupants voted "acceptable" in a bin, that temperature bin is deemed acceptable. The acceptable temperature range is defined by the upper and lower temperature bins achieving such levels of acceptability.

Method 2: neutral temperature range
We propose a novel method which involves determining the neutral temperature corresponding to each individual vote of neutrality on the thermal sensation scale (TSV = 0) and defining the acceptable temperature range based on the population distribution. First, we calculated the neutral temperature for each record in the ASHRAE database from the measured air temperature and thermal sensation vote according to the Griffiths method [211] (see Equation 3-1). Then, the range of air temperatures containing 80% of the population's neutral temperatures defines the acceptable range. Neutral temperatures below the 10th percentile and above the 90th percentile are considered outliers, i.e., occupants with extreme thermal preferences. Humphreys and Nicol [212] suggested 0.4 to be an appropriate constant for the Griffiths method but later used 0.5 when developing the European adaptive model [26]. Both constants were tested, and 0.4 was selected because it derived temperature ranges closer to those found when using method 1. An implicit assumption of method 2 is the equivalence of a neutral sensation and thermal acceptability, despite the fact that people may not necessarily consider neutral as their preferred thermal comfort condition [213–215].

T_neutral = T_a − TSV / G    (3-1)

where G is the Griffiths constant (with units of K⁻¹) and we took G = 0.4 in this analysis.

Performance measures of methods
The two methods above are not predictive models per se, so conventional metrics of prediction accuracy are not applicable to evaluate their performance. However, we conducted a test of the reliability of the two methods to compare their performance. The reliability test involved randomly partitioning the database into training and testing sets (80% and 20% of samples, respectively), using both methods 1 and 2 to derive the acceptable temperature range for different building types, and then calculating the absolute difference between the resulting ranges found in the training and testing sets. This is somewhat representative of the systematic error of each method in determining acceptable temperature ranges. The test was run 500 times in order to achieve stable mean differences.
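A minimal sketch of method 2 and the reliability test is given below, under the same hypothetical file and column names as in the previous sketch. It applies Equation 3-1 record by record, takes the 10th–90th percentile band of the resulting neutral temperatures, and repeats the calculation on random 80/20 splits; for brevity it compares whole-dataset ranges rather than ranges per building type, so it approximates, rather than reproduces, the procedure described above.

```python
# Minimal sketch of Method 2 (Griffiths neutral temperatures) and the
# train/test reliability check. File and column names are hypothetical.
import numpy as np
import pandas as pd

G = 0.4  # Griffiths constant (K^-1), as selected in the text

db = pd.read_csv("ashrae_db2.csv").dropna(subset=["air_temperature", "thermal_sensation"])

# Equation 3-1: neutral temperature for every individual record.
db["t_neutral"] = db["air_temperature"] - db["thermal_sensation"] / G

def acceptable_range(t_neutral):
    """Air-temperature band holding the central 80% of neutral temperatures."""
    lo, hi = np.percentile(t_neutral, [10, 90])
    return lo, hi

# Acceptable range per building type (cf. Figure 3-4).
for btype, group in db.groupby("building_type"):
    lo, hi = acceptable_range(group["t_neutral"])
    print(f"{btype}: {lo:.1f}-{hi:.1f} °C")

# Simplified reliability test: 500 random 80/20 splits on the whole dataset
# (the thesis repeats this per building type).
diffs = []
for seed in range(500):
    train = db.sample(frac=0.8, random_state=seed)
    test = db.drop(train.index)
    lo_tr, hi_tr = acceptable_range(train["t_neutral"])
    lo_te, hi_te = acceptable_range(test["t_neutral"])
    diffs.append(abs(lo_tr - lo_te) + abs(hi_tr - hi_te))
print(f"Mean absolute train/test range difference: {np.mean(diffs):.2f} K")
```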
Building-level differences After applying the two methods to derive acceptable temperature ranges for different building types, we noticed the acceptable temperature range may vary between different individual buildings. To further find out the differences between the two methods, we decided to derive acceptable temperature ranges for each building. Since the ASHRAE database does not identify buildings, we used heuristics to develop a proxy building-level unit of analysis based on several different parameters. Records with a unique combination of publication source, city, building type, and cooling strategy were classified as being from the same building. Although this approach is coarse, we deemed it to be sufficient for the current analysis. Once buildings were   52  identified, we used the two methods for deriving acceptable temperature ranges at the building level, rather than by building type across the entire dataset.  3.2.3 Results compared to standards After a comparison of the two statistical methods for deriving new compliance criteria, method 2 was deemed to be more appropriate for deriving temperature ranges from the ASHRAE dataset because its results are more realistic and stable. In order to compare the newly derived acceptable temperature ranges with the temperature ranges recommended by ISO 7730 and EN 16798, we used method 2 to determine the acceptable air temperature ranges for different building types and for summer and winter (swing seasons were dropped). It is recognised that a binary classification of season is not necessarily a robust approach for considering the effects of prevailing weather or climate. However, this coarse level of differentiation is what is used in the standards and we felt was therefore applicable for such a comparison.  3.3 Results  3.3.1 Analysis of current point-in-time compliance criteria Table 3-4 shows the percentage of votes corresponding with thermal satisfaction within the three PMV classes. For thermal acceptability, even the narrowest PMV Class A does not achieve acceptability levels above 80%. The table shows the same result for thermal sensation and thermal preference. Satisfaction as expressed through thermal comfort votes was the only metric to reach levels above the 80% threshold. Importantly, there was no significant difference between the three PMV classes for any of the four psychometric scales tested. This confirms the analysis by Arens et al. in 2010 [27] and supports the general critique by Roaf et al. [28] that PMV classes only encourage greater energy expenditure without necessarily improving occupant comfort. A limitation of this result is that only a small portion of records in the ASHRAE database contain temperatures at different heights, making it impossible to consider the temperature stratification. However, a follow-up analysis of the original ASHRAE RP-884 database by Richard de Dear showed that after considering the local discomfort the resulted percentages of   53  satisfaction only changed slightly and there remain no significant differences between the three PMV classes.  
Table 3-4 Observed percentage of satisfaction in three PMV classes   PMV Class (range)   A (0±0.2) B (0±0.5) C (0±0.7) Thermal acceptability  Sample size (inclusive) 11,200 21.650 26,853 % of acceptability 77.7 77.3 76.8 Thermal sensation  Sample size (inclusive) 17,163 34,080 42,902 % of -1 ≤ TSV ≤ 1 79.0 78.6 78.2 Thermal preference Sample size (inclusive) 15,296 29,989 37,424 % of no change 53.3 53.3 52.8 Thermal comfort  Sample size (inclusive) 4,006 8,319 10,621 % of comfort (vote ≥ 3.5) 81.4 80.8 80.6   Figure 3-2 Observed percentage of satisfaction in PMV ranges (e.g. 0.1 means |PMV|≤ 0.1)   54  Figure 3-2 shows the observed percentage of satisfaction in ten PMV ranges. As the PMV range widens, the percentage of satisfaction only very slightly declines across all scales (about 1% decrease from PMV ±0.1 to ±1.0). This contradicts the claim made in the standards that one will see a decrease of 9% in satisfaction (reciprocal of an increase of PPD) as the PMV ranges from ±0.2 to ±0.7 (i.e., an association determined by the PMV-PPD relationship). The difference between the scales will be discussed in a later section. To further explore the validity of the PMV-PPD model, the observed percentage of dissatisfied (OPD), the reciprocal of percentage of satisfaction, is shown in Figure 3-3. The classic PMV-PPD curve is superimposed for reference. Each dot represents the percent dissatisfied for the corresponding scale in the PMV bin (size = 0.1) and the smooth curves are quadratic regression models weighted by sample size. The OPD is not as sensitive as PPD to PMV, shown by the flatter slopes of the OPD-PMV curves compared to the PPD-PMV curve. This reinforces the earlier finding that, for the purpose of creating PMV classes for comfort standards, narrower PMV ranges around a neutral point do not provide greater levels of satisfaction.   Figure 3-3 Observed vs. predicted percentage of dissatisfied   55  It is interesting to note that in addition to the flatter slopes of the dissatisfaction curves shown in Figure 3-3, the lowest OPD—based on any of the four subjective scales and in relation to the PMV metric—is approximately 20%. This is much higher than the 5% minimum predicted by the conventional PMV-PPD relationship. It is difficult to offer a conclusive explanation for this finding given the diverse range of field studies contained in the ASHRAE database, but a related analysis of the PMV-PPD model using the same database may shed more light on the discrepancy [22]. There it was found that PMV may be a greater source of error than the PPD metric. That study found that if the actual thermal sensation vote is known (or the PMV prediction is accurate) then the predicted dissatisfaction level using the PPD curve is somewhat reliable. To conclude, results in this section proved that the tiered PMV classes are not appropriate point-in-time compliance criteria. 3.3.2 Methods to derive new point-in-time compliance criteria  To derive new point-in-time compliance criteria, two statistical methods were applied to the entire database to derive acceptable temperature ranges for different building types. Figure 3-4 (a) shows the acceptable temperature ranges derived by method 1: 18 ˚C – 29 ˚C for classroom, 16 ˚C – 31 ˚C for housing, 23 ˚C – 24 ˚C for office, and 19 ˚C – 29 ˚C for other building types. The advantage of this method is that it strictly follows the conventional definition of an acceptable thermal environment—over 80% of occupants deeming a given thermal environment to be acceptable. 
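For concreteness, a minimal sketch of the method 1 binning procedure is given below. It assumes hypothetical columns `ta` and `acceptability` (coded 1 for "acceptable") in a data frame `db`, and it simply takes the lowest and highest qualifying bins without handling possible gaps between them; the narrow office result discussed next follows directly from this 80% criterion.

```r
# Minimal sketch of method 1: 1 degree C bins and the 80% acceptability criterion.
# Columns ta and acceptability (1 = acceptable) are assumed for illustration.
db$ta_bin <- floor(db$ta)                                   # bin labelled by its lower edge
acc <- tapply(db$acceptability, db$ta_bin, mean, na.rm = TRUE)

ok <- as.numeric(names(acc))[!is.na(acc) & acc >= 0.80]     # bins meeting the criterion
c(lower = min(ok), upper = max(ok) + 1)                     # bounding acceptable range (degrees C)
```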
However, the tight acceptable temperature range found for office buildings (only 2 °C wide) seems to contradict the now-routine finding from field studies conducted around the world that building occupants accept much wider temperatures than the PMV-PPD model predicts. This raises the obvious question of whether 80% acceptability—as measured by the binary Acceptable-Unacceptable scale—is an appropriate or realistic threshold for contested spaces with limited controls such as an office. Whilst this might seem discouraging, it does lend support for the uptake of personal comfort systems as a potential solution to the fallacious one-size-fits-all approach that has dominated thermal comfort thinking. This will be discussed in a later section of this chapter.   56  Figure 3-4 (b) shows that method 2 derives wide ranges for classroom, housing and other building types but they are generally narrower than the ranges from method 1. Most importantly, the acceptable temperature range for offices (9.1 °C) is wider than the 2 °C found using method 1, and it is similar to the range for classrooms (9.4 °C). This is an encouraging result because offices and classrooms are similar thermal contexts—contested spaces with fewer adaptive options—compared to homes and other building types.   Figure 3-4 Acceptable air temperature range derived by (a) method 1 and (b) method 2  The reliability test of the two methods found that the error (or the mean difference between training set and testing set) using method 1 was 5.3 °C for classrooms, 2.6 °C for multifamily housing, 0.5 °C for offices, and 2.6 °C for other building types. The errors were substantially smaller using method 2, with 0.2 °C for classrooms, 0.2 °C for multifamily housing,   57  0.04 °C for offices, and 0.2 °C for other building types. The test results show that the derived temperature ranges from method 2 are more reliable than method 1, which appears to be highly dependent upon the subset of data being used to derive the range. The error reported for office buildings is the lowest of all building types regardless of the method used. This is likely due to the large number of measurements available from offices compared to other building types, underlining the importance of larger datasets when conducting metanalyses of subjective votes.   Figure 3-5 Acceptable temperature range for each “building” using (a) method 1 and (b) method 2  Grey bars in Figure 3-5 display the number of buildings for each acceptable temperature range using method 1 (left) and method 2 (right). The total number of buildings applicable to method 1 is far smaller than those applicable to method 2 because of the fewer acceptability votes than sensation votes in the ASHRAE database. The method 2 result is close to a normal distribution while using method 1 the number of buildings decreases as the range widens. In fact, 23 out of the 37 “buildings” in Figure 3-5 (a) are office buildings. This explains why narrow ranges dominated in Figure 3-5 (a).  The red lines in Figure 3-5 show the reverse cumulative percentage of buildings, indicating the number of buildings (y%) in the database that would be deemed as having acceptable thermal   58  environments if the temperature range threshold is set to be x °C. This may be helpful in the discussion of an appropriate temperature range threshold. 
For instance, the cumulative percentage of buildings in Figure 3-5 (b) with a > 6 °C air temperature range is nearly 80%, i.e., the number of buildings with range 6 °C, 7 °C, …, and 14 °C accounts for 80% of the total number of buildings. This may be interpreted practically by saying that a 6 °C acceptable temperature range specified in the standards would correspond to 80% of building occupants expressing satisfaction with that temperature range. However, for 80% of buildings to be deemed as acceptable using method 1, the acceptable temperature range threshold should be 2 °C (see red line in Figure 3-5 (a)). The results in this section suggest that although method 1 is conceptually sound, the results are greatly influenced by the dataset used, the sample size, and the building type. Method 2 uses a pragmatic statistical approach that leverages the larger sample size afforded by the more widely used thermal sensation vote. The resulting temperature ranges show strong agreement with method 1 for classrooms, residential houses, and other building types. The major difference occurs in office buildings, but it is argued that the wider temperature ranges from method 2 are more realistic and align with results reported in thermal comfort field studies. It is clear from these findings that the approach to defining temperature ranges should depend on the features of the dataset being used. Method 2 is therefore used in the following sections as it is more suitable for use with large datasets comprised of diverse contexts. 3.3.3 Results compared to standards The analysis first compared the derived ranges following method 2 with the middle class (Class B/II) of comfort temperature ranges given in ISO 7730 and EN 16798. The building type classification in the ASHRAE database is not as detailed as what is published in the standards. Therefore, Figure 3-6 only compares the temperature ranges for classrooms, multifamily housing, and offices, and shows the acceptable temperature ranges by building type and season using method 2 alongside those specified in the standards.   59   Figure 3-6 Acceptable air temperature ranges by method 2 compared to the standards. ISO 7730 does not specify temperature ranges for homes. n pubs = number of publications.  The figure shows that, for winter, there is relatively close agreement between the lower limit of temperature ranges found in the standards and those determined through method 2 using the ASHRAE database. However, the upper limits for both summer and winter are too conservative in the standards, with field study data showing much greater tolerance to warmer temperatures by building occupants in both seasons. One possible explanation for this discrepancy is that the temperature ranges in standards were developed using the PMV model with assumptions of some physical and personal parameters. First, it was shown earlier in this analysis that the PMV-PPD relationship does not correctly predict the observed satisfaction in real buildings; there is very little change within the range of -1.0 < PMV < 1.0. Second, the assumptions of both the environmental and personal input parameters into the PMV model used in ISO 7730 and EN 16798 for their summer and winter designations appear to differ to those observed in buildings.    60   Figure 3-7 Clothing level (a) and air velocity (b) in different spaces and seasons. Pink dot is the mean. Boxplot shows 25th percentile, median, and 75th percentile. Violin plot shows the density of records.  
Uncertainty analyses have shown clothing insulation level and metabolic rate to be the two largest sources of uncertainty in the inputs for PMV [215–219]. For the specification of comfort temperature ranges in the standards, clothing level was assumed to be a fixed 0.5 clo in summer and 1.0 clo in winter in the standards. Figure 3-7 (a) shows that although the mean and median clothing level in summer in real buildings is close to 0.5 clo, there is large variance in clothing across the database ranging from just above 0 clo to over 1 clo. In winter, people generally dress below the 1.0 clo assumed by European standards, which may explain why occupants were found to accept higher temperatures in winter than the standards suggest. The metabolic rate of occupants in the ASHRAE database was found to be close to the assumed level in the standards (1.2 met). This is likely to be attributable to the near universal use of lookup   61  tables for met estimation due to the significant technical requirements to properly measure metabolic rate.  While the fixed assumptions made by the European standards of the two personal PMV inputs likely contribute to the discrepancy between the predicted and the observed comfort temperatures, a similar issue for some environmental parameters may further compound those errors. EN 16798 assumes a “low” air velocity and ISO 7730 specifies the maximum mean air velocity to be 0.19 m/s in summer and 0.16 m/s in winter. The empirical basis for these assumptions is unclear, but Figure 3-7 (b) shows that measured air velocity in real buildings can be much higher than those speeds, particularly in classrooms and homes. Such elevated air speeds could also help explain the higher acceptable temperatures found in all building types for both summer and winter. Moreover, the large variance in relative humidity in all building types will exert some influence over the range of acceptable temperatures.  It is understandably necessary for standards bodies to make assumptions about the PMV input parameters in order to specify acceptable temperature ranges. Yet, the present analysis has shown that discrepancies between those fixed assumptions and the thermal exposures characterised by field studies in real buildings are likely to contribute to the determination of different neutral temperatures for occupants. Whilst it is impossible to consider all possible permutations within a single temperature range, our analysis suggests that there appears to be a cultural bias in the assumptions of both personal and environmental parameters within the standards. Both ISO 7730 and EN 16798, although prepared for the European contexts, are widely used in other parts of the world. To investigate potential cultural differences in the field measurements, Figure 3-8 shows the acceptable air temperature ranges using method 2 for separate Asian and European subsets of the ASHRAE database. When compared with the analysis of the full dataset in Figure 3-6, the European acceptable temperature ranges are shifted towards the cooler side whilst the Asian subset is shifted towards the warmer side. This is unsurprising given the predominant climates in the Europe cities in the database are temperate or cold, whilst the entire database encompasses a variety of climate types particularly from tropical climates (e.g. Asia). Interestingly, the lower temperature limits for homes and offices determined using   62  method 2 are below the 20 °C suggested by the standards for the European subset. 
More significant adaptive opportunities afforded to people in homes, such as different clothing levels, are likely to substantially explain the cooler limits. Psychological factors associated with the ability to utilise natural ventilation through operable windows, modify window furnishings to influence connectedness to outdoors, or even the material and color selection of the interiors may also affect thermal sensation. Unfortunately, the ASHRAE database does not contain the requisite information to explore the relationship between these factors and thermal sensation.   Figure 3-8 Acceptable air temperature ranges for Asian (top) and European (bottom) datasets compared to the standards. ISO 7730 does not specify temperature ranges for homes. n pubs = number of publications.    63  In summary, the recommended temperature ranges in ISO 7730 appear to be too narrow, particularly when expanded beyond the European context. The widest compliance class (Class C)—5 °C for summer and 6 °C for winter—is conservative compared to the 7.4 °C – 12.2 °C neutral temperature ranges derived in this chapter based on field studies in different building types and season (Figure 3-6). Possible reasons for these discrepancies include the inaccuracy of PMV-PPD model, variance in the input variables of PMV model, and the generalization of a context-specific model. These should all be considered before endorsing the universal use of such temperature ranges for thermal comfort compliance assessments of buildings. 3.4 Discussion The PMV-PPD model marked a significant step forward in thermal comfort understanding by establishing an empirical relationship between thermal sensation and the associated satisfaction. However, countless field studies have shown the shortcomings of this deterministic approach in many different contexts [220–223]. Humphreys and Nicol attribute this failure to three important factors: the uncertainties of input variables, the structure of the equation itself, and its application to non-steady-state conditions [21]. These all indicate that a heat-balance model in practice requires significant simplifications that reduce important contextual aspects of thermal perception and ignore the adaptive processes of building occupants. The aim of this chapter is not to simply demonstrate the inaccuracies of the PMV model itself, but instead to argue that these shortcomings have flow-on effects to the specification of the thermal comfort compliance requirements, and particularly the PMV classes, that falsely assume a narrower PMV range leads to higher occupant satisfaction. Not only is this specious connection between tighter indoor temperature tolerances and improved comfort untrue, it promotes energy-intensive HVAC use that significantly contributes to the problems of greenhouse gas emissions.  In addition to the model uncertainties of PMV-PPD, the variant terms used in thermal comfort field studies present challenges to determining the level of thermal satisfaction of occupants that the PMV model aims to predict. ASHRAE Standard 55 uses “acceptability” as the target outcome and “satisfaction” as part of the definition of comfort, but laboratory and field studies have primarily used “thermal sensation” scales, leading to the PMV model having to use   64  assumptions to equate specific thermal sensation responses with thermal satisfaction. 
This raises important questions around the semantic equivalence of acceptability, satisfaction, and sensation that have yet to be addressed adequately by the thermal comfort research community. The same problem applies to other common psychometric scales like thermal preference and thermal comfort, which similarly require assumptions and rule-of-thumb techniques when converting to satisfaction.  It was anticipated that the percentage of thermal preference votes of “no change” would be the lowest among the four common psychometric scales, and the percentage of acceptability would be the highest. This was based on the assumption that preference represents the ideal condition for the occupant while acceptability refers to a broader notion of tolerance [214]. In the analysis of Figure 3-2, percentage of “no change” votes is indeed the lowest as anticipated, but the percentage satisfied using the thermal acceptability scale is not the highest – it is lower than the sensation and comfort scales, suggesting that TSV between -1 and 1 and comfort are even broader requirements than thermal acceptability. However, this interpretation needs further examination, since the reason why TSV between -1 and 1 seems to have broader meaning than acceptability could partly be due to the fact that people may not perceive thermal sensation scale as equidistant [224], and/or the effects of language and context on the interpretation of words [225]. Clearly the choice of scales, and the widely used conversion rules, lead to different outcomes and may not be directly translatable.  These considerations are an important acknowledgement of the challenges of using psychometric scales for thermal comfort research and reinforce the importance of selecting the appropriate scale for the research question. Rather than directing attention towards understanding the nuanced semantic difference between scales, perhaps a more pragmatic effort would be to standardise the use of scales for thermal comfort research. If the standards continue to define comfort as an acceptable thermal environment, then “thermal acceptability” scales should be used in field studies, particularly when focused on thermal comfort compliance assessment. If a laboratory study is investigating a particular aspect of the human thermoregulatory system, then the thermal sensation scale may be more appropriate. Thermal   65  preference is more helpful in building control applications, with emerging technologies like Comfy asking occupants’ thermal preference to appropriately adjust the HVAC system [226]. Carefully selecting the scale for the particular research question or practical application would reduce the need to convert between metrics.  Based on the laboratory-derived relationship, the lowest PPD is 5% when PMV is neutral. In the standards, the oft-cited aim is for greater than 80% thermal acceptability in offices—accounting for 10% of occupants experiencing whole body discomfort (assumed to be thermal sensations greater than +/-2), and 10% more are presumed to be uncomfortable due to local discomfort (e.g., draft, asymmetry). However, the results presented in this chapter suggest that office workers are generally difficult to satisfy (Figure 3-3), and 5% dissatisfaction is unlikely to be achieved by any centrally-conditioned building. Luo et. al. found that occupants quickly increase their thermal comfort expectations and rarely compromise once raised [227]. 
Occupants becoming accustomed to, or even demanding, tighter temperature tolerances might explain why tight temperature ranges do not necessarily improve thermal comfort. Thermal influences on positive vs. negative overall environmental assessments can also vary. Kim and de Dear [9] showed that the thermal environment has a clear negative impact on overall satisfaction when occupants are unhappy with the conditions, but contributes less to positive evaluations when conditions are satisfactory. These studies, along with the analysis presented here, raise the question of the appropriateness of the 80% acceptability threshold for office buildings without some type of personal control. The large inter-individual distribution of thermal preferences and the physical constraints of centralized HVAC systems to deliver bespoke conditions effectively preclude the provision of ideal thermal environments for all occupants. Continuing to encourage unrealistic levels of thermal satisfaction using such systems seems certain to increase HVAC energy use without any tangible improvement in occupant comfort. Rather than the U-shape curve defining the PMV-PPD relationship, occupants in real buildings appear to be more tolerant of non-neutral indoor environments as defined by the standard-based PMV metric. The regression lines in Figure 3-3 have an almost-flat bottom and gradually increase towards the ends of the thermal sensation scale. These lines indicate that the   66  thermal satisfaction of a population is similar across a wide PMV range, and therefore a wide range of air temperatures. This contradicts the popular idea of an optimum or ideal temperature that has been promoted by the steady-state heat balance approach to thermal comfort, and in turn the comfort standards. Rather than a single controlled body temperature with a fixed setpoint, contemporary thermophysiological theory instead promotes the concept of a thermoneutral zone where vasomotor tone is able to regulate against body temperature fluctuations without initiating shivering or sweating [228,229]. Very few indoor environments would push occupants’ thermoregulatory system beyond the thresholds of the thermoneutral zone, and it is very likely that the comfort zone exists within this range of body temperatures [230]. This conceptual model of human thermoregulation supports the observed flatter dissatisfaction curves found in the present analysis, and further discourages the pursuit of an optimum comfort temperature or even narrow temperature ranges. The major finding of this analysis is that it is difficult to specify a universally-applicable comfort temperature range for different contexts without resorting to heavy HVAC requirements that promote profligate energy use. The results in Figure 3-8 suggest that for the European context both the upper and lower temperature limits for offices can be relaxed by 2 °C i.e. cooling setpoint of 28 °C instead of 26 °C, and a heating setpoint of 18 °C instead of 20 °C. The recommend temperature ranges for classrooms in EN 16798 appear to be in line with neutral temperatures found using the ASHRAE database. While the results for residential housing in Figure 3-8 are somewhat aligned with one residential comfort study [231] and appear to suggest a widening of the temperature range, some caution should be taken when interpreting this due to the relatively small sample size. Interestingly, Cheung et al. [22] tested a simple model that predicts thermal sensation based solely on air temperature. 
The neutral temperature band of this simple model ranged from 18 °C to 30 °C, similar to what was reported in Figure 3-6, and the overall prediction accuracy was higher than the PMV-PPD model. So whilst a universal prescription of a comfort temperature range is neither possible nor desirable [228], the current recommendations for offices found in international standards such as ISO 7730 appear to be too narrow and could be   67  relaxed to still maintain comfort while avoiding encouraging unnecessary energy expenditure on space conditioning.  Strategies to widen the permissible temperature ranges in offices are more likely to succeed when coupled with the availability of local control options that recognise individual differences [232] and allow for the creation of bespoke microclimates. The theoretical basis and design solutions for such an approach to thermal comfort in buildings can be found in research studies of thermal adaptation and personal control systems [233–235]. Both chamber studies [236] and field studies [237] have demonstrated the overwhelmingly positive effect of individual control on thermal comfort whilst potentially reducing HVAC energy use by 32% - 73% [238]. Personal comfort systems such as desk fans or footwarmers can deliver comfort to occupants whilst allowing for a relaxation of the room air temperature range to 18 °C – 29 °C [239–241]. Unfortunately, the ASHRAE database utilized here does not contain sufficient information on personal controls to perform such an analysis. It is likely that the majority of buildings surveyed in the database did not have personal comfort systems, so the derived temperature ranges in Figure 3-6 and Figure 3-8 may even be conservative if the corrective potential of personal comfort systems are considered [242]. Therefore, instead of demanding ever-increasing central control over the environment, standards should aim to link performance criteria to thermal adaptation opportunities, such as access to and degree of personal control [243]. This is particularly important given the number of emerging technologies around personal comfort systems, such as thermally responsive clothing fabric [244] or a heating and cooling robot [245]. Perhaps a more promising approach is the development and use of personal comfort models based on physiology or behaviour that can dynamically control HVAC setpoints based on occupants’ comfort profile and energy use restrictions [246–249]. 3.5 Conclusion This chapter builds on data analyses of the largest-to-date global database of thermal comfort field studies and focuses on the point-in-time compliance criteria of a thermal environment. First, the observed thermal satisfaction (based on any of the four thermal scales) showed no significant difference between the three PMV classes currently included in   68  international standards, meaning that tiered PMV classes are not appropriate compliance criteria. This demonstrates a need for other compliance criteria, such as the direct use of temperature ranges given that air temperature is the most commonly controlled environmental parameter in buildings. As the field-based global database becomes more widely utilized, data should be able to inform the acceptable temperature range directly rather than predicting the temperature range from traditional laboratory-based thermal comfort models. However, one should exercise caution when using data-driven techniques as different statistical methods yield different results. 
The methodological discussion led to our recommending method 2—using individual neutral temperatures calculated from corresponding air temperature and TSV to determine the acceptable temperature range from the 10th percentile to the 90th percentile—in an attempt to standardize the data-driven methods of deriving acceptable air temperature range. The resulting acceptable temperature ranges (7.4 °C – 12.2 °C) are wider than the ISO 7730 (2 °C – 6 °C) and EN 16798 (maximum 26 °C and minimum 20 °C) mandate, and the reason may be three-fold: inaccuracy of PMV-PPD model, variance in the input variables of PMV model, and the generalization of the European context where ISO was predominantly used. Wider acceptable temperature ranges are not only valid in reality, but also favorable because of their energy savings, particularly when combined with increasingly popular solutions to personal comfort systems. Wider temperature ranges also acknowledge and better cater to the dynamics of indoor thermal environments arising from synoptic-scale weather patterns, temporal and spatial differences, individual physiological differences (including activity levels and clothing levels), and differences in thermal preference between individuals. Researchers and practitioners are encouraged to develop context-specific compliance criteria that are suitable for inclusion in relevant comfort standards, i.e. in a specific region, for a specific type of building, etc.   69  Chapter 4: Paper 3—Investigation of Long-term Thermal Comfort Compliance Criteria 4.1 Introduction In contrast to point-in-time compliance criteria, long-term compliance criteria are used to assess a space’s thermal comfort level over a long period, e.g. a year. New standards highlight the growing interest in long-term indoor environmental monitoring for understanding and evaluating building performance. For example, version 2 of the WELL Building Standard [250] now requires HVAC systems to both monitor and control air temperature, mean radiant temperature, relative humidity, and air speed in all regularly occupied spaces for the purposes of performance reporting and verification. A similar emphasis on long-term sensor monitoring can be seen in the new RESET building performance standard [251] designed to certify buildings using continuous IEQ data gathered over three months. The existing long-term thermal comfort compliance criteria in thermal comfort standards are a number of indices that can be calculated from physical measurements or simulations. Because of the prohibitive cost of installing and maintaining environmental sensors for continuous monitoring, previous application of long-term comfort indices has been limited to design phase assessments using data from building performance simulations. This led the majority of long-term comfort indices recommended by standards to be grafted with popular thermal comfort models such as PMV in order to increase their robustness and usefulness, e.g. percentage of time outside a PMV range, etc. (a list of indices is shown later in this chapter in Table 4-3). While PMV [20] continues to be the dominant model used for thermal comfort assessments, researchers have reported inaccuracies of PMV model in predicting people’s thermal sensation in real buildings [21,22]. Researchers have proposed various new long-term indices as summarized in [252], but those new indices are used for design trade-offs [253] or summer overheating risk assessments [254–256]. 
The existing long-term indices in thermal comfort standards for the evaluation of existing buildings’ operations have never been validated against long-term physical monitoring data in actual buildings nor against occupant feedback.    70  Given the observation in Paper 2 of the weakness of PMV classes in predicting point-in-time thermal comfort, there is a further question of the efficacy of the existing long-term comfort indices, as most of them are based on PMV/PPD indices as well. Do the existing long-term comfort indices reliably predict long-term subjective evaluations of thermal environments? If so, which index most closely corresponds with occupants’ actual levels of satisfaction? If not, are there better indices to evaluate long-term thermal comfort? The recent proliferation of low-cost sensors for building IEQ measurements and the rise of smart buildings have dissolved the barriers to assessing building performance with in situ long-term monitoring during the operational phase [31]. These new sensor technologies have coincided with increasing awareness of the importance of long-term physical monitoring of built environments. The average cost of industrial sensors has dropped from $1.3 in 2004 to $0.5 in 2016 according to a survey by Bank of America [173]. The low cost of sensor technology promoted the evolution of Wireless Sensor Networks (WSN) towards the Internet of Things (IoT) [257], which is used for sensing and datafication of the physical built environments. Along with the physical measurements, assessing thermal comfort over time in existing buildings also requires subjective evaluations by occupants. Post-Occupancy Evaluation (POE) is a general approach of obtaining feedback about a building’s performance once it is built [5,36–40,258]. In commercial office buildings, it is common practice to assess the long-term satisfaction of occupants using POE surveys (e.g. BUS survey [165], CBE survey [166], BOSSA survey [32], etc.). Although these surveys are conducted at a point of time, their questions are often general and applicable for longer term insights (three to six months or more as suggested in ASHRAE 55 [6]). When combined with continuous IEQ sensor data, these POE responses can be used to examine the accuracy of the physical comfort indices in evaluating thermal environments over time. The results in Paper 2 showed that thermal comfort is particularly challenging in commercial office buildings—building managers are motivated to minimise any disruption or distraction resulting from thermal discomfort in the interest of employee productivity but there is still a pretty large percentage of dissatisfaction in offices compared to other building types. There is a clear need to investigate the accuracy of the long-term comfort indices from standards   71  in evaluating long-term thermal comfort in commercial office buildings. The objectives of this paper are 1) to evaluate the predictive skill of existing long-term thermal comfort indices from standards and 2) to propose new indices based on continuous, in situ physical monitoring and subjective evaluations from four air-conditioned office buildings. 4.2 Methods To address the aim of this study, we conducted secondary data analyses using the Building Occupants Survey System Australia (BOSSA) database [32] and the Sentient Ambient Monitoring of Buildings in Australia (SAMBA) IEQ measurement database [31], both of which were developed by the Indoor Environmental Quality Lab at The University of Sydney. 
Figure 4-1 outlines the overall methodology of this study. The BOSSA survey is comprised of retrospective questions regarding occupant long-term thermal satisfaction, and the SAMBA devices measure the relevant thermal parameters continuously over time. We used responses to the BOSSA surveys to estimate the true long-term thermal comfort expressed by the subjective index. A variety of physical indices calculated from the SAMBA database were then compared to the subjective index using Pearson correlation analysis. The stronger the correlation is, the better the physical index predicts the true long-term thermal comfort. The following sections describe the databases and data analyses in detail.   Figure 4-1 Methodology Diagram. The image on the right depicts the SAMBA IEQ Monitoring device.   72  4.2.1 BOSSA survey database and subjective index BOSSA is a POE survey tool designed to automate the process of collecting occupants’ subjective assessments of their indoor environment [32]. Respondents are asked to rate their satisfaction with core building elements and functions such as overall design, physical environments, building maintenance, etc. As of July 2019, 91 BOSSA survey campaigns have been completed totalling 7974 questionnaires. A survey campaign is designed to collect responses over a short period—often less than a month—but the questions in the survey reflect occupants’ long-term thermal comfort. The BOSSA Time-Lapse survey is comprised of 58 questions concerning occupants’ satisfaction with different aspects of their workspaces, including six questions on their thermal experience—satisfaction with the indoor temperature in winter, indoor temperature in summer, air movement, humidity, air movement control and temperature control. These questions use a 7-point Likert scale of satisfaction, ranging from Dissatisfied (1) to Neither (4) to Satisfied (7). A preliminary correlation analysis between these retrospective answers and the physical environment (measured 𝑇𝑎, 𝑇𝑔, air velocity and RH) rendered coefficients of nearly zero, meaning that the recall questions were not biased by the right-now experience when occupants filled out the surveys. Different combinations of the six thermal questions in the BOSSA survey were used to calculate a single subjective index. Our testing found the average score of the following two questions had the strongest correlation with the calculated physical indices. 1. Please rate the temperature conditions of your normal work area in winter. 2. Please rate the temperature conditions of your normal work area in summer. For each individual response, we calculated the average of the summer and winter satisfaction scores as occupants’ satisfaction with temperature throughout the year, named “temp year score” (Figure 4-2). The final subjective index representing the overall evaluation of a space was calculated as the mean of the temp year scores for all respondents located on the   73  same floor of a building (the red diamond points in Figure 4-2). The inclusion criteria of the 33 datasets are described in section 4.2.3.   Figure 4-2 Distribution of the temp year scores in the 33 datasets included in our analysis. One boxplot contains the calculated temp year scores in one dataset (a certain floor of a building). The middle thick bar shows the median. The lower and upper hinges correspond to the first and third quartiles (the 25th and 75th percentiles). The red diamond point shows the mean, i.e. the subjective index value for each dataset.   
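The computation of the subjective index can be summarised in a few lines. The sketch below is illustrative only: the data frame `bossa` and the columns `building`, `floor`, `sat_temp_summer`, and `sat_temp_winter` (the two 7-point satisfaction questions) are assumed names, not the actual BOSSA export schema.

```r
# Sketch of the subjective index: per-respondent "temp year score", then the
# floor-level mean. Data frame and column names are assumed for illustration.
library(dplyr)

subjective_index <- bossa %>%
  mutate(temp_year_score = (sat_temp_summer + sat_temp_winter) / 2) %>%  # per respondent
  group_by(building, floor) %>%
  summarise(subjective_index = mean(temp_year_score, na.rm = TRUE),      # per dataset
            n_responses = n())
```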
4.2.2 SAMBA IEQ monitoring database  SAMBA is a low-cost wireless sensor network designed to be placed on office work desks for continuous monitoring of common IEQ parameters (see [31] for a detailed overview). Measurement data for thermal comfort, lighting, acoustics and indoor air quality are collected and transmitted back to base at five-minute intervals. The image in Figure 4-1 shows that the SAMBA device is separated into two units, with the smaller unit measuring the four physical thermal comfort parameters—air temperature, globe temperature, air velocity and relative humidity. The two personal factors for PMV model—metabolic rate and clothing level—are either fixed (1.1 met for office work) or estimated using the dynamic predictive clothing model proposed by [259] and endorsed by ASHRAE 55-2017. These six factors are used to calculate the PMV and PPD indices for every five-minute sample. Laboratory tests proved that the accuracy of   74  the SAMBA device closely aligns with the ‘desired’ equipment classification in ISO 7726 (test results summarized in Table 4-1, see [260] for more details). Over 200 SAMBA devices have been installed in 46 office buildings in Australia, mostly within Sydney, since the phased roll-out began in 2016. The monitored buildings were nominated by participating industry partners and are generally representative of the premium-grade commercial building stock in Australia. By July 2019, the SAMBA database contained a total of 13.5 million observations of thermal comfort parameters.   Table 4-1 Measurement accuracy of the SAMBA IEQ Monitoring device compared to the “desired” performance level specified in ISO 7726 [261]  SAMBA ISO 7726 desired Tested range Average standard error of estimate Air temperature 18-27 °C 0.26 °C ±0.2 °C Globe temperature 18-27 °C 0.16 °C ±0.2 °C Air speed 0.00-0.40 m/s 0.015 m/s ±0.02 m/s Relative humidity 20-70% 1.04% ±2%  4.2.3 Data preparation Records from both the BOSSA and SAMBA databases are time-stamped and spatially tagged to either a zone or a floor of a building to allow spatiotemporal pairing of subjective data with physical measurements. For this study, we matched responses from BOSSA campaigns to SAMBA measurements made on the same floor of the same building at close-to-or-during the time of the BOSSA survey campaign. We identified 33 pairings of BOSSA campaigns and SAMBA time series data (Table 4-2) in four office buildings in the central business district in Sydney, Australia. To avoid any potential impact on the building owners and the companies that occupied the buildings, we anonymized the building names and addresses by using generic labels. We used a total of 970 individual survey responses and 2.3 million physical measurements of thermal comfort parameters in our analysis. Table 4-2 summarizes the time of the BOSSA survey campaign, number of responses for each survey campaign, periods of physical monitoring, and   75  the number of workdays (excluding weekends) of monitoring. Four survey campaigns on level 28 in building D had less than ten responses, and subsequent analyses were designed to address the statistical significance from such low numbers. This will be discussed further in the results section. Comfort standards do not give clear guidelines to determine the minimum or ideal range of continuous measurements to use for this type of analysis. Long-term assessment criteria found in ISO 7730, EN 16798 and ASHRAE 55 simply state that the monitoring period should be representative of the conditions overall. 
To best address this, we prioritised instances where there was an entire year of SAMBA data (i.e. Building D) as the total variance in the thermal environment is captured over the typical annual certification period used by most rating systems. For buildings without a full year of measurements, we used any available SAMBA data beginning or ending within one month of a BOSSA campaign (i.e. Building A, B, and C). After data preparation, there was one year of measurements for Building A and D, 87 working days for Building B, and 108 workdays for Building C. We wondered if the shorter monitoring periods in Building B and C would sufficiently characterize the long-term conditions experienced by the occupants, and whether IEQ measurements collected after the BOSSA campaign are useful given that the long-term satisfaction questions are retrospective. Visual inspection of the SAMBA data from the building zones included in our analysis showed relatively stable conditions throughout the monitoring period (𝑇𝑎 variance of a dataset is 0.76 °C ± 0.22 °C). Comparing air temperature before and after the survey campaigns in the 10 pairs of datasets where the BOSSA campaigns overlapped the SAMBA measurements, the average difference in mean air temperature was 0.27 °C ± 0.26 °C (standard deviation). These small differences indicate that the shorter datasets available in Buildings B and C are sufficient for characterising the low temperature variance expected in premium-grade air-conditioned offices. They also suggest that retrospective survey responses are likely to be just as relevant to prospective physical measurements. We therefore included these 33 datasets in our analysis.    76  Table 4-2 Matched BOSSA survey campaigns and SAMBA measurements included in our analyses Building Floor No. 
of survey responses BOSSA start BOSSA end SAMBA start SAMBA end Workdays of SAMBA A Level 19 20 2017-03-23 2017-03-30 2017-04-03 2019-03-15 270 B Level 10 107 2018-04-20 2018-05-18 2018-07-16 2018-12-12 87 B Level 9 74 2018-04-20 2018-05-17 2018-07-16 2018-12-06 86 C Level 12 19 2018-02-22 2018-03-07 2018-01-26 2019-05-14 302 C Level 2 31 2018-09-26 2018-10-08 2018-06-29 2019-04-29 217 C Level 21 48 2016-11-21 2016-12-01 2017-02-01 2018-02-01 262 C Level 21 45 2017-06-09 2017-06-22 2017-02-01 2018-02-01 262 C Level 6 39 2016-11-21 2016-11-30 2016-09-09 2017-05-17 108 C Level 6 44 2017-06-07 2017-11-27 2016-09-09 2017-05-17 108 D Level 25 20 2016-12-06 2016-12-15 2016-09-15 2017-09-29 261 D Level 25 11 2018-03-12 2018-03-14 2017-03-01 2018-03-20 186 D Level 25 14 2018-08-14 2018-09-06 2017-08-01 2018-09-10 180 D Level 25 18 2019-05-01 2019-05-08 2018-05-17 2019-07-02 287 D Level 26 19 2016-12-06 2016-12-19 2016-09-15 2017-09-29 266 D Level 26 15 2018-03-12 2018-03-15 2017-03-01 2018-03-20 275 D Level 26 21 2018-08-14 2018-08-22 2017-08-01 2018-08-30 258 D Level 26 22 2019-05-01 2019-05-20 2018-05-01 2019-07-16 230 D Level 27 30 2016-12-06 2016-12-19 2016-10-11 2017-09-29 233 D Level 27 19 2018-03-12 2018-03-16 2017-03-01 2018-03-20 257 D Level 27 30 2018-08-14 2018-09-03 2017-08-01 2018-09-06 235 D Level 27 10 2019-05-01 2019-05-06 2018-05-01 2019-07-17 219 D Level 28 7 2016-12-06 2016-12-19 2016-10-10 2017-09-29 249 D Level 28 3 2018-03-12 2018-03-14 2017-03-01 2018-03-20 275 D Level 28 8 2018-08-14 2018-09-06 2017-08-01 2018-09-10 290 D Level 28 3 2019-05-01 2019-05-06 2018-05-01 2019-07-16 316 D Level 29 34 2016-12-06 2016-12-27 2016-10-10 2017-09-29 252 D Level 29 25 2018-03-12 2018-03-19 2017-03-01 2018-03-20 275 D Level 29 29 2018-08-14 2018-09-10 2017-08-01 2018-09-10 262 D Level 29 23 2019-05-01 2019-05-06 2018-05-01 2019-07-16 272 D Level 30 49 2016-12-02 2016-12-19 2016-08-09 2017-09-29 298 D Level 30 40 2018-03-04 2018-03-19 2017-03-01 2018-03-20 275 D Level 30 46 2018-08-14 2018-08-27 2017-08-01 2018-09-10 289 D Level 30 47 2019-05-01 2019-05-27 2018-05-01 2019-07-16 315  To ensure reliable time series data from the SAMBA devices, the dataset was cleaned following these steps:   77  1. A subset of the SAMBA database was selected to include time series data from buildings and floors with paired BOSSA surveys.  2. The SAMBA data was filtered, keeping records where 0 < 𝑇𝑎 < 50, 0 < 𝑇𝑔 < 50, 0 <𝑇𝑟 < 50 , −3 ≤ 𝑃𝑀𝑉 ≤ 3  to remove erroneous measurements caused by device malfunction. 3. For each record, the operative temperature 𝑇𝑜 was calculated as the average of 𝑇𝑎 and 𝑇𝑟. ASHRAE 55 states that simple averaging is an appropriate method when air speed is below 0.2 m/s, which was true in 95% of our dataset. 4. Cases where |𝑇𝑜 − 𝑇𝑎| ≥ 4 were removed. Time series plots of |𝑇𝑜 − 𝑇𝑎| in each zone showed occasional spikes over 4 °C and up to 10 °C. It is difficult to determine why this occurred in each instance, but it is likely attributable to noise from equipment error or highly localized perturbations near the device that was not representative of the actual physical environment of the zone. We performed additional filtering and calculations required to conduct later analyses. In many cases, there are multiple SAMBA devices in different zones of a given floor, so the average of all the zones was used to summarise the thermal environment of the entire floor. Then, occupied hours (7:00 to 19:00 on weekdays) were determined by measured CO2 level and used to remove data outside of occupancy. 
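A condensed sketch of this cleaning pipeline is shown below. It assumes a hypothetical SAMBA extract `samba` with columns `ta`, `tr`, `tg`, `pmv`, `datetime`, `building`, `floor`, and `zone`, and it simplifies the occupancy step to the fixed 07:00–19:00 weekday window rather than the CO2-based determination described above.

```r
# Sketch of the SAMBA cleaning steps; column names are assumed for illustration.
library(dplyr)
library(lubridate)

clean <- samba %>%
  filter(ta > 0, ta < 50, tg > 0, tg < 50, tr > 0, tr < 50,
         pmv >= -3, pmv <= 3) %>%                 # step 2: drop device malfunctions
  mutate(to = (ta + tr) / 2) %>%                  # step 3: operative temperature (simple average)
  filter(abs(to - ta) < 4) %>%                    # step 4: drop localized spikes / noise
  group_by(building, floor, datetime) %>%         # average multiple zones on the same floor
  summarise(ta = mean(ta), to = mean(to), pmv = mean(pmv)) %>%
  ungroup() %>%
  filter(wday(datetime) %in% 2:6,                 # weekdays only
         hour(datetime) >= 7, hour(datetime) < 19) # occupied hours 07:00-19:00
```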
The hourly mean of 𝑇𝑎, 𝑇𝑜, PMV, and PPD was calculated because most of the indices require use of hourly values. Finally, seasons were assigned to the dataset, with May to October labelled as winter and November to April as summer. Figure 4-3 shows an example of the time series plot of air temperature in one of the 33 datasets after data preparation and cleaning.   78   Figure 4-3 An example time series plot of the air temperature in a one-year dataset on Level 30 of Building D after data cleaning  4.2.4 Long-term physical indices ISO 7730, EN 16798, and ASHRAE 55 recommend a number of physical indices to evaluate a thermal environment over time. For each of the 33 SAMBA datasets, we calculated existing indices recommended by ISO 7730, EN 16798, and ASHRAE 55, and five new types of indices that we brought to test (Table 4-3).  Table 4-3 Existing and new physical indices tested in this study for long-term thermal comfort evaluation  Index ISO 7730 EN 16798 ASHRAE 55 Percentage of time outside a PMV range • • • Percentage of time outside an operative temperature range • • • Degree-hours • •  PPD-weighted • •  Average PPD •   Sum PPD •   Mean temperature Newly proposed New temperature ranges for percentage-hour and degree-hours Temperature variance Daily range outlier Combined index   79  4.2.4.1 Existing long-term physical indices  The six existing types of indices recommended by comfort standards contain a total of 23 individual physical indices. The calculation methods for them are given in Equations 4-1 – 4-10. 1) Percentage of time outside a PMV range 𝑖𝑛𝑑𝑒𝑥 =  𝑛𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 ℎ𝑜𝑢𝑟𝑠 𝑡ℎ𝑎𝑡 |𝑃𝑀𝑉| > 𝑙𝑖𝑚𝑖𝑡 𝑡𝑜𝑡𝑎𝑙 𝑛𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 𝑜𝑐𝑐𝑢𝑝𝑖𝑒𝑑 ℎ𝑜𝑢𝑟𝑠× 100 (4-1) ISO 7730 and EN 16798 prescribe three PMV classes (Table 3-1), leading to three indices in this type: %|𝑃𝑀𝑉| > 0.2, %|𝑃𝑀𝑉| > 0.5, and %|𝑃𝑀𝑉| > 0.7. 2) Percentage of time outside a specified operative temperature range 𝑖𝑛𝑑𝑒𝑥 =𝑛𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 ℎ𝑜𝑢𝑟𝑠 𝑡ℎ𝑎𝑡 𝑇𝑜 𝑜𝑢𝑡𝑠𝑖𝑑𝑒 𝑡ℎ𝑒 𝑟𝑎𝑛𝑔𝑒𝑡𝑜𝑡𝑎𝑙 𝑛𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 𝑜𝑐𝑐𝑢𝑝𝑖𝑒𝑑 ℎ𝑜𝑢𝑟𝑠× 100 (4-2) ISO 7730 and EN 16798 recommend operative temperature ranges for different building types (i.e. different activities) and seasons (i.e. different clothing level). Table 4-4 shows the operative temperature ranges recommended for offices, and thus there are six indices corresponding to six temperature ranges.  Table 4-4 Comfort classifications based on operative temperature ranges for office buildings in standards   Summer (°C) Winter (°C) ISO 7730 class A 23.5 – 25.5, 𝑇𝑜.𝑜𝑝𝑡𝑖𝑚𝑎𝑙 = 24.5 21 – 23, 𝑇𝑜.𝑜𝑝𝑡𝑖𝑚𝑎𝑙 = 22 ISO 7730 class B 23 – 26, 𝑇𝑜.𝑜𝑝𝑡𝑖𝑚𝑎𝑙 = 24.5 20 – 24, 𝑇𝑜.𝑜𝑝𝑡𝑖𝑚𝑎𝑙 = 22 ISO 7730 class C 22 – 27, 𝑇𝑜.𝑜𝑝𝑡𝑖𝑚𝑎𝑙 = 24.5 19 – 25, 𝑇𝑜.𝑜𝑝𝑡𝑖𝑚𝑎𝑙 = 22 EN 16798 class I <= 25.5 >= 21 EN 16798 class II <= 26 >= 20 EN 16798 class III <= 27 >= 19  3) Degree-hours The degree-hours index is calculated as the product sum of the weighting factors and exposure time (Equation 4-5). The weighting factor for each hour is associated with the   80  exceedance magnitude of operative temperature beyond the specified range. The weighting factor is calculated differently in ISO 7730 (Equation 4-3) and EN 16798 (Equation 4-4). This type includes six indices corresponding to the six different ranges in Table 4-4. 
$$wf_{ISO} = \begin{cases} 1 + \frac{|T_o - T_{o.limit}|}{|T_{o.optimal} - T_{o.limit}|}, & T_o \geq T_{o.limit.upper} \text{ or } T_o \leq T_{o.limit.lower} \\ 0, & T_{o.limit.lower} < T_o < T_{o.limit.upper} \end{cases}$$ (4-3)

$$wf_{EN} = \begin{cases} |T_o - T_{o.limit}|, & T_o > T_{o.limit.upper} \text{ or } T_o < T_{o.limit.lower} \\ 0, & T_{o.limit.lower} \leq T_o \leq T_{o.limit.upper} \end{cases}$$ (4-4)

$$index = \sum wf \cdot t$$ (4-5)

4) PPD-weighted

The hours during which PMV exceeds the range are summed and weighted by a factor determined by PPD. The calculation of the weighting factors differs between ISO 7730 (Equation 4-6) and EN 16798 (Equation 4-7), but the formula for the PPD-weighted index is identical—the product sum of the weighting factors through time (Equation 4-8). Three PMV classes and two calculation formulae result in six indices of this type.

$$wf_{ISO} = \begin{cases} \frac{PPD_{PMV.actual}}{PPD_{PMV.limit}}, & |PMV_{actual}| \geq |PMV_{limit}| \\ 0, & |PMV_{actual}| < |PMV_{limit}| \end{cases}$$ (4-6)

$$wf_{EN} = \begin{cases} \frac{PPD_{PMV.actual}}{PPD_{PMV.limit}}, & |PMV_{actual}| > |PMV_{limit}| \\ 0, & |PMV_{actual}| \leq |PMV_{limit}| \end{cases}$$ (4-7)

$$index = \sum wf \cdot t$$ (4-8)

$PPD_{PMV.actual}$ is the PPD corresponding to the actual PMV. $PPD_{PMV.limit}$ is the PPD corresponding to $PMV_{limit}$ as listed in Table 3-1.

5) Average PPD

$$index = \frac{\sum PPD}{\text{total number of occupied hours}}$$ (4-9)

6) Sum PPD

$$index = \sum PPD$$ (4-10)

The different time series lengths of the 33 SAMBA datasets affect the calculation of the time-dependent indices, i.e. degree-hours, PPD-weighted, and Sum PPD. To address this, we normalized those indices by dividing them by the total number of hours measured.

4.2.4.2 New long-term physical indices

We brainstormed and tested a variety of new physical indices grouped across five new index types to determine the best-performing, practical measure. Existing indices found in standards use operative temperature as an input, but air temperature is more readily available in almost any building. For this reason, we tested the performance of the new indices using both operative and air temperature. The new indices are calculated as follows.

1) Mean temperature

This type of index uses the mean 𝑇𝑎 and mean 𝑇𝑜 of each SAMBA dataset.

$$index = \frac{\sum T_a \text{ or } \sum T_o}{\text{total number of occupied hours}}$$ (4-11)

2) New temperature ranges

In this category, percentage-hour and degree-hour indices are calculated using Equations 4-2, 4-4 and 4-5 but with temperature ranges different from those defined in ISO 7730 and EN 16798. A total of ten new temperature ranges for 𝑇𝑎 and 𝑇𝑜 are derived using percentiles, mean, and standard deviation of the SAMBA time series data, as shown in Table 4-5 below, with different temperature ranges specified for summer and winter. A total of 20 indices of this type were calculated. It is worth noting that measured temperatures were quite stable in the monitored offices, so the derived temperature ranges are similar.
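Before the new temperature ranges are tabulated in Table 4-5 below, a brief sketch shows how three of the existing indices in Section 4.2.4.1 can be computed from an hourly floor-level series. It assumes a hypothetical data frame `h` with columns `to`, `ppd`, and `season`, and hard-codes the ISO 7730 class B office limits from Table 4-4; it is a simplified outline, not the analysis code used in this chapter.

```r
# Sketch of three existing indices on an hourly series h (columns assumed).
lo <- ifelse(h$season == "summer", 23, 20)   # ISO 7730 class B office limits (Table 4-4)
hi <- ifelse(h$season == "summer", 26, 24)
outside <- h$to < lo | h$to > hi

# 1) Percentage of occupied hours with To outside the class B range (Equation 4-2)
pct_outside_isoB <- 100 * mean(outside)

# 2) Degree-hours with the EN 16798 weighting (Equations 4-4 and 4-5),
#    normalised by the number of occupied hours as described above
wf_en <- ifelse(outside, pmin(abs(h$to - lo), abs(h$to - hi)), 0)
degree_hours_en <- sum(wf_en) / nrow(h)

# 3) Average PPD (Equation 4-9)
avg_ppd <- mean(h$ppd)
```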
Table 4-5 Temperature ranges derived from time series data for the calculation of the new comfort indices

Range name   Meaning                        Operative temperature (°C)      Air temperature (°C)
                                            Summer        Winter            Summer        Winter
P20          The 40th to 60th percentile    23.6 – 24.0   23.3 – 23.6       23.3 – 23.7   23.1 – 23.5
P40          The 30th to 70th percentile    23.4 – 24.2   23.1 – 23.8       23.1 – 23.9   22.9 – 23.7
P60          The 20th to 80th percentile    23.2 – 24.5   22.9 – 24.0       22.9 – 24.2   22.7 – 23.9
P80          The 10th to 90th percentile    22.9 – 25.0   22.6 – 24.4       22.6 – 24.8   22.4 – 24.3
1sd          Mean ± 1sd                     22.9 – 25.0   22.7 – 24.2       22.6 – 24.8   22.5 – 24.2

3) Temperature variance

This index is based on the sample variance of the hourly average temperature for each SAMBA dataset:

$$index = \frac{\sum_{i=1}^{n}(T_{a,i} - \bar{T}_a)^2}{n-1} \quad \text{or} \quad \frac{\sum_{i=1}^{n}(T_{o,i} - \bar{T}_o)^2}{n-1}$$ (4-12)

where n = total number of occupied hours; $\bar{T}_a$ and $\bar{T}_o$ are the sample mean temperatures.

4) Daily range outlier

The range of temperatures measured over each business day is used to calculate the index as the percentage of days on which that range exceeds a nominal threshold.

$$index = \frac{\text{number of days that } T_a \text{ or } T_o \text{ daily range} > \text{a threshold}}{\text{total number of occupied days}} \times 100$$ (4-13)

For this analysis, we set the threshold based on percentiles of the observed daily ranges in the SAMBA time series data. Ten different values were tested (Table 4-6) to determine the threshold with the strongest correlation to thermal satisfaction. Weekly ranges were also tested but reported weaker correlations than the daily ranges and were therefore dropped from the analysis.

Table 4-6 Percentile thresholds tested for the daily range outlier indices. Stable conditions in the monitored offices result in a daily range of less than 2.5 °C in most cases.

Percentile              50th    60th    70th    80th    90th
𝑇𝑎 daily range (°C)     1.31    1.48    1.69    2.00    2.48
𝑇𝑜 daily range (°C)     1.20    1.36    1.56    1.83    2.29

5) Combined index

We combined the best-performing existing index and the best-performing new index in Equation 4-14. This index, normalised from 0 to 100, considers both whether the absolute temperature is within an acceptable range and whether the daily temperature range exceeds the percentile threshold.

$$index = \left( \frac{\text{number of hours that } T_a \text{ or } T_o \text{ is outside the ISO B ranges}}{\text{total number of occupied hours}} + \frac{\text{number of days that } T_a \text{ or } T_o \text{ daily range} > \text{the 80th percentile}}{\text{total number of occupied days}} \right) \times \frac{100}{2}$$ (4-14)

4.2.5 Correlation analysis

Pearson correlation analysis was used to investigate the linear relationship between the metric of thermal satisfaction votes and the long-term thermal comfort indices. We chose to report the Pearson coefficient (𝑟) because it is independent of the unit of measurement and is symmetric between X and Y, removing the need to scale the input data. There is no consensus on what constitutes a strong linear relationship; interpretation of the Pearson coefficient varies between fields and depends on the stated aims of the study. For clinicians, 0.2, 0.5, and 0.8 have been suggested as the thresholds to differentiate weak, moderate, and strong associations [262]. For psychological analysis, even lower thresholds may be used, i.e. 0.1, 0.2, and 0.3 [263]. In general statistics, thresholds can be 0.3, 0.5, and 0.7 [264], or 0.1, 0.3, and 0.5 [265], or 0.3 and 0.5 [266]. Regardless of the differences between disciplines, a higher correlation coefficient is more desirable as it indicates a stronger relationship between the long-term comfort indices and the subjective evaluation of the thermal environment.
In this analysis, we use absolute coefficient values of 0.3, 0.5, and 0.7 (the 30th, 50th, and 90th percentiles of the resulting 59 coefficients) to indicate weak, moderate, and strong linear relationships respectively, and p < 0.05 as indicating statistical significance of the correlation.

Given the small sample size (N = 33), we used 10-repeated 10-fold cross-validated linear regressions for all correlation analyses to improve the robustness of the results. Mean Absolute Error (MAE), the average absolute difference between observed and predicted outcomes, measures the performance of the indices such that the lower the MAE, the better the index. All analyses were performed using RStudio and R version 3.6.1 (2019-07-05) [267], along with the dplyr v0.8.3 [268], tidyr v0.8.3 [269], reshape2 v1.4.3 [270], lubridate v1.7.4 [271], zoo v1.8.6 [272], caret v6.0.84 [273], ggplot2 v3.2.0 [274], ggpubr v0.2.1 [275], and gridExtra v2.3 [276] packages.

4.3 Results

Figure 4-4 presents the Pearson correlation coefficients between the physical thermal comfort indices and the subjective index. The physical indices may be understood as types of severity measures, where higher values indicate greater dissatisfaction with the thermal environment. For this reason, we expect negative correlations between the physical indices and the subjective index.

Of the existing indices found in thermal comfort standards, only two (degree-hours 𝑇𝑜 outside ISO B and %𝑇𝑜 outside ISO B) have a strong linear relationship with the satisfaction measure. This indicates that the more often and the further 𝑇𝑜 deviated outside a specified temperature range, the more dissatisfied occupants felt over time. Indices based on PMV/PPD all reported weak linear relationships with thermal satisfaction.

The performance of the new indices shown in Figure 4-4 indicates that those based on mean temperatures, modified temperature ranges, and overall temperature variance have weak to moderate linear relationships with the subjective comfort measure. However, the daily range outlier indices show strong negative relationships to thermal satisfaction and outperform most of the existing indices. The strongest correlation coefficient is -0.80, for the frequency of daily air temperature ranges above the 80th percentile threshold (2 °C). In other words, increases in the daily occurrences of an air temperature range greater than 2 °C are highly correlated with lower occupant thermal satisfaction for this dataset.

Figure 4-4 The statistical relationships between the long-term comfort physical indices and the reported thermal satisfaction subjective index for 23 existing indices on the left (purple) and 36 new indices on the right (green). Indices are grouped by type, and the Pearson correlation coefficients for each index are given. Darker shading denotes statistical significance of the correlation (p < 0.05).

Combining the best-performing existing index with the best-performing new range outlier index for both 𝑇𝑜 and 𝑇𝑎 resulted in slightly lower correlation coefficients than the range outlier index alone. However, they still report strong linear relationships for both the operative (r = -0.74) and air temperature (r = -0.71) variants. The slight decrease in correlation strength in the air temperature metric may be due to ISO class B ranges being specified for operative temperatures. Nevertheless, the new combined indices outperform any of the existing indices and have the advantage of defining a static temperature range modulated by a dynamic component.
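To illustrate the regression procedure whose results are summarised in Table 4-7, the following minimal R sketch fits a 10-repeated 10-fold cross-validated linear model for one index using the caret package listed above. The data frame `index_data` and its columns `satisfaction` and `index_value` are hypothetical placeholders, not the study's actual objects.

```r
# Minimal sketch: 10-repeated 10-fold cross-validated linear regression of the
# thermal satisfaction measure on one long-term comfort index, reporting MAE.
library(caret)

set.seed(42)  # fixes the random fold assignment for reproducibility

ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 10)

fit <- train(satisfaction ~ index_value,
             data      = index_data,   # hypothetical N = 33 floor-level pairs
             method    = "lm",
             trControl = ctrl,
             metric    = "MAE")

fit$results$MAE                                        # cross-validated MAE
cor(index_data$index_value, index_data$satisfaction)   # simple Pearson r
```

Repeating this fit for each index and comparing the resulting MAE values produces the kind of ranking reported in Table 4-7.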
Table 4-7 Results of the simple linear regression and cross-validated linear regression of long-term comfort indices and the thermal satisfaction measure. Underlined is the best performing existing index as the baseline. Bold is better performance than baseline.

Index type | Parameter used | Linear regression: strongest index (r) | 10-repeated 10-fold cross-validation: strongest index (MAE)
Existing indices | 𝑇𝑎 | %𝑇𝑎 outside ISO B (-0.45) | %𝑇𝑎 outside ISO B (0.385)
Existing indices | 𝑇𝑜 | %𝑇𝑜 outside ISO B (-0.63) | Degree-hours 𝑇𝑜 outside ISO B (0.325)
Mean temperature | 𝑇𝑎 | 𝑇𝑎 mean (-0.49) | 𝑇𝑎 mean (0.357)
Mean temperature | 𝑇𝑜 | 𝑇𝑜 mean (-0.43) | 𝑇𝑜 mean (0.366)
New temperature range | 𝑇𝑎 | Degree-hours 𝑇𝑎 outside P20 (-0.55) | Degree-hours 𝑇𝑎 outside P40 (0.354)
New temperature range | 𝑇𝑜 | Degree-hours 𝑇𝑜 outside P20 (-0.59) | Degree-hours 𝑇𝑜 outside P20 (0.318)
Temperature variance | 𝑇𝑎 | 𝑇𝑎 variance (-0.5) | 𝑇𝑎 variance (0.373)
Temperature variance | 𝑇𝑜 | 𝑇𝑜 variance (-0.37) | 𝑇𝑜 variance (0.366)
Daily range outlier | 𝑇𝑎 | %𝑇𝑎 daily range > 2 °C (-0.80) | %𝑇𝑎 daily range > 2 °C (0.266)
Daily range outlier | 𝑇𝑜 | %𝑇𝑜 daily range > 1.83 °C (-0.74) | %𝑇𝑜 daily range > 1.83 °C (0.268)
Combined index | 𝑇𝑎 | %𝑇𝑎 outside ISO B + %𝑇𝑎 daily range > 2 °C (-0.71) | %𝑇𝑎 outside ISO B + %𝑇𝑎 daily range > 2 °C (0.326)
Combined index | 𝑇𝑜 | %𝑇𝑜 outside ISO B + %𝑇𝑜 daily range > 1.83 °C (-0.74) | %𝑇𝑜 outside ISO B + %𝑇𝑜 daily range > 1.83 °C (0.286)

To ensure robustness and further validate the results of the correlation analysis, we performed 10-repeated 10-fold cross-validated linear regression on both the existing indices and the newly proposed indices. The results of the linear regressions are shown alongside the best-performing indices from each index type to aid comparison in Table 4-7. The ranked performances of the old and new indices are the same in both regression analyses, indicating that the simple linear regression results are robust.

Another common concern for linear regression is the influence of outliers. We investigated the effect of outliers on the results of the linear regression using the Cook's distance test [277] to identify high leverage points. The Cook's distance measures the change in regression models when each of the observations is removed. Higher values of the Cook's distance indicate that removing a given observation will lead to a large change in the regression. When the Cook's distance is greater than 4/N (N = 33 in our analysis), the observation is deemed a high leverage point, also known as an outlier. Figure 4-5 presents scatterplots for the 12 best-performing indices from each index type listed in Table 4-7. Coefficients are inset for both the regression with all data points (r) and with outliers removed (r.new) based on Cook's distance. Circles are data points used in both regressions, and the red crosses mark high leverage points that were removed for the "r.new" regressions. As expected, removing high leverage points increased correlation strength in most cases; e.g. the best-performing new index based on daily range outliers increased from r = -0.80 to r = -0.83 after removing outliers. Though not marked in Figure 4-5, most outliers are from building D level 28, where the number of survey responses was small.

Figure 4-5 Scatterplots of the best-performing indices for each type (N = 33)
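The outlier screening used for the "r.new" regressions in Figure 4-5 can be expressed in a few lines of base R. The following is a minimal sketch under stated assumptions (the hypothetical data frame `index_data` with columns `satisfaction` and `index_value`), not the original analysis script.

```r
# Minimal sketch: flag high leverage points with Cook's distance > 4/N and
# compare the correlation with and without them (cf. Figure 4-5).
fit_all  <- lm(satisfaction ~ index_value, data = index_data)

cutoff   <- 4 / nrow(index_data)            # the 4/N rule used in this analysis
outliers <- cooks.distance(fit_all) > cutoff

r_all <- cor(index_data$index_value, index_data$satisfaction)
r_new <- cor(index_data$index_value[!outliers],
             index_data$satisfaction[!outliers])

c(r = r_all, r.new = r_new, removed = sum(outliers))
```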
4.4 Discussion

In the following sections we address some of the key findings that emerged from this analysis and then discuss interpretations of the results along with the limitations of the study.

4.4.1 Preparing time series data

Removing any IEQ measurements made during unoccupied hours seems an obvious part of the data preparation procedure but is surprisingly absent from any of the reviewed standards or guidelines. We found that applying rules to filter out unoccupied hours in the time series data meaningfully changed the correlation analysis results. If data from occupied hours (7:00 to 19:00) including weekends are used, then the best-performing existing index has a Pearson coefficient of -0.48. If the complete time series data is used without any time filters, then the coefficient reduces further to -0.35. In both cases, the relationship is weaker than the one we reported using occupied hours from weekdays only (-0.63). This is because building management systems are programmed to tightly control indoor environments during occupied hours but then cut back as occupant loads reduce, i.e. at night and on weekends. It is therefore important to carefully select the occupied hours when calculating long-term comfort indices for pairing with subjective evaluations so that the IEQ measurements reflect the actual conditions experienced by occupants.

4.4.2 Use of air or operative temperature

Most standards list operative temperature, 𝑇𝑜, as the input parameter for long-term comfort indices on the grounds that it is a better characterisation of the thermal environment that occupants are exposed to than air temperature, 𝑇𝑎. Operative temperature encompasses air temperature and mean radiant temperature. However, the results of the linear regressions in Table 4-7 show that half of the indices better predict occupants' long-term satisfaction using 𝑇𝑎 and the other half using 𝑇𝑜. This finding raises the important question of which temperature to use when evaluating the long-term performance of a thermal environment.

It is difficult to explain why the correlation coefficients differ, albeit slightly, when using 𝑇𝑎 or 𝑇𝑜. Figure 4-6 shows the comparison of both temperatures in the four monitored office buildings. Differences between 𝑇𝑎 and 𝑇𝑜 were small for most floors (less than 0.5 °C), with greater variance in 𝑇𝑎 than 𝑇𝑜. An analysis [278] of field measurements in the ASHRAE Global Thermal Comfort Database II [30] and additional laboratory testing reported similar differences in air and radiant temperatures, and suggested 𝑇𝑎 as an appropriate estimate of mean radiant temperature when it is not readily available. Moreover, recent studies have reported systematic errors when using traditional globe thermometers to measure radiant temperature [279,280]. For these reasons, it seems 𝑇𝑎 is sufficient as an input parameter for calculating long-term indices when 𝑇𝑜 has not been measured. This has the added advantage of enabling the use of continuous temperature records from building management systems for long-term evaluation of existing buildings.

Figure 4-6 Comparison of air temperature and operative temperature in the studied SAMBA datasets. Red dashed line is zero. The lower and upper hinges of the boxplots correspond to the 25th and 75th percentiles. The middle black bars are the medians.

4.4.3 Specifying index thresholds

We reported a higher correlation between the middle temperature range specified in ISO 7730 (ISO class B) and occupants' thermal satisfaction compared to both the narrower range (class A) and the wider range (class C).
A similar pattern was also observed for the new daily range outlier indices, where the correlation with the 80th percentile was stronger than with the 70th or 90th percentiles. One possible explanation for this finding is that the metric used to characterise the physical environment needs to have an appropriate level of sensitivity to distinguish periods when thermal conditions in an office are satisfactory from periods when they are unsatisfactory. If the temperature range or the threshold for the daily range outlier indices is too narrow, then the likelihood of a false negative classification is increased (i.e. conditions are measured to be outside the range or threshold even though occupants report satisfaction). Conversely, setting the boundaries too wide increases the likelihood of a false positive classification (i.e. conditions are measured to be within the range or threshold even though occupants report dissatisfaction).

Results from the monitored buildings in this study showed that the ISO class B temperature range and a 2 °C threshold for the 𝑇𝑎 daily range are the best indices to estimate occupant thermal satisfaction. It is possible, however, that the most appropriate index range or thresholds may vary with occupancy type, floor layout, and/or building design. For this reason, we caution against prematurely prescribing those specific range or threshold values as design guidelines for all buildings. Doing so would encourage excessive HVAC energy use to maintain stable indoor conditions during building operation. Instead, careful attention should be given to correctly specifying the index thresholds to best align with the thermal expectations of occupants for a given building.

4.4.4 Occupant adaptation and sensitivity to variation

The results of the correlation analysis showed that mean operative temperature has a moderate linear relationship with the subjective evaluation (r = -0.43), yet the frequency that operative temperature is outside a range is a better predictor of thermal (dis)satisfaction (r = -0.63). Furthermore, absolute variation in operative temperature has a lower correlation with the long-term thermal satisfaction (r = -0.37) than the frequency with which daily temperature changes exceed a wide range (r = -0.74). These findings indicate the possibility of more extreme excursions beyond some acceptable temperature range holding greater influence over occupants' long-term satisfaction than the average experience over time. Although the monitored buildings are centrally conditioned, the following section will explore the results within the framework of adaptive comfort theory. Doing so allows us to connect our findings to the idea that occupants' thermal expectations and the availability of adaptive opportunities can shape long-term thermal satisfaction in a building. One of the central tenets of adaptive comfort theory is that occupants actively respond to changing indoor environments by adjusting their behaviors and expectations [234]. The efficacy of those adjustments is influenced by a number of factors, ranging from building type and design to workplace culture and thermal physiology. Occupants are generally forgiving of moderate temperature variations because they are able to successfully regulate their personal environment to achieve thermal comfort. A common example is putting on a sweater when it is cool or turning on a desk fan when it is warm.
However, instances in which the magnitude of the variation exceeds the adaptive ability of occupants are much more likely to lead to expressions of dissatisfaction. In such cases, the indoor environment met neither the expectations of the occupants nor their capacity to adapt.

The results suggested that more extreme deviations in comfort may dominate occupants' long-term evaluation of the space. The evidence supporting this statement is the strong negative relationship reported between the frequency that 𝑇𝑎 varies by more than 2 °C in a day and the reported thermal satisfaction. Interestingly, this threshold matches field observations made by Humphreys in the 1970s [281], who reported very similar findings. However, the four monitored buildings are premium-grade offices with HVAC systems designed to deliver a narrow temperature range. Measurements of air temperature reported in Table 4-5 and Table 4-6 show that there is little variation in normal daily conditions in these offices. The majority of BOSSA survey respondents (76%) had worked in their building for more than six months, so it is reasonable to assume that they had come to expect uniform temperatures, considering the reported impact of thermal history on thermal expectations [282]. When unexpected deviations in those conditions occurred, those building occupants reported lower satisfaction. It is possible, then, that the source of dissatisfaction is not the variation in temperature from some absolute target range but rather the fact that the indoor environment did not deliver the conditions that building occupants had come to expect.

The question emerging from the results of our study is whether dissatisfaction arises from variability in temperature per se, or whether it is because occupants were not able to properly respond and adapt to conditions that did not meet their expectations. Simply concluding that occupants of these buildings prefer stable daily temperatures would contradict a large amount of extant literature on adaptation and variability. For example, a meta-analysis of the ASHRAE Global Thermal Comfort Database II [283] showed that acceptable temperatures extend across a wide range of conditions and vary depending on building type and climate/culture. There is also emerging evidence that building occupants adapt to the indoor temperatures they experience on a day-to-day basis, irrespective of climate or building conditioning strategy [284]. These studies all highlight the importance of occupants' expectations of a building in defining their comfort temperatures. Our results support this idea by showing that thermal satisfaction is less about the absolute indoor temperature than it is about exceeding some variability threshold. Put another way, instances of dissatisfaction occur when the magnitude of variation exceeds the adaptive opportunities available to occupants. It may be that variability is only a problem when building occupants have come to expect the constant, stable conditions that are afforded by modern HVAC systems. One practical solution would be to relax tight setpoint control and provide occupants with personal comfort systems to augment their ability to respond to variations in zone temperatures [242].
4.4.5 Proposed use of new index in standards

Given that most of the existing indices found in current international comfort standards do not correlate well with long-term thermal satisfaction, there is a need to propose new indices that better predict occupants' evaluations of indoor environments. Although the daily range outlier indices showed the strongest correlation coefficients, they do not explicitly set reasonable limits on permissible absolute indoor temperatures. Measurements from the monitored buildings clearly show that indoor temperatures fell within what most people would consider a comfortable range. However, it is unlikely that an indoor temperature of 10 °C controlled within ±1 °C would result in higher thermal satisfaction. Therefore, the most logical method for standards bodies to adopt would be the combined index that considers both the comfort temperature range and the daily variability. Equation 4-15 shows the general form of the recommended new index. This index can be used to evaluate the actual performance of a thermal environment over time for either building certification or comparison with other buildings, including benchmarking.

$$index = \frac{\%T_a \text{ outside specified ranges} + \%T_a \text{ daily range} > \text{a threshold}}{2} \tag{4-15}$$

For centrally conditioned office buildings, where occupants have fewer adaptive opportunities (respondents in our sample were on average slightly dissatisfied with their adaptive freedom), ISO class B temperature ranges of 23 °C to 26 °C in summer and 20 °C to 24 °C in winter are suitable for the temperature range component of the new index (Equation 4-16).

$$index = \left( \frac{\text{number of hours that } T_a \text{ is outside the ISO class B ranges}}{\text{total number of occupied hours}} + \frac{\text{number of days that } T_a \text{ daily range} > 2 \text{ °C}}{\text{total number of occupied days}} \right) \times \frac{100}{2} \tag{4-16}$$

4.4.6 Study limitations

The most obvious limitation of our study is the homogeneous building sample in our dataset. The four buildings are all centrally conditioned, premium-grade offices located in the same city. Similar analyses performed for different building types, designs, and operations across other locations, climates, and cultures are necessary before generalizing the findings, particularly the value of the variance threshold. Another limitation is the slight mismatch between the records in the BOSSA and SAMBA databases due to the phased roll-out, resulting in only four buildings with paired subjective and objective measurements. Furthermore, the BOSSA survey recorded the floor on which the respondents were working but not the zone. This necessitated the aggregation and averaging of both survey responses and physical measurements at the floor level. An ideal research design would directly measure the conditions at each occupant's desk over a year and routinely solicit surveys designed to evaluate their long-term thermal comfort. This presents significant logistical challenges and costs, and may become increasingly difficult considering the growing popularity of activity-based office design. Averaging by location is a simplified but realistic approach for a study of this kind. Future research efforts should aim to increase the sampling granularity by spatiotemporally tagging responses and IEQ measurements at the zone level of a building to improve the robustness of subsequent correlation analyses.
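As a worked illustration of the recommended index for centrally conditioned offices (Equation 4-16, Section 4.4.5), the sketch below computes it from an hourly air temperature record. It is a minimal sketch under stated assumptions: `ta_hourly` is a hypothetical data frame of occupied weekday hours with columns `timestamp`, `t_air`, and `season`, not the analysis code used in this chapter.

```r
# Minimal sketch of the combined index in Equation 4-16: percentage of occupied
# hours with air temperature outside the ISO class B ranges, plus percentage of
# occupied days whose daily air temperature range exceeds 2 degrees C, halved.
library(dplyr)
library(lubridate)

iso_b <- data.frame(season = c("summer", "winter"),
                    lower  = c(23, 20),
                    upper  = c(26, 24))

scored <- ta_hourly %>%
  left_join(iso_b, by = "season") %>%
  mutate(outside = t_air < lower | t_air > upper,
         day     = as_date(timestamp))

pct_hours_outside <- mean(scored$outside) * 100

daily <- scored %>%
  group_by(day) %>%
  summarise(daily_range = max(t_air) - min(t_air))

pct_days_exceeding <- mean(daily$daily_range > 2) * 100   # 2 degree C threshold

combined_index <- (pct_hours_outside + pct_days_exceeding) / 2
combined_index   # 0 = fully compliant over the period, 100 = never compliant
```

The specified ranges and the 2 °C threshold here are the context-specific values recommended above for premium-grade, centrally conditioned offices and would need to be re-derived for other building types.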
4.5 Conclusion This chapter presents the results of correlation analyses between continuous indoor thermal comfort measurements and long-term occupant thermal satisfaction in four air-conditioned office buildings in Sydney, Australia. We tested the performance of 23 indices found in international comfort standards and 36 newly proposed indices. The analysis yielded the following findings: 1. Existing indices based on the PMV heat-balance model and the associated PPD do not correlate well with long-term subjective evaluations of the thermal environment (|𝑟| ≤ 0.22).  2. The best-performing existing index is the percentage of time that operative temperature falls outside the ISO 7730 Class B temperature ranges (𝑟 = −0.63).  3. The mean and overall variance of temperature had moderate correlations with the thermal satisfaction measure (|𝑟| ≤ 0.5). 4. A newly proposed index based on daily temperature range had the strongest correlation (𝑟 = −0.8) and outperformed all existing indices. The frequency of daily temperature range exceeding 2 °C was a good measure of thermal (dis)satisfaction in this dataset. 5. Standards bodies should endorse a combined index to evaluate the long-term thermal comfort of indoor environments based on continuous monitoring of air   96  temperature. The proposed combined index (Equation 4-15) includes both the frequency of temperature falling outside a specified range and the daily variation in temperature beyond a threshold. As far as we are aware, this is the first evaluation of existing long-term thermal comfort indices using data collected in real office buildings. The results suggest that occupants’ thermal satisfaction with a space is dominated by the frequency and severity of temperature excursions outside an acceptable range and beyond a daily variability threshold. This implies that building managers should limit the number of days where temperatures move outside the range that occupants have come to expect. It may be possible to reduce HVAC energy consumption by providing greater adaptive freedom to occupants and promote a culture of self-resilience so that expected temperature ranges can be wider than the ones reported in this study. Finally, we suggest removing PMV/PPD-based long-term comfort indices from comfort standards and to include the proposed combined index based on temperature range and daily range exceedance (Equation 4-15) for long-term comfort evaluations during building operation phase. The threshold value in the combined index should be context-specific, and we encourage researchers to conduct similar correlation analyses for other locations and building types to improve the robustness of this novel method.    97  Chapter 5: Conclusion 5.1 Summary of results The review presented in Paper 1 demonstrates that the evaluation of building performance and occupant satisfaction in the post-occupancy phase has attracted increasing research attention over the past decade. It emerged from the review and analysis of extant literature that the most popular research targets are residential buildings followed by office buildings and evaluating occupant satisfaction has become the most common purpose of post-occupancy evaluations. Most POE projects that measured indoor environmental quality included some type of observations of the thermal conditions, and nearly half of the surveyed POE projects assessed the thermal environment. 
These findings highlight the importance of measuring occupants’ actual thermal comfort during the post-occupancy phase of a building. Paper 1 also summarized the practices for IEQ measurements including physical measurements and occupant surveys. Paper 2 used a global database of point-in-time thermal measurements and occupant surveys to assess the current point-in-time compliance criteria—three tiered PMV classes. The observed thermal satisfaction in real buildings showed no significant difference between the three classes, suggesting that tiered classifications based on PMV is not an appropriate compliance criterion. In fact, in the range of -1 < PMV < 1, there was negligible difference in the observed percentage of dissatisfied (OPD). It is clear from the results that narrower PMV ranges promote increased energy-intensive HVAC use without ensuring higher occupant satisfaction. Our analysis further indicates the inaccuracy of the PMV model by comparing OPD-PMV relationships to PPD-PMV relationship. OPD-PMV curves with an almost-flat bottom were less sensitive than the PPD-PMV curve, and the lowest dissatisfaction level (approximately 20%) was higher than the ideal 5% dissatisfaction predicted. In light of the poor performance of PMV-based metrics, we compared two statistical methods to derive new compliance criteria—acceptable temperature ranges—directly from the thermal comfort field study database. Using different data-driven methods yielded varying acceptable temperature ranges, so we performed a thorough methodological discussion to   98  compare two data-driven methods. The best-performing method consisted of two steps: 1) calculating individual neutral temperatures from corresponding air temperatures and thermal sensation votes; and 2) determining the acceptable temperature range from the 10th percentile to the 90th percentile of the neutral temperatures. The resulting acceptable temperature ranges (7.4 °C – 12.2 °C) are wider than the ISO 7730 (2 °C – 6 °C) and EN 16798 (maximum 26 °C and minimum 20 °C) mandate, and the reason may be three-fold: inaccuracy of PMV-PPD model, variance in the input variables of PMV model, and the generalization of the European context where ISO was predominantly used. This paper also discussed some problems with thermal comfort field studies, such as the challenges in using different psychometric scales, the question of the appropriateness of the 80% acceptability threshold for office buildings, and the importance of context on thermal comfort. Paper 2 also found it more difficult to achieve higher thermal satisfaction for occupants of offices compared to other building types like homes because people usually have lower levels of control over their office environments. Paper 3 therefore focused on this most difficult scenario—centrally conditioned offices where occupants lack thermal adaptation opportunities—and investigated the conditions that impacted occupants’ long-term thermal comfort. This analysis of current long-term compliance criteria—the long-term thermal comfort indices suggested in standards—used continuous in-situ thermal measurements and paired occupant feedback in four premium-grade centrally air-conditioned offices in Sydney, Australia. The Pearson correlation analyses between the 23 existing indices and the subjective evaluations demonstrated that those indices based on PMV and PPD do not correlate well with the long-term subjective evaluations. 
Two existing indices based on the ISO 7730 Class B temperature ranges showed a moderate-to-high linear relationship with the subjective evaluations.  To improve the long-term comfort indices, we tested 36 new indices (grouped by five types) that generally outperformed the existing indices. Specifically, the mean and overall variance of temperature had moderate correlations with the thermal satisfaction measure, and a newly proposed index based on daily temperature range had the strongest correlation (𝑟 =−0.8). The results suggest that occupants’ thermal satisfaction with a space is dominated by   99  pronounced excursions beyond some acceptable temperature range and large variations in daily temperature rather than the average experience over time. This paper also emphasized the importance of data filtering in data preparation, encouraged the use of air temperature instead of operative temperature, discussed the challenge in specifying the index thresholds, and attempted to explain the theories behind the results. Finally, we propose the use of a new combined index that considers both the frequency of temperature falling outside a specified range and the frequency of daily temperature variation beyond a threshold (Equation 4-15 shows the formula of the proposed index). 5.2 Contributions This dissertation explored what conditions are acceptable or desirable to occupants in real buildings from an unusual perspective—the investigation of the thermal comfort compliance criteria. The main contributions include 1. a bibliometric analysis to complement the existing literature on measurements in real buildings; 2. evidence to indicate the invalidity of current point-in-time compliance criteria and a recommended new data-driven method to derive new compliance criteria for point-in-time assessment; and  3. evidence to indicate the invalidity of current long-term compliance criteria and a proposed new index for long-term assessment. Specifically, Paper 1 analyzed 146 recent POE projects in the literature and summarized their evaluation targets, purposes, and methods used, identified 16 existing POE protocols, summarized three emerging research topics related to POE, and proposed five directions for future POE research. Paper 2 provided evidence to indicate the invalidity of current thermal comfort point-in-time compliance criteria in international standards—the tiered PMV classes—based on the largest-to-date, contemporary thermal comfort field study database (ASHRAE Global Thermal Comfort Database II). In recognition of the rising focus on data-driven methods, this paper for   100  the first time (to the author’s knowledge) discussed the differences between data-driven techniques from a methodological perspective in terms of their derived new compliance criteria. This discussion led to a recommendation of a universally applicable data-driven method for deriving acceptable temperature ranges from measured air temperatures and occupant thermal sensation votes. A comparison between the newly derived acceptable temperature ranges and the recommended comfort temperature ranges in ISO 7730 and EN 16798 suggests that some improvements to the standards could be made. Paper 3 is the first evaluation (as far as we are aware) of existing long-term thermal comfort indices. These indices, found throughout dominant international comfort standards, were proposed conceptually in the past without validation based on data collected in real buildings. 
This paper contributes to the domain knowledge by performing a validation exercise on long-term comfort indices. Analysis of continuous in-situ thermal measurements and occupant feedback led to proposed improvements in long-term comfort indices found in international standards, with a newly proposed combined index that was found to outperform all existing indices. Through correlation analyses between a variety of new indices and subjective thermal evaluations of building occupants, this paper also hypothesised the drivers of long-term thermal satisfaction are the extreme events rather than the averages. This marks an important contribution to our understanding of long-term thermal perception in commercial office buildings. 5.3 Practical implications The thorough introduction to post-occupancy evaluations in Paper 1 will help to facilitate beginners in this area and allow them to quickly grasp the concepts of POE. It will also serve to inform experienced investigators about the trends, gaps, and potential future directions in POE research. To create a bandwagon effect, POE should be mandated by building codes/standards, and this would lead to the mandate of installing slightly more advanced sensors in buildings other than the basic sensors for building automation systems. For Canada in particular, considering the climates in Canada (cold winter and mild summer in most places) and thus the huge amount of energy used for space heating (56% of total building energy consumption [15]) rather than space   101  cooling (5% of total building energy consumption [15]), it would be beneficial if future POEs in Canada focus more on the evaluation of the buildings with technologies to reduce space heating energy consumption, such as highly-insulated envelopes, ground heat exchangers, etc. Paper 2 calls for wider comfort temperature ranges in standards in the interests of reducing HVAC energy consumption without sacrificing occupant comfort, particularly when offering a high degree of freedom in thermal adaptation. However, we do not encourage direct adoption of the absolute temperature ranges emerging from the analysis of the entire ASHRAE database due to important contextual differences, e.g., people in hot climates tend to be more accepting of higher temperatures than those in mild climates. Instead, researchers and practitioners are encouraged to develop context-specific compliance criteria that are suitable for inclusion in relevant comfort standards e.g. a specific region, type of building etc. In addition, we advocate for a standardization of the use of psychometric scales in thermal comfort field studies. We propose the use of “thermal acceptability” scale for thermal comfort compliance assessments, “thermal sensation” scale for research related to the human thermoregulatory system, and “thermal preference” for building control applications. This would reduce the assumptions made when converting between metrics and increase the efficiency and effectiveness of meta-analyses of large databases.  Paper 3 suggests the endorsement of a newly proposed long-term comfort index (Equation 4-15) which considers both the comfort temperature range and the daily temperature variance in future amendments of thermal comfort standards. 
For centrally conditioned office buildings where occupants have relatively low adaptive freedom, we recommend using Equation 4-16, where ISO class B temperature ranges of 23 °C to 26 °C in summer and 20 °C to 24 °C in winter are used as the prescribed comfort temperature ranges and the daily temperature variance component takes a threshold of 2 °C. However, the results do not directly lead to a recommendation of a 2 °C daily temperature band for everyday building operation. Although the correlation results show that the frequency with which the daily temperature range varies more than 2 °C is highly correlated with thermal dissatisfaction over time, it is possible that this is the result of occupants’ raised expectations due to the past stable operations of these premium-  102  grade offices. Given the wide acceptable temperature ranges reported in Paper 2, it is unlikely that occupants in naturally ventilated buildings or offices would be dissatisfied when daily temperatures swing more than 2 °C. Therefore, a more compelling interpretation of the results would be that daily variability is only a problem when building occupants have come to expect constant conditions, but the actual environment does not meet their expectations, and the magnitude of variation exceeds the adaptive opportunities available to them. Practically speaking, to ensure occupants’ long-term thermal comfort, HVAC systems should keep the indoor temperatures within a moderately wide but comfortable range and provide occupants with adaptive freedom to augment their ability to respond to temperature variations. In summary, we suggest the following amendments to current thermal comfort standards:  1. removal of tiered PMV classes as point-in-time compliance criteria, and instead use field data to derive context-based acceptable temperature ranges; and  2. removal of PMV/PPD-based long-term comfort indices and the inclusion of the proposed combined index based on temperature range and daily range exceedance.  A practical solution for building managers to reduce energy consumption and increase human comfort is to relax the tight setpoint control and provide occupants greater adaptive freedom to promote a culture of self-resilience. The role of the facility managers would then be to operate the building within the range of occupants’ expectations. 5.4 Limitations and future research Although Paper 1 provided a comprehensive review of literature that included the term “post-occupancy evaluation”, it is likely that some POE projects were not included in our bibliometric analysis due to the use of terms other than POE. Nevertheless, Paper 1 suggested five transitions for future POE research based on the literature review: from one-off to continuing, high-level to detailed, research-oriented to owner/occupant-oriented, from academia to industry, and from independent to integrated. For Papers 2 and 3, the major limitation is the use of secondary data as the basis for the analyses, so there was limited control over the data acquisition protocols and experiment design.   103  The ASHRAE database used in Paper 2 contains field studies from various locations, climates, and cultures. It is difficult to know if those data are representative of normal everyday conditions experienced by the occupants of those building. Furthermore, important contextual and information missing from the dataset makes it difficult to promote the wholesale adoption of the derived acceptable temperature ranges. 
We therefore suggest the main contribution of Paper 2 is the method to derive new compliance criteria rather than the derived temperature ranges themselves. We encourage researchers around the world to apply this method to their context-specific datasets to derive acceptable temperature ranges. Unlike the database used in Paper 2 which was too heterogeneous, the dataset in Paper 3 was too homogeneous—the same type of building in the same location. The observed trend, especially the 2 °C threshold for daily temperature variance, may not be appropriate for other contexts or building types. Therefore, we recommend a general form of the newly proposed long-term comfort index and encourage researchers to conduct similar correlation analyses in the future using data collected in other types of buildings and in other locations. In general, in order to provide real thermal comfort to occupants, we need more fundamental future research to understand the differences between contexts and the reasons/mechanisms behind the differences which could be gender, age, culture, or other factors. And to enable effective and efficient conditioning, researchers can exploit advanced technologies such as robots and new materials for personalized comfort systems, artificial intelligence and machine learning algorithms for operation optimization, etc.    104  Bibliography [1] International Energy Agency, Energy Efficiency: Buildings, (2019). https://www.iea.org/topics/energyefficiency/buildings/ (accessed October 9, 2019). [2] T. Ramesh, R. Prakash, K.K. Shukla, Life cycle energy analysis of buildings : An overview, Energy Build. 42 (2010) 1592–1600. doi:10.1016/j.enbuild.2010.05.007. [3] N.E. Klepeis, W.C. Nelson, W.R. Ott, J.P. Robinson, A.M. Tsang, P. Switzer, J. V Behar, S.C. Hern, W.H. Engelmann, The National Human Activity Pattern Survey (NHAPS): a resource for assessing exposure to environmental pollutants, J. Expo. Anal. Environ. Epidemiol. 11 (2001) 231–252. doi:10.1038/sj.jea.7500165. [4] World Green Building Council, Health, Wellbeing & Productivity in Offices: The next chapter for green building, 2015. [5] Federal Facilities Council, Learning from our buildings: A state-of-the-practice summary of Post-occupancy evaluation, 2002. [6] ASHRAE, ANSI/ASHRAE Standard 55-2017 Thermal Environmental Conditions for Human Occupancy, (2017). [7] M. Frontczak, P. Wargocki, Literature survey on how different factors influence human comfort in indoor environments, Build. Environ. 46 (2011) 922–937. doi:10.1016/j.buildenv.2010.10.021. [8] C. Karmann, S. Schiavon, E. Arens, Percentage of commercial buildings showing at least 80% occupant satisfied with their thermal comfort, in: 10th Wind. Conf. Rethink. Comf., Windsor, UK, 2018: pp. 1–7. [9] J. Kim, R. de Dear, Nonlinear relationships between individual IEQ factors and overall workspace satisfaction, Build. Environ. 49 (2012) 33–40. doi:10.1016/j.buildenv.2011.09.022. [10] L. Fang, G. Clausen, P.O. Fanger, Impact of Temperature and Humidity on the Perception of, Indoor Air. 8 (1998) 80–90. [11] D. Heinzerling, S. Schiavon, T. Webster, E. Arens, Indoor environmental quality assessment models: a literature review and a proposed weighting and classification scheme, Intern. Report, Cent. Built Environ. UC Berkeley. (2013).   105  http://www.cbe.berkeley.edu/research/commissioning.htm. [12] Australian Government - Department of Industry, HVAC Energy Breakdown Factsheet, (2013). 
http://industry.gov.au/Energy/EnergyEfficiency/Non-residentialBuildings/HVAC/FactSheets/Documents/HVACFSEnergyBreakdown.pdf. [13] U.S. Energy Information Administration, Energy use in commercial buildings, (2017). https://www.eia.gov/energyexplained/index.php?page=us_energy_commercial (accessed August 22, 2018). [14] U.S. Energy Information Administration, Heating and cooling no longer majority of U.S. home energy use, (2013). https://www.eia.gov/todayinenergy/detail.php?id=10271 (accessed August 22, 2018). [15] Natural Resources Canada, HVAC & Energy Systems, (2016). https://www.nrcan.gc.ca/energy/efficiency/data-research-and-insights-energy-efficiency/housing-innovation/hvac-energy-systems/3937 (accessed February 14, 2020). [16] International Energy Agency, The Future of Cooling, 2018. [17] Passivhaus Institut, Building envelope, (2019). https://passipedia.org/planning/thermal_protection (accessed February 14, 2020). [18] ISO, ISO/FDIS 7730:2005 Ergonomics of the thermal environment — Analytical determination and interpretation of thermal comfort using calculation of the PMV and PPD indices and local thermal comfort criteria, (2005). [19] European Committee for Standardization (CEN), EN 16798-2:2019 Energy performance of buildings - Ventilation for buildings - Part 2: Interpretation of the requirements in EN 16798-1 - Indoor environmental input parameters for design and assessment of energy performance of buildings addressing indoor a, (2019). [20] P.O. Fanger, Thermal comfort. Analysis and applications in environmental engineering., Copenhagen: Danish Technical Press., 1970. [21] M.A. Humphreys, J. Fergus Nicol, The validity of ISO-PMV for predicting comfort votes in every-day thermal environments, Energy Build. 34 (2002) 667–684. doi:10.1016/S0378-7788(02)00018-X. [22] T. Cheung, S. Schiavon, T. Parkinson, P. Li, G. Brager, Analysis of the accuracy on PMV –   106  PPD model using the ASHRAE Global Thermal Comfort Database II, Build. Environ. Submitted (2018). [23] P.O. Fanger, J. Toftum, Extension of the PMV model to non-air-conditioned buildings in warm climates, Energy Build. 34 (2002) 533–536. [24] R.J. de Dear, G.S. Brager, Developing an adaptive model of thermal comfort and preference, ASHRAE Trans. 104 (1998) 1–18. https://escholarship.org/uc/item/4qq2p9c6. [25] J.F. Nicol, M.A. Humphreys, Adaptive thermal comfort and sustainable thermal standards for buildings, Energy Build. 34 (2002) 563–572. doi:10.1016/S0378-7788(02)00006-3. [26] F. Nicol, M. Humphreys, Derivation of the adaptive equations for thermal comfort in free-running buildings in European standard EN15251, Build. Environ. 45 (2010) 11–17. doi:10.1016/j.buildenv.2008.12.013. [27] E. Arens, M.A. Humphreys, R. de Dear, H. Zhang, Are “class A” temperature requirements realistic or desirable?, Build. Environ. 45 (2010) 4–10. doi:10.1016/j.buildenv.2009.03.014. [28] S. Roaf, F. Nicol, M. Humphreys, P. Tuohy, A. Boerstra, Twentieth century standards for thermal comfort: Promoting high energy buildings, Archit. Sci. Rev. 53 (2010) 65–77. doi:10.3763/asre.2009.0111. [29] F.R. d’Ambrosio Alfano, B.I. Palella, G. Riccio, The role of measurement accuracy on the thermal environment assessment by means of PMV index, Build. Environ. 46 (2011) 1361–1369. doi:10.1016/j.buildenv.2011.01.001. [30] V. Földváry Ličina, T. Cheung, H. Zhang, R. de Dear, T. Parkinson, E. Arens, C. Chun, S. Schiavon, M. Luo, G. Brager, P. Li, S. Kaam, M.A. Adebamowo, M.M. Andamon, F. Babich, C. Bouden, H. Bukovianska, C. Candido, B. Cao, S. Carlucci, D.K.W. 
Cheong, J.H. Choi, M. Cook, P. Cropper, M. Deuble, S. Heidari, M. Indraganti, Q. Jin, H. Kim, J. Kim, K. Konis, M.K. Singh, A. Kwok, R. Lamberts, D. Loveday, J. Langevin, S. Manu, C. Moosmann, F. Nicol, R. Ooka, N.A. Oseland, L. Pagliano, D. Petráš, R. Rawal, R. Romero, H.B. Rijal, C. Sekhar, M. Schweiker, F. Tartarini, S. ichi Tanabe, K.W. Tham, D. Teli, J. Toftum, L. Toledo, K. Tsuzuki, R. De Vecchi, A. Wagner, Z. Wang, H. Wallbaum, L. Webb, L. Yang, Y. Zhu, Y. Zhai, Y. Zhang, X. Zhou, Development of the ASHRAE Global Thermal Comfort Database II, Build. Environ. 142 (2018) 502–512. doi:10.1016/j.buildenv.2018.06.022.   107  [31] T. Parkinson, A. Parkinson, R. de Dear, Continuous IEQ monitoring system: Context and development, Build. Environ. 149 (2019) 15–25. doi:10.1016/j.buildenv.2018.12.010. [32] C. Candido, J. Kim, R. de Dear, L. Thomas, BOSSA: a multidimensional post-occupancy evaluation tool, Build. Res. Inf. 44 (2016) 214–228. doi:10.1080/09613218.2015.1072298. [33] U.S. Energy Information Administration, Buildings sector energy consumption, 2016. doi:www.eia.gov/forecasts/ieo/pdf/0484(2016).pdf. [34] U.S. Energy Information Administration, Energy consumption by sector, (2017). https://www.eia.gov/totalenergy/data/monthly/#consumption (accessed October 16, 2017). [35] M.O. Sanni-Anibire, M.A. Hassanain, A.-M. Al-Hammad, Post-Occupancy Evaluation of Housing Facilities: Overview and Summary of Methods, J. Perform. Constr. Facil. 30 (2016). doi:10.1061/(ASCE)CF.1943-5509.0000868. [36] J.H.K. Lai, C.S. Man, J.H.K. Lai, C.S. Man, Developing a performance evaluation scheme for engineering facilities in commercial buildings : state-of-the-art review, 9179 (2017). doi:10.3846/1648715X.2016.1247304. [37] A. Leaman, F. Stevenson, B. Bordass, Building evaluation: Practice and principles, Build. Res. Inf. 38 (2010) 564–577. doi:10.1080/09613218.2010.495217. [38] K. Hadjri, C. Crozier, Post-occupancy evaluation: purpose, benefits and barriers, Facilities. 27 (2009) 21–33. doi:10.1108/02632770910923063. [39] I. Cooper, Post-occupancy evaluation - Where are you?, Build. Res. Inf. 29 (2001) 158–163. doi:10.1080/09613210010016820. [40] B. Birt, G.R. Newsham, Post-occupancy evaluation of energy and indoor environment quality in green buildings : a review, in: 3rd Int. Conf. Smart Sustain. Built Environ., 2009: pp. 1–7. http://www.sasbe2009.com/proceedings/documents/SASBE2009_paper_POST-OCCUPANCY_EVALUATION_OF_ENERGY_AND_INDOOR_ENVIRONMENT_QUALITY_IN_GREEN_BUILDINGS_-_A_REVIEW.pdf. [41] G. Herda, V. Autio, C. Lalande, Building Sustainability Assessment and Benchmarking - An Introduction, 2017. doi:10.1002/mrdd.20080. [42] C. Turner, M. Frankel, Energy Performance of LEED ® for New Construction Buildings,   108  (2008). [43] G.R. Newsham, S. Mancini, B.J. Birt, Do LEED-certified buildings save energy? Yes, but…, Energy Build. 41 (2009) 897–905. doi:10.1016/j.enbuild.2009.03.014. [44] J.H. Scofield, Do LEED-certified buildings save energy? Not really…, Energy Build. 41 (2009) 1386–1390. doi:10.1016/j.enbuild.2009.08.006. [45] J.H. Scofield, Efficacy on LEED-certification in reducing energy consumption and greenhouse gas emissions for large New York City office buildings, Energy Build. 67 (2013) 517–524. doi:10.1016/j.enbuild.2013.08.032. [46] S. Altomonte, S. Schiavon, Occupant satisfaction in LEED and non-LEED certified buildings, Build. Environ. 68 (2013) 66–76. doi:10.1016/j.buildenv.2013.06.008. [47] S. Altomonte, S. Schiavon, M.G. Kent, G. 
Brager, Indoor environmental quality and occupant satisfaction in green-certified buildings, Build. Res. Inf. (2017). doi:10.1080/09613218.2018.1383715. [48] Z. Gou, S.S.-Y. Lau, Z. Zhang, a Comparison of Indoor Environmental Satisfaction Between Two Green Buildings and a Conventional Building in China, J. Green Build. 7 (2012) 89–104. doi:10.3992/jgb.7.2.89. [49] G. Newsham, B.J. Birt, C. Arsenault, A.J.L. Thompson, J. a. Veitch, S. Mancini, A.D. Galasiu, B.N. Gover, I. a. Macdonald, G.J. Burns, Do ‘green’ buildings have better indoor environments? New evidence, Build. Res. Inf. 41 (2013) 415–434. doi:10.1080/09613218.2013.789951. [50] H.H. Liang, C.P. Chen, R.L. Hwang, W.M. Shih, S.C. Lo, H.Y. Liao, Satisfaction of occupants toward indoor environment quality of certified green office buildings in Taiwan, Build. Environ. 72 (2014) 232–242. doi:10.1016/j.buildenv.2013.11.007. [51] A. Hedge, L. Miller, J.A. Dorsey, Occupant comfort and health in green and conventional university buildings, Work. 49 (2014) 363–372. doi:10.3233/WOR-141870. [52] International Living Future Institute, Living Building Challenge 3.0, 2014. http://living-future.org/sites/default/files/reports/FINAL LBC 3_0_WebOptimized_low.pdf. [53] Delos Living LLC., The WELL Building Standard v1, 2015. [54] BOMA, ABOUT BOMA BEST, (2017). http://bomacanada.ca/bomabest/aboutbomabest/   109  (accessed August 18, 2017). [55] AASHE, STARS Overview, (2017). https://stars.aashe.org/pages/about/stars-overview.html (accessed August 18, 2017). [56] Office of Environment and Heritage on behalf of Federal State and Territory governments, National Australian Built Environment Rating System (NABERS), (2017). https://nabers.gov.au/public/webpages/home.aspx (accessed October 16, 2017). [57] W.F.E. Preiser, Building Performance Assessment—From POE to BPE, A Personal Perspective, Archit. Sci. Rev. 48 (2005) 201–204. doi:10.3763/asre.2005.4826. [58] P. Manning, Office Design: A Study of Environment, The Research Unit, 1965. [59] T.A. Markus, The role of building performance measurement and appraisal in design method, Archit. J. 146 (1967) 1567–1573. [60] J. Daish, J. Gray, D. Kernohan, A. Salmond, Post occupancy evaluation in New Zealand, Des. Stud. 3 (1982) 77–83. doi:10.1016/0142-694X(82)90052-7. [61] G. Davis, F.T. Ventre, Performance of Buildings and Serviceability of Facilities, American Society for TEsting and Materials, 1990. [62] W.F.E. Preiser, E. White, H. Rabinowitz, Post-Occupancy Evaluation, 1988. [63] G. Baird, Building Evaluation Techniques, McGraw-Hill, 1996. [64] J.E. Reckermann, CIRS pre-occupancy evaluation : inhabitant feedback processes and possibilities for a regenerative place, The University of British Columbia, 2014. [65] S. Coleman, Normalizing Sustainability in a Regenerative Building: the Social Practice of Being at CIRS, The University of British Columbia, 2016. [66] W.F.E. Preiser, Post-occupancy evaluation: how to make buildings work better, Facilities. 13 (1995) 19–28. doi:10.1108/02632779510097787. [67] S. Kalantari, R. Snell, Post-Occupancy Evaluation of a Mental Healthcare Facility Based on Staff Perceptions of Design Innovations, HERD Heal. Environ. Res. Des. J. 10 (2017) 193758671668771. doi:10.1177/1937586716687714. [68] L. Callaway, K. Tregloan, G. Williams, R. Clark, Evaluating Access and Mobility within a New Model of Supported Housing for People with Neurotrauma: A Pilot Study, Brain Impair. 17 (2016) 64–76. doi:10.1017/BrImp.2016.7.   110  [69] N. Dikmen, S.T. 