International Construction Specialty Conference of the Canadian Society for Civil Engineering (ICSC) (5th : 2015)

A multi-perspective assessment method for measuring leading indicators in capital project benchmarking. Choi, Jiyong; Yun, Sungmin; Mulva, Stephen P.; Oliveira, Daniel; Kang, Youngcheol. Jun 30, 2015

Full Text

5th International/11th Construction Specialty Conference
5e International/11e Conférence spécialisée sur la construction
Vancouver, British Columbia
June 8 to June 10, 2015 / 8 juin au 10 juin 2015

A MULTI-PERSPECTIVE ASSESSMENT METHOD FOR MEASURING LEADING INDICATORS IN CAPITAL PROJECT BENCHMARKING

Jiyong Choi1, Sungmin Yun2,4, Stephen Mulva2, Daniel Oliveira2, and Youngcheol Kang3
1 Department of Civil, Architectural and Environmental Engineering, University of Texas at Austin, USA
2 Construction Industry Institute, University of Texas at Austin, USA
3 International School of Urban Science, University of Seoul, South Korea
4 smyun@utexas.edu

Abstract: This paper presents a new multi-perspective assessment method for measuring the leading indicators deployed in the 10-10 Performance Assessment System recently launched by the Construction Industry Institute (CII). The CII 10-10 Performance Assessment System adopted a multi-perspective assessment approach for evaluating leading indicators that represent various management input measures throughout the capital project delivery process. The leading indicators consist of 10 input measures, including four fundamental management functions (planning, organizing, leading, and controlling) as well as major management practices (design efficiency, human resources, quality, sustainability, supply chain, and safety). This paper provides the theoretical background for the method through an extensive review of existing benchmarking theories. It then describes the development process for the assessment method and presents how the method was deployed to evaluate the system's 10 leading indicators. Finally, the paper discusses how the input measure scores acquired from the method can be used in practice for performance improvement. The assessment method will help project management teams diagnose their project's performance and thus allow them to set up proactive strategies for the subsequent phases of the project.

1 INTRODUCTION

Since 1996, the Construction Industry Institute (CII) has initiated various industry-specific performance assessment programs to reliably measure an organization's performance against recognized leaders and to determine the best practices that lead to better performance (CII 2015). As a large data repository of capital projects in the construction industry across the world, the CII Performance Assessment database has been used for benchmarking capital projects collected from CII member companies and for various research efforts, such as pre-project planning (Gibson et al. 2006) and the impact of technology use on project performance (Kang et al. 2013). The CII 10-10 Performance Assessment Program (the 10-10 Program), the newest initiative, builds on this legacy by providing the industry with a benchmarked set of leading indicators for several project types and industry sectors (CII 2014a). All CII research studies and the existing CII Performance Assessment surveys were investigated in developing the questionnaires. Moreover, extensive input from industry experts was reflected in the 10-10 Program; this input was acquired from a number of CII activities and events held from 2012 to 2014, such as CII's Board of Advisors (BoA) meetings, the CII Performance Assessment Community of Practice (PACOP), and the CII Performance Assessment Workshop (PAW). The knowledge from industry expertise and previous academic research thus forms the basis of the program.
There are three different sets of industry-specific questionnaires in the 10-10 Program. Each set consists of five phase-level questionnaires covering front end planning (or programming), engineering (or design), procurement, construction, and startup (or commissioning). The 10-10 Program was thus designed to collect capital project data by project phase rather than at project closeout, when previous performance assessment programs were typically conducted (CII 2014b). Notably, the 10-10 Program was developed to survey members of a project's management team regarding their project's performance, team dynamics, and organizational relationships (Kang et al. 2014). Because the 10-10 Program surveys each phase using simple statement-based questions, 10 leading indicators (i.e., input measures) are obtained throughout a project's development, which can reveal a project's impending problems. The 10 output measures (i.e., lagging indicators), in turn, provide assurance that the project is proceeding on target through various metrics related to cost, schedule, capacity, quantity, and safety. Together, these measures are the basis of the new program's name, 10-10 (CII 2014b).
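To make this structure easier to picture, the sketch below encodes the program layout described above as plain Python constants. It is illustrative only, not CII software; in particular, the industry-group labels are placeholders, since the paper states only that there are three industry-specific questionnaire sets.

```python
# Illustrative sketch of the 10-10 Program structure described above (not CII code).
# The industry-group names are assumed placeholders; the paper only says that
# three industry-specific questionnaire sets exist.
INDUSTRY_GROUPS = ["Industrial", "Building", "Infrastructure"]

PROJECT_PHASES = [
    "Front End Planning",   # or programming
    "Engineering",          # or design
    "Procurement",
    "Construction",
    "Startup",              # or commissioning
]

# The 10 leading indicators (input measures) named in this paper.
LEADING_INDICATORS = [
    "Planning", "Organizing", "Leading", "Controlling",
    "Design Efficiency", "Human Resources", "Quality",
    "Sustainability", "Supply Chain", "Safety",
]

# The 10 lagging output measures cover metrics in these categories;
# the individual metric names are not listed in this paper.
OUTPUT_MEASURE_CATEGORIES = ["cost", "schedule", "capacity", "quantity", "safety"]

# One questionnaire per (industry group, phase) pair: 3 x 5 = 15 questionnaires.
QUESTIONNAIRES = [(g, p) for g in INDUSTRY_GROUPS for p in PROJECT_PHASES]
assert len(QUESTIONNAIRES) == 15
```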
For the input measure section, the 10-10 Program adopted a multi-perspective assessment approach for evaluating leading indicators that represent various management efforts throughout the capital project delivery process. The input measures consist of 10 leading indicators, including four fundamental management functions (planning, organizing, leading, and controlling) as well as major management practices (design efficiency, human resources, quality, sustainability, supply chain, and safety). Accordingly, the questions in the input measure section are used to produce 10 scores representing the 10 leading indicators, so that a project can be compared with other similar projects. However, a structured process is required to obtain and benchmark the 10 input measures in an appropriate manner. This paper provides the theoretical background for the multi-perspective assessment method through an extensive review of existing benchmarking theories. It then describes the development process for the assessment method.

2 RESEARCH BACKGROUND

One of the critical issues in contemporary benchmarking is the lack of information supporting project decisions and influencing project performance during project planning and execution (Kang et al. 2014). Because project data are submitted to the CII Performance Assessment database after projects are completed, the outcomes of benchmarking cannot directly benefit the projects being benchmarked; rather, the results are typically used for future projects. This fact motivated CII to develop a new program for gathering the information that supports project decisions and influences capital project performance, based on a comprehensive review of references from academia and industry.

At the outset, CII's benchmarking legacy was thoroughly reviewed, including CII research publications (CII 1987, CII 1989, CII 1997, CII 2006a, CII 2006b, CII 2006c, CII 2008, CII 2010, CII 2011d, CII 2011b), implementation tools (CII 1995, CII 2003a, CII 2003b, CII 2011c), and survey instruments (CII 2011a, CII 2012a, CII 2012c). All existing survey questions created to capture the extent of implementation of CII best practices, which are defined as processes and methods that lead to enhanced project performance when executed effectively, were thoroughly investigated (CII 2015).

Also, research results on performance assessment and benchmarking conducted in academia were reviewed (Kasunic 2008, Zhang 2005, Yu et al. 2006). In addition, survey instruments for performance assessment developed by industry practitioners were examined. Once draft questionnaires were generated from these multiple sources, the questions were expanded, filtered, and combined using the industry practitioners' expertise collected at several CII events and activities (Kang et al. 2014). With the questionnaires complete, the 10-10 Program intends to assess capital project performance by phase so that a capital project can identify whether it is properly positioned for success in that specific phase, as well as in subsequent phases (CII 2014a).

3 METHOD FOR MEASURING LEADING INDICATORS

Leading indicators are defined as the measurements of processes, activities, and conditions that define performance and can predict future results (CII 2012b). Moreover, leading indicators allow for proactive management actions that impact project outcomes when problems are revealed in a timely manner (CII 2006b). After thorough reviews of the publications, tools, and survey instruments developed by CII, academia, and industry, the selected questions were organized and classified by leading indicator so that they can be utilized during project planning and execution. Each leading indicator is linked with potential CII resources for improving project performance (CII 2013). For example, when the organizing score of a project is low, the situation can be improved by looking at the indicator's linkage to applicable best practices and resources. This linkage helps the project identify which implementation resources should be considered for improvement (CII 2013).

Among the 10 leading indicators, planning, organizing, leading, and controlling have long been recognized as core management functions in a business organization (Tsoukas 1994). The other leading indicators were adopted based on the literature review and industry experts' feedback (Kang et al. 2014). The first focus of this research was to define each of the 10 input measures so that each question in the input measure section could be grouped into the relevant leading indicators with regard to industry group and phase. Figure 1 illustrates the 10 leading indicators that form the multi-perspective assessment framework for capital project benchmarking by industry group and phase.

Figure 1: Multi-Perspective Assessment Framework in the 10-10 Program

The 10 leading indicators are defined as follows:

• Planning is the work a manager performs to predetermine a course of action. The function of planning includes activities such as forecasting, objective setting, program development, scheduling, budgeting, and policies and procedures development.
• Organizing is the work a manager performs to arrange and relate the work to be done so people can perform it most effectively. The function of organizing includes activities such as development of the organization structure, delegation of responsibility and authority, and establishment of relationships.
• Leading is the work a manager performs to cause people to take effective action. The activities involved in the function of leading include decision-making, communications, motivation, selection of people, and development of people.
• Controlling is the work a manager performs to assess and regulate work in progress and completed. Management controls are achieved through activities such as establishment of performance standards, measurement of performance, evaluation of performance, and correction of performance.
• Design Efficiency measures whether the project team is exhausting all techniques to optimize the design in its use of material quantities to provide maximum capacity at minimum cost.
• Human Resources examines whether the project is staffed correctly, with a minimum amount of staff turnover and appropriate training, and measures whether people are capable of achieving project goals.
• Quality measures whether the project team is strictly conforming to project requirements, and analyzes whether programs are pursued to assure the delivery of material goods as intended.
• Sustainability evaluates the steps taken by the project team to reduce the environmental impact of the project during construction and operation.
• Supply Chain examines the strategies used by the project team to promote enhanced working relationships amongst all project stakeholders, including those in the project supply chain.
• Safety measures the practices followed by the project team to eliminate any possibility of personal injury or property damage on the project.

To measure these 10 leading indicators effectively in the multi-perspective assessment for capital project benchmarking, the 10-10 Program was designed to obtain the input measures through various types of questions, including yes/no, single/multiple selection, numeric open-ended, and Likert response scales ranging from 'strongly agree' to 'strongly disagree', as presented in Figure 2. The fifteen 10-10 questionnaires have different numbers and types of questions so that they can measure phase- and industry-specific project performance.

Figure 2: Example of Types of Questions in the CII 10-10 Input Measures

Significantly, most questions in the input measure section were intentionally structured to be subjective. This approach requires respondents to invest less effort in data entry than when they are asked for actual values that require additional effort to search for and gather, such as actual project cost, duration, or number of cases (CII 2013). Statement-based assessment is often criticized for inconsistency in responses arising from respondents' subjective perceptions. For this reason, the 10-10 Program was designed so that the input measures are assessed by various members of the project's management team (Kang et al. 2014). When numerous responses from a single project are collected, inconsistencies are expected to be effectively reduced (CII 2013). Moreover, assessment through multiple responses for a project helps identify the extent to which the project's team members are aligned during project planning and execution (Kang et al. 2014).

The 10-10 Program can evaluate, through a single representative value, the level of managerial effort a capital project has committed to implementing the attributes of each leading indicator, and can compare the project with similar projects. These leading indicators can then be used by industry practitioners to identify where opportunities for improvement exist in subsequent phases or in future projects. A detailed procedure for quantifying the 10 leading indicators was established. The procedure can be applied to all industry groups and project phases, as shown in Figure 3.
The calculation procedure consists of four major steps: 1) score calculation for each individual input measure question; 2) weighted score calculation; 3) aggregation of the weighted individual question scores; and 4) normalization of the aggregated input measure scores.

3.1 Score Calculation for Individual Questions

Table 1: Five-Point Scale Used for Likert-Scale Questions

Scale | Strongly Disagree | Disagree | Neutral | Agree | Strongly Agree
Point |         0         |    1     |    2    |   4   |       5

Unlike single-choice questions such as yes/no and Likert-scale questions, multiple-selection questions require diverse point scales because positive and negative statements are provided together for selection. Accordingly, the positive and negative statements in the multiple-choice questions are coded on positive and negative scales, respectively, considering the relative influence of the statements within a given question.

Again, the 10-10 Program surveys multiple project team members for each project. To mitigate inconsistencies and obtain a representative single score for the project, average scores are used to measure the level of implementation described in each question: the sum of the point values acquired from the multiple responses of a single project is divided by the number of participants. Missing data are ignored when calculating the score, meaning that the score is the project's average for the variable across the respondents who provided answers (De Vaus 2001). However, when nobody answers a question, the point value for that question is recorded as zero, based on the assumption that the project did not perform the practice or implementation asked about in the question.
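As a rough illustration of Step 1, the sketch below maps raw answers to point values and averages them across a project's respondents, ignoring missing answers and scoring a question zero when nobody answered it. It is a minimal reading of the rules in this section, not the 10-10 Program's code: only the Likert mapping comes directly from Table 1, while the point coding assumed for yes/no and pre-coded selection answers is hypothetical.

```python
# Minimal sketch of Step 1 (individual question scoring), assuming simplified
# point rules for non-Likert question types; not the actual 10-10 scoring code.

LIKERT_POINTS = {            # Table 1: five-point scale with a penalty on negative answers
    "strongly disagree": 0,
    "disagree": 1,
    "neutral": 2,
    "agree": 4,
    "strongly agree": 5,
}


def score_answer(answer):
    """Map one respondent's raw answer to a point value (None means missing)."""
    if answer is None:
        return None                                  # missing data is ignored
    if isinstance(answer, str) and answer.lower() in LIKERT_POINTS:
        return LIKERT_POINTS[answer.lower()]         # Likert-scale question
    if isinstance(answer, bool):
        return 5 if answer else 0                    # yes/no question (assumed 0/5 coding)
    if isinstance(answer, (int, float)):
        return float(answer)                         # pre-coded selection or numeric answer
    raise ValueError(f"Unrecognized answer: {answer!r}")


def question_score(answers):
    """Average a question's points over the respondents who answered it.

    If nobody answered, the question is scored 0, reflecting the assumption
    that the practice asked about was not performed.
    """
    points = [p for p in (score_answer(a) for a in answers) if p is not None]
    return sum(points) / len(points) if points else 0.0


# Example: five team members answer one Likert question; one leaves it blank.
print(question_score(["agree", "strongly agree", "neutral", None, "agree"]))  # 3.75
```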
3.2 Weighting and Aggregation of Individual Scores

Answers to certain questions might have a more negative or positive impact on one or more input measures than answers to other questions. Weights were therefore introduced to address the relative difference in the influence of questions on the input measures. Additionally, in order to generate and then benchmark a single score for each leading indicator, each input measure question needs to be grouped into the relevant leading indicator(s). Determining the weights and classifying the questions into the 10 leading indicators were conducted simultaneously. To facilitate the process, all questions and the 10 leading indicators were listed as the column and row of a matrix in a spreadsheet, organized by phase and industry group. From there, industry experts' inputs were collected at CII activities and events in 2013 and 2014. Each participant provided an opinion on the relationship between the questions and the 10 leading indicators, along with the relative strength of each relationship. Based on this feedback from the industry experts, the weights and the linkages between questions and leading indicators were finally determined. The weights are used to calculate a weighted score for each individual question by multiplying the question score by its weight. Thereafter, the weighted scores are aggregated to produce a single value.

3.3 Normalized Scores and Report

While the total weighted score can be used for benchmarking, it is hard to understand the exact implementation level of an input measure without normalization, because a different number of questions and different weights were used to generate the total weighted score of each leading indicator. Hence, each leading indicator has a different scale by phase and industry group. In order to adjust the total weighted scores measured on different scales to a common scale, a score adjustment is conducted so that the total weighted scores are normalized to a total of 100. The normalized score of a leading indicator is obtained by dividing the total weighted score by the weights used in its calculation.

As the final score of each leading indicator ranges from 0% to 100%, a high score close to 100% represents better implementation of the relevant leading indicator than a low score. For benchmarking purposes, the distribution of the leading indicator scores of similar projects is necessary, and the final score is indicated within that distribution. To remove scores that differ greatly from the majority of the other projects' scores, only values excluding extreme outliers are considered in the charts. Since the program is based on project phase, comparisons are made at the industry group and phase level. When necessary, further comparisons can be made secondarily by respondent and project type. The differences in processes and characteristics across project and respondent types suggest that appropriate grouping is crucial for performance comparison (Hwang et al. 2008). The distributions of the 10 leading indicators are illustrated with quartile information within the comparison group. The fourth quartile is composed of the 25% of projects with the highest input measure scores, and the first quartile is populated with the 25% with the lowest scores. The result for a project is marked with a black dot within the quartile to which it belongs. Through this information, projects can easily identify the leading indicators they need to improve.
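The following sketch strings Steps 2 through 4 together for a single leading indicator and then places the normalized score within a peer distribution, mirroring the quartile report described above. The weights, peer scores, and quartile cut computation are illustrative, and the normalization constant (scaling so that the maximum score on every weighted question yields 100) is one plausible reading of the rule described in Section 3.3 rather than the documented formula.

```python
from bisect import bisect_right

MAX_QUESTION_POINTS = 5.0   # maximum per-question score from Step 1


def indicator_score(question_scores, weights):
    """Steps 2-4 for one leading indicator (illustrative, not the 10-10 code).

    question_scores: {question_id: averaged 0-5 score from Step 1}
    weights: {question_id: weight of that question on this indicator}
    Returns a normalized score on a 0-100 scale.
    """
    # Step 2: weight each question score; Step 3: aggregate into one total.
    total = sum(weights[q] * question_scores[q] for q in weights)
    # Step 4: normalize by the weights actually applied, scaled so a project
    # that scored the maximum on every weighted question reaches 100
    # (assumed reading of the normalization rule in Section 3.3).
    return 100.0 * total / (MAX_QUESTION_POINTS * sum(weights.values()))


def quartile(score, peer_scores):
    """Return which quartile (1-4) of the peer distribution a score falls in."""
    peers = sorted(peer_scores)
    cuts = [peers[int(len(peers) * q)] for q in (0.25, 0.50, 0.75)]  # rough 1Q, 2Q, 3Q
    return 1 + bisect_right(cuts, score)


# Example with hypothetical numbers: three questions mapped to "Organizing".
scores = {"q1": 4.0, "q2": 3.75, "q3": 5.0}
wts = {"q1": 2.0, "q2": 1.0, "q3": 1.0}
org = indicator_score(scores, wts)   # (8 + 3.75 + 5) / (5 * 4) * 100 = 83.75
print(round(org, 2), "-> quartile", quartile(org, [55, 60, 72, 78, 81, 88, 92, 95]))
```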
3.4 Case Study

To illustrate how the 10-10 input measure report can be interpreted, a sample report was generated from the responses collected for one chemical manufacturing project. Five members of the project's management team responded to the engineering phase questionnaire for industrial projects. The score distributions of the leading indicators in Figure 4 are presented against those of the other chemical manufacturing projects among all projects that participated in the engineering phase of the industrial questionnaire. Thus, the sample size (n), minimum (min), maximum (max), and quartiles (1Q, 2Q, and 3Q) in the report are calculated from the scores of similar projects in terms of industry sector, phase, and project type.

As can be seen in Figure 4, six leading indicators are placed in the fourth quartile, while design efficiency, human resources, sustainability, and supply chain are located in the first or third quartiles. In particular, it appears that design efficiency in the engineering phase was implemented more poorly than in the other projects and should be improved in the subsequent phases, such as procurement and construction. The project team also needs to pay attention to human resources, sustainability, and supply chain, which rank better but are still not among the best in comparison with the other similar projects. To improve the situation, the project is recommended to implement relevant CII best practices and tools to enhance project performance; importantly, the linkages between the 10-10 leading indicators and CII resources can help the project easily find which implementation resources should be considered for improvement (CII 2013).

Figure 4: The Input Measure Report of the Case Project

Although extreme outlying scores were already removed, relatively large variations in the supply chain and sustainability scores can be seen in Figure 4. This indicates that the management levels in these two areas vary widely among chemical manufacturing projects during the engineering phase, and it implies that management skills and practices concerning supply chain and sustainability have not been well established in chemical manufacturing projects.

4 CONCLUSION AND PATH FORWARD

The CII 10-10 Program adopted a multi-perspective assessment approach for evaluating leading indicators that represent various management input measures throughout the capital project delivery process. The purpose of the study described in this paper was to develop a new assessment method for the 10 leading indicators deployed in the 10-10 Program for benchmarking. To achieve this goal, respondents' answers were converted to numeric values and score-awarding criteria were developed. The concepts of the 10 leading indicators were then defined. From there, each individual question was classified into one or more of the 10 leading indicators, and the weights of the questions were determined with regard to each question's level of influence on the related leading indicator(s). Using the determined values, the study presented how to produce a representative single score for a leading indicator and how to report the outcomes against the most similar projects submitted to the 10-10 Program database. When necessary, projects are grouped by project and respondent type for performance comparison through the leading indicators, because different project and respondent types involve markedly different processes and characteristics.

The method was developed from a wide-ranging literature review as well as industry experts' knowledge acquired at CII events. The finding of this research is that meaningful leading indicators can be evaluated and should be of significant value for capital project performance assessment. Because the 10-10 Program surveys each phase using simple questions, the 10 leading indicators can be gathered throughout a project's execution and can help projects identify impending problems. More importantly, the 10 established input measures are based on CII's knowledge areas utilized during project execution, and the benchmarking outcomes therefore direct projects to the CII resources they can refer to for mitigating issues.

For future studies on this topic, continuous refinement of the method should be considered as more data are analyzed. As data accumulate, more detailed validation of the accuracy of the established assessment method will be possible. Also, the 10-10 Program questionnaires will be updated whenever required to ensure that they reliably measure the right leading indicators for all types of capital projects. Additionally, the relationship between the input measure scores and the output measure metrics in the 10-10 Program needs to be thoroughly examined. The findings can show projects which leading indicators should be performed well to achieve specific outcomes. For example, the supply chain measure might have significant impacts on schedule performance in the procurement phase; in that case, a schedule-driven project should focus on better supply chain measures.
Overall, the developed method is expected to help project management teams clearly understand how their projects' leading indicators are measured and to enable them to diagnose their project's performance. Finally, it is strongly believed that the outcomes will allow them to set up proactive strategies for the subsequent phases of the project.

References

Construction Industry Institute (CII). 1987. Model Planning and Controlling System for EPC of Industrial Projects. Publication 6-3, The University of Texas at Austin, Austin, TX.
Construction Industry Institute (CII). 1989. Measuring the Cost of Quality in Design and Construction. Publication 10-2, The University of Texas at Austin, Austin, TX.
Construction Industry Institute (CII). 1995. Project Definition Rating Index (PDRI) for Industrial Projects. The University of Texas at Austin, Austin, TX.
Construction Industry Institute (CII). 1997. Team Alignment During Pre-Project Planning of Capital Facilities. RR 113-12, The University of Texas at Austin, Austin, TX.
Construction Industry Institute (CII). 2003a. Development of the International Project Risk Assessment (IPRA) Tool. RR 181-11, The University of Texas at Austin, Austin, TX.
Construction Industry Institute (CII). 2003b. Development of the Value Management Toolkit. RR 184-11, The University of Texas at Austin, Austin, TX.
Construction Industry Institute (CII). 2006a. Forecasting Potential Risks through Leading Indicators to Project Outcome. RR 220-11, The University of Texas at Austin, Austin, TX.
Construction Industry Institute (CII). 2006b. Leading Indicators during Project Execution. RS 220-1, The University of Texas at Austin, Austin, TX.
Construction Industry Institute (CII). 2006c. The Owner's Role in Project Success. RS 204-1, The University of Texas at Austin, Austin, TX.
Construction Industry Institute (CII). 2008. Optimizing Construction Input in Front End Planning. RS 241-1, The University of Texas at Austin, Austin, TX.
Construction Industry Institute (CII). 2010. Building Information Modeling - Project Execution Planning for Building Information Modeling (BIM). RS RES-CPF 2010-1, The University of Texas at Austin, Austin, TX.
Construction Industry Institute (CII). 2011a. Benchmarking & Metrics Project Level Survey: Small Project Questionnaire. The University of Texas at Austin, Austin, TX.
Construction Industry Institute (CII). 2011b. Front End Planning for Renovation and Revamp Projects. RS 242-1, The University of Texas at Austin, Austin, TX.
Construction Industry Institute (CII). 2011c. Front End Planning Tool: PDRI for Infrastructure Projects. RS 268-1, The University of Texas at Austin, Austin, TX.
Construction Industry Institute (CII). 2011d. Global Procurement and Materials Management. IR 257-2, The University of Texas at Austin, Austin, TX.
Construction Industry Institute (CII). 2012a. Healthcare Benchmarking. Retrieved from http://www.healthcarebenchmarking.org/?page_id=371 (last accessed on 16 November 2014).
Construction Industry Institute (CII). 2012b. Measuring Safety Performance with Active Safety Leading Indicators. RS 284-1, The University of Texas at Austin, Austin, TX.
Construction Industry Institute (CII). 2012c. Benchmarking & Metrics Project Level Survey Version 11. Retrieved from https://www.construction-institute.org/nextgen/publications/pas/general/Large_Project_Version11_Issued_092012.pdf (last accessed on 5 December 2014).
Construction Industry Institute (CII). 2013. CII 10-10 Performance Assessment Campaign Booklet. Retrieved from https://www.construction-institute.org/nextgen/10-10/10-10_Campaign_Booklet.pdf (last accessed on 3 January 2015).
Construction Industry Institute (CII). 2014a. Performance Assessment (2014 Edition). Retrieved from https://www.construction-institute.org/nextgen/publications/pas/10-10brochure.pdf (last accessed on 15 December 2014).
Construction Industry Institute (CII). 2014b. CII 10-10 Performance Assessment Campaign. Retrieved from http://10-10program.org/overview.htm (last accessed on 5 January 2015).
Construction Industry Institute (CII). 2015. CII Best Practices. Retrieved from https://www.construction-institute.org/Store/CII/Publication_Pages/bp.cfm?section=orders (last accessed on 5 January 2015).
De Vaus, D. A. 2001. Research Design in Social Research. Sage, London, UK.
Gibson, G. E., Wang, Y. R., Cho, C. S., and Pappas, M. P. 2006. What Is Preproject Planning, Anyway? Journal of Management in Engineering, ASCE, 22(1), 35-42.
Hwang, B., Thomas, S. R., Degezelle, D., and Caldas, C. H. 2008. Development of a Benchmarking Framework for Pharmaceutical Capital Projects. Construction Management and Economics, Taylor & Francis, 26(2), 177-19.
Kang, Y., Dai, J., Mulva, S., and Choi, J. 2014. The 10-10 Performance Assessment Campaign: New Theories Regarding the Benchmarking of Capital Project Performance. Construction Research Congress 2014, ASCE, Atlanta, Georgia, USA, pp. 2335-2344.
Kang, Y., O'Brien, W., Dai, J., Mulva, S., Thomas, S., Chapman, R., and Butry, D. 2013. Interaction Effects of Information Technologies and Best Practices on Construction Project Performance. Journal of Construction Engineering and Management, ASCE, 139(4), 361-371.
Kasunic, M. 2008. A Data Specification for Software Project Performance Measures: Results of a Collaboration on Performance Measurement. CMU/SEI-2008-TR-012, Software Engineering Institute, Pittsburgh, PA.
Tsoukas, H. 1994. What Is Management? An Outline of a Metatheory. British Journal of Management, 5(4), 289-301.
Yu, A., Shen, Q., Kelly, J., and Hunter, K. 2006. Investigation of Critical Success Factors in Construction Project Briefing by Way of Content Analysis. Journal of Construction Engineering and Management, ASCE, 132(11), 1178-1186.
Zhang, X. 2005. Critical Success Factors for Public-Private Partnerships in Infrastructure Development. Journal of Construction Engineering and Management, ASCE, 131(1), 3-14.
Tsoukas, H. 1994. What is Management? An Outline of a Metatheory. British Journal of Management, 5(4), 289-301.
Yu, A., Shen, Q., Kelly, J., and Hunter, K. 2006. Investigation of Critical Success Factors in Construction Project Briefing by Way of Content Analysis. Journal of Construction Engineering and Management, ASCE, 132(11), 1178-1186.
Zhang, X. 2005. Critical Success Factors for Public-Private Partnerships in Infrastructure Development. Journal of Construction Engineering and Management, ASCE, 131(1), 3-14.

Multi-Perspective Assessment Method for Measuring Leading Indicators in Capital Project Benchmarking
Jiyong Choi, Sungmin Yun*, Stephen Mulva, Daniel Oliveira, and Youngcheol Kang
Sungmin Yun, Ph.D., Construction Industry Institute, The University of Texas at Austin
International Construction Specialty Conference 2015

Outline
•  Introduction: CII's 10-10 Program
   –  Concept of Phase-Based Benchmarking
   –  Multi-Perspective Assessment Framework
•  Challenges
•  Framework for Measuring Leading Indicators
•  Conclusion and Path Forward

Introduction: CII's 10-10 Program
[Diagram: existing benchmarking covers the FEP, EPC, SU, and OPS phases; the 10-10 Program, a new project benchmarking platform, assesses process, practices, and organization in each phase (FEP, E, P, C, SU, OPS)]

10-10 Program: Phase-Based Benchmarking
[Diagram]

10-10 Program: Multi-Perspective Assessment Framework
[Diagram: Basic Management Functions and Construction-Specific Functions, assessed both phase-wide and phase-focused]

Challenges
•  Multiple respondents and various data types (see the point-value conversion sketch below)
   –  Types: Yes/No, single/multiple selection, numeric open-ended, and Likert scale
   –  Subjective nature: questions are intentionally subjective by design, requiring less data-entry effort than actual data such as cost and duration (CII 2013)
   –  Data entry from multiple respondents for each section: collecting numerous responses reduces bias from individual respondents' perceptions (CII 2013, Kang et al. 2014)
[Diagram: data entry in the survey instrument feeds the individual input measures]

Framework: Overview
•  How can representative scores of the 10 Leading Indicators be generated for a project from respondents' answers to the 10-10 survey?
•  How can the outcomes then be reported so that the project can easily and reliably diagnose its performance?
[Diagram: projects taking Industrial FEP 10-10 surveys generate numeric Leading Indicator scores, which are reported from the 10-10 PAS database so each project can diagnose its performance]

Framework: Quantification Process for Leading Indicators
•  Step 1: Scoring
•  Step 2: Weighting
•  Step 3: Aggregation
•  Step 4: Normalization

Framework: Reporting (Sample)
[Sample report]

Conclusion: Application of Leading Indicators

Conclusions and Path Forward
•  Leading Indicators measure project organization and practices implemented throughout capital project delivery
   –  Help identify potential problems through Leading Indicators
   –  Allow project teams to set up proactive strategies for subsequent project phases based on the linkage of Leading Indicators with the CII-Project Execution Knowledge Structure
•  Future studies
   –  Relationship between Leading Indicators and performance metrics
   –  Company-level dashboard for utilizing Leading Indicators in strategic planning
   –  Data-driven modification of Leading Indicators as the database matures

Thank you!
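Before the quantification process described under Research Background and Framework below, each answer must first be converted to a point value, despite the mix of Yes/No, selection, numeric, and Likert-scale question types noted under Challenges. The following Python sketch shows one assumed conversion to a common 0-5 point scale; the mappings and option influences are illustrative and are not the actual 10-10 scoring rules.

```python
# Minimal sketch of converting heterogeneous answer types to common 0-5 point
# values. The specific mappings below are assumptions for illustration only.

def likert_points(answer: str) -> float:
    """Five-point Likert scale: negative answers are penalized with low points."""
    scale = {"strongly disagree": 0.0, "disagree": 1.25, "neutral": 2.5,
             "agree": 3.75, "strongly agree": 5.0}
    return scale.get(answer.strip().lower(), 0.0)  # unanswered -> 0

def yes_no_points(answer: str) -> float:
    """Yes/No question: 'yes' is taken to indicate better implementation."""
    return 5.0 if answer.strip().lower() == "yes" else 0.0

def multi_select_points(selected: set, influence: dict) -> float:
    """Multiple-selection question: each chosen option contributes its assumed
    relative influence, capped at 5 points."""
    return min(5.0, sum(influence.get(option, 0.0) for option in selected))

# Example usage with hypothetical answers and option influences
print(likert_points("Agree"))                                      # 3.75
print(yes_no_points("yes"))                                        # 5.0
print(multi_select_points({"a", "c"}, {"a": 2, "b": 1, "c": 2}))   # 4.0
```

Whatever mapping is used, the key property is that higher point values consistently indicate stronger effort or better implementation, so that averaging across respondents and aggregating across questions remains meaningful.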
Research Background: 10 Leading Indicators
•  The 10 Leading Indicators are based on CII's knowledge areas utilized during project planning and execution (the CII-Project Execution Knowledge Structure, C-PEKS).
1.  Planning: The work a manager performs to predetermine a course of action. The function of planning includes the following activities: Forecasting, Objective Setting, Program Development, Scheduling, Budgeting, and Policies and Procedures Development.
2.  Organizing: The work a manager performs to arrange and relate the work to be done so people can perform it most effectively. The function of organizing includes the following activities: Development of Organization Structure, Delegation of Responsibility and Authority, and Establishment of Relationships.
3.  Leading: The work a manager performs to cause people to take effective action. The activities involved in the function of leading include: Decision-Making, Communications, Motivation, Selection of People, and Development of People.
4.  Controlling: The work a manager performs to assess and regulate work in progress and completed. Management controls are achieved through the following activities: Establishment of Performance Standards, Measurement of Performance, Evaluation of Performance, and Correction of Performance.
5.  Design Efficiency: Measures whether the project team is exhausting all techniques to optimize the design in its use of material quantities to provide maximum capacity at minimum cost.
6.  Human Resources: Examines whether the project is staffed correctly, with a minimum amount of staff turnover and appropriate training. Measures whether people are capable of achieving project goals.
7.  Quality: Measures whether the project team is strictly conforming to project requirements. Analyzes whether programs are pursued to assure the delivery of material goods as intended.
8.  Sustainability: Evaluates steps taken by the project team to reduce the environmental impact of the project during construction and operation.
9.  Supply Chain Management: Examines the strategies used by the project team to promote enhanced working relationships amongst all project stakeholders, including those in the project supply chain.
10. Safety: Measures the steps followed by the project team to eliminate any possibility of personal injury or property damage on the project.

Framework: Score Calculation
•  Step 1: Score Calculation
   –  Define point values for each question according to the respondent's answers
   –  A tendency to choose "agree" or "yes" indicates a high degree of effort or better implementation for all questions
   –  Five-point scales are used for Likert-scale questions (over 70% of all questions), with a penalty for negative answers
   –  Other question types: point values reflect the relative influence of each statement within a given question (max 5, min 0)
   –  Questions that are not answered: the point value is recorded as zero (De Vaus 2001)
   –  A project's average score for a question:

      Individual Question Score* = Sum of Point Values / Number of Respondents
      * a numeric value obtained from a project's multiple responses

Framework: Weighting and Aggregation
•  Question mapping and weighting: grouping questions into the relevant LIs
   –  Mapping and weights were established through CII events and activities held in 2013 and 2014 (e.g., the Performance Assessment Workshop and Benchmarking training)
   –  Each participant provided an opinion on the relationship between each question and each LI, together with its relative strength (H, M, and L scales)
   –  All questions are grouped into at least one LI

      Weighted Individual Score = Individual Question Score × Weight*
      * the level of influence on the given leading indicator

•  Aggregation
   –  Weighted individual scores are summed to produce each LI's score (the total weighted score)
   –  Total weighted score: the sum of the weighted individual scores mapped to a given LI

Framework: Normalized Scores and Report
•  Normalized scores are needed because
   –  A different number of questions and different weights are used to generate each LI's total weighted score
   –  Each LI has a different scale by phase and industry group
   –  Weighted scores are therefore normalized to a total of 100

      Normalized Input Measure Score = Total Weighted Score / Total Weights

•  Report
   –  For benchmarking purposes, the distribution of LI scores from similar projects is required
   –  Comparisons are made within the same industry group and phase by default
   –  Project and respondent types involve different processes and characteristics, so appropriate grouping is crucial for performance comparison (Hwang et al. 2008)
   –  When necessary, a further comparison is made secondarily by respondent and project type (e.g., within natural gas processing projects executed by owner companies)
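Taken together, the four steps above can be read as a small data-processing pipeline. The following Python sketch illustrates one possible implementation under stated assumptions: the question identifiers, the H=3/M=2/L=1 weight conversion, and the responses are hypothetical and are not taken from the actual 10-10 survey instrument.

```python
# Minimal sketch of the four-step quantification process (scoring, weighting,
# aggregation, normalization) described above. All question IDs, weights, and
# responses below are hypothetical illustrations, not actual 10-10 survey data.
from collections import defaultdict

# Hypothetical point values already assigned to each respondent's answers
# (0-5 per question; higher = stronger agreement / better implementation).
responses = [
    {"Q1": 4, "Q2": 5, "Q3": 0},   # respondent 1 (Q3 not answered -> 0)
    {"Q1": 3, "Q2": 4, "Q3": 2},   # respondent 2
]

# Hypothetical question-to-leading-indicator mapping, with H/M/L strengths
# converted to numeric levels of influence (assumed: H=3, M=2, L=1).
weights = {
    "Q1": {"Planning": 3, "Controlling": 1},
    "Q2": {"Planning": 2},
    "Q3": {"Supply Chain": 3},
}

# Step 1: Scoring - average each question's point values across respondents.
question_scores = {
    q: sum(r.get(q, 0) for r in responses) / len(responses)
    for q in weights
}

# Steps 2-3: Weighting and aggregation - weight each question score by its
# level of influence, then sum the weighted scores per leading indicator.
total_weighted = defaultdict(float)
total_weights = defaultdict(float)
for q, score in question_scores.items():
    for indicator, w in weights[q].items():
        total_weighted[indicator] += score * w
        total_weights[indicator] += w

# Step 4: Normalization - divide by the total weights so indicators built from
# different question counts and weights land on a comparable scale.
normalized = {
    ind: total_weighted[ind] / total_weights[ind] for ind in total_weighted
}

for indicator, value in sorted(normalized.items()):
    print(f"{indicator}: {value:.2f}")
```

On these assumptions the normalized value is a weighted average on the same 0-5 scale as the individual question scores; as noted above, the actual 10-10 reporting additionally places scores on a common scale by phase and industry group before comparing them against the distribution of similar projects.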
