International Construction Specialty Conference of the Canadian Society for Civil Engineering (ICSC) (5th : 2015)

Data collection framework for construction safety research. Chen, Yuting; Alderman, Emilie; McCabe, Brenda; Hyatt, Douglas. Jun 30, 2015.

5th International/11th Construction Specialty Conference
5e International/11e Conférence spécialisée sur la construction
Vancouver, British Columbia, June 8 to June 10, 2015 / 8 juin au 10 juin 2015

DATA COLLECTION FRAMEWORK FOR CONSTRUCTION SAFETY RESEARCH

Yuting Chen (1,3), Emilie Alderman (1), Brenda McCabe (1), and Douglas Hyatt (2)
1 Department of Civil Engineering, University of Toronto, Canada
2 Rotman School of Management, University of Toronto, Canada
3 yut.chen@mail.utoronto.ca

Abstract: Engaging workers from construction companies of various sizes and ensuring their participation in construction safety research is often very difficult. Voluntary participation is typically limited by industry-specific recruitment challenges such as the transient nature of the workforce, industry perceptions of safety research, schedule limitations, and economic constraints. This paper draws on the lessons learned and best practices from several years of data collection experience to present a data collection framework for research within the construction industry. The framework includes organizational support, research instruments, data collection processes, and measures of data collection efficiency. It was developed following an intensive six-month data collection period that resulted in 370 completed surveys. A 95% survey completion rate was observed following site visits; however, the overall recruitment time per survey was 3.8 hours. It is clear that data collection itself is often one of the most challenging and time-consuming activities in construction safety research. Clear communication protocols, strict confidentiality measures, and effective incentive strategies are discussed. Methods of engagement are also provided; often a hybrid of top-down and bottom-up approaches is required to ensure participation and worker/company buy-in. The data collection framework in this paper provides a point of departure for researchers to improve their data collection processes and, in turn, to work toward improving safety performance more efficiently in the construction industry.

1 INTRODUCTION

The construction industry is a dynamic and evolving environment in which employees can experience disproportionately high injury and fatality rates (Hallowell and Gambatese 2009). Although significant improvements in safety performance continue to be made in Ontario following the development of the Occupational Health and Safety Act in 1978, construction worker safety has come to the forefront in recent years (Government of Ontario 2002). Ongoing and consistent safety research to improve practices and worker-hazard interactions is therefore required (Hallowell and Gambatese 2009). Moreover, recruiting workers to participate in voluntary safety research and data collection is necessary to understand the ever-changing, remote, and transient nature of the industry (Kidd et al. 2004).

Construction safety problems are ubiquitous and are rarely unique to a specific trade, employer, region, province, or country (Hinze 2008). There is therefore an opportunity to address safety problems at a global level by considering their many facets, including managerial components, behavioural aspects, technological advancements, and cultural characteristics (Hinze 2008). Construction projects are defined by various unique factors, including frequent crew changes, exposure to adverse weather conditions, and ever-changing topology, topography, and general work conditions.
These conditions make construction site safety more difficult to measure and research (Rozenfeld et al. 2010), and they call for a broadly effective data collection program and approach.

The effort, time, and resources required to recruit and retain participants for construction safety research are often far greater than anticipated at the project outset. The top two reported reasons for non-participation in safety research projects are the time required to complete the survey process and participants' and supervisors' perception of the safety record of their company or site (Kidd et al. 2004). Lessons learned from the recruitment stage of safety studies are useful when engaging participants in future safety research initiatives. These lessons include understanding the stability of the construction companies, ensuring sufficient time is provided to recruit participants, oversampling to meet data set targets, approaching both owners and workers during the initial recruitment stages, and understanding the burden perceived by the participants and providing tailored incentives to overcome it (Kidd et al. 2004).

Time demands, economic constraints, worker turnover, industry perceptions, and varying worksite conditions often limit worker participation in data collection and research. A gap exists in that a systematic and pragmatic data collection framework has not been proposed to overcome participation issues in construction safety research. Methods to overcome recruitment challenges in small construction companies have been suggested; however, a comprehensive and widely applicable data collection framework has not been put forward (Kidd et al. 2004).

1.1 Objective and Scope

This paper outlines a practical and efficient framework for identifying and recruiting participants in construction safety data collection initiatives. The framework covers selecting potential construction sites, engaging and gathering initial site contacts, connecting with site and company representatives, coordinating and scheduling site visits, and managing onsite survey distribution and collection. Lessons learned and best practices from the researchers' construction safety improvement initiative are presented in a case study format to capture data collection challenges and successful mitigation techniques. The case studies were obtained while researching construction safety to better understand the behavioural and cultural aspects of the industry, the nature of the work, attitudes, and demographics. The data presented in this paper were gathered by surveying workers from construction sites across Ontario. The need for a data collection framework was identified by reviewing the rate of buy-in from the engaged site/employer contacts (site participation rate), the ratio of actual to expected participation, the overall recruitment effort (time), and the onsite feedback obtained during survey distribution and collection.

2 THE FRAMEWORK

This framework was developed after several years of data collection experience, which took place from 2004 to 2006 (McCabe et al. 2008) and in 2014. In particular, data on the safety climate of construction workers in Ontario were collected through self-administered surveys. The process of engaging site management to gain access to the sites helped the team hone their practices in an effort to maximize the yield. Our best practices are embedded in the framework.
The assumptions of this framework are:
• Access to individual workers is desired, rather than having one person represent the site
• Data collection is completed in person rather than through online surveys
• The sample should be representative of a broad target population; this is different from working with one or two employers to collect data on their sites

Figure 1 shows a data collection framework with four main parts: organizational support, research instrument, data collection processes, and measures of data collection efficiency.

Figure 1: Data collection framework. [Components: Organizational Support — research team (principal investigators, research assistants, research interns) and industry (corporate management, site management, workers); Research Instrument — questionnaire, confidentiality, research ethics, informed consent, participant incentives; Data Collection — engagement, challenges; Efficiency and Effectiveness — six measures.]

2.1 Organizational Support

In most cases, the two major parties in this endeavour will be the research team and the construction team. The construction team includes head office management, site management and supervisors, and workers. To gain access to the site, and therefore the workers, permission was required from those ultimately responsible for the site: head office management. As noted in the third part of the framework, the path that the request takes depends on the way in which the organization is approached. A lack of enthusiasm at any level, however, can end the conversation quickly. Finding the right champion is therefore important.

The research team at the University of Toronto comprised the principal investigators (PIs), two graduate research assistants (RAs), and three undergraduate research interns (RIs), who assisted with site identification and data collection. The RAs coordinated their responsibilities to eliminate duplicated effort and missed opportunities. RA1 worked directly with the three RIs to carry out introductory site visits and data collection. RA2 was responsible for follow-up communications with the employers (corporate and site managers) and for scheduling survey collection visits. Having RA2 also invest some time in the onsite data collection allowed for a better understanding of on-site needs and dynamics, as well as relationship building at the site level; this proved useful when implementing top-down engagement strategies. Communication protocols, such as updating experience diaries and site contact lists daily, were established to facilitate quick and complete information sharing between the two teams.

It was essential that the team have safety training before gaining access to any site. All team members had their own personal protective equipment (boots, hardhat, and vest) and completed training for WHMIS, the Ministry of Labour's Worker Health and Safety Awareness, and Fall Awareness. They also carried a letter of introduction from the PI and a letter from the university outlining its insurance coverage for registered students undertaking research off-campus. This did not preclude individual sites from requiring the team to undergo site-specific training.

2.2 Research Instrument

Preparation of the research instrument, finalizing approvals, and establishing protocols come next. The survey should be prepared carefully to ensure that the questions being asked will yield targeted responses to the research questions. One way to achieve this is by using validated safety climate scales.
Such scales are available in the psychology literature for a large number of topics, as shown in Table 1, and each scale is measured by several questions. Although these scales have already been validated, they should be re-validated when applied to a new industry. Principal component analysis (PCA), in conjunction with other statistical methods (e.g., collinearity tests), was used for this purpose: questions that were either uncorrelated with all other items or highly correlated with other items were removed. Cronbach's alpha reliability tests were also run. As shown in Table 2, seven of the 13 alpha values are between 0.7 and 0.9, which suggests good internal consistency. Three alpha values are between 0.6 and 0.7, which is acceptable. For the remaining three scales (Conscientiousness, Leadership, and Role overload), the alpha values are slightly below 0.6, which is acceptable but weak; these three scales should be reviewed and adjusted in subsequent research. The participants' demographics and incident histories are also measured in the survey, and their relationship with safety climate is analyzed.
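To illustrate this screening step, the short Python sketch below computes Cronbach's alpha for one scale and flags items that correlate weakly with the rest of the scale. It is a minimal sketch rather than the exact procedure used in the study: it assumes the item responses are already loaded into a pandas DataFrame (one column per question, one row per respondent), and the 0.2 correlation threshold is an assumption chosen only for illustration.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a scale whose items are the columns of `items`."""
    items = items.dropna()                      # listwise deletion of incomplete cases
    k = items.shape[1]                          # number of questions in the scale
    item_var = items.var(axis=0, ddof=1).sum()  # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_var / total_var)

def weakly_correlated_items(items: pd.DataFrame, threshold: float = 0.2) -> list:
    """Flag items whose strongest correlation with any other item falls below `threshold`.
    The threshold is illustrative; the paper states only that uncorrelated and highly
    correlated questions were removed."""
    corr = items.corr().abs()
    flagged = []
    for col in items.columns:
        if corr.loc[col].drop(col).max() < threshold:
            flagged.append(col)
    return flagged

# Hypothetical usage: `survey` is a DataFrame of Likert responses and `fatalism_items`
# lists the column names belonging to the Fatalism scale.
# alpha = cronbach_alpha(survey[fatalism_items])
# to_review = weakly_correlated_items(survey[fatalism_items])
```

In this sketch, an alpha between 0.7 and 0.9 would be read as good internal consistency, matching the interpretation applied to Table 2.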
Table 1: Safety climate scales
Scale | Explanation | Source
Conscientiousness | A group of personality traits referring collectively to one's competence, responsibility, and self-image in general; conscientiousness is believed to influence safety behaviour. | (Goldberg 1992)
Fatalism | One's views of the importance and controllability of safety. | (Williamson et al. 1997)
Safety consciousness | One's awareness of safety issues. | (Barling et al. 2002)
Leadership | One's satisfaction with leadership's ability to provide influence, motivation, intellectual stimulation, and individual consideration. |
Role overload | One's perceptions about whether there is more work than can be accomplished in the time frame available in one's job. |
Work pressure | One's perceptions of whether there is excessive pressure to complete work faster, thereby reducing the time available to plan and carry out work. | (Glendon and Litherland 2001)
Job safety perception | One's perceptions of how safe their job is. | (Hayes et al. 1998)
Co-worker safety perception | One's perceptions about whether their co-workers have good safety behaviours. |
Supervisor safety perception | One's perceptions about their supervisor's safety practices. |
Management safety perception | One's perceptions about whether the company's managers have good safety attitudes and provide a safe work environment. |
Safety program and policies perception | One's perceptions about the effectiveness of the safety programs and policies in place. |
Interpersonal conflict at work | The degree to which one gets along with others at work. | (Spector and Jex 1998)
Job involvement | One's beliefs regarding the importance of the role work plays in one's life. | (Kanungo 1982)

Table 2: Scale reliability test
Scale | No. of questions | No. of valid cases | Alpha value | Acceptability
Conscientiousness | 10 | 215 | 0.55 | Acceptable but weak
Fatalism | 4 | 246 | 0.64 | Acceptable
Safety consciousness | 5 | 258 | 0.74 | Good
Leadership | 12 | 246 | 0.55 | Acceptable but weak
Role overload | 2 | 259 | 0.57 | Acceptable but weak
Work pressure | 2 | 263 | 0.75 | Good
Job safety perception | 4 | 258 | 0.60 | Acceptable
Co-worker safety perception | 2 | 261 | 0.75 | Good
Supervisor safety perception | 4 | 253 | 0.63 | Acceptable
Management safety perception | 3 | 262 | 0.83 | Good
Safety program & policies perception | 7 | 244 | 0.89 | Good
Interpersonal conflict at work | 3 | 244 | 0.70 | Good
Job involvement | 6 | 243 | 0.90 | Good

The PIs must decide whether participants are to be assured confidentiality with respect to their responses, and how that will be achieved. In our case, the surveys were confidential, and several strategies were used to ensure that confidentiality. First, participants completed the survey on their own, under the oversight of the research team rather than their bosses. Once completed, the surveys were put through the slot of a locked box, which was taken away by the researchers at the end of the data collection visit. Participants did not put their names on the surveys, and we even brought pencils with us so that individuals could not be identified by the colour of ink they used. Back in the office, the surveys were entered into the database.

Even with confidentiality ensured, participants must sign a consent form before completing the survey. The form contains important information, written in language that is appropriate for the participant population. It is required by the research ethics review panel and includes:
• Contact information for the researchers and for the Office of Research Ethics, should participants have any questions.
• The purpose of the research, its sponsor, and the time commitment expected of participants.
• The conditions for participation, including an explicit statement of whether participation is voluntary and how participants can withdraw from the research should they wish to.
• The risks and benefits to participants of taking part in the research.
• An explanation of who will have access to the data, how they will be used, and how they will be published.
• The procedures for maintaining confidentiality of both hard-copy and electronic data.

The PIs and RAs are strongly encouraged to complete research ethics training to better understand the issues that relate to their specific research.

To improve uptake and acknowledge the contribution of the participants, an incentive strategy may be employed. The strategy should acknowledge each level of participation, and it does not have to be expensive. In our case, when participants completed their survey and put it in the box, they were given a sports drink and a hardhat sticker made specifically for this project (see Figure 2). Interestingly, many workers chose to take the sticker but refused the drink. Site managers were also given a certificate of participation.

Figure 2: Hardhat sticker

The survey was offered in several languages (English, Italian, Spanish, and Portuguese) to facilitate the participation of those who were less comfortable with written English. Regardless of the language, we often needed to explain what some questions meant; this reinforced the benefits of going to the sites to collect data.

2.3 Data Collection

This section examines the process of implementing the research instrument, from first contact to completion of the data collection.
Engagement of the industry is the first challenge; top-down or bottom-up methods can be employed. Top-down means that the team first contacts head office management, in this case the person responsible for safety, to introduce the project. If that person is enthusiastic about the project, they will engage their site managers and schedule data collection visits. This method involves three steps: RA2 contacts corporate management and, if permission is given, schedules the site visit(s), and RA1 collects the data. The bottom-up method involves engaging the site managers first, who then work to gain corporate permission. It has four steps: initial contact at the site by RA1; follow-up by RA2 and communication with the site and/or corporate management until approval is given; scheduling of site visit(s) by RA2; and data collection by RA1.

In either case, the first contacts must be introduced to the project with sufficient detail to allow them to make an informed decision about whether to move forward. Details may include:
• The objectives of the research
• Who on site is being targeted (e.g., everyone, only specific trades)
• The process being used to collect data (e.g., interviews, surveys, observation)
• The time commitment that would be needed from the participants and when that time would be needed (the start of the day is best in most cases)
• A small sample of the survey
• The overall benefits of participating
• The safety training of the research team
• Documentation summarizing the above information

The challenge with the top-down method is identifying the companies and then identifying the right person to contact. The challenge with the bottom-up method is finding construction sites. In the ideal case, engaging a site manager resulted in many more sites from that contractor when the corporate manager was equally engaged. We used a combination of top-down and bottom-up methods. The research team often went to site before the workers started work (e.g., 6 to 7 AM) so that normal work was not overly disrupted.

Factors that can negatively affect the willingness of management (corporate or site) to participate include being behind schedule, poor communication with the company representatives, and negative attitudes of the supervisors toward safety research. Factors that reduced the number of surveys collected on a particular day included poor weather, a high proportion of workers with limited English, generally poor morale on site, a lack of acceptance from the site supervisors, poor introductory explanations about the survey by the research team, and the research team arriving late at the site.

One example that demonstrates these factors occurred while using the bottom-up method: a construction worker was approached at the site gate and asked where the site office was. He was very helpful and began to direct the team to the site office. However, as soon as safety was mentioned, his helpfulness vanished and he began making excuses about why he could no longer take the team to the site office. This was not an isolated case, and the team soon learned to avoid discussing the specific topic of the research in the first few minutes of first contact.

2.4 Data Collection Efficiency and Effectiveness

Data collection efficiency relates to how well the team is turning effort into surveys. It may be gauged using the measures outlined in Equations 1 to 6.

[1] R1 = N1 / N
[2] R2 = M / N1
[3] R3 = Σ(ai / bi) / N1
[4] R4 = T / M
[5] R5 = percentage of demographic categories within bounds
[6] R6 = (number of surveys at least 90% complete) / M

where:
R1 is the site participation rate
R2 is the average number of surveys collected per site
R3 is the ratio of actual to expected number of surveys
R4 is the recruitment time per survey achieved
R5 is the sample representativeness
R6 is the survey completeness
ai is the actual number of surveys collected for site i, i = 1..N1
bi is the expected number of surveys for site i, i = 1..N1
M is the number of surveys collected
N1 is the number of sites that participated in the research
N is the total number of sites that were visited, without regard to whether they participated
T is the total recruitment time
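To make the bookkeeping behind these measures concrete, the sketch below computes R1 through R4 and R6 from per-site records. It is an illustrative sketch under assumed data structures (a list of per-site records and a per-survey completeness fraction), not code from the study; R5 depends on external benchmark data and is treated separately in the representativeness discussion below.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SiteRecord:
    participated: bool      # did the site end up taking part?
    expected_surveys: int   # b_i: surveys promised by the site contact
    collected_surveys: int  # a_i: surveys actually collected

def efficiency_measures(sites: List[SiteRecord],
                        completeness: List[float],     # one fraction (0-1) per collected survey
                        total_recruitment_hours: float) -> Dict[str, float]:
    """Compute R1-R4 and R6 as defined in Equations 1, 2, 3, 4, and 6."""
    n = len(sites)                                         # N: all sites visited
    participating = [s for s in sites if s.participated]
    n1 = len(participating)                                # N1: sites that took part
    m = sum(s.collected_surveys for s in participating)    # M: surveys collected

    r1 = n1 / n                                            # site participation rate
    r2 = m / n1                                            # average surveys per participating site
    r3 = sum(s.collected_surveys / s.expected_surveys
             for s in participating) / n1                  # actual-to-expected ratio
    r4 = total_recruitment_hours / m                       # recruitment hours per survey
    r6 = sum(1 for c in completeness if c >= 0.9) / m      # share of surveys >= 90% complete
    return {"R1": r1, "R2": r2, "R3": r3, "R4": r4, "R6": r6}
```

With the totals reported in Tables 3 and 4 (N = 68, N1 = 18, M = 370, T = 1418 hours), this bookkeeping reproduces R1 ≈ 26.4%, R2 ≈ 21, and R4 ≈ 3.8 hours per survey.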
The calculation of T can be challenging. In our case, recruitment and data collection were conducted intensively from June to August 2014 and sporadically from September to November 2014, as shown in Table 3; the resulting total of 1418 hours is surprisingly high. Table 4 shows the performance of our research efforts thus far: 370 surveys were collected after initial contact with 68 sites. Eighteen of the 68 sites participated in the project; of these, 8 were engaged using the top-down approach and 10 using the bottom-up approach. The overall participation rate (R1) is 26.4% and the average number of surveys per site (R2) is 21. It is difficult to know the success rate of the top-down approach because the discussions about which sites will participate are typically internal to the corporation. The overall ratio of actual to expected surveys (R3) is 0.76, with values ranging from 0.25 to 1.5 on individual sites (Figure 3). Thirteen of the 18 sites over-promised the number of surveys on their sites, with 3 of them yielding less than 40% of the surveys anticipated. Two sites provided exactly the number of surveys expected, and 3 provided more than promised.

Table 3: Recruitment time
Category | Number of people | hrs/mo. each | No. of months | Total hours
Contact/schedule | 1 | 136 | 3 | 408
Site visits Jun-Aug | 4 | 80 | 3 | 920
Site visits Sept-Nov | 3 | 10 | 3 | 90
Total | | | | T = 1418

Table 4: Our Effectiveness Measures
Measure | Value
M: Number of surveys collected | 370
N: Number of sites contacted | 68
N1: Number of sites that participated in the research | 18
T: Total contact/schedule time (hours) | 1418
R1: Site participation rate | 26.4%
R2: Average number of surveys per site | 21
R3: Actual-to-expected surveys | 76%
R4: Recruitment time per survey (hours) | 3.8
R5: Sample representativeness | 56%
R6: Survey completeness | 94.9%

Figure 3: R3 distribution. [Actual/expected number of surveys for each of the 18 participating sites: 0.25, 0.73, 0.80, 0.60, 1.50, 0.73, 0.55, 0.85, 1.20, 0.50, 0.57, 0.67, 1.00, 1.00, 0.30, 0.33, 1.25, 0.84.]

Representativeness verifies that the sample being collected is not biased or skewed and is representative of the target population. It can be assessed by comparing the collected data with publicly available workforce data (Koehoorn et al. 2013). This should be checked partway through the data collection period so that adjustments can be made to correct any unintentional biases or gaps in the data. In our case, the data were compared with Statistics Canada workforce data for Ontario on age, gender, and company size. The age distribution is reasonably similar, as shown in Table 5, although a higher male percentage was found in our sample. The employer size distribution is skewed toward larger companies, however, and will be addressed immediately.

Table 5: Intermediate verification of sample
Category | Verification data | Our sample (n=370)
Age: 15 to 24 years | 16.7% (1) | 9%
Age: 25 to 54 years | 69.6% (1) | 77.2%
Age: 55 years & over | 13.7% (1) | 13.8%
Gender: male | 88% (1) | 98%
Gender: female | 12% (1) | 2%
Employer size: micro (1-4 employees) | 16.7% (2) | 3.1%
Employer size: small (5-99 employees) | 57.1% (2) | 30.6%
Employer size: medium (100-499 employees) | 13.6% (2) | 25.8%
Employer size: large (500+ employees) | 12.8% (2) | 40.5%
R5: categories within bounds | | 5/9 = 56%
Verification data sources: (1) StatsCan 2014b; (2) StatsCan 2014a.
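A simple way to track R5 during data collection is sketched below: it compares sample category proportions against published benchmarks and counts how many categories fall within a tolerance band. The benchmark and sample values are taken from Table 5, but the tolerance is an assumption; the paper does not state the bounds it used, although a band of ±10 percentage points happens to reproduce the reported 5/9 = 56%.

```python
# Benchmark (Statistics Canada) and sample proportions in percent, from Table 5.
BENCHMARK = {
    "age 15-24": 16.7, "age 25-54": 69.6, "age 55+": 13.7,
    "male": 88.0, "female": 12.0,
    "micro (1-4)": 16.7, "small (5-99)": 57.1,
    "medium (100-499)": 13.6, "large (500+)": 12.8,
}
SAMPLE = {
    "age 15-24": 9.0, "age 25-54": 77.2, "age 55+": 13.8,
    "male": 98.0, "female": 2.0,
    "micro (1-4)": 3.1, "small (5-99)": 30.6,
    "medium (100-499)": 25.8, "large (500+)": 40.5,
}

def representativeness(sample: dict, benchmark: dict, tolerance_pct: float = 10.0) -> float:
    """R5: the share of demographic categories whose sample proportion lies within
    `tolerance_pct` percentage points of the benchmark. The default band is an
    illustrative assumption, not the study's stated criterion."""
    within = sum(1 for key in benchmark
                 if abs(sample[key] - benchmark[key]) <= tolerance_pct)
    return within / len(benchmark)

print(f"R5 = {representativeness(SAMPLE, BENCHMARK):.0%}")  # 56% with a +/-10 point band
```

Running this check partway through data collection, as recommended above, makes it obvious which categories (here, gender and employer size) need targeted recruitment to correct the skew.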
Completeness reflects how much of the survey was completed by the participants, as shown in Table 6. In our case, 95% of the 370 surveys have a high degree of completeness (more than 90% of the questions answered). We believe that this high rate is due to having the research team on site to respond immediately to any questions about the survey.

Table 6: Completeness of the surveys
Completeness | Number of surveys | Percent of surveys
0% | 2 | 0.5%
10% | 3 | 0.8%
40% | 2 | 0.5%
60% | 5 | 1.4%
75% | 7 | 1.9%
>90% | 351 | 94.9%
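The summary in Table 6 can be produced with a few lines of pandas. The sketch below is illustrative only: it assumes a per-survey completeness fraction has already been computed (answered questions divided by total questions), which is how the completeness levels are interpreted here.

```python
import pandas as pd

def completeness_report(completeness: pd.Series):
    """Summarize per-survey completeness fractions (0-1), as in Table 6, and compute
    R6, the share of surveys that are at least 90% complete (Equation 6)."""
    distribution = (completeness.round(2)
                    .value_counts()
                    .sort_index())                              # surveys per completeness level
    percent = (100 * distribution / len(completeness)).round(1)
    r6 = (completeness >= 0.90).mean()
    table = pd.DataFrame({"Number of surveys": distribution,
                          "Percent of surveys": percent})
    return table, r6
```

Monitoring this distribution during the collection period, rather than only at the end, gives early warning if a particular site or survey section is producing incomplete responses.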
3 CONCLUSION

There are many challenges when collecting reliable data for safety research. In addition to the logistics of the data collection process itself, safety tends to be a sensitive topic associated with liability. This paper describes a framework for facilitating that process and includes many lessons learned and best practices. The framework consists of four main parts: the organizational structure, the survey instrument, data collection, and effectiveness measures. The survey instrument portion examined the process of developing a survey, including research ethics review, confidentiality issues, and participation incentives. Data collection itself is one of the most challenging activities, and some of our lessons learned are included. With respect to effectiveness, our sample shows high completeness and is representative with respect to age, but not with respect to employer size distribution. Recruitment time per survey is a surprisingly high 3.8 hours.

Acknowledgements

We are indebted to all of the managers, safety coordinators, and workers who participated in this study. Special thanks go to those who took extra time to share their experiences and insight. We gratefully acknowledge the funding support of Ontario's Ministry of Labour Research Opportunities Program.

References

Barling, J., Loughlin, C., and Kelloway, E.K. 2002. Development and test of a model linking safety-specific transformational leadership and occupational safety. Journal of Applied Psychology, 87(3): 488-496.
Glendon, A.I. and Stanton, N.A. 2000. Perspectives on safety culture. Safety Science, 34(1-3): 193-214.
Goldberg, L.R. 1992. The development of markers for the Big-Five factor structure. Psychological Assessment, 4(1): 26-42.
Government of Ontario. 2002. History of employment standards in Ontario. http://www.worksmartontario.gov.on.ca/scripts/default.asp?contentID=5-1-1-1 (last accessed 12 February 2015).
Hallowell, M.R. and Gambatese, J.A. 2009. Construction safety risk mitigation. Journal of Construction Engineering and Management, 135(12): 1316-1323.
Hayes, B.E., Perander, J., Smecko, T., and Trask, J. 1998. Measuring perceptions of workplace safety: development and validation of the work safety scale. Journal of Safety Research, 29(3): 145-161.
Hinze, J. 2008. Construction safety. Safety Science, 46(4): 565-565.
Kanungo, R.N. 1982. Work Alienation: An Integrative Approach. Praeger, New York, NY, USA.
Kidd, P., Parshall, M., Wojcik, S., and Struttmann, T. 2004. Overcoming recruitment challenges in construction safety research. American Journal of Industrial Medicine, 45(3): 297-304.
Koehoorn, M., Trask, C.M., and Teschke, K. 2013. Recruitment for occupational research: using injured workers as the point of entry into workplaces. PLoS ONE, 8(6): e68354.
McCabe, B., Loughlin, C., Munteanu, R., Tucker, S., and Lam, A. 2008. Individual safety and health outcomes in the construction industry. Canadian Journal of Civil Engineering, 35(12): 1455-1467.
Rozenfeld, O., Sacks, R., Rosenfeld, Y., and Baum, H. 2010. Construction job safety analysis. Safety Science, 48(4): 491-498.
Spector, P.E. and Jex, S.M. 1998. Development of four self-report measures of job stressors and strain: interpersonal conflict at work scale, organizational constraints scale, quantitative workload inventory, and physical symptoms inventory. Journal of Occupational Health Psychology, 3(4): 356-367.
Statistics Canada. 2014a. Table 281-0042, Employment by enterprise size of employment (SEPH) for all employees, for selected industries classified using the North American Industry Classification System (NAICS).
Statistics Canada. 2014b. Table 282-0071, Labour force survey estimates (LFS), wages of employees by type of work, North American Industry Classification System (NAICS), sex and age group, unadjusted for seasonality, monthly.
Williamson, A.M., Feyer, A.M., Cairns, D., and Biancotti, D. 1997. The development of a measure of safety climate: the role of safety perceptions and attitudes. Safety Science, 25(1-3): 15-27.
Presentation slides: Longitudinal Study of Safety Climate in Ontario Construction Industry

Yuting Chen, Department of Civil Engineering
Emilie Alderman, Department of Civil Engineering
Prof. Brenda McCabe, Department of Civil Engineering
Prof. Douglas Hyatt, Rotman School of Management

Slide 2 (Safety Climate): Safety climate refers to the perceptions that employees share about their working environments (Zohar 1980). Its influence: improved work behaviours of compliance, and a reduction in injury severity and frequency.
Slides 3-4 (Stone example): People have different attitudes when facing problems: 1. not aware, 2. avoid, 3. improve. At Site A, 90% of the workers chose "avoid" (negative safety climate); at Site B, 90% chose "improve" (positive safety climate).

Slide 5 (Research Project): Compare 911 surveys collected in 2004-2006 with 445 surveys collected in 2013-2015. Have workers' attitudes toward safety and incidents changed?

Slide 6 (Data Collection Framework): The framework of Figure 1 in the paper, with its four parts: organizational support, research instrument, data collection, and efficiency and effectiveness (six measures).

Slide 7 (Data Collection Efficiency):
Measure | Value
Number of surveys collected | 445
Number of sites contacted | 73
Number of sites that participated in the research | 23
Total contact/schedule time (hours) | 1812
Site participation rate | 31.5%
Average number of surveys per site | 19
Actual-to-expected surveys | 76%
Recruitment time per survey (hours) | 4.1
Sample representativeness | 56%
Survey completeness | 94.9%

Slide 8 (Survey Sample): Self-administered surveys for workers and supervisors. Three parts are measured: workforce demographics, attitudes toward safety, and safety incidents.

Slide 9 (Worker Demographics): Bar chart comparing the 2004 and 2014 samples on age, time working in construction, and time with the current employer (years).

Slide 10 (Worker Demographics, cont.): Comparison of the 2004 and 2014 samples on employment history.
Year | 1 employer in the past 3 years | 2 employers | >=3 employers
2004 | 46% | 24% | 30%
2014 | 51% | 25% | 24%

Year | 1-3 projects in the past 3 years | 4-5 projects | >=6 projects
2004 | 30% | 24% | 46%
2014 | 48% | 21% | 31%

Slide 11 (Attitude & Incident Changes): Safety climate has improved by 3.6%-7.5% (11 out of 13 factors), and incidents have decreased. Safety climate and incidents have changed in a positive way.

Slide 12 (Workers Reporting at Least One Incident):
Year | At least one physical symptom (%) | At least one psychological symptom (%) | At least one safety event (%)
2004 | 80.8 | 53.9 | 65.5
2014 | 67.8 | 45.3 | 50.7

Slide 13 (Conclusion):
• Data collection for safety climate research is challenging because safety tends to be a sensitive topic associated with liability.
• Recruitment time per survey is a surprisingly high 4.1 hours.
• The safety climate of Ontario construction sites has improved over the past decade.
• Future research will focus on the driving forces behind safety climate improvement.

Slide 14: Thanks! Questions?

Slides 15-16 (Factor comparison, 2004 vs. 2014):
Factor | 2004 mean | 2014 mean | Mean difference | Independent-sample t-test | Mann-Whitney test
Conscientiousness | 3.88 | 4.17 | 0.29 | * | *
Fatalism | 2.47 | 2.34 | -0.13 | * | *
Safety consciousness | 4.13 | 4.28 | 0.15 | * | *
Leadership | 3.73 | 3.94 | 0.21 | * | *
Role overload | 2.33 | 2.26 | -0.07 | / | /
Work pressure | 2.35 | 2.21 | -0.14 | * | *
Job safety | 2.32 | 2.29 | -0.03 | / | /
Co-worker safety | 3.47 | 3.70 | 0.23 | * | *
Supervisor safety | 3.85 | 4.03 | 0.18 | * | *
Management safety | 3.79 | 4.04 | 0.25 | * | *
Safety program | 3.95 | 4.10 | 0.15 | * | *
Interpersonal conflicts | 1.85 | 1.77 | -0.07 | / | *
Job involvement | 3.92 | 4.18 | 0.26 | * | *
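The year-over-year comparisons on slides 15 and 16 pair a parametric test with a non-parametric one. The sketch below shows how such a comparison could be run for a single safety climate factor with SciPy. It is illustrative only: the array names are hypothetical, the use of Welch's t-test is a choice made here, and the 0.05 significance threshold is an assumption rather than something stated in the slides.

```python
import numpy as np
from scipy import stats

def compare_factor(scores_2004: np.ndarray, scores_2014: np.ndarray, alpha: float = 0.05):
    """Compare one safety climate factor between the two survey waves using an
    independent-sample t-test (Welch's) and a Mann-Whitney U test."""
    _, t_p = stats.ttest_ind(scores_2004, scores_2014, equal_var=False)
    _, u_p = stats.mannwhitneyu(scores_2004, scores_2014, alternative="two-sided")
    return {
        "mean difference": scores_2014.mean() - scores_2004.mean(),
        "t-test": "*" if t_p < alpha else "/",        # mirrors the slide notation
        "Mann-Whitney": "*" if u_p < alpha else "/",
    }

# Hypothetical usage with simulated per-respondent factor scores:
# rng = np.random.default_rng(0)
# print(compare_factor(rng.normal(3.88, 0.6, 500), rng.normal(4.17, 0.6, 400)))
```

Running both tests side by side, as the slides do, guards against drawing conclusions that depend on the normality assumption behind the t-test, which is questionable for Likert-type scale scores.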
