Statistical Methods for Relating Strength Properties of Dimensional Lumber

by

Yanling Cai

Honours B.Sc., University of Toronto, 2008
M.Sc., University of Toronto, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate and Postdoctoral Studies (Statistics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2015

© Yanling Cai 2015

Abstract

We present a novel approach for predicting one lumber strength property from another, each being measured destructively. The objective is to reduce the cost of lumber monitoring programs by measuring one of the properties and predicting the other.

To reach the objective, we review the single proof load design (SPLD) proposed to assess dependence between two jointly normally distributed random variables X and Y that cannot be observed simultaneously. The SPLD tests specimens in one mode X up to a determined load (proof loading), and the survivors are tested to failure in a second mode Y. To resolve the near non-identifiability of parameters in the SPLD approach, we construct a penalty function based on a Bayesian power prior to regularize the parameters. Simulation studies of the penalized approach suggest redesigning the SPLD to improve estimation performance. The new design, a rediscovery, assigns specimens to one of two groups: SPLD and shoulder. The shoulder group tests specimens to failure in the Y mode. The SPLD with a shoulder approach results in a more accurate and precise estimate.

To quantify the damage caused by proof loading lumber specimens, we use the maximum likelihood method to estimate theoretical quantiles of the strength distribution using a sample from a SPLD with a shoulder experiment with a low proof load level. The comparison between those estimated theoretical quantiles and the empirical quantiles of the proof load survivors reveals the damage at higher proof load levels.
Using our experimental data on manufactured lumber, we find that low and high proof load levels leave survivors undamaged, but intermediate load levels do damage the weaker survivors.

We generalize the SPLD with a shoulder approach to incorporate proof load damage, and thus finally provide a method for estimating the X-Y dependence when damage occurs. This generalized approach is applied to our experimental data to estimate the relationship between bending and log-transformed tension. The high correlation found in our application suggests that if there is a need to verify the two strength properties, only one of them needs to be measured and the other can be predicted from the measured strength.

Preface

This thesis is an original work of the author, Yanling Cai, under the supervision of Professor James V. Zidek.

The ideas presented in Chapters 3, 4 and 5 were jointly developed by me and Professor James V. Zidek. Professor William J. Welch provided valuable suggestions at the early stage of developing the work presented in Chapter 3. I conducted all computational work and derivations, and the majority of the writing.

Selected material from Chapter 3 was summarized in a 15-minute presentation at the 41st Statistical Society of Canada Annual Meeting in Edmonton in May 2013. Selected material from Chapter 4 was summarized in a poster presentation at the 42nd Statistical Society of Canada Annual Meeting in Toronto in May 2014. Additional manuscripts based on the research in this thesis are in preparation for submission to peer-reviewed journals.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations and Symbols
Acknowledgements
Dedication

1 Introduction
2 Summer-of-2011 experiment
  2.1 Introduction
  2.2 Lumber and its major strength properties
  2.3 The Summer-of-2011 experiment
  2.4 Exploratory data analysis
    2.4.1 Exploring the MOE, UTS and MOR data
    2.4.2 Dependence between strength properties
  2.5 Summary and conclusions
3 Estimating censored bivariate normal's correlation
  3.1 A brief history of estimating the correlation by proof loading
  3.2 The penalized SPLD approach
    3.2.1 Regularized estimation
    3.2.2 The penalized SPLD approach
    3.2.3 Simulation studies
  3.3 The SPLD with a shoulder approach
    3.3.1 Simulation studies
  3.4 Discussion: other application
  3.5 Summary and conclusions
4 Estimating proof load damage on lumber strength
  4.1 Degradation modelling
  4.2 Damage accumulation models
    4.2.1 Introduction
    4.2.2 Damage accumulation models
    4.2.3 Summary
  4.3 Proof load effect: an empirical assessment
    4.3.1 Wood literature review on proof load effect
    4.3.2 Untangling the effects of dependence and damage
    4.3.3 Simulation studies
    4.3.4 Illustration: the Summer-of-2011 experiment
  4.4 Summary and conclusions
5 Breaking the same board twice again with an application
  5.1 The Generalized SPLD with a shoulder approach
    5.1.1 Generating s from s*
    5.1.2 Simulation studies
  5.2 Application: Summer-of-2011 experiment
  5.3 Summary and conclusions
6 Conclusion
  6.1 Summary
  6.2 Application of our work
  6.3 Future work

Bibliography

Appendices
A Adjusted MOE
B Estimating the proof load levels
C Summer-of-2011 experiment Q-Q plots

List of Tables

2.1 The table shows schematically how the experimental groups used in the Summer-of-2011 experiment were formed. Here, ID(i) is the identity of the lumber specimen with the ith rank in a given bundle.
2.2 The estimated proof load level (PLL) in pounds, the estimated and observed numbers of specimens failed due to the proof load, and the observed percentage of failed specimens due to the proof load.
2.3 Type and unit of MOE, UTS, and MOR.
2.4 Summary statistics for MOE/UTS/MOR: the 0th (min), 25th, 50th, 75th and 100th (max) empirical percentiles, the sample mean, and the sample standard deviation (SD). The sample size is N.
2.5 Sample skewness for MOE/UTS/MOR. The sample size is N.
2.6 Fitting the normal distribution, the 2-parameter Weibull distribution and the log-normal distribution to MOE/UTS/MOR. This table lists the maximum likelihood estimate of the parameter and its standard error.
2.7 The AIC values for the fitted normal, 2-parameter Weibull, and log-normal distributions for the MOE/UTS/MOR data. The criterion suggests the normal, log-normal and normal distributions for modelling MOE/UTS/MOR, respectively. However, in the latter two cases, the Weibull distribution is a close contender.
3.1 Finite sample performance of the PMLE and the MLE with N = 87, 150, and 300. The proof load level is specified such that 50% of the specimens fail below the load level. The simulation setting is described as in the text.
3.2 Finite sample performance of the PMLE with N = 300. The value p indicates the percentage of specimens failed below the proof load level. A higher p corresponds to a higher proof load level. The simulation setting is described as in the text.
3.3 Finite sample performance of the MLE that maximizes the joint likelihood function L_PFY(θ; s, y) with N = 300. The value pp indicates the proportion of specimens assigned to the SPLD group. When pp = 1, all specimens are in the SPLD group. The simulation setting is described as in the text. The MLE performance declines as the size of the shoulder group declines.
3.4 Finite sample performance of the MLE that maximizes the joint likelihood function L_PFY(θ; s, y) with N = 600.
The value pp indicates the proportion of specimens assigned to the SPLD group. When pp = 1, all specimens are in the SPLD group. The simulation setting is described as in the text. The MLE performance declines as the size of the shoulder group declines.
3.5 Finite sample performance of the MLE that maximizes the joint likelihood function L_PFY(θ; s, y) with N = 600. The value p indicates the percentage of specimens failed below the proof load level. The simulation setting is described as in the text. The results suggest a failure percentage in proof loading of about 80%.
4.1 Known θ: performance of the least squares estimate of α. The simulation setting is described as in the text.
4.2 Unknown θ: performance of the least squares estimate of α. The simulation setting is described as in the text.
4.3 Maximum likelihood estimates of the parameters for the bivariate normal distribution of X and Y, where X is MOR and Y is log(UTS).
4.4 Probability of [Y|X > l2] being equal to or smaller than 0. When p = 20%, 40%, 60%, or 80%, l2 is specified to be the 20th, 40th, 60th, or 80th quantile of the marginal distribution of X, respectively.
4.5 Approximate least squares estimates of α with estimated standard errors for the proof loaded groups T60, T40, R60, R40 and R20.
5.1 Known α with a sample size N = 600: performance of the MLE obtained by the generalized SPLD with a shoulder approach. The damage model is [Y*|X > l2] =_d [(Y - α/Y)|X > l2]. Simulation details are described in the text.
5.2 Known α with a sample size N = 300: performance of the MLE obtained by the generalized SPLD with a shoulder approach.
The damage model is [Y*|X > l2] =_d [(Y - α/Y)|X > l2]. Simulation details are described in the text.
5.3 Unknown α with sample sizes N1 = 1200 and N2 = 600: performance of the MLE obtained by the generalized SPLD with a shoulder approach. The sample size of the SPLD with a shoulder sample without proof load damage is N1. The sample size of the SPLD with a shoulder sample with proof load damage is N2. The damage model is [Y*|X > l2] =_d [(Y - α/Y)|X > l2]. Simulation details are described in the text.
5.4 Unknown α with sample sizes N1 = 1200 and N2 = 300: performance of the MLE obtained by the generalized SPLD with a shoulder approach. The sample size of the SPLD with a shoulder sample without proof load damage is N1. The sample size of the SPLD with a shoulder sample with proof load damage is N2. The damage model is [Y*|X > l2] =_d [(Y - α/Y)|X > l2]. Simulation details are described in the text.
5.5 Application: Summer-of-2011 experiment. The MLE of θ and its standard error are reported based on each SPLD with a shoulder sample. The approximate 95% confidence intervals (CI) of ρ are also reported.
A.1 The sample mean and the sample standard deviation of the adjusted MOE's in million pounds per square inch (psi) for each experimental group within each bundle and across three bundles. Bundles 1 and 2 are of grade-type SPF (Spruce, Pine, Fir) 1650f-1.5E. Bundle 3 is of type SPF No.2.
B.1 Maximum likelihood estimates of the Weibull parameters by fitting the R100 data with a Weibull distribution.
B.2 Estimated quantiles of the MOR distribution with their standard errors (SD), and the approximate 95% confidence intervals (CI).
B.3 Estimated number of failed specimens of R60/40/20 with approximate 95% confidence intervals.
B.4 Estimated quantiles of the MOR distribution with 95% confidence intervals under the assumption that the shape parameter is 4.
B.5 Estimated number of broken specimens of R60/40/20 with 95% confidence intervals.

List of Figures

2.1 Statistics graduate students working in the FPInnovations lab.
2.2 The bending machine conducts the destructive test for measuring MOR. The specimen is positioned between the two outermost support points. The two load points are spread equally within the test span to push the specimen upward with equal forces until failure.
2.3 The tension machine conducts the destructive test for measuring UTS. Once the machine is loaded with a specimen, it pulls the specimen in opposite directions at the loading points located at the two ends, until the specimen fails.
2.4 The MOE machine conducts the non-destructive test for measuring vibration MOE. Once the machine is loaded with a specimen, the specimen is subjected to a gentle force. The machine records the specimen's weight and frequency of oscillation. The reading inputs are converted to a vibration MOE measurement.
2.5 Histograms of MOR, UTS, and MOE. The units of MOR and UTS are thousand psi. The unit of MOE is million psi.
2.6 Boxplots of MOR, UTS, and MOE. The lower and upper box boundaries are the 25th and the 75th percentiles (Q1 and Q3), respectively. Two vertical lines extend from the lower and upper box boundaries to the lower and upper whiskers, respectively. The lower whisker is 1.5 × (Q3 - Q1) from Q1. The upper whisker is 1.5 × (Q3 - Q1) from Q3. Observations not included between the whiskers are plotted with dots.
The units of MOR and UTS are thousand psi. The unit of MOE is million psi.
2.7 Sample quantiles of MOE against various theoretical quantiles. The normal theoretical quantiles are from a normal distribution with parameters μ1 = 0 and σ1 = 1. The log-normal theoretical quantiles are from a log-normal distribution with parameters μ2 = 0 and σ2 = 1.
2.8 Sample quantiles of UTS against various theoretical quantiles. The normal theoretical quantiles are from a normal distribution with parameters μ1 = 0 and σ1 = 1. The log-normal theoretical quantiles are from a log-normal distribution with parameters μ2 = 0 and σ2 = 1.
2.9 Sample quantiles of MOR against various theoretical quantiles. The normal theoretical quantiles are from a normal distribution with parameters μ1 = 0 and σ1 = 1. The log-normal theoretical quantiles are from a log-normal distribution with parameters μ2 = 0 and σ2 = 1.
2.10 MOE. Left panel: the empirical cumulative distribution, the fitted normal cumulative distribution with μ̂1 = 1.58 and σ̂1 = 0.27, the fitted Weibull cumulative distribution with β̂ = 6.56 and η̂ = 1.69, and the fitted log-normal cumulative distribution with μ̂2 = 0.44 and σ̂2 = 0.18. Right panel: the left tail region in the left panel is zoomed in.
2.11 UTS. Left panel: the empirical cumulative distribution, the fitted normal cumulative distribution with μ̂1 = 4.50 and σ̂1 = 1.71, the fitted Weibull cumulative distribution with β̂ = 2.79 and η̂ = 5.05, and the fitted log-normal cumulative distribution with μ̂2 = 1.43 and σ̂2 = 0.40. Right panel: the left tail region in the left panel is zoomed in.
2.12 MOR.
Left panel: the empirical cumulative distribution, the fitted normal cumulative distribution with μ̂1 = 6.62 and σ̂1 = 1.87, the fitted Weibull cumulative distribution with β̂ = 3.85 and η̂ = 7.32, and the fitted log-normal cumulative distribution with μ̂2 = 1.85 and σ̂2 = 0.31. Right panel: the left tail region in the left panel is zoomed in.
2.13 Plots of MOE against MOR (left panel) and MOE against UTS (right panel). Both plots suggest a high positive correlation, although a non-linear relationship seems apparent in the right panel.
3.1 Estimates of ρ: 50 randomly selected pairs of (PMLE, MLE) from Table 3.1 when ρ = 0.2. The solid lines indicate the true value ρ = 0.2, showing wide variation in the MLE values compared with the PMLE values.
3.2 Maximum likelihood estimate that maximizes the joint likelihood L_PFQY(θ; s, y) given the observations from the proof loaded group and the truncated group (Y: Y < Q_Y(q)) for various q values. The solid lines indicate the true value of ρ.
4.1 Q-Q plot: y*_i against Q_[Y|X>l2](p_i; θ) for α = 0 and N = 100, 250, 500, or 1000. The solid line is the real damage model Y* = Y. The points lie on the 45-degree line as they should. There is no damage.
4.2 Q-Q plot: y*_i against Q_[Y|X>l2](p_i; θ) for α = 0.1 and N = 100, 250, 500, or 1000. The solid line is the damage model Y* = Y - 0.1/Y. The points lie close to the true damage model.
4.3 Q-Q plot: y*_i against Q_[Y|X>l2](p_i; θ) for α = 0.5 and N = 100, 250, 500, or 1000. The solid line is the real damage model Y* = Y - 0.5/Y. More severe damage occurs. The plots correctly reflect the change in the Y values.
4.4 T60's Q-Q plot.
The empirical quantile of [X*|Y > 1.59] is plotted against the estimated theoretical quantile of [X|Y > 1.59]. The solid line is the 45-degree line. No apparent damage is observed.
4.5 T40's Q-Q plot. The empirical quantile of [X*|Y > 1.38] is plotted against the estimated theoretical quantile of [X|Y > 1.38]. The solid line is the 45-degree line. No apparent damage in the strength of the strong specimens. Damage occurs on the weaker specimens.
4.6 R60's Q-Q plot. The empirical quantile of [Y*|X > 7.09] is plotted against the estimated theoretical quantile of [Y|X > 7.09]. The solid line is the 45-degree line. No apparent damage is observed.
4.7 R40's Q-Q plot. The empirical quantile of [Y*|X > 6.11] is plotted against the estimated theoretical quantile of [Y|X > 6.11]. The solid line is the 45-degree line. No apparent damage in the strength of the strong specimens. Damage occurs on the weaker specimens.
4.8 R20's Q-Q plot. The empirical quantile of [Y*|X > 4.96] is plotted against the estimated theoretical quantile of [Y|X > 4.96]. The solid line is the 45-degree line. No apparent damage is observed.
4.9 The function Y* = Y - α/Y for α = 0, 0.25, 0.5 and 0.75.
4.10 T60: Q-Q plot with the fitted damage model [X*|Y > 1.59] =_d [X|Y > 1.59]. The solid line is the fitted damage model.
4.11 T40: Q-Q plot with the fitted damage model [X*|Y > 1.38] =_d [(X - 1.55/X)|Y > 1.38] = [X|Y > 1.38] - 1.55/[X|Y > 1.38].
4.12 R60: Q-Q plot with the fitted damage model [Y*|X > 7.09] =_d [(Y - 0.03/Y)|X > 7.09] = [Y|X > 7.09] - 0.03/[Y|X > 7.09].
4.13 R40: Q-Q plot with the fitted damage model [Y*|X > 6.11] =_d [(Y - 0.12/Y)|X > 6.11] = [Y|X > 6.11] - 0.12/[Y|X > 6.11].
4.14 R20: Q-Q plot with the fitted damage model [Y*|X > 4.96] =_d [(Y - 0.01/Y)|X > 4.96] = [Y|X > 4.96] - 0.01/[Y|X > 4.96].

C.1 T60: The empirical quantile of [X*|Y > 1.59] is plotted against the estimated theoretical quantile of [X|Y > 1.59] for various ρ values. The solid line is the 45-degree line.
C.2 T40: The empirical quantile of [X*|Y > 1.38] is plotted against the estimated theoretical quantile of [X|Y > 1.38] for various ρ values. The solid line is the 45-degree line.
C.3 R60: The empirical quantile of [Y*|X > 7.09] is plotted against the estimated theoretical quantile of [Y|X > 7.09] for various ρ values. The solid line is the 45-degree line.
C.4 R40: The empirical quantile of [Y*|X > 6.11] is plotted against the estimated theoretical quantile of [Y|X > 6.11] for various ρ values. The solid line is the 45-degree line.
C.5 R20: The empirical quantile of [Y*|X > 4.96] is plotted against the estimated theoretical quantile of [Y|X > 4.96] for various ρ values. The solid line is the 45-degree line.
List of Abbreviations and Symbols

List of Abbreviations

AIC    Akaike information criterion
ASTM   American standard test method
DOL    Duration-of-load
FPSMG  Forest products stochastic modelling group
MC     Moisture content
ML     Maximum likelihood
MLE    In Chapters 1 to 3: maximum likelihood estimator; in Chapters 4 and 5: maximum likelihood estimate
MOE    Modulus of elasticity
MOR    Modulus of rupture
MSRC   Maximum strength-reducing characteristic
psi    Pounds per square inch
PMLE   Penalized maximum likelihood estimator
SPLD   Single proof load design
UTS    Ultimate tensile strength

List of Symbols Used Throughout This Dissertation

l        A proof load level in proof load designs
l2       Represents a higher proof load level
X, Y, Z  Random variables (in our application, they are lumber strength properties); Z is X if X ≤ l, and Y otherwise
x, y, z  Realizations of the random variables X, Y and Z, respectively
Y*       Residual lumber strength in the Y mode after proof loading in the X mode
Z*       Z* is X if X ≤ l2, and Y* otherwise; here, l2 represents a higher proof load level at which the proof load survivors are damaged
U, u     Random variable U = 1 if X ≤ l, and 0 otherwise; its realization is u.
Here, l may also be l2.
s        An observation generated by a SPLD experiment whose proof load survivor is not damaged
s (bold) Observations generated by a SPLD experiment whose proof load survivors are not damaged
s*       An observation generated by a SPLD experiment whose proof load survivor is damaged
s* (bold) Observations generated by a SPLD experiment whose proof load survivors are damaged
f_X(·)   Density function of a random variable X; here, X could be other random variables
F_X(·)   Distribution function of a random variable X; here, X could be other random variables
Q_X(·)   Quantile function of a random variable X; here, X could be other random variables
[Y|X > l]  Conditional random variable Y given X > l, where l is a proof load level; the roles of X and Y are interchangeable
Y (bold) A vector of random variables
y (bold) A realization of a vector of random variables Y
L        Generic notation for likelihood functions
μX, μY   Means of the normally distributed random variables X and Y, respectively
σX, σY   Standard deviations of the normally distributed random variables X and Y, respectively
ρ        Correlation between two random variables X and Y that are jointly normally distributed
θ        A vector of parameters of a bivariate normal distribution, θ = (μX, σX, μY, σY, ρ)^T
θ̂        Maximum likelihood estimator of θ
θ̂_p      Penalized maximum likelihood estimator of θ
θ̂_MLE    Maximum likelihood estimate of θ
h(·)     The damage model that describes the distributional relationship between [Y*|X > l2] and [Y|X > l2]; h([Y|X > l2]) = [Y*|X > l2] = [(Y - α/Y)|X > l2] = [Y|X > l2] - α/[Y|X > l2]; the roles of X and Y are interchangeable; α ≥ 0 is a constant; l2 represents a higher proof load level
α        Parameter in the damage function h(·)
α̂_LS     Least squares estimate of α
Φ(·)     Distribution function of the standard normal random variable
I{·}     (0 or 1) indicator function
P        Probability
X =_d Y  Random variables X and Y are equal in distribution
n, N     Generic notation for sample sizes
t        Chapter 4: time t
α(t)     Chapter 4: damage accumulated by time t for the damage accumulation models of the duration-of-load effect
T        Chapter 4: random breaking time at which α(T) = 1
tc       Chapter 4: a representative breaking time (constant)
τ(t)     Chapter 4: applied load at time t
τs       Chapter 4: short-term lumber strength
σ(t)     Chapter 4: applied load ratio (τ(t)/τs) at time t

Some symbols are used only very briefly; such symbols are not defined in this list. Vectors and matrices are usually written in bold. Random variables are written in uppercase and their corresponding realizations are written in lowercase.

Acknowledgements

My deepest debt of gratitude for profound advice and warm support goes to my supervisor, Professor James V. Zidek. As a junior researcher, I am blessed to work with Jim. Through him, I have learned the importance of asking myself why and how. His wisdom will stay with me for the rest of my life, and is something that time cannot take away from me. I thank Jim for who I am and all the things I am not.

I would like to thank the members of my supervisory committee, Professors William J. Welch and Harry Joe, for their challenging questions and inspiring suggestions. I also would like to thank Conroy Lum from FPInnovations for sharing his knowledge of lumber engineering. Without their involvement, I would not be as happy as I am now with my research.

This is my 13th year in Canada. Canada has been very generous to me. The greatest thing Canada offers me is letting me meet many beautiful souls living on this land, especially friends in Toronto and Vancouver. I would like to thank all the beautiful souls whom I have come across along the way. Very much of me is made from what I have learned from them.

Last but not least, I want to thank my amazing family, especially my parents (Qiquan Cai and Jinmei Yang), my husband (Yuntao She), my sister (Yanhong Cai), and my two brothers (Jia Hui Cai and Jia Rong Cai). This thesis owes them more than I can tell.
Their love and support throughout the years have made everything possible.

Dedication

致我的父亲母亲 (To my father and mother)

Chapter 1
Introduction

Starting in 2008, the Forest Products Stochastic Modelling Group (FPSMG), in collaboration with wood scientists at FPInnovations, has been working on current challenges faced by the forest products industry. The FPSMG consists of statisticians from the University of British Columbia and Simon Fraser University. The FPSMG projects have four major research themes: monitoring the resource, marketing the supply, modelling the properties, and managing the load. I have been fortunate to work in the team concerned with Theme III: modelling the properties. This thesis presents the major results obtained in the work that I have completed for the team. Its genesis lay in the need to develop a model for predicting one lumber strength property X from another Y. The objective is to reduce the cost of the long-term monitoring programs being implemented in Canada and the United States for lumber strength properties, by measuring just one of the strength properties and predicting the other.

Since each strength property is measured by its own destructive test and a single lumber specimen cannot break twice, the strength properties X and Y cannot be simultaneously measured on a single lumber specimen. To develop a model for predicting X from Y, we decided to investigate an ingenious idea called "breaking the same board twice", published about 30 years ago (Evans et al., 1984; Green et al., 1984; Johnson and Galligan, 1983). The idea is to first test the lumber specimens in a sample to measure their X-value, but only up to an intermediate load level. This pretesting procedure is called proof loading in the X mode. At the second stage, the survivors from the first stage are tested to failure to measure their Y-value (hereafter, we refer to this as testing to failure in the Y mode). It turns out this two-stage test procedure, called the single proof load design (SPLD),
generates a weak signal of the X-Y dependence even though roughly half of the sample is tested to failure in the X mode and the other half of the sample yields only a Y-value. My goal was to apply this idea to estimate the correlation coefficient ρ that determines the degree of dependence between X and Y when they have a bivariate normal distribution. Thus work really began on the topic of what is now Chapter 5 of this thesis.

After exploring the SPLD idea over a lengthy period of time using simulation studies, I learned that the idea did not work as originally formulated, because the X-Y dependence signal provided by the data generated from the SPLD experiment was too weak. More specifically, the variability in the estimate of ρ obtained by the maximum likelihood (ML) approach was huge when ρ was small. Of course, the case of a small ρ is of little practical interest, because the predictor of, say, Y from X would then be of little value. But the trouble is that the variability in the estimate of ρ could result in a large estimate even when ρ is small! This could wrongly suggest that predicting Y from X would be feasible.

This discovery eventually led to the first major result in my thesis. To reduce the variability in the estimate of ρ, we started with a method of what is now called "regularizing" an unstable ML estimator (MLE) by adding a penalty function to the log-likelihood function that is maximized to get the MLE. The idea of adding a penalty function goes back a long way, but what I describe in Chapter 3 is a way of constructing a penalty function in the specific case of lumber testing. The way the penalty function is constructed promises to be of value in other contexts as well. The simulation studies of the penalized maximum likelihood method suggested redesigning the SPLD to resolve the huge variability in the estimate of ρ. The new design is called the SPLD with a shoulder.
In a SPLD with a shoulder experiment, the lumber specimens in a random sample are not all proof loaded in the X mode. Instead, specimens are assigned to one of two groups: the SPLD group, with proof loading in the X mode, and the shoulder group of Y. Specimens of the shoulder group of Y are tested to failure in the Y mode without any proof loading. The parameter of interest is then estimated by the ML approach given the sample generated by the SPLD with a shoulder experiment. This approach is referred to as the SPLD with a shoulder approach. The SPLD with a shoulder approach is in fact a rediscovery: Amorim and Johnson (1986) recognized the advantage of including shoulder groups where specimens are tested to failure in the X or Y mode without any proof loading.

At that point, I would have been ready to return to the work of Chapter 5. However, in completing the work of Chapter 3, I recognized another difficulty with the breaking-the-same-board-twice approach, namely that it ignored the possibility that some damage might have accumulated in the strength of the lumber specimens that survived the proof loading procedure in the X mode. That would mean, in particular, that the Y-value obtained from the survivors after the proof loading in the X mode would not be the same as the Y-value I would have obtained if I had not proof loaded the lumber specimens in the X mode.

This realization led to the second major result described in this thesis, a method of assessing the damage caused by proof loading material used in structural engineering applications. The key idea here was shifting the focus from the individual specimen to its population and how the proof-loaded population differed from the non-proof-loaded population. It was relatively straightforward to build a new theory for that.

With the SPLD with a shoulder approach, and a distributional change due to damage caused by proof loading, we were ready to estimate ρ. This brings us back to the beginning, Chapter 5.
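The distributional change caused by proof loading is modelled later in the thesis (see the List of Symbols) by the damage function h, under which the survivors' strength satisfies [Y*|X > l2] =_d [Y|X > l2] - α/[Y|X > l2] for a constant α ≥ 0. A minimal simulation sketch, with made-up bivariate normal parameters and a made-up α, illustrates the population-level effect: the loss term α/Y shrinks as Y grows, so the lower quantiles of the survivor population are depressed more than the upper ones.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (made-up) bivariate normal parameters for (X, Y):
mu_x, sd_x, mu_y, sd_y, rho = 6.6, 1.9, 4.5, 1.2, 0.7
l2, alpha, n = 6.0, 0.5, 100_000   # proof load level and damage parameter (made up)

def damage(y, alpha):
    """Thesis damage model applied to proof load survivors: Y* = Y - alpha/Y."""
    return y - alpha / y

# Draw (X, Y) pairs and keep the Y-values of specimens surviving proof loading in X.
cov = [[sd_x**2, rho * sd_x * sd_y], [rho * sd_x * sd_y, sd_y**2]]
x, y = rng.multivariate_normal([mu_x, mu_y], cov, size=n).T
y_surv = y[x > l2]
y_surv = y_surv[y_surv > 0]        # guard: the model is meant for positive strengths
y_star = damage(y_surv, alpha)

# Quantile-by-quantile strength loss: largest at the low quantiles, smallest at
# the high quantiles, since the loss alpha/Y is decreasing in Y.
q = [0.05, 0.50, 0.95]
loss = np.quantile(y_surv, q) - np.quantile(y_star, q)
```

This is the spirit of Chapter 4's empirical assessment: damage is read off by comparing estimated quantiles of the undamaged conditional distribution with empirical quantiles of the proof load survivors.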
Only one thing remained, which was to demonstrate our method with real data. The final major piece of the story is described in Chapter 2, an experiment in the summer of 2011 that was set up to estimate ρ and see the proof load effects. I played a major role in the group of students involved in designing, managing and analyzing the results. Since we had produced the experiment, the resulting data had no proprietary restrictions attached, which enabled me and others to try out our theories and models. Overall the demonstration proved to be satisfactory, and it now only remains to try the results of the thesis in an industrial setting.

The work summarized above is now described in more detail in the subsections that follow.

Chapter 2

Chapter 2 introduces lumber and its major strength properties such as the modulus of rupture (MOR) and the ultimate tensile strength (UTS). Most importantly, Chapter 2 also introduces an experiment on manufactured lumber that we conducted in an industrial lab in the summer of 2011. I was involved heavily in designing the experiment under the supervision of Professors William J. Welch and James V. Zidek. I was also a student manager and lab worker for conducting the experiment. The thesis refers to that experiment as the Summer–of–2011 experiment. The Summer–of–2011 experiment aims at determining the lumber strength properties relationship, but the data also serve other research teams at FPSMG such as the monitoring resource team.

In addition, Chapter 2 presents an exploratory analysis of the data from the Summer–of–2011 experiment. Through the exploratory data analysis, major characteristics of lumber strength properties are learned.
For example, conclusions about the appropriate distributions for capturing the data characteristics are provided in Chapter 2.

Chapter 3

Chapter 3 is concerned with the estimation of the correlation between a pair of normally distributed responses (X, Y), when the two responses cannot be observed simultaneously. Examples of these kinds of random variables are the MOR and the UTS introduced in Chapter 2, where each strength property requires a destructive test. Since a lumber specimen cannot be broken twice, we cannot observe both destructive strength properties on the same specimen. The methodology presented in this chapter for estimating the correlation between these kinds of random variables is very general, but we focus on the lumber application.

We review the single proof load design (SPLD) developed in Evans et al. (1984), Green et al. (1984), and Johnson and Galligan (1983). This design is proposed to estimate the correlation between a pair of normally distributed responses that cannot be observed simultaneously. More specifically, the SPLD enables us to observe one response X if it is below a known threshold l (also called proof loading in the X mode), and the other response Y otherwise. Equivalently, we observe Z = X I{X ≤ l} + Y I{X > l} and U = I{X ≤ l} for each specimen in a random sample, where I denotes the (0 or 1) indicator function. The parameter of interest is then estimated by the maximum likelihood method to get the maximum likelihood estimator (MLE). The MLE maximizes the likelihood given a random sample of responses Z = X I{X ≤ l} + Y I{X > l} and U = I{X ≤ l}. This procedure is referred to as the SPLD approach. The MLE has desirable asymptotic properties such as consistency and asymptotic normality. However, when the sample is of a realistic size, the SPLD approach has the problem that the data it generates lead to the near non–identifiability of parameters, due to a weak bridge between the sample information and the unknown likelihood parameters.
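The SPLD observation scheme just described is easy to mimic in simulation; the sketch below (with illustrative parameter values, not those used in the thesis) draws one SPLD sample of pairs (Z, U):

```python
import numpy as np

def simulate_spld(n, mu_x, mu_y, sigma_x, sigma_y, rho, l, seed=0):
    """Draw (X, Y) from a bivariate normal and record the SPLD observables
    Z = X*I{X <= l} + Y*I{X > l} and U = I{X <= l}."""
    rng = np.random.default_rng(seed)
    cov = [[sigma_x**2, rho * sigma_x * sigma_y],
           [rho * sigma_x * sigma_y, sigma_y**2]]
    xy = rng.multivariate_normal([mu_x, mu_y], cov, size=n)
    x, y = xy[:, 0], xy[:, 1]
    u = (x <= l).astype(int)       # 1 if the specimen fails under proof load
    z = np.where(u == 1, x, y)     # observed X for failures, Y for survivors
    return z, u
```

Repeating such draws and maximizing the SPLD likelihood for each replicate is one way to see the large sampling variability in the estimate of ρ described above.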
The finite sample performance of the MLE obtained by the SPLD approach is poor in terms of accuracy and precision.

To resolve this issue, a symmetric proof load design was proposed by Amorim and Johnson (1986). In the symmetric proof load design, a sample of specimens is divided into two halves. One half is tested with a SPLD experiment with the proof loading being, say, in the X mode. The other half is done in a similar way, but this time proof loading is in the Y mode. Thus the log–likelihood function given the data generated by the symmetric proof load design is a sum of two log–likelihood functions. One is the log–likelihood function given the data generated by the SPLD experiment with proof loading in the X mode. The other is the log–likelihood function given the data generated by the SPLD experiment with proof loading in the Y mode. Unfortunately, proof loading in both the X and Y modes is not always physically feasible. Without loss of generality, suppose that proof loading in the X mode is feasible, but not in the Y mode. This implies that the data generated by the SPLD experiment with proof loading in the Y mode are not observable.

As an alternative to the symmetric proof load design, we explore a new penalized maximum likelihood estimation procedure for the SPLD approach to resolve the near non–identifiability of the parameters due to insufficiently large sample sizes. We define a penalized log–likelihood function, which is the sum of a penalty function and the log–likelihood function given the data generated by the SPLD experiment with proof loading in the X mode. The purpose of introducing the penalty function is to regularize the parameters in order to achieve desirable finite sample performance. The construction of the penalty function is motivated by the symmetric proof load design.
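In outline, the penalized log–likelihood is the observed X–mode SPLD log–likelihood plus a multiple of the log–likelihood that pseudo–observations would have under a hypothetical Y–mode SPLD. A minimal sketch under that reading follows; the function names, the weight `lam`, and the closed–form survivor term (which uses the conditional normal distribution of one coordinate given the other) are ours, not the thesis's notation:

```python
import numpy as np
from scipy.stats import norm

def spld_loglik(mu1, mu2, s1, s2, rho, z, u, l):
    """Log-likelihood of SPLD data proof loaded in mode 1.
    u = 1: specimen failed, z is its mode-1 strength (z <= l);
    u = 0: specimen survived, z is its mode-2 strength."""
    z, u = np.asarray(z, dtype=float), np.asarray(u)
    fail = norm.logpdf(z[u == 1], mu1, s1)
    # Survivor density: f_2(z) * P(X1 > l | X2 = z), bivariate-normal closed form.
    cond_mean = mu1 + rho * s1 / s2 * (z[u == 0] - mu2)
    cond_sd = s1 * np.sqrt(1.0 - rho**2)
    surv = norm.logpdf(z[u == 0], mu2, s2) + norm.logsf(l, cond_mean, cond_sd)
    return fail.sum() + surv.sum()

def penalized_loglik(theta, z, u, l_x, z0, u0, l_y, lam):
    """X-mode SPLD log-likelihood of the observed data plus lam times the
    log-likelihood of pseudo-observations (z0, u0) under a hypothetical
    Y-mode SPLD; (z0, u0, lam) carry the prior/expert information."""
    mu_x, mu_y, s_x, s_y, rho = theta
    obs = spld_loglik(mu_x, mu_y, s_x, s_y, rho, z, u, l_x)
    pen = spld_loglik(mu_y, mu_x, s_y, s_x, rho, z0, u0, l_y)  # roles swapped
    return obs + lam * pen
```

Maximizing `penalized_loglik` over `theta` (e.g. with `scipy.optimize.minimize` on its negative) yields a regularized estimate; `lam = 0` recovers the plain SPLD MLE.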
In fact, the exponentiated penalty function is a likelihood function L raised to a suitable power λ, where L is the likelihood function of the unobservable data generated by the SPLD experiment with proof loading in the Y mode and λ ≥ 0. The constant λ can be considered a regularization parameter, which controls the penalty size. Furthermore, the penalty function we define can also be viewed as a Bayesian power prior (Ibrahim et al., 1998; Ibrahim and Chen, 2000). The unobservable data of the likelihood function L become parameters in the power prior. We specify those parameters of the power prior to be pseudo–observations generated to reflect experts’ opinions (Lele and Allen, 2006; Novick and Grizzle, 1965). We obtain a penalized maximum likelihood estimator (PMLE) for the parameters of interest by the penalized maximum likelihood method, where the PMLE maximizes the penalized likelihood function (Hoerl, 1962; Hoerl and Kennard, 1970). We refer to this approach as the penalized SPLD approach.

Simulation studies are conducted to compare the finite sample performances of the PMLE obtained by the penalized SPLD approach and the MLE obtained by the SPLD approach. The PMLE outperforms the MLE in terms of accuracy and precision in samples of realistic sizes. However, a closer investigation of the penalized SPLD approach suggests that the PMLE performance depends on whether the pseudo–observations generated for the penalty function provide appropriate information about the marginal mean of Y. The investigation suggests redesigning the SPLD to resolve the near non–identifiability of the parameters due to sample sizes in the SPLD approach. We explore an alternative design, called the SPLD with a shoulder, for estimating the correlation between X and Y when they cannot be observed simultaneously.

In a SPLD with a shoulder experiment, specimens are assigned to one of two groups. One group is a SPLD group with proof loading in the X mode.
The other group is a shoulder group of Y, where specimens are tested to failure in the Y mode without any proof loading. The shoulder group is introduced to strengthen the sample–to–parameter bridge, and thus resolves the near non–identifiability issue of the SPLD approach. The parameters of interest are then estimated by the maximum likelihood method. The MLE maximizes the likelihood given a random sample of observations generated by the SPLD with a shoulder experiment. This approach is referred to as the SPLD with a shoulder approach. As we mentioned previously, the SPLD with a shoulder experiment is in fact a rediscovery. Amorim and Johnson (1986) acknowledge a design using shoulder samples (called a hybrid design) where two shoulder groups are introduced: one shoulder group of Y, where specimens are tested to failure in the Y mode without any proof loading, and one shoulder group of X, where specimens are tested to failure in the X mode without any proof loading. Simulation studies are conducted to demonstrate the success of the SPLD with a shoulder approach: accurate and more precise estimates in samples of realistic sizes. The applications of the SPLD with a shoulder approach include estimating duration–of–load (DOL) effects and determining the lumber strength properties relationship.

Chapter 4

To carry out a SPLD experiment for estimating the correlation between two destructive lumber strength properties, lumber specimens are tested up to a pre–determined load l. If the proof load level l is high, damage may be accumulated in the strength of the proof load survivors. Namely, we may observe Z∗ = X I{X ≤ l} + Y∗ I{X > l} and U = I{X ≤ l} for each lumber specimen in a random sample, where Y∗ is the residual strength in the Y mode after proof loading in the X mode. It may be more realistic to take the proof load damage into consideration when applying the SPLD with a shoulder approach for estimating the correlation between the two destructive lumber strength properties.
Chapter 4 investigates whether any damage accumulates in the strength of the proof load survivors during the proof loading procedure of a SPLD experiment. The amount of damage in the strength of the proof load survivors is quantified in terms of relationships between the strength and the residual strength.

Chapter 4 reviews three wood scientists’ duration–of–load (DOL) models and derives the theoretical damage effects based on these DOL models. The three DOL models are the Madison curve (Wood, 1951), the U.S. model (Gerhards and Link, 1986; Gerhards, 1979), and the Canadian model (Foschi and Yao, 1986). The DOL effect is a phenomenon whereby a product made from wood can resist higher stresses when the loads are imposed for a shorter duration of time. We consider the DOL models based on a ramp–load test under load control. During the ramp load test, the load increases linearly with time until the specimen fails. To carry out the ramp load test, the machine can be set up under load control or under displacement control. If the machine is set up under load control, the rate at which the load increases per second is identical for all specimens. If the machine is set up under displacement control, the load still increases linearly with time, but the rate at which the load increases per second depends on the specimen’s modulus of elasticity (MOE) and varies from specimen to specimen. If the rate at which the load increases per second varies across specimens, we may not be able to track the relationship between the strength and the residual strength analytically by working with the DOL models. As a result, we consider the ramp load test under the load control scenario. The analytical relationships between the strength and the residual strength based on the DOL models are summarized as follows.

Firstly, the Madison curve suggests that the relationship between a representative strength of the strength distribution and a representative residual strength of the residual strength distribution is linear.
Secondly, the U.S. model suggests that the deterministic relationship between the strength and the residual strength is approximately linear. Finally, the Canadian model also suggests a deterministic relationship between the strength and the residual strength. However, the deterministic relationship based on the Canadian model is not analytically tractable due to random effects. The reader is reminded that the Summer–of–2011 experiment was conducted under displacement control for safety reasons. The relationships derived from the DOL damage accumulation models may not be directly applicable to our data sets of the Summer–of–2011 experiment.

As an alternative, we develop an empirical approach for quantifying the amount of damage in the strength of the proof load survivors. The deterministic relationship between the strength and the residual strength is not available without strong assumptions or going through the complicated and elaborate work of matching two lumber specimens. Instead of investigating the structural relationship between the strength and the residual strength, we find a mathematical transformation such that the transformed strength and the residual strength are equal in distribution.

Without loss of generality, suppose that the specimens are proof loaded in the X mode up to a proof load level l2, and the survivors are tested to failure in the Y mode. We are interested in assessing the degree of the damage accumulated in the strength of the proof load survivors. If the proof load survivors are damaged by proof loading, then we observe [Y∗ | X > l2] for each survivor in the random sample. We may be tempted to compare the empirical distribution of [Y∗ | X > l2] with a reference distribution specified as the (empirical) marginal distribution of Y. This comparison would not work in general unless the correlation between X and Y in the bivariate normal distribution is zero. It leads us naturally to the problem of determining the appropriate reference distribution.
The answer is the distribution of [Y | X > l2]. Estimating the parameters in the bivariate normal distribution of X and Y enables us to obtain information about the distribution of [Y | X > l2].

To do this, we introduce a SPLD with a shoulder experiment where specimens in the SPLD group are proof loaded in the X mode up to a low proof load level. The low proof load level ensures that the damage accumulated in the strength of the proof load survivors is negligible (Katzengruber et al., 2006; Woeste et al., 1986; Strickler et al., 1969). Using the sample collected from this particular SPLD with a shoulder experiment, we then apply the SPLD with a shoulder approach to estimate the parameters of the bivariate normal distribution of X and Y. Treating those estimated parameters as known, we can calculate theoretical quantiles of [Y | X > l] for any proof load level l, including the proof load level l2. The estimated theoretical quantiles are compared with the empirical quantiles of [Y∗ | X > l2] by a Q–Q plot. If the points of the Q–Q line lie closely along the 45–degree line, the distributions of [Y | X > l2] and [Y∗ | X > l2] are similar, which implies that no damage occurs. Otherwise, based on the Q–Q plot, we find a mathematical transformation on [Y | X > l2] such that the transformed random variable [Y | X > l2] and the random variable [Y∗ | X > l2] are equal in distribution. We refer to this distributional relationship as the damage model hereafter.

We illustrate the above empirical approach with the Summer–of–2011 experiment data. The Summer–of–2011 experiment has six SPLD groups, and the damage model for each SPLD group is provided in Chapter 4. In general, if the proof load level is high enough, the specimens either fail completely or survive without any damage. Specimens that survive the proof loading procedure are so strong that they remain unharmed by the proof loading. For intermediate proof load levels, the stronger survivors remain undamaged.
The damage occurs to the weaker survivors. For low proof load levels, the damage to the survivors is not apparent.

Chapter 5

In case of damage, Chapter 5 demonstrates how to incorporate the damage model presented in Chapter 4 with the SPLD with a shoulder approach presented in Chapter 3. Let us recall the SPLD with a shoulder approach. The random variables X and Y are assumed to follow a bivariate normal distribution. The primary goal is to estimate the correlation parameter between X and Y that reveals their dependence information. The parameter of interest is estimated by the maximum likelihood method. The MLE maximizes the likelihood function given the data generated by the SPLD with a shoulder experiment.

The likelihood function given the data generated by the SPLD with a shoulder experiment has three parts. One part is the likelihood function fX(x) given the data generated from the specimens that fail due to proof loading, where fX(x) is the density function of X. The second part is the likelihood function fY|X>l(y) P(X > l) given the data generated from the proof load survivors, where fY|X>l(y) is the conditional density function of Y given X > l. The last part is the likelihood function fY(y) given the data generated from the specimens of the Y shoulder group. In case of damage, the data generated by the proof load survivors are not observable. Instead, we observe [Y∗ | X > l] for each proof load survivor.

Based on the joint distribution of X and Y, we may be tempted to derive a new likelihood function for the data generated by the SPLD with a shoulder experiment where the proof load survivors are damaged. However, the density function of [Y∗ | X > l] is not available. The change–of–variable technique cannot be performed on the random variable [Y | X > l] to get the density function of [Y∗ | X > l] because the structural relationship between [Y∗ | X > l] and [Y | X > l] is not available.
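The three-part (no-damage) likelihood described above can be sketched as follows. For the bivariate normal, the survivor factor fY|X>l(y) P(X > l) equals fY(y) P(X > l | Y = y), which has a closed form; everything here (function name, parameter ordering) is our illustration rather than the thesis's notation:

```python
import numpy as np
from scipy.stats import norm

def shoulder_loglik(theta, x_fail, y_surv, y_shoulder, l):
    """Three-part log-likelihood for an SPLD-with-a-shoulder sample:
    f_X for proof-load failures, f_{Y|X>l}(y) * P(X>l) for survivors
    (rewritten as f_Y(y) * P(X>l | Y=y)), and f_Y for the shoulder group."""
    mu_x, mu_y, s_x, s_y, rho = theta
    x_fail = np.asarray(x_fail, dtype=float)
    y_surv = np.asarray(y_surv, dtype=float)
    y_shoulder = np.asarray(y_shoulder, dtype=float)
    part_fail = norm.logpdf(x_fail, mu_x, s_x).sum()
    cond_mean = mu_x + rho * s_x / s_y * (y_surv - mu_y)   # E[X | Y = y]
    cond_sd = s_x * np.sqrt(1.0 - rho**2)                  # SD[X | Y = y]
    part_surv = (norm.logpdf(y_surv, mu_y, s_y)
                 + norm.logsf(l, cond_mean, cond_sd)).sum()
    part_shoulder = norm.logpdf(y_shoulder, mu_y, s_y).sum()
    return part_fail + part_surv + part_shoulder
```

Maximizing this over theta (e.g. with `scipy.optimize.minimize` on its negative) gives the SPLD-with-a-shoulder MLE when the survivors are undamaged.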
To sum up, the problem is that the likelihood function given the data generated from the undamaged proof load survivors is available, but those data are not observable; the likelihood function given the data generated from the damaged proof load survivors is not available, but we observe those data.

The key idea to resolve this issue is to generate the desired data from [Y | X > l] based on the observed data from [Y∗ | X > l] using the distributional relationship between [Y | X > l] and [Y∗ | X > l]. The generated data from [Y | X > l] are then taken as if they were observed. We calculate the MLE that optimizes the likelihood given the generated data from [Y | X > l], the observed data collected from the specimens that fail due to proof loading, and the observed data collected from the shoulder group of Y. This procedure is referred to as the generalized SPLD with a shoulder approach. Simulation studies are conducted to investigate the MLE performance based on the generalized SPLD with a shoulder approach, indicating that the MLE is approximately unbiased.

We then illustrate the (generalized) SPLD with a shoulder approach with the Summer–of–2011 experiment data. The application with the Summer–of–2011 experiment achieves one of the most important goals that drive this thesis’s development. The large correlation found in our application between the lumber strength properties MOR and the log–transformed UTS has important implications for monitoring lumber products, for it suggests that if there is a need to verify the strength properties, not all of them need to be tested. The long–term monitoring program could collect one strength and predict the other from the collected strength, which is expected to result in a substantial reduction in experimental costs.

Chapter 6

Chapter 6 is the conclusion chapter, which summarizes the research questions, findings and contributions.
This chapter also discusses the research’s limitations and further research that might be done as a result of our work.

Chapter 2

Summer–of–2011 experiment

2.1 Introduction

In the summer of 2011, wood scientists at FPInnovations and statisticians at the University of British Columbia conducted an experiment to “break a piece of lumber twice” (the Summer–of–2011 experiment). Nearly a dozen statistics graduate students were involved at various times, working in an industrial lab in the summer of 2011 under the supervision of FPInnovations staff. Statistical scientists and students collaborated with wood engineers, scientists, and technicians who provided the subject area expertise needed for the research on structural performance at FPInnovations. The data set produced by this experiment was intended to serve various scientific objectives, but mainly it focused on investigating lumber strength property relationships using the proof load technique. In this chapter, we present a detailed description of the Summer–of–2011 experiment and provide an exploratory data analysis to learn data characteristics.

2.2 Lumber and its major strength properties

Lumber, as a natural building material, has mechanical and elastic properties that are related to physical and mechanical properties of the structures built from it. The mechanical properties of lumber are often represented as strength properties such as ultimate tensile strength (UTS) and modulus of rupture (MOR). UTS is the maximum tensile stress in pounds per square inch (psi) that a specimen can sustain in a direction parallel to the grain (a texture produced by wood fibres; visible to the eye) before failure. MOR is the maximum bending stress (in psi) that a specimen can sustain before failure.

Figure 2.1: Statistics graduate students working in the FPInnovations lab.
Both the UTS and the MOR are measured by loading a lumber specimen on a specific strength testing machine until failure. Unlike the destructive strength properties MOR and UTS, lumber’s elastic properties measure the bending deflection of a specimen caused by a low–level load, and the specimen recovers completely after the load is removed. One of the most common and useful elastic properties of lumber is the vibration modulus of elasticity (MOE), expressed in millions of psi. The elastic properties and the strength properties are correlated with each other. The Summer–of–2011 experiment conducted the MOR and UTS tests in accordance with American Standard Test Method (ASTM) D4761 (ASTM D4761, 2005), and conducted the vibration MOE test in accordance with ASTM D6874 (2003).

To measure a lumber specimen’s MOR, we load the specimen on a bending machine. The bending machine has two outermost support points that are 73.5 inches apart (the “test span”). The specimen is positioned between the two outermost support points so that its maximum strength–reducing characteristic (MSRC) is randomly located within the test span. The MSRC is a professional lumber grader’s best guess of the wood characteristic that will cause failure, and it is determined by visual inspection of the specimen. Examples of wood characteristics are “knots”, “grain” and “shake”. Once the bending test starts, two loading points that are equally spread out within the test span push the specimen upward with equal forces until the specimen fails (Figure 2.2).

To measure a lumber specimen’s UTS, the specimen is loaded in a tension machine. The last two feet on both ends of the lumber specimen are then grabbed and pulled by the loading points of the tension machine in opposite directions parallel to the grain until the specimen fails (Figure 2.3).

A lumber specimen’s stiffness, as defined by the vibration MOE, is measured by a non–destructive transverse vibration test.
The lumber specimen is simply supported by the machine’s two end points. One of those end points has a load cell transducer installed. The test starts with a gentle force pressing the lumber specimen and releasing it immediately. As a result of the force, the lumber specimen vibrates. The load cell reads the specimen’s weight and frequency of oscillation into a personal computer that uses the readings to calculate the vibration MOE measurement (Figure 2.4). Measuring a specimen’s vibration MOE does not damage the specimen because of the gentle force.

Figure 2.2: The bending machine conducts the destructive test for measuring MOR. The specimen is positioned between the two outermost support points. The two load points are spread equally within the test span to push the specimen upward with equal forces until failure.

Figure 2.3: The tension machine conducts the destructive test for measuring UTS. Once the machine is loaded with a specimen, it pulls the specimen in opposite directions at the loading points located at the two ends, until the specimen fails.

Figure 2.4: The MOE machine conducts the non–destructive test for measuring vibration MOE. Once the machine is loaded with a specimen, the specimen is subjected to a gentle force. The machine records the specimen’s weight and frequency of oscillation. The reading inputs are converted to a vibration MOE measurement.

Up to this point, we have introduced lumber’s strength properties UTS and MOR and its elastic property MOE, with descriptions of how they are measured. An important feature of a lumber specimen, which affects its MOR, UTS and MOE, is its moisture content (MC).
For the next section, where the Summer–of–2011 experiment is described, we need the MC for each specimen in order to assign the specimens to different treatment groups. Because lumber is a biological material, the surrounding moisture atmosphere affects a specimen: it picks up some moisture in a humid environment and gives up some moisture to a dry environment. The MC of a lumber specimen is defined as

    MC percentage = (moist weight − oven–dry weight) / (oven–dry weight) × 100.

Instead of oven drying a lumber specimen, we can also approximate its MC by an electronic moisture meter. The electronic moisture meter provides an estimated MC based on the correlation between the actual MC and variables such as wood electrical resistance. The electronic method of measuring MC saves a great amount of time compared with the oven drying method, but it is less accurate.

2.3 The Summer–of–2011 experiment

This section does not provide the details of the analytic inputs required to conduct the Summer–of–2011 experiment. We refer the reader to Appendices A and B for the details.

The experimental materials came in three bundles of lumber: two bundles labelled #1 and #2 were of grade–type SPF 1650f-1.5E, while one labelled #3 was of type SPF No.2. Each 2 × 4 inch specimen was 12 feet long. SPF stands for the Spruce, Pine and Fir species. The SPF 1650f-1.5E had a design value of 1650 psi for the fibre stress in bending and an expected average MOE of 1.5 million psi. The SPF No.2s were No.2 grade lumber that came from a mixture of Spruce, Pine, and Fir species. Both grades were produced together at the same mill. The information about the mill is confidential. We selected SPF because it is Canada’s highest volume species group. Compared with the two bundles of 1650f-1.5E, the SPF No.2 bundle came from a population with generally weaker lumber strength properties.
The inclusion of the SPF No.2 bundle added variation to the lumber strength properties, making the combined sample resemble, to some small extent, a sample from the global population of all lumber of that grade.

Work began with a professional grader who examined each specimen visually and coded its MSRC. After the visual inspection, the investigators measured each specimen’s vibration MOE and used a moisture meter to estimate its MC. Given each lumber specimen’s vibration MOE and estimated MC, we calculated its adjusted vibration MOE following a standard procedure, ASTM D1990 (Appendix A). Eight hundred and seventy specimens from the three bundles were divided into eight experimental groups based on the adjusted vibration MOEs: R20/40/60/100 and T20/40/60/100. The sorting was done bundle by bundle. For each bundle, the adjusted vibration MOEs were ranked from the largest to the smallest, and the ranks ran from 1 to 290. The specimen with the ith rank was identified and labelled ID(i) for i = 1, 2, ..., 290. The specimens were then arranged by their ranks, ID(1), ID(2), ..., ID(290). The specimens from ID(1) to ID(10) were assigned one by one to T100, T100, T60, T40, T20, R20, R40, R60, R100 and R100, respectively. We repeated this procedure for the next 10 specimens, and so on until all 290 specimens were assigned. Thus, we created 29 blocks and 10 units within each block for each bundle (Table 2.1). Since the adjusted vibration MOE was highly correlated with lumber strength properties such as MOR and UTS, the groups T100, T60, T40, T20, R100, R60, R40, and R20 could be assumed to be homogeneous in terms of the strength properties.

Groups R60/40/20 and T60/40/20 were called the proof loaded groups (or single proof load design groups), where specimens were tested under different proof load levels in the MOR mode and in the UTS mode, respectively (details later).
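The bundle-by-bundle dealing scheme just described can be sketched as follows. This is a simplified illustration with made-up MOE values; it repeats the same dealing order for every block of 10, although Table 2.1 suggests the actual design may have reversed the order in alternate blocks:

```python
import numpy as np

# Group labels dealt to each successive block of 10 ranked specimens.
PATTERN = ["T100", "T100", "T60", "T40", "T20",
           "R20", "R40", "R60", "R100", "R100"]

def assign_groups(adjusted_moe):
    """Rank one bundle's adjusted vibration MOEs from largest to smallest
    and deal the specimens into blocks of 10 following PATTERN."""
    order = np.argsort(adjusted_moe)[::-1]   # specimen indices, largest MOE first
    groups = [None] * len(adjusted_moe)
    for rank_pos, idx in enumerate(order):
        groups[idx] = PATTERN[rank_pos % 10]
    return groups
```

With 290 specimens per bundle this puts 58 specimens into each of T100 and R100 and 29 into each remaining group, so that over three bundles the shoulder groups hold 174 specimens and each proof loaded group 87, as in the text.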
Proof loading refers to a process where specimens are subjected to only a modest load that breaks only the weakest pieces in a population. We used a scheme that tests a specimen up to a pre–set load and passes those that do not break at this load. Each proof loaded group had 87 specimens. The experiment had been designed to fail 60%, 40% and 20% of the specimens by proof loading them in the MOR mode for the R60, R40, and R20 groups, respectively. As with the MOR, we expected 60%, 40% and 20% of the specimens to fail due to proof loading in the UTS mode for the T60, T40, and T20 groups, respectively.

Table 2.1: The table shows schematically how the experimental groups used in the Summer–of–2011 experiment were formed. Here, ID(i) is the identity of the lumber specimen with the ith rank in a given bundle.

    Piece identifier   Rank   Group
    ID(1)              1      T100
    ID(2)              2      T100
    ID(3)              3      T60
    ID(4)              4      T40
    ID(5)              5      T20
    ID(6)              6      R20
    ID(7)              7      R40
    ID(8)              8      R60
    ID(9)              9      R100
    ID(10)             10     R100
    ID(11)             11     R100
    ID(12)             12     R100
    ID(13)             13     R60
    ...                ...    ...

Groups R100 and T100 were called the “shoulder groups”, and each had 174 specimens. We called them shoulder groups because neither group was proof loaded; instead, the specimens in R100/T100 were loaded in a bending/tension machine, respectively, and tested to failure. The destructive tests started with Groups T100 and R100. The specimens of T100 (R100) were tested to failure in the UTS mode (the MOR mode) to measure their UTS (MOR) under displacement control, where the machine’s loading points moved at 0.5 inches per minute (2.8 inches per minute). The resulting data provided estimated quantiles of the UTS distribution and the MOR distribution, which allowed us to determine the proof load levels for the proof loaded groups R60/40/20 and T60/40/20 (Appendix B). Our goal was not to estimate the proof load levels precisely, but rather to obtain well separated proof load levels in order to investigate the proof load effects.
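In outline, a proof load level targeting a failure fraction p can be read off as the pth empirical quantile of the shoulder group's strength measurements; the thesis's actual procedure is in Appendix B, so the one-liner below is only an illustration of the idea, with made-up data:

```python
import numpy as np

def proof_load_level(shoulder_strengths, p):
    """p-th empirical quantile of the shoulder group's strengths:
    roughly a fraction p of similar specimens should fail below it."""
    return float(np.quantile(shoulder_strengths, p))
```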
Table 2.2 summarizes the estimated proof load levels in pounds, the estimated and the observed numbers of specimens that failed due to proof loading, and the observed percentage of failed specimens due to proof loading for each experimental group R60/40/20 and T60/40/20. The proof load levels are spread well to give us the contrasts needed to study the proof load effects.

Table 2.2: The estimated proof load level (PLL) in pounds, the estimated and observed numbers of specimens that failed due to the proof load, and the observed percentage of failed specimens due to the proof load.

    Group   PLL (lb)   Est. # broken   Obs. # broken   Obs. % broken
    R60     1770       52              54              62
    R40     1525       35              40              46
    R20     1237       17              20              23
    T60     25490      52              52              60
    T40     20670      35              38              44
    T20     15360      17              14              16

The specimens in R60/40/20 were proof loaded in the MOR mode. To proof load them, they were loaded on the bending machine, and the bending machine’s loading points moved at 2.8 inches per minute until the load reached 1770/1525/1237 pounds, respectively. If a specimen in R60/40/20 survived the proof loading procedure in the MOR mode, it was loaded to failure on the tension machine under displacement control at the rate of 0.5 inches per minute. As a result, for each specimen in Groups R60/40/20, we observed the MOR if it failed due to proof loading in the MOR mode, or the UTS otherwise.

Similarly, each specimen in the T60/40/20 groups was proof loaded in the UTS mode, and the tension machine’s loading points moved at 0.5 inches per minute until the load reached 25490/20670/15360 pounds, respectively. If a specimen did not fail at the end of the proof loading procedure in the UTS mode, the surviving specimens of T60/40/20 were loaded to failure in the bending machine under displacement control at the rate of 2.8 inches per minute.
We observed the UTS if the specimen failed due to proof loading in the UTS mode, or the MOR otherwise.

2.4 Exploratory data analysis

This section illustrates the major characteristics of MOE, UTS and MOR and their dependencies, based on the two data sets collected from the shoulder groups T100 and R100. Group T100 has MOE and UTS measured on the same specimen, while Group R100 has MOE and MOR measured on the same specimen. Section 2.4.1 presents the major characteristics of MOE, UTS and MOR (e.g. identifying distribution functions that describe their characteristics well). Section 2.4.2 explores the dependence between MOE and UTS, and between MOE and MOR. We cannot explore the dependence between UTS and MOR directly because they are not observed simultaneously on the same lumber specimen; approaches to assessing the dependence between two random variables that cannot be observed simultaneously are presented in Chapter 3.

2.4.1 Exploring the MOE, UTS and MOR data

Table 2.3 lists the types and units of MOE (million psi), UTS (thousand psi) and MOR (thousand psi), all three measurements being continuous. Later in this section, we fit these three continuous variables with the log-normal distribution. To avoid taking the logarithm of a dimensioned quantity (Massa et al., 2011; Zhai, 2011), we make them dimensionless before taking logarithms. For a given MOE measurement, say m1 millions of psi, we make it dimensionless by dividing m1 millions of psi by c millions of psi, where c = 1 (one unit of measurement). Other choices could have been made, for example to avoid numerical problems, but this choice proved satisfactory. Equivalently, the dimensionless version of the MOE measurement is

\[ \frac{m_1 \text{ millions of psi}}{1 \text{ million psi}} = m_1. \]

Similarly, the UTS and MOR measurements are divided by their units of measurement, namely by 1 thousand psi.
Hereafter, whenever MOE, UTS, and MOR are mentioned, we refer to their dimensionless versions. The reader is reminded that the choice of c does not affect the appropriateness of the following analysis. Indeed, the analysis would not change under a change in these units, provided that the dimensionless values are used consistently.

Table 2.3: Type and unit of MOE, UTS, and MOR.

Name   Group          Type         Unit
MOE    R100 & T100    continuous   million psi
UTS    T100           continuous   thousand psi
MOR    R100           continuous   thousand psi

To visualize MOE, UTS and MOR, Figures 2.5 and 2.6 plot their histograms and boxplots, respectively. The histograms of MOE, UTS and MOR appear to be unimodal. We see no strong evidence of asymmetry in the MOR and MOE distributions from their boxplots and histograms; however, the UTS distribution may be positively skewed.

Figure 2.5: Histograms of MOR, UTS, and MOE. The units of MOR and UTS are thousand psi. The unit of MOE is million psi.

Figure 2.6: Boxplots of MOR, UTS, and MOE. The lower and upper box boundaries are the 25th and the 75th percentiles (Q1 and Q3), respectively. Two vertical lines extend from the lower and upper box boundaries to the lower and upper whiskers, respectively. The lower whisker is 1.5 × (Q3 − Q1) below Q1, and the upper whisker is 1.5 × (Q3 − Q1) above Q3. Observations outside the whiskers are plotted as dots. The units of MOR and UTS are thousand psi. The unit of MOE is million psi.

Empirical quantiles are useful for investigating skewness. To confirm what we observe in Figures 2.5 and 2.6, we calculate the empirical quantiles of MOR, UTS, and MOE.
The empirical distribution of a sequence of independent and identically distributed random variables {X_1, X_2, ..., X_n} is defined as

\[ \hat{F}_X(x) = \frac{1}{n} \sum_{i=1}^{n} I(X_i \le x), \]

where I(·) is the (0 or 1) indicator function. The inverse of this discrete empirical distribution is defined as the empirical quantile function

\[ \hat{Q}_X(t) = \hat{F}_X^{-1}(t) = \inf\{x : \hat{F}_X(x) \ge t\}. \]

The empirical quantiles of UTS, MOR, and MOE are summarized in Table 2.4, which suggests that the MOE and MOR distributions are indeed approximately symmetric and that the UTS distribution is slightly skewed to the right.

Table 2.4: Summary statistics for MOE/UTS/MOR: the 0th (min), 25th, 50th, 75th and 100th (max) empirical percentiles, the sample mean, and the sample standard deviation (SD). The sample size is N.

Name   Group          N     Min    25%    Mean   50%    75%    Max     SD
MOE    R100 & T100    348   0.75   1.40   1.58   1.59   1.74   2.36    0.27
UTS    T100           174   0.95   3.25   4.50   4.28   5.45   9.88    1.88
MOR    R100           174   1.97   5.28   6.62   6.61   7.62   11.75   1.71

To quantify the skewness observed in the empirical quantiles (Table 2.4) and in Figures 2.5 and 2.6, we use the sample skewness defined, for a sample of observations {x_1, x_2, ..., x_n} from a random variable X, as

\[ \frac{\sqrt{n(n-1)}}{n-2} \, \frac{\sum_{i=1}^{n}(x_i - \bar{x})^3 / n}{\left(\sum_{i=1}^{n}(x_i - \bar{x})^2 / n\right)^{3/2}}. \]

Table 2.5 gives the sample skewness of MOE, UTS and MOR. The MOR and UTS distributions are positively skewed, while the MOE distribution is negatively skewed. To test whether the skewness is significantly different from 0, we carry out the D'Agostino test of skewness (D'Agostino, 1970) for each distribution. The null hypothesis is that the skewness is zero, and the alternative hypothesis is that the skewness is non-zero. The p-values are 0.23, 0.34, and 0.00 for the MOE distribution, the MOR distribution, and the UTS distribution, respectively. At the 5% significance level, the skewness of the UTS distribution is significantly different from zero.
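As a concrete illustration, the empirical quantile function and the adjusted sample skewness defined above can be computed directly. The sketch below is our own (the function names are ours, not the thesis's):

```python
import numpy as np

def empirical_quantile(x, t):
    """Q(t) = inf{v : F_n(v) >= t}, where F_n(v) = (1/n) * sum_i I(x_i <= v)."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    # The smallest 1-based index i with i/n >= t attains the infimum.
    i = max(int(np.ceil(t * n)), 1)
    return xs[i - 1]

def sample_skewness(x):
    """Adjusted sample skewness: sqrt(n(n-1))/(n-2) * m3 / m2^(3/2)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    m2 = (d ** 2).mean()
    m3 = (d ** 3).mean()
    return np.sqrt(n * (n - 1)) / (n - 2) * m3 / m2 ** 1.5
```

A right-skewed sample gives a positive value and a symmetric sample a value near zero, matching the signs reported in Table 2.5.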
We do not detect significant skewness for the MOE and MOR distributions at the 5% significance level.

Table 2.5: Sample skewness for MOE/UTS/MOR. The sample size is N.

Name   Group          N     Skewness
MOE    R100 & T100    348   -0.16
UTS    T100           174   0.73
MOR    R100           174   0.17

The Weibull distribution is one of the distributions commonly used for modelling lumber strength properties (Gupta et al., 1986). It is one of the limiting distributions of the minimum of a sequence of independently and identically distributed random variables, provided that the limiting distribution is non-degenerate. This fact often justifies the use of the Weibull distribution for modelling a failure time or strength that results from the weakest link. When a lumber specimen is tested to failure to measure its UTS, a uniform tensile stress is applied along the longitudinal axis of the specimen within the test span, and the specimen fails at its weakest segment. Such a test procedure supports the use of the Weibull distribution for modelling the UTS. However, when a lumber specimen is tested to failure for its MOR, the specimen may not fail at its weakest segment, because the stress is not uniformly distributed over the specimen within the test span. The lumber industry also commonly models lumber strength properties using the log-normal and normal distributions (Gupta et al., 1986). The Weibull, log-normal and normal distributions vary from heavier to lighter tails. The density functions for these distributions are, respectively:
for the normal distribution,

\[ f_{\mathrm{normal}}(x; \mu_1, \sigma_1) = \frac{1}{\sigma_1 \sqrt{2\pi}} \exp\left\{ -\frac{(x - \mu_1)^2}{2\sigma_1^2} \right\}, \quad (2.1) \]

where μ1 ∈ (−∞, ∞) and σ1 > 0; for the log-normal distribution,

\[ f_{\mathrm{logN}}(x; \mu_2, \sigma_2) = \frac{1}{x \sigma_2 \sqrt{2\pi}} \exp\left\{ -\frac{(\log x - \mu_2)^2}{2\sigma_2^2} \right\} \quad \text{for } x > 0, \quad (2.2) \]

where μ2 ∈ (−∞, ∞) and σ2 > 0; and for the 2-parameter Weibull distribution,

\[ f_{w}(x; \beta, \eta) = \frac{\beta}{\eta} \left( \frac{x}{\eta} \right)^{\beta - 1} \exp\left\{ -\left( \frac{x}{\eta} \right)^{\beta} \right\} \quad \text{for } x \ge 0, \quad (2.3) \]

where β > 0 and η > 0.

To find a distribution that reflects the MOE characteristics, Figure 2.7 plots the empirical quantiles of MOE against the theoretical quantiles of the Weibull, log-normal and normal distributions. Similarly, Figures 2.8 and 2.9 give the quantile-quantile plots for UTS and MOR, respectively. If the theoretical quantiles are heavier tailed than the empirical quantiles, we expect to see a concave downward curve; if the theoretical quantiles are similar to the empirical quantiles, a straight line; and if the theoretical quantiles are lighter tailed, a concave upward curve.

Figure 2.7 plots the empirical quantiles of MOE against various theoretical quantiles. The normal distribution, or the 2-parameter Weibull distribution with a shape parameter of around 4, seems an appropriate choice; the log-normal distribution is too heavy tailed for MOE. Nevertheless, in the sequel we do fit the log-normal distribution for completeness. Figure 2.8 plots the empirical quantiles of UTS against various theoretical quantiles. The log-normal distribution seems heavy tailed for UTS; the 2-parameter Weibull distribution with a shape parameter of about 2 seems appropriate. Figure 2.9 plots the empirical quantiles of MOR against various theoretical quantiles. The normal distribution, or the 2-parameter Weibull distribution with a shape parameter of around 4, seems an appropriate choice.
Figure 2.7: Sample quantiles of MOE against various theoretical quantiles. The normal theoretical quantiles are from a normal distribution with parameters μ1 = 0 and σ1 = 1. The log-normal theoretical quantiles are from a log-normal distribution with parameters μ2 = 0 and σ2 = 1. [Four panels: normal, Weibull (shape = 4), Weibull (shape = 10), and log-normal theoretical quantiles.]

Figure 2.8: Sample quantiles of UTS against various theoretical quantiles. The normal theoretical quantiles are from a normal distribution with parameters μ1 = 0 and σ1 = 1. The log-normal theoretical quantiles are from a log-normal distribution with parameters μ2 = 0 and σ2 = 1. [Four panels: normal, Weibull (shape = 1), Weibull (shape = 2), and log-normal theoretical quantiles.]

Figure 2.9: Sample quantiles of MOR against various theoretical quantiles. The normal theoretical quantiles are from a normal distribution with parameters μ1 = 0 and σ1 = 1. The log-normal theoretical quantiles are from a log-normal distribution with parameters μ2 = 0 and σ2 = 1. [Four panels: normal, Weibull (shape = 2), Weibull (shape = 4), and log-normal theoretical quantiles.]

Figures 2.7 to 2.9 suggest that each of the Weibull, log-normal and normal distributions could be appropriate in certain circumstances.
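The three candidate densities (2.1)-(2.3) translate directly into code. The following numpy sketch is our own illustration (not from the thesis) and could be used, for example, to overlay fitted densities on the histograms of Figure 2.5:

```python
import numpy as np

def f_normal(x, mu1, sigma1):
    """Normal density, Equation (2.1)."""
    return np.exp(-(x - mu1) ** 2 / (2 * sigma1 ** 2)) / (sigma1 * np.sqrt(2 * np.pi))

def f_lognormal(x, mu2, sigma2):
    """Log-normal density, Equation (2.2); defined for x > 0."""
    return np.exp(-(np.log(x) - mu2) ** 2 / (2 * sigma2 ** 2)) / (x * sigma2 * np.sqrt(2 * np.pi))

def f_weibull(x, beta, eta):
    """2-parameter Weibull density, Equation (2.3); defined for x >= 0."""
    return (beta / eta) * (x / eta) ** (beta - 1) * np.exp(-((x / eta) ** beta))
```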
So for completeness, we fit each of MOR, UTS and MOE with the log-normal distribution together with the normal and Weibull distributions. We fit MOE/UTS/MOR with these three candidate distributions and report in Table 2.6 the maximum likelihood estimate and standard error of each parameter of each candidate distribution. We can visualize the adequacy of the three candidates for describing the data by plotting each fitted cumulative distribution against the empirical cumulative distribution: if the fitted cumulative distribution is close to the empirical cumulative distribution, the candidate distribution is considered appropriate.

Table 2.6: Fitting the normal distribution, the 2-parameter Weibull distribution and the log-normal distribution to MOE/UTS/MOR. The table lists the maximum likelihood estimate (EST) of each parameter and its standard error (SE).

                       Weibull          normal           log-normal
                       β      η         μ1     σ1        μ2     σ2
MOE (R100 and T100)
  EST                  6.56   1.69      1.58   0.27      0.44   0.18
  SE                   0.26   0.01      0.01   0.01      0.01   0.01
UTS (T100)
  EST                  2.79   5.05      4.50   1.71      1.43   0.40
  SE                   0.16   0.15      0.13   0.09      0.03   0.02
MOR (R100)
  EST                  3.85   7.32      6.62   1.87      1.85   0.31
  SE                   0.22   0.15      0.14   0.10      0.02   0.02

The left panel of Figure 2.10 plots the empirical cumulative distribution of MOE together with the fitted normal, Weibull, and log-normal cumulative distributions. The fitted normal cumulative distribution lies close to the empirical cumulative distribution, so the normal distribution seems the most appropriate of the three candidates for modelling MOE. The lumber industry is most concerned about fitting adequacy in the left tail region, which is zoomed in on in the right panel of Figure 2.10.
The right panel suggests that the normal distribution is the most appropriate of the three candidate distributions for describing the lower tail quantiles of the MOE distribution.

Figure 2.10: MOE. Left panel: the empirical cumulative distribution, the fitted normal cumulative distribution with μˆ1 = 1.58 and σˆ1 = 0.27, the fitted Weibull cumulative distribution with βˆ = 6.56 and ηˆ = 1.69, and the fitted log-normal cumulative distribution with μˆ2 = 0.44 and σˆ2 = 0.18. Right panel: zoom of the left tail region of the left panel.

The left panel of Figure 2.11 plots the empirical cumulative distribution of UTS together with the fitted normal, Weibull, and log-normal cumulative distributions. The log-normal distribution seems the more appropriate distribution for UTS. The left tail region of the left panel is zoomed in on in the right panel, which suggests that the lowest quantiles of the UTS distribution are better described by the Weibull distribution. However, the log-normal distribution is the more appropriate distribution for describing the 5th percentile used in setting lumber design values: the empirical 5th percentile agrees well with the 5th percentile estimated from the fitted log-normal distribution.

Figure 2.11: UTS.
Left panel: the empirical cumulative distribution, the fitted normal cumulative distribution with μˆ1 = 4.50 and σˆ1 = 1.71, the fitted Weibull cumulative distribution with βˆ = 2.79 and ηˆ = 5.05, and the fitted log-normal cumulative distribution with μˆ2 = 1.43 and σˆ2 = 0.40. Right panel: zoom of the left tail region of the left panel.

The left panel of Figure 2.12 plots the empirical cumulative distribution of MOR together with the fitted normal, Weibull, and log-normal cumulative distributions. The normal and Weibull distributions both seem appropriate for MOR. The left tail region of the left panel is zoomed in on in the right panel, which suggests that the normal and Weibull distributions describe the lower tail quantiles of the MOR distribution better than the log-normal distribution.

Figure 2.12: MOR. Left panel: the empirical cumulative distribution, the fitted normal cumulative distribution with μˆ1 = 6.62 and σˆ1 = 1.87, the fitted Weibull cumulative distribution with βˆ = 3.85 and ηˆ = 7.32, and the fitted log-normal cumulative distribution with μˆ2 = 1.85 and σˆ2 = 0.31. Right panel: zoom of the left tail region of the left panel.

To validate what we observe in Figures 2.10-2.12, we calculate the Akaike information criterion (AIC) for each fitted distribution. The AIC measures a distribution's suitability for describing future data characteristics after penalizing for the number of parameters.
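The fitting-and-comparison workflow can be sketched in a few lines. The helper below is our own illustration, not the thesis's code: it computes the maximum likelihood estimates in closed form for the normal and log-normal candidates, solves the 2-parameter Weibull likelihood equations by Newton's method, and returns 2k_p − 2ℓ for each candidate:

```python
import numpy as np

def weibull_mle(x, tol=1e-10, max_iter=200):
    """Solve the 2-parameter Weibull likelihood equations by Newton's method."""
    x = np.asarray(x, dtype=float)
    lx = np.log(x)
    beta = 1.0
    for _ in range(max_iter):
        xb = x ** beta
        # g(beta) = 0 is the profile likelihood equation for the shape parameter.
        g = (xb * lx).sum() / xb.sum() - 1.0 / beta - lx.mean()
        gp = ((xb * lx ** 2).sum() * xb.sum() - (xb * lx).sum() ** 2) / xb.sum() ** 2 + 1.0 / beta ** 2
        step = g / gp
        beta -= step
        if abs(step) < tol:
            break
    eta = (x ** beta).mean() ** (1.0 / beta)
    return beta, eta

def aic_table(x):
    """AIC = 2*k_p - 2*loglik (here k_p = 2) for the three fitted candidates."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mu1, s1 = x.mean(), x.std()                      # normal MLEs
    ll_norm = -n * np.log(s1) - ((x - mu1) ** 2).sum() / (2 * s1 ** 2) - n / 2 * np.log(2 * np.pi)
    lx = np.log(x)
    mu2, s2 = lx.mean(), lx.std()                    # log-normal MLEs
    ll_logn = -n * np.log(s2) - lx.sum() - ((lx - mu2) ** 2).sum() / (2 * s2 ** 2) - n / 2 * np.log(2 * np.pi)
    beta, eta = weibull_mle(x)                       # Weibull MLEs
    ll_w = n * np.log(beta / eta) + (beta - 1) * np.log(x / eta).sum() - ((x / eta) ** beta).sum()
    return {"normal": 4 - 2 * ll_norm, "Weibull": 4 - 2 * ll_w, "log-normal": 4 - 2 * ll_logn}
```

For a clearly right-skewed positive sample, the log-normal candidate attains a smaller AIC than the normal, mirroring the pattern seen for UTS.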
It is defined as

\[ \mathrm{AIC} = 2 k_p - 2\ell, \]

where k_p is the number of parameters in the distribution and ℓ is the log-likelihood function evaluated at the maximum likelihood estimate. For all three candidate distributions, k_p = 2. We want to select a distribution with a small AIC value. Table 2.7 lists the AIC values for the fitted normal, Weibull, and log-normal distributions of MOE/UTS/MOR.

Table 2.7: The AIC values for the fitted normal, 2-parameter Weibull, and log-normal distributions for the MOE/UTS/MOR data. The criterion suggests the normal, log-normal and normal distributions for modelling MOE/UTS/MOR, respectively. However, in the latter two cases, the Weibull distribution is a close contender.

        normal   Weibull   log-normal
MOE     67.45    76.07     98.31
UTS     684.04   678.14    677.25
MOR     715.79   717.66    733.29

The AIC suggests using a normal distribution for modelling MOE, a Weibull or log-normal distribution for modelling UTS, and a normal or Weibull distribution for modelling MOR. As mentioned previously, the Weibull distribution may be more appropriate for modelling UTS on physical grounds.

2.4.2 Dependence between strength properties

The goals of the Summer-of-2011 experiment include not only learning the characteristics of the lumber strength properties MOE/UTS/MOR but also, more importantly, investigating the dependencies among MOE, UTS and MOR. We cannot explore the dependence between UTS and MOR directly because UTS and MOR cannot be observed on the same specimen. However, we do have paired observations of (MOE, UTS) from the shoulder group T100 and paired observations of (MOE, MOR) from the shoulder group R100. As mentioned above, MOE is known to be associated with UTS and MOR. To explore the potential association of MOE with either UTS or MOR, Figure 2.13 plots MOE against MOR (left panel) and MOE against UTS (right panel).
Both MOR and UTS appear to be highly positively correlated with MOE.

Figure 2.13: Plots of MOE against MOR (left panel, Group R100) and MOE against UTS (right panel, Group T100). Both plots suggest a high positive correlation, although a non-linear relationship seems apparent in the right panel.

Pearson's correlation coefficient evaluates the linear association between two random variables and ranges between −1 and 1. A positive value means that the variables are positively correlated: they tend to be large and small together. A value close to 1 implies a strong positive linear association between the two random variables. Similarly, a negative Pearson correlation coefficient means that the two variables have a negative linear relationship, and a value close to −1 implies a strong negative linear dependence. If the Pearson correlation coefficient were 0, there would be no linear dependence between the variables. The sample Pearson correlation coefficient between two random variables X and Y with observations {(x_i, y_i) for i = 1, 2, ..., n} is

\[ \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2 \, \sum_{i=1}^{n}(y_i - \bar{y})^2}}, \]

where \bar{x} = \sum_{i=1}^{n} x_i / n and \bar{y} = \sum_{i=1}^{n} y_i / n. The sample Pearson correlation coefficients of (MOE, UTS) and (MOE, MOR) are 0.63 and 0.72, respectively. The pairs are strongly linearly associated, confirming what we observe in Figure 2.13. These strong associations between MOE and the strength properties MOR and UTS provide a strong justification for using the MOE to assign the specimens to the different experimental groups, and for assuming that the experimental groups are homogeneous in the strength properties, as described in Section 2.3.

We cannot visualize or quantify the dependence between UTS and MOR in this chapter.
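The sample coefficient above is available as numpy's np.corrcoef, but a direct transcription of the formula is short (our own sketch, not the thesis's code):

```python
import numpy as np

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of paired observations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx = x - x.mean()
    dy = y - y.mean()
    return (dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum())
```

Applied to the (MOE, UTS) pairs from T100 and the (MOE, MOR) pairs from R100, this computation yields the 0.63 and 0.72 reported above.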
However, Chapter 3 presents a new theory to infer the stochastic dependence between these types of random variables.

2.5 Summary and conclusions

In this chapter, we introduced (Section 2.2) lumber's major characteristics, especially the strength properties UTS and MOR and the elasticity property MOE. The study of lumber's strength properties is important because those properties determine the reliability of a lumber structure under loads likely to be encountered in practice, from such things as winds and seismic activity. The Summer-of-2011 experiment was designed and conducted to investigate the strength properties and their dependencies; Section 2.3 provides a detailed description of that experiment. We assigned 870 lumber specimens to 8 experimental groups based on the adjusted MOEs. Three major components formed the experimental groups: two shoulder groups, three groups proof loaded in the UTS mode, and three groups proof loaded in the MOR mode. Section 2.4 illustrates the major characteristics of MOE, UTS, and MOR. The exploratory data analysis suggests a normal distribution for modelling MOE, a Weibull or log-normal distribution for modelling UTS, and a normal or Weibull distribution for modelling MOR. The sample Pearson correlation coefficients of (MOE, UTS) and (MOE, MOR) indicate that MOE is strongly positively correlated with both UTS and MOR.

Chapter 3

Estimating the bivariate normal's correlation when one margin is censored by size

We are often interested in the dependence between two random variables X and Y. But challenges arise when the two variables cannot be observed simultaneously; more precisely, when only Z = X I{X ≤ l} + Y I{X > l} and U = I{X ≤ l} are observed for each specimen in a random sample, where l is a known threshold set by the experimenter and I denotes the (0 or 1) indicator function.
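To make the observation scheme concrete, the following simulation sketch (our own illustration; the function name and parameterization are ours, not the thesis's) generates the observables (Z, U) from a bivariate normal pair (X, Y):

```python
import numpy as np

def spld_sample(n, mu, sigma, rho, l, seed=0):
    """Simulate Z = X*I{X <= l} + Y*I{X > l} and U = I{X <= l}."""
    rng = np.random.default_rng(seed)
    cov = [[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
           [rho * sigma[0] * sigma[1], sigma[1] ** 2]]
    x, y = rng.multivariate_normal(mu, cov, size=n).T
    u = (x <= l).astype(int)
    z = np.where(u == 1, x, y)  # X observed for proof load failures, Y for survivors
    return z, u
```

Only (Z, U) would be available to the analyst; estimating the X-Y correlation from such samples is the subject of this chapter.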
The need to estimate dependence from these kinds of censored random variables arises in two broad paradigms of statistical experimentation seen in practice.

The first paradigm arises in accelerated life testing of products that may operate for thousands of hours without failure under normal conditions. Conducting preliminary tests of the product at those normal conditions would lead to unacceptably long delays in getting the product into a competitive market. Thus industry will commonly test the product at higher than expected stress levels to accelerate the time to failure. However, that requires knowledge, not always available, of the relationship between the accelerated failure time and the normal failure time X. When that knowledge is not available, we propose an alternative solution: find a second variable Y, correlated with X, that fails in a shorter time. The test would then consist of stressing the product at normal conditions to a specified time or load, observing X if it fails and Y on the same specimen if it does not. The result is Z, and estimating the correlation between X and Y from a sample of specimens is then required in order to predict the unobserved X from Y.

The second paradigm is seen in relating lumber strength properties, as mentioned in Chapter 2. Setting structural engineering design values for forest products often involves the question of whether a quality monitoring sample is better utilized by testing one strength property with greater accuracy and precision, or by splitting it into two or more smaller samples to test multiple strength properties. A strong dependence between strength properties has important implications for the answer to this question, for it suggests that if there is a need to verify the strength properties, not all of them need to be tested.
The strength property that is more expensive to measure could be predicted from the other if the relationship between the strength properties is determined. This means that the relationship between the strength properties could be exploited to reduce overall sampling costs in long-term monitoring programs. However, most strength properties are obtained by destructive tests, so we cannot observe two destructive strength properties on the same specimen. But observing Z as described above provides a way of determining the relationship between two destructive strength properties, so that one can be predicted from the other in future.

This chapter is laid out as follows. Section 3.1 reviews the single proof load design (SPLD) proposed by Evans et al. (1984), Green et al. (1984), and Johnson and Galligan (1983), who introduce the random variable Z = X I{X ≤ l} + Y I{X > l} to determine the relationship between two random variables that cannot be observed simultaneously. The parameter of interest is then estimated by the maximum likelihood method to get the maximum likelihood estimator (MLE). The MLE maximizes the likelihood function given a random sample of responses Z and U = I{X ≤ l} collected from a SPLD experiment. We refer to this approach as the SPLD approach. In the SPLD approach, the MLE has the desired asymptotic properties of consistency and asymptotic normality. However, when the sample is of realistic size, the SPLD approach suffers from near non-identifiability of the parameters, due to a weak bridge between the sample information and the likelihood estimator.

To tackle this issue, Amorim and Johnson (1986) propose a symmetric proof load design.
The parameter of interest is estimated by the maximum likelihood method, where the MLE maximizes the likelihood function given a random sample collected from a symmetric proof load design experiment. We refer to this approach as the symmetric proof load design approach. It improves the finite-sample performance of the MLE dramatically. Unfortunately, the symmetric proof load design is not always physically feasible. For example, in the forest products context, proof loading is not appropriate for one kind of strength testing called "compression".

As an alternative solution, Section 3.2 explores the idea of regularized estimation using the penalized maximum likelihood estimator (PMLE), an idea that goes back a long way in the history of statistics (Hoerl, 1962; Hoerl and Kennard, 1970). The penalty function is motivated by a power prior whose parameters are pseudo-observations generated to reflect expert opinion. The penalty function regularizes the parameters so that the PMLE achieves desirable finite-sample performance. Simulation studies of the PMLE's performance lead to a rediscovery of a proposal in Amorim and Johnson (1986): a shoulder group of Y is needed to solve the near non-identifiability problem in the SPLD approach. Therefore, in Section 3.3, we explore the possibility of overcoming the near non-identifiability problem in the SPLD approach by redesigning the SPLD to include a shoulder group of Y. This design is referred to as the SPLD with a shoulder.

The SPLD with a shoulder experiment assigns each specimen to one of two groups: the shoulder group of Y, or the proof loaded group (i.e. the SPLD group) with proof loading in the X mode. The parameter of interest is estimated by the maximum likelihood method, where the MLE maximizes the likelihood function given a random sample collected from the SPLD with a shoulder experiment. The shoulder sample of Y strengthens the
sample-to-estimator bridge, and thus resolves the near non-identifiability of the parameters in the SPLD approach. Simulation studies are conducted to demonstrate the success of the SPLD with a shoulder approach. The methodologies presented in this chapter are very general, but we focus on the lumber application.

3.1 A brief history of estimating the correlation by proof loading

To the best of our knowledge, Johnson and Galligan (1983) introduce the first proof load experiments designed to estimate the correlation between two destructive lumber strength properties, and demonstrate the usefulness of proof loading for estimating the correlation. Proof loading is a technique used in the lumber industry to eliminate the weakest specimens in a population: it tests a specimen up to a pre-determined load level and passes the specimens that do not fail at this level. The concept underlying proof loading can be generalized and its domain of applicability broadened, as described in the sequel.

Following the work of Johnson and Galligan (1983), Green et al. (1984) and Evans et al. (1984) formulate and propose the SPLD approach to assess relationships between destructive lumber strength properties. The SPLD experiment enables us to observe Z = X I{X ≤ l} + Y I{X > l} and U = I{X ≤ l} for each specimen in a random sample. The parameter of interest is then estimated by the maximum likelihood method: the MLE maximizes the joint likelihood function given the sample of Z and U. Green et al. (1984) and Evans et al.
(1984) conduct intensive simulation studies to investigate how the accuracy and precision of the MLE obtained under the single proof load design are affected by the sample size and the proof load level l, under various true correlations between X and Y, when the correlation parameter is the only unknown parameter.

Amorim and Johnson (1986) demonstrate the consistency and asymptotic normality of the MLE when the joint distribution of the two random variables X and Y is specified to be bivariate normal. They also use the Fisher information of the correlation parameter to select the optimal proof load level. Johnson and Lu (2007) demonstrate the asymptotic properties of the MLE when the joint distribution of X and Y is specified to be a two-parameter Weibull distribution. However, the MLE has unsatisfactory finite-sample accuracy and precision when sample sizes are realistic, especially when the dependence between the two random variables is weak.

To stabilize the estimate, Amorim and Johnson (1986) propose a symmetric proof load design, in which a sample of specimens is divided into two halves. One half is tested with a single proof load design, with the initial proof loading in, say, the X mode; the other half is treated similarly, but with the initial proof loading in the Y mode. Johnson and Lu (2000) propose a double proof load design in which specimens are proof loaded in, say, the X mode; a specimen that survives is then proof loaded in the Y mode, and if it also survives this second proof loading, it is tested to failure in the X mode. Johnson and Lu (2000) also combine the ideas of the symmetric proof load design and the double proof load design to obtain an even more efficient estimator of the X-Y correlation.

However, proof loading in both modes is not always possible.
For example, in the forest products context, proof loading is not appropriate for one kind of strength testing called "compression". In addition, proof loading in both the X and Y modes could be more expensive than proof loading only in the X mode. These considerations lead to the methods presented in this chapter, which can be used without relying on proof loading in both modes. One of our methods, the SPLD with a shoulder approach, is in fact a rediscovery of the hybrid design proposed in Amorim and Johnson (1986).

3.2 The penalized SPLD approach

Estimate instability has long been recognized as a problem in multiple-parameter estimation. In this section, we explore the regularization technique used to stabilize estimates in multiple-parameter problems. A power prior, generated by pseudo-observations that incorporate expert opinion, is proposed as a technical device to motivate a penalty function. The size of the penalty function is controlled by a regularization parameter, which is selected from its regularization trace. The penalty function regularizes the parameters so that the resulting estimate achieves desirable finite-sample performance.

3.2.1 Regularized estimation

An early form of regularized estimation is seen in ridge regression. Consider the linear model

\[ Y = X\beta + \epsilon, \]

where Y = (Y_1, Y_2, ..., Y_n)^T is an n × 1 response vector, X is an n × p_r design matrix, β = (β_1, β_2, ..., β_{p_r})^T is a p_r × 1 vector of unknown coefficient parameters, ε has a multivariate normal distribution with mean vector 0_{n×1} and variance-covariance matrix σ²I_{n×n}, and I_{n×n} is the n × n identity matrix. Let y = (y_1, y_2, ..., y_n) be a realization of Y. Given the observed sample y, the log-likelihood function of (β, σ) is

\[ \ln L(\beta, \sigma; y_1, \ldots, y_n) = -n \ln \sigma - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (y_i - x_i^T \beta)^2 - \frac{n}{2} \ln(2\pi), \]

where x_i^T is the ith row of X.
Differentiating the log-likelihood function with respect to the unknown parameters β and σ, we obtain
\[ \frac{\partial}{\partial\beta} \ln L(\beta, \sigma; y_1, \dots, y_n) = \frac{1}{\sigma^2}\sum_{i=1}^{n}(y_i - x_i^T\beta)x_i, \]
\[ \frac{\partial}{\partial\sigma} \ln L(\beta, \sigma; y_1, \dots, y_n) = -\frac{n}{\sigma} + \frac{1}{\sigma^3}\sum_{i=1}^{n}(y_i - x_i^T\beta)^2. \]
The MLE value of β is the solution of the so-called normal equation
\[ \sum_{i=1}^{n}(y_i - x_i^T\beta)x_i = 0_{p_r\times 1}. \]
The MLE value of β is \hat{\beta}_{MLE} = (X^TX)^{-1}X^Ty if (X^TX)^{-1} exists, and so the MLE of β is \hat{\beta} = (X^TX)^{-1}X^TY. In fact, the MLE of β is also the least squares estimator, which minimizes
\[ ||Y - X\beta||_2^2 = \sum_{i=1}^{n}(Y_i - x_i^T\beta)^2. \]
When strong collinearity exists among some of the predictor variables, \hat{\beta}_{MLE} is unstable: small changes in the data can produce dramatic changes in \hat{\beta}_{MLE}. This poses a problem for which a solution is proposed in the celebrated papers of Hoerl (1962) and Hoerl and Kennard (1970). The solution proposed long ago is to add additional information through a constraint on β to regularize the estimate:
\[ ||\beta - \beta^0||_2^2 \le c, \tag{3.1} \]
for some known constant c, where β^0 = (β^0_1, β^0_2, ..., β^0_{p_r})^T is a known p_r × 1 vector (L2 penalty) and ||β − β^0||_2^2 = Σ_{i=1}^{p_r}(β_i − β^0_i)². Lagrange multiplier theory implies that minimizing ||Y − Xβ||_2^2 under the constraint (3.1) is equivalent to maximizing
\[ -||Y - X\beta||_2^2 - \lambda||\beta - \beta^0||_2^2 \tag{3.2} \]
with respect to β, where λ is the Lagrange multiplier. There is a one-to-one correspondence between λ and c. The term ||β − β^0||_2^2 is usually considered a penalty term, and λ is called a tuning parameter or regularization parameter that controls the penalty size. The solution of Expression (3.2) when β^0 = 0 corresponds to the classic ridge estimator (X^TX + λI)^{-1}X^TY. The additional diagonal constants added to the matrix X^TX avoid singularity. The regularization parameter λ is selected based on an examination of the regularization trace.
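As a concrete numerical sketch of the ridge estimator (X^T X + λI)^{-1} X^T y, the following stand-alone Python fragment (our own toy data, not from the thesis) contrasts the unstable least squares solution under near-collinearity with the shrunken, stable ridge solution. The two-predictor case is solved in closed form with a 2 × 2 inverse so that no linear algebra library is assumed.

```python
# Minimal sketch of the ridge estimator for a two-predictor design with
# beta^0 = 0, using only the standard library. Illustrative data only.

def ridge_2d(X, y, lam):
    """Ridge estimate (X^T X + lam I)^{-1} X^T y for a list of 2-element rows."""
    # Form A = X^T X + lam I and b = X^T y.
    a11 = sum(r[0] * r[0] for r in X) + lam
    a12 = sum(r[0] * r[1] for r in X)
    a22 = sum(r[1] * r[1] for r in X) + lam
    b1 = sum(r[0] * yi for r, yi in zip(X, y))
    b2 = sum(r[1] * yi for r, yi in zip(X, y))
    det = a11 * a22 - a12 * a12  # 2x2 inverse via the determinant
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Nearly collinear predictors: the least squares solution (lam = 0) explodes,
# while a small ridge penalty shrinks and stabilizes the coefficients.
X = [[1.0, 1.0], [1.0, 1.001], [1.0, 0.999], [1.0, 1.0]]
y = [2.0, 2.1, 1.9, 2.0]
ols = ridge_2d(X, y, 0.0)     # huge, unstable coefficients
shrunk = ridge_2d(X, y, 1.0)  # both coefficients near 0.89
```

With these data the unpenalized coefficients are on the order of ±100 even though the fitted values are near 2, which is exactly the instability the penalty is designed to suppress.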
The regularization trace plots the ridge estimates as a function of the regularization parameter λ, where λ increases from 0 until the ridge estimates level off.

If the penalty term in Expression (3.2) is replaced by the L1 penalty ||β − β^0||_1 ≤ c with β^0 = 0, where ||β − β^0||_1 = Σ_{i=1}^{p_r}|β_i − β^0_i|, we obtain the famous Lasso model selection technique (Tibshirani, 1996). This is a more extreme form of regularization for stabilizing the coefficient parameters: some of the coefficients may be shrunk exactly to 0, thereby reducing the dimension of the problem, so the Lasso can also serve as a tool for variable selection.

Considering σ² as known, when the quantity (3.2) is multiplied by 1/(2σ²) and exponentiated, we get (up to a proportionality constant) a Bayesian posterior distribution of β using a power prior, as discussed in Ibrahim et al. (1998), Ibrahim and Chen (2000), and Chen and Ibrahim (2006). Let s represent the current data. Let s_0 represent data from a prior experiment, or a data specification based on a theoretical prediction model or expert opinion (hereafter, we use "power prior data" to cover all these cases). The posterior distribution of β is proportional to
\[ L_1(\beta; s)\, L_2^{\lambda}(\beta; s_0)\, \pi(\beta; \gamma_0), \]
where the likelihood given the current data is
\[ L_1(\beta; s) \propto \exp\left[-\frac{(\beta - \hat{\beta}_{MLE}(s))^T X^T X (\beta - \hat{\beta}_{MLE}(s))}{2\sigma^2}\right], \]
the likelihood given s_0 is, with β^0 denoting its MLE value,
\[ L_2^{\lambda}(\beta; s_0) \propto \left\{\exp\left(-\frac{(\beta - \beta^0(s_0))^T(\beta - \beta^0(s_0))}{2\sigma^2}\right)\right\}^{\lambda}, \]
and π(β; γ_0) ∝ 1 is an initial prior distribution of β before observing the power prior data s_0, with γ_0 a vector of hyper-parameters.

The prior L_2^λ(β; s_0) is called the power prior, and L_2(β; s_0) is called the conjugate prior when s_0 is thought to have been generated by a prior experiment yielding data that reflect the expert's prior knowledge.
This is a classical Bayesian approach to generating such a conjugate prior (Lele and Allen, 2006; Novick and Grizzle, 1965), where all expert prior knowledge is represented in the form of pseudo-data instead of in the form of hyper-parameters. The joint power prior distribution is defined as
\[ L_2^{\lambda}(\beta; s_0)\, \pi(\beta; \gamma_0), \]
which equals L_2^λ(β; s_0) when π(β; γ_0) is taken to be the flat prior. The power prior is defined by raising the likelihood based on the power prior data to a suitable power λ (Ibrahim et al., 1998; Ibrahim and Chen, 2000). In fact, in this case the power prior distribution becomes a weighted likelihood, with λ called the relevance weight (Hu, 1994), where the weighted likelihood is used to trade bias for variance. The weighted likelihood uses the additional information in the sample s_0 (possibly biased) to improve estimation accuracy over that provided by the MLE. The regularization parameter λ controls the tail behaviour of the power prior distribution: as λ increases, the tail of the power prior distribution diminishes faster. As a result, the regularization parameter can be interpreted as a precision parameter for the power prior data. The regularization parameter is required to be non-negative. When λ = 0, the power prior does not depend on the power prior data. As λ → ∞, the power prior distribution becomes a degenerate prior with a point mass. When λ is fixed and the likelihood function L_2(β; s_0) converges to a normal density, the power prior also converges to a normal density with the precision adjusted by a factor of λ.

The application of the power prior is demonstrated in various regression models (Ibrahim et al., 1998; Ibrahim and Chen, 2000), and is supported by formal justification of certain optimality properties. When the regularization parameter λ is fixed between 0 and 1, the joint power prior distribution
minimizes a convex sum of the Kullback-Leibler divergence between two densities, where one density is based completely on the current data and the other density pools the current and the power prior data (Ibrahim et al., 2003). Ibrahim et al. (2003) link this optimality to Zellner's optimal information processing (Zellner, 1997, 2002), and demonstrate that the power prior is a 100% efficient information processing rule in the sense of Zellner's optimal information processing criterion.

In practice, finding s_0, or equivalently β^0 in the above example, may be difficult or computationally challenging unless s_0 corresponds to data. In our application, s_0 does not correspond to data. Specifying how s_0 should be generated to represent, say, the expert opinion can be challenging, especially since no prescriptive method is available. One common approach supposes that it comes from the same experiment as the one that generates s, with the amount of data representing the extent of prior knowledge. The sufficient statistic of the power prior distribution (e.g. β^0 in the regression example above) can then be estimated from the marginal likelihood based on the distribution of s conditional on the hyper-parameters. Then s_0 depends on the data s. The result has been called the Type II MLE (Robbins, 1955; Morris, 1983), and it would now be called an "empirical Bayes estimate".

The approach we present in Section 3.2.2 is similar in spirit to this approach. It is driven by our knowledge gleaned from inspections of the structure of the unpenalized likelihood function, assuming that the two random variables X and Y follow a bivariate normal distribution. Although the MLE that maximizes the unpenalized likelihood function is an asymptotically consistent estimator, in accord with the large sample theory for maximum likelihood, it is unstable in samples of realistic size, varying dramatically from sample to sample (Amorim and Johnson, 1986).
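The role of λ as a relevance weight can be seen in a one-parameter toy case (our own illustration, not from the thesis): for a normal mean with known variance and a flat initial prior, the posterior that combines the current likelihood with the powered prior likelihood has a closed-form mode, in which the n_0 pseudo-observations effectively count as λn_0 observations.

```python
def power_prior_posterior_mean(xbar, n, xbar0, n0, lam):
    """Posterior mode of a normal mean mu (variance known, flat initial prior)
    when the likelihood of n0 power prior observations with mean xbar0 is
    raised to the power lam: maximizing
    -n (mu - xbar)^2 - lam * n0 * (mu - xbar0)^2 gives a precision-weighted
    average in which the pseudo-data contribute lam * n0 observations."""
    return (n * xbar + lam * n0 * xbar0) / (n + lam * n0)

# lam = 0 ignores the power prior data entirely; lam = 1 simply pools the two
# samples; a very large lam lets the pseudo-data dominate.
```

This makes concrete the statements above that λ = 0 removes the power prior data's influence and that λ acts as a precision parameter for s_0.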
The near non-identifiability of the parameters in the SPLD approach may be due to the missing information about the tail of the Y distribution, a consequence of the design implemented to observe the random variable Z when the correlation between X and Y is weak. These considerations lead to an estimation algorithm suggested by the heuristic reasoning that the data-to-estimate bridge can be strengthened by adding an empirical by-pass route, involving an s_0 based on a hypothetical prior experiment with an estimated hypothetical sample.

3.2.2 The penalized SPLD approach

Considering two destructive properties X and Y, the SPLD proof loads each specimen in the X mode up to a pre-determined load l. If the specimen fails, its X is observed. Otherwise, the surviving specimen is tested to failure in the Y mode to measure Y. Thus, we observe (Z, U) in the SPLD experiment, where
\[ Z = \begin{cases} X & \text{if } X < l \\ Y & \text{if } X > l \end{cases} \quad\text{and}\quad U = \begin{cases} 1 & \text{if } X < l \\ 0 & \text{if } X > l. \end{cases} \]
Assume the joint distribution of X and Y is the bivariate normal distribution
\[ \begin{pmatrix} X \\ Y \end{pmatrix} \sim N_2\left( \begin{bmatrix} \mu_X \\ \mu_Y \end{bmatrix}, \begin{bmatrix} \sigma_X^2 & \rho\sigma_X\sigma_Y \\ \rho\sigma_X\sigma_Y & \sigma_Y^2 \end{bmatrix} \right). \]
Let θ = (µX, σX, µY, σY, ρ)^T. Given Y = y, the conditional random variable [X|Y = y] follows a normal distribution with mean µX + ρσX(y − µY)/σY and variance (1 − ρ²)σX².

The component of the likelihood function of θ for a single observation s = (z, u) is
\[ \begin{aligned} L(\theta; s) &= \{f_X(x)\}^u \{f_{Y, X>l}(y, x>l)\}^{1-u} \\ &= \left\{ \frac{\exp\left[-\frac{(x-\mu_X)^2}{2\sigma_X^2}\right]}{\sigma_X} \right\}^u \left\{ \int_l^{\infty} f_{X,Y}(x, y)\,dx \right\}^{1-u} \\ &= \left\{ \frac{\exp\left[-\frac{(x-\mu_X)^2}{2\sigma_X^2}\right]}{\sigma_X} \right\}^u \left\{ f_Y(y) \int_l^{\infty} f_{[X|Y]}(x|y)\,dx \right\}^{1-u} \\ &= \left\{ \frac{\exp\left[-\frac{(x-\mu_X)^2}{2\sigma_X^2}\right]}{\sigma_X} \right\}^u \left\{ \frac{\exp\left[-\frac{(y-\mu_Y)^2}{2\sigma_Y^2}\right] \int_{a_l}^{\infty} e^{-o^2/2}\,do}{\sigma_Y} \right\}^{1-u} \end{aligned} \tag{3.3} \]
where a_l = {l − µX − ρ(σX/σY)(y − µY)}/{σX√(1 − ρ²)}, f_X(x) is the density function of X, f_Y(y) is the density function of Y, and f_{[X|Y]}(x|y) is the density function of X given Y. We never observe X and Y simultaneously. The information about the X−Y correlation is obtained by conditioning on X. When the absolute value of ρ is close to zero, conditioning on X provides little information about the distribution of Y.
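The likelihood component (3.3) can be evaluated numerically as follows. This is our own sketch, with the 1/√(2π) normalizing constants (dropped in the proportional form above) restored, and with the survival probability ∫_{a_l}^∞ e^{-o²/2} do/√(2π) computed as 1 − Φ(a_l) via the error function.

```python
import math

def Phi(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def spld_loglik(theta, data, l):
    """Log-likelihood of theta = (muX, sigX, muY, sigY, rho) for SPLD
    observations data = [(z, u), ...]: u = 1 means failure below l (z is the
    observed X), u = 0 means survival (z is the failure load Y in the second
    mode). Assumes a_l stays moderate so 1 - Phi(a_l) does not underflow."""
    muX, sigX, muY, sigY, rho = theta
    ll = 0.0
    for z, u in data:
        if u == 1:  # X observed: marginal normal density of X
            ll += (-math.log(sigX * math.sqrt(2 * math.pi))
                   - (z - muX) ** 2 / (2 * sigX ** 2))
        else:       # Y observed jointly with the survival event X > l
            a_l = ((l - muX - rho * (sigX / sigY) * (z - muY))
                   / (sigX * math.sqrt(1 - rho ** 2)))
            ll += (-math.log(sigY * math.sqrt(2 * math.pi))
                   - (z - muY) ** 2 / (2 * sigY ** 2)
                   + math.log(1.0 - Phi(a_l)))
    return ll
```

As a check, when ρ = 0 the survivor term factorizes into the marginal density of Y times the constant survival probability 1 − Φ((l − µX)/σX), as noted in the text.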
In fact, when ρ = 0, y disappears from the part of the likelihood component where X and Y are linked, and a_l = (l − µX)/σX. Given a sample of observations s = {(z_1, u_1), (z_2, u_2), ..., (z_n, u_n)}, the likelihood function of θ is
\[ L(\theta; s) = \prod_{i=1}^{n} L(\theta; s_i). \]
As mentioned previously, the MLE of θ is very unstable, especially when the correlation is weak. We introduce the power prior as a penalty function to regularize the parameters, and the construction of the power prior is motivated by the symmetric design's success. In the symmetric proof load design, one half of the specimens is proof loaded in the X mode and the other half is proof loaded in the Y mode. Proof loading in the Y mode is not always feasible. However, suppose hypothetically that we had conducted a single proof load design and proof loaded the specimens in the Y mode up to a pre-determined level l_h. We would observe
\[ W_h = \begin{cases} Y_h & \text{if } Y_h \le l_h \\ X_h & \text{if } Y_h > l_h \end{cases} \quad\text{and}\quad V_h = \begin{cases} 1 & \text{if } Y_h \le l_h \\ 0 & \text{if } Y_h > l_h. \end{cases} \]
The component of the hypothetical likelihood function of θ given a single observation s_0 = (w_h, v_h) is
\[ \begin{aligned} L_h(\theta; s_0) &= \{f_{Y_h}(y_h)\}^{v_h} \{f_{X_h, Y_h > l_h}(x_h, y_h > l_h)\}^{1-v_h} \\ &= \{f_{Y_h}(y_h)\}^{v_h} \left\{ \int_{l_h}^{\infty} f_{X_h, Y_h}(x_h, y_h)\,dy_h \right\}^{1-v_h} \\ &= \{f_{Y_h}(y_h)\}^{v_h} \left\{ f_{X_h}(x_h) \int_{l_h}^{\infty} f_{[Y_h|X_h]}(y_h|x_h)\,dy_h \right\}^{1-v_h} \\ &= \left\{ \frac{\exp\left[-\frac{(y_h-\mu_Y)^2}{2\sigma_Y^2}\right]}{\sigma_Y} \right\}^{v_h} \left\{ \frac{\exp\left[-\frac{(x_h-\mu_X)^2}{2\sigma_X^2}\right] \int_{a_{l_h}}^{\infty} e^{-o^2/2}\,do}{\sigma_X} \right\}^{1-v_h}, \end{aligned} \]
where a_{l_h} = {l_h − µY − ρ(σY/σX)(x_h − µX)}/{σY√(1 − ρ²)}.

We use the approach of Lele and Allen (2006) and Novick and Grizzle (1965) to generate the power prior data. We posit experts who provide reasonable guesses for µX, σX, µY and σY, denoted by (µ0X, σ0X, µ0Y, σ0Y) respectively. Lumber experts have good knowledge of the marginal parameters through years of monitoring. At the same time, we ask the expert to provide an opinion about the sign of ρ. The sample size of the hypothetical observations is m.
The size of m reflects the expert's confidence in those opinions.

The likelihood observations s are represented by {x_1, ..., x_{k_s}, y_1, ..., y_{n−k_s}}, where k_s specimens fail below the proof load level l and n − k_s specimens survive. We randomly select a subset {x̃_1, x̃_2, ..., x̃_{k_h}} of size k_h from {x_1, ..., x_{k_s}} without replacement, for 0 < k_h ≤ k_s, and a subset {ỹ_{k_h+1}, ỹ_{k_h+2}, ..., ỹ_m} of size m − k_h from {y_1, ..., y_{n−k_s}} without replacement, for 0 < m − k_h ≤ n − k_s. We now introduce the key idea underlying our approach, namely the empirical construction of the power prior data, giving us the empirical Bayes step. To wit, the m independent hypothetical observations s_0 for constructing the power prior distribution are taken to be s_0 = {y_{h_1}, y_{h_2}, ..., y_{h_{k_h}}, x_{h_{k_h+1}}, x_{h_{k_h+2}}, ..., x_{h_m}}, where
\[ y_{h_j} = \sigma_Y^0\, \mathrm{sign}(\rho)\,(\tilde{x}_j - \mu_X^0)/\sigma_X^0 + \mu_Y^0 \quad\text{for } j = 1, 2, \dots, k_h, \]
and
\[ x_{h_j} = \sigma_X^0\, \mathrm{sign}(\rho)\,(\tilde{y}_j - \mu_Y^0)/\sigma_Y^0 + \mu_X^0 \quad\text{for } j = k_h + 1, \dots, m. \]
Thus the power prior distribution of θ is L_h^λ(θ; s_0), where
\[ L_h(\theta; s_0) \propto \prod_{j=1}^{k_h} \frac{\exp\left[-\frac{(y_{h_j}-\mu_Y)^2}{2\sigma_Y^2}\right]}{\sigma_Y} \prod_{j=k_h+1}^{m} \frac{\exp\left[-\frac{(x_{h_j}-\mu_X)^2}{2\sigma_X^2}\right] \int_{a_{h_j}}^{\infty} e^{-z^2/2}\,dz}{\sigma_X}, \]
λ ≥ 0, a_{h_j} = {l_h − µY − ρ(σY/σX)(x_{h_j} − µX)}/{σY√(1 − ρ²)}, and l_h = max(y_{h_j}, j = 1, 2, ..., k_h). The posterior distribution of θ given a sample s = {(z_i, u_i), i = 1, 2, ..., n} from a SPLD experiment, where the specimens are proof loaded in the X mode up to a pre-determined level l, using a power prior for θ and a flat initial prior for θ, would then be
\[ \pi(\theta | s, s_0) \propto L(\theta; s)\, L_h^{\lambda}(\theta; s_0), \quad\text{where } L(\theta; s) = \prod_{i=1}^{n} L(\theta; s_i). \]
A frequentist may view our invocation of the Bayesian paradigm as merely a technical artifice to motivate a penalty function rather than as an inferential paradigm. The penalty function regularizes the parameters to achieve a desirable efficiency. The penalized maximum likelihood estimator (PMLE) θ̂_p is defined as
\[ \hat{\theta}_p = \operatorname*{argmax}_{\theta} \left[ L(\theta; s)\, L_h^{\lambda}(\theta; s_0) \right], \]
where the penalty function L_h^λ(θ; s_0) is the power prior distribution.
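The standardize-and-map step that turns observed failures and survivors into the pseudo-observations y_{h_j} and x_{h_j} can be sketched as below. This is our own illustration of that one step only (it does not compute l_h or the power prior itself), and the function and argument names are ours.

```python
import random

def power_prior_data(x_fail, y_surv, expert, m, kh, sign_rho=1):
    """Generate the m pseudo-observations s0 from expert marginal guesses
    expert = (muX0, sigX0, muY0, sigY0): standardize a random subset of the
    observed X failures / Y survivors and map each value into the other mode,
    flipping the sign according to the expert's opinion about rho."""
    muX0, sigX0, muY0, sigY0 = expert
    xs = random.sample(x_fail, kh)       # subset of observed X failures
    ys = random.sample(y_surv, m - kh)   # subset of observed Y survivors
    yh = [sigY0 * sign_rho * (x - muX0) / sigX0 + muY0 for x in xs]
    xh = [sigX0 * sign_rho * (y - muY0) / sigY0 + muX0 for y in ys]
    return yh, xh
```

Each pseudo-value simply preserves the standardized score of the source observation, relocated to the other mode's expert-guessed marginal scale.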
This approach is referred to as the penalized SPLD approach.

Since we introduce the penalty function to achieve stable estimates of ρ and µY, the regularization parameter is selected by the regularization trace: the regularization parameter at which we observe the estimates reach stability is selected. For the numerical optimization, we work with the unconstrained parameterization θ̃ = (µX, log(σX), µY, log(σY), log[(1 + ρ)/(1 − ρ)])^T. Let ρ̃ denote log[(1 + ρ)/(1 − ρ)]. The regularization parameter λ is selected as follows:

1. Create an increasing sequence of λ values, denoted λ_1 < λ_2 < ... < λ_{k_λ}.

2. For each λ_i, calculate the penalized maximum likelihood estimates $\hat{\tilde{\rho}}^{(\lambda_i)}$ and $\hat{\mu}_Y^{(\lambda_i)}$.

3. Plot the trace of λ_i against each of $\hat{\tilde{\rho}}^{(\lambda_i)}$ and $\hat{\mu}_Y^{(\lambda_i)}$ separately to inspect whether the regularization traces are wobbly initially and level off as λ_i increases.

4. Calculate the approximate gradient of the regularization trace, $(\hat{\tilde{\rho}}^{(\lambda_i)} - \hat{\tilde{\rho}}^{(\lambda_{i-1})})/(\lambda_i - \lambda_{i-1})$, as λ_{i−1} increases to λ_i. If the absolute value of the approximate gradient is smaller than 1, then the regularization parameter suggested by the stability of $\hat{\tilde{\rho}}$ is λ_{i_1}.

5. Repeat Step 4 for the estimate $\hat{\mu}_Y$. Suppose we observe that the estimate $\hat{\mu}_Y$ becomes stable when the regularization parameter is λ_{i_2}.

6. The regularization parameter selected is max(λ_{i_j} : j = 1, 2), at which the estimates $\hat{\tilde{\rho}}$ and $\hat{\mu}_Y$ are both stabilized.

The penalized SPLD approach can be considered a weighted likelihood approach in which the regularization parameter controls how much of the information in s_0 is treated as relevant to the estimation of the parameters. The regularization via the penalty function greatly improves the precision and the accuracy of the estimation of ρ and µY, as demonstrated in the simulation studies of the next section (Table 3.1).

3.2.3 Simulation studies

A SPLD sample of size N = 300 is generated from a bivariate normal distribution with µX = 6.48, σX = 1.85, µY = 1.43, σY = 0.4, and ρ equal to either 0.2 or 0.8.
The two different values of ρ are selected because we expect a positive correlation between strength properties: the specifications ρ = 0.2 and ρ = 0.8 represent the low and the high correlation cases, respectively. The specifications of the marginal parameters µX, σX, µY, and σY are guided by the shoulder group data (R100 and T100) from the Summer-of-2011 experiment. The shoulder samples R100 and T100 enable us to estimate the parameters of the marginal distributions of X and Y, and thus provide the values of the marginal parameters for the simulation studies. Throughout the thesis, we focus on this particular normal distribution for our datasets. The specimens are proof loaded in the X mode. The proof load level l is specified such that 50% of specimens fail during the proof loading procedure.

In the absence of good expert information, samples from the marginal distributions would offer an optimal choice for inputting the required expert opinion about the marginal parameters. Two shoulder groups are generated from the univariate marginal distributions of X and Y: one sample of size 150 is generated from a normal distribution with µX = 6.48 and σX = 1.85; another sample of size 150 is generated from a normal distribution with µY = 1.43 and σY = 0.4. For the penalized SPLD approach, the expert opinions about µX, σX, µY, and σY are the sample means and the sample standard deviations of the shoulder groups, and the sign of ρ is taken to be positive. With these expert opinions, we generate m = 30 (k_h = 15) hypothetical power prior observations. We suggest setting the number of hypothetical observations to 10% of the sample size. This suggestion is guided by a considerable number of simulation studies in which we observe that 10% of the sample size generally works well under various true values of ρ. We use Steps 1 to 6 in Section 3.2 to select the regularization parameter and record the PMLE value for the selected regularization parameter.
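The gradient-threshold rule of Steps 1 to 6 can be sketched as a small selection routine. This is our own rendering (the traces are supplied as precomputed estimate sequences; actually producing them requires the PMLE optimization, which is not shown), and the function name and `tol` argument are ours.

```python
def select_lambda(lams, rho_t_trace, muY_trace, tol=1.0):
    """Pick the regularization parameter from an increasing grid lams:
    for each trace, find the first lambda at which the absolute approximate
    gradient (difference quotient) drops below tol, then take the larger of
    the two, so that both traces have levelled off."""
    def first_stable(est):
        for i in range(1, len(lams)):
            grad = (est[i] - est[i - 1]) / (lams[i] - lams[i - 1])
            if abs(grad) < tol:
                return lams[i]
        return lams[-1]  # fall back to the largest lambda on the grid
    return max(first_stable(rho_t_trace), first_stable(muY_trace))
```

Taking the maximum over the two per-trace choices implements Step 6, which requires both the ρ̃ trace and the µY trace to be stabilized.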
Meanwhile, we also calculate the corresponding MLE value based on the SPLD approach given the simulated SPLD sample.

The above procedure is repeated 10^4 times. Table 3.1 reports the averages of the MLE values and the PMLE values. Table 3.1 also summarizes the standard deviations of the MLE values and the PMLE values across the 10^4 data sets.

The above simulation study is conducted again, but with smaller sample sizes N = 87 and N = 150. The results are summarized in the same fashion, and are also reported in Table 3.1.

In terms of accuracy and precision, the finite sample performances of the MLE and the PMLE of µX and σX are similar. However, the penalized maximum likelihood estimates of µY, σY, and ρ outperform the corresponding maximum likelihood estimates in both accuracy and precision. Thus, the penalized SPLD approach serves as a better alternative to the SPLD approach.

Table 3.1: Finite sample performance of the PMLE and the MLE with N = 87, 150, and 300. The proof load level is specified such that 50% of the specimens fail below the load level.
The simulation setting is described in the text.

                        N = 87          N = 150         N = 300
                     PMLE    MLE     PMLE    MLE     PMLE    MLE
ρ = 0.2:
  µX = 6.48   EST    6.46    6.46    6.48    6.48    6.48    6.48
              SE     0.24    0.24    0.18    0.19    0.13    0.13
  σX = 1.85   EST    1.81    1.81    1.84    1.84    1.84    1.84
              SE     0.21    0.22    0.16    0.17    0.12    0.12
  µY = 1.43   EST    1.43    1.47    1.43    1.47    1.43    1.47
              SE     0.09    0.34    0.08    0.30    0.09    0.26
  σY = 0.4    EST    0.40    0.50    0.41    0.49    0.41    0.47
              SE     0.05    0.10    0.04    0.08    0.03    0.06
  ρ = 0.2     EST    0.20    0.06    0.20    0.07    0.20    0.07
              SE     0.29    0.75    0.26    0.70    0.25    0.64
ρ = 0.8:
  µX = 6.48   EST    6.46    6.46    6.48    6.48    6.48    6.48
              SE     0.23    0.24    0.18    0.19    0.13    0.13
  σX = 1.85   EST    1.82    1.81    1.84    1.84    1.84    1.84
              SE     0.20    0.22    0.16    0.17    0.12    0.12
  µY = 1.43   EST    1.43    1.53    1.43    1.50    1.43    1.47
              SE     0.07    0.24    0.07    0.19    0.06    0.14
  σY = 0.4    EST    0.40    0.40    0.40    0.40    0.40    0.39
              SE     0.05    0.08    0.04    0.07    0.04    0.05
  ρ = 0.8     EST    0.79    0.43    0.78    0.53    0.78    0.64
              SE     0.14    0.64    0.15    0.55    0.12    0.41

To visualize the finite sample performance of the PMLE and the MLE in terms of accuracy and precision, we randomly select 50 pairs of PMLE and MLE values of ρ from the 10^4 cases summarized in Table 3.1 when ρ = 0.2. Figure 3.1 plots the PMLE value of ρ against the MLE value of ρ for each selected pair. The MLE values of ρ spread out anywhere between −1 and 1, while the majority of the PMLE values of ρ stay in a neighbourhood of the true value 0.2.

Figure 3.1: Estimates of ρ: 50 randomly selected pairs of (PMLE, MLE) from Table 3.1 when ρ = 0.2. The solid lines indicate the true value ρ = 0.2, showing wide variation in the MLE values compared with the PMLE values. [Scatterplot of penalized MLE against MLE; both axes run from −1 to 1.]

Optimal proof load level

The above simulation studies specify the proof load level such that 50% of the specimens fail below the load level.
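The data-generating step used in these simulation studies can be sketched as follows. This is our own illustration, assuming the proof load l is set at the p_fail quantile of X (so a fraction p_fail of specimens fails on average), and drawing Y from its conditional normal distribution given X; all names are ours.

```python
import math
import random

def normal_quantile(p, mu, sigma):
    """p-quantile of N(mu, sigma^2) by bisection on the standard normal CDF."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return mu + sigma * lo

def simulate_spld(n, muX, sigX, muY, sigY, rho, p_fail, seed=1):
    """Simulate n SPLD observations (z, u) from a bivariate normal:
    specimens with X below the p_fail-quantile l fail (z = X, u = 1);
    survivors are broken in the Y mode (z = Y, u = 0). Returns (data, l)."""
    rng = random.Random(seed)
    l = normal_quantile(p_fail, muX, sigX)
    data = []
    for _ in range(n):
        x = rng.gauss(muX, sigX)
        # Conditional distribution of Y given X = x under the bivariate normal.
        y = rng.gauss(muY + rho * (sigY / sigX) * (x - muX),
                      sigY * math.sqrt(1.0 - rho ** 2))
        data.append((x, 1) if x < l else (y, 0))
    return data, l
```

With p_fail = 0.5 the quantile is simply µX; varying p_fail reproduces the 20% to 80% failure-rate settings examined next.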
We have also conducted simulation studies to investigate how the proof load level specification affects the finite sample performance of the PMLE.

A SPLD sample of size N = 300 is generated from a bivariate normal distribution with µX = 6.48, σX = 1.85, µY = 1.43, σY = 0.4, and ρ equal to either 0.2 or 0.8. The specimens are proof loaded in the X mode. The proof load level is specified such that 20%, 40%, 50%, 60% or 80% of specimens fail because of proof loading. Two shoulder groups are generated from the univariate marginal distributions of X and Y: one sample of size 150 is generated from a normal distribution with µX = 6.48 and σX = 1.85; another sample of size 150 is generated from a normal distribution with µY = 1.43 and σY = 0.4. For the penalized SPLD approach, the expert opinions about µX, σX, µY, and σY are the sample means and the sample standard deviations of the shoulder groups, and the sign of ρ is taken to be positive. With these expert opinions, we generate m = 30 (k_h = 15) hypothetical power prior observations. We use Steps 1 to 6 in Section 3.2 to select the regularization parameter and calculate the corresponding PMLE value.

The above procedure is repeated 10^4 times. Table 3.2 reports the averages and the standard deviations of the PMLE values across the 10^4 data sets for each proof load level. Based on the finite sample performance of the PMLE, it is not clear what the optimal proof load level is when ρ = 0.8. When ρ = 0.2, the optimal proof load level is the one expected to break 50% of the specimens during the proof loading procedure. However, the finite sample performance of the PMLE is also extremely sensitive to the proof load level specification, especially in the low correlation case.

The proof load level affects the generated pseudo-observations in the penalty function.
The pseudo-observations are generated in the hope of providing information about the lower tail of the Y distribution, which has been cut off by proof loading when ρ is positive and the sample is of realistic size. Our original conjecture was that the missing information about the lower tail of the Y distribution leads to the unstable estimates of µY and ρ when ρ is positive. However, we detect a consistent pattern in Table 3.2: whenever the parameter µY is under-estimated, the parameter ρ is over-estimated; whenever the parameter µY is over-estimated, the parameter ρ is under-estimated; and when the parameter µY is estimated well, the parameter ρ is well estimated too. Therefore, we suspect that the penalized SPLD approach only works well when the pseudo-observations, working together with the likelihood observations, provide information about the parameter µY. Information about the lower tail of the Y distribution is not very helpful for improving the accuracy and precision of the maximum likelihood estimate in the SPLD approach. A possible interpretation is that the proof load survivors provide information about the conditional mean of Y given X > l; a comparison between this conditional mean and the marginal mean of Y would largely reveal the X−Y dependence. Therefore, we would not estimate the X−Y dependence well without appropriate information about the parameter µY. To introduce appropriate information about the parameter µY, a shoulder group of Y, in which all specimens are tested to failure in the Y mode without any proof loading, should be included in the SPLD.

Table 3.2: Finite sample performance of the PMLE with N = 300. The value p indicates the percentage of specimens that fail below the proof load level. A higher p corresponds to a higher proof load level. The simulation setting is described in the text.

                    p = 20%   p = 40%   p = 50%   p = 60%   p = 80%
ρ = 0.2:
  µX = 6.48   EST    6.47      6.48      6.48      6.48      6.48
              SE     0.24      0.15      0.13      0.12      0.11
  σX = 1.85   EST    1.84      1.84      1.84      1.84      1.85
              SE     0.18      0.13      0.12      0.10      0.09
  µY = 1.43   EST    1.39      1.40      1.43      1.47      1.60
              SE     0.07      0.08      0.09      0.10      0.13
  σY = 0.4    EST    0.43      0.42      0.41      0.40      0.40
              SE     0.03      0.03      0.03      0.03      0.05
  ρ = 0.2     EST    0.41      0.30      0.20      0.10     −0.10
              SE     0.44      0.27      0.25      0.24      0.23
ρ = 0.8:
  µX = 6.48   EST    6.47      6.48      6.48      6.48      6.49
              SE     0.24      0.15      0.13      0.12      0.11
  σX = 1.85   EST    1.84      1.84      1.84      1.85      1.85
              SE     0.19      0.13      0.12      0.10      0.09
  µY = 1.43   EST    1.42      1.43      1.43      1.44      1.49
              SE     0.04      0.05      0.06      0.06      0.08
  σY = 0.4    EST    0.41      0.40      0.40      0.39      0.37
              SE     0.02      0.03      0.04      0.04      0.04
  ρ = 0.8     EST    0.80      0.79      0.78      0.77      0.74
              SE     0.17      0.14      0.12      0.10      0.11

Conjecture: a shoulder group of Y is needed

Our new conjecture says that a shoulder group of Y is needed to regularize the parameters and achieve estimate stability. To support this conjecture, we consider the following scenario. Suppose specimens are proof loaded in the X mode up to a proof load level l, and the survivors are tested to failure in the Y mode. We observe the two random variables Z and U, where
\[ Z = \begin{cases} X & \text{if } X < l \\ Y & \text{if } X > l \end{cases} \quad\text{and}\quad U = \begin{cases} 1 & \text{if } X < l \\ 0 & \text{if } X > l. \end{cases} \]
The component of the likelihood function of θ given a single observation s = (z, u) collected from the SPLD experiment, where specimens are proof loaded in the X mode up to the pre-determined load level l, is
\[ \begin{aligned} L(\theta; s) &= \{f_X(x)\}^u \{f_{Y, X>l}(y, x>l)\}^{1-u} \\ &= \left\{ \frac{\exp\left[-\frac{(x-\mu_X)^2}{2\sigma_X^2}\right]}{\sigma_X} \right\}^u \left\{ \frac{\exp\left[-\frac{(y-\mu_Y)^2}{2\sigma_Y^2}\right] \int_{a_l}^{\infty} e^{-o^2/2}\,do}{\sigma_Y} \right\}^{1-u}, \end{aligned} \]
where a_l = {l − µX − ρ(σX/σY)(y − µY)}/{σX√(1 − ρ²)}.

In addition to the SPLD group, suppose we observe Y from the marginal distribution of Y but conditional on Y ≤ l_y, where l_y is known (we refer to this as the truncated Y group).
The component of the joint likelihood function of θ given the single observation s = (z, u) from the SPLD group and a single observation y from the truncated Y group is
\[ \begin{aligned} L_{PFQY}(\theta; s, y) &= L(\theta; s)\, \{f_Y(y) I\{y \le l_y\}\} \\ &= \left\{ \frac{\exp\left[-\frac{(x-\mu_X)^2}{2\sigma_X^2}\right]}{\sigma_X} \right\}^u \left\{ \frac{\exp\left[-\frac{(y-\mu_Y)^2}{2\sigma_Y^2}\right] \int_{a_l}^{\infty} e^{-o^2/2}\,do}{\sigma_Y} \right\}^{1-u} \times \frac{\exp\left[-\frac{(y-\mu_Y)^2}{2\sigma_Y^2}\right]}{\sigma_Y} I\{y \le l_y\}, \end{aligned} \]
where I(·) is the 0-1 indicator function. Given a sample of observations s = {(z_1, u_1), (z_2, u_2), ..., (z_n, u_n)} collected from the SPLD experiment and a sample of observations y = {y_1, y_2, ..., y_{n_y}} collected from the truncated Y group, the likelihood function of θ is
\[ L_{PFQY}(\theta; s, \boldsymbol{y}) = \left\{ \prod_{i=1}^{n} L(\theta; s_i) \right\} \left\{ \prod_{i=1}^{n_y} f_Y(y_i) I\{y_i \le l_y\} \right\}. \]
When the level l_y is low, the y observations come from the lower tail of the Y distribution. As l_y → ∞, the y observations are in fact drawn from the full marginal distribution of Y. If our new conjecture is true, then the performance of the maximum likelihood estimate that maximizes the joint likelihood L_{PFQY}(θ; s, y) is expected to improve as the truncation level l_y increases. Simulation studies are conducted to investigate the performance of the maximum likelihood estimate that maximizes the joint likelihood function L_{PFQY}(θ; s, y) for various l_y values.

A SPLD sample of size 300 is generated from a bivariate normal distribution with parameters µX = 6.48, σX = 1.85, µY = 1.43, σY = 0.4 and ρ = 0.2. The specimens are proof loaded in the X mode. The proof load level l is specified such that 50% of specimens fail below the load level. A sample of size 100 is generated from the univariate normal distribution with µY = 1.43 and σY = 0.4, denoted {y_1, y_2, ..., y_100}. However, we only retain the y_i observations with y_i ≤ l_y = Q_Y(q), where Q_Y(q) is the qth percentile of Y. The value of q is 5, 10, 30, 50, 75, 90, 95 or 100. When q is small, we generate observations from the lower tail of the Y distribution.
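The generation of a truncated Y group (Y : Y ≤ Q_Y(q)) can be sketched as below. This is our own illustration using only the standard library; the truncation point is found by bisection on the standard normal CDF, and the function name is ours.

```python
import math
import random

def truncated_shoulder(n, muY, sigY, q, seed=7):
    """Draw n values from N(muY, sigY^2) and keep only those at or below the
    q-th percentile of Y (0 < q <= 100); q = 100 keeps the whole sample.
    Returns (kept_values, truncation_point)."""
    rng = random.Random(seed)

    def Phi(t):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

    # Bisection for the (q/100)-quantile of the standard normal.
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if Phi(mid) < q / 100.0:
            lo = mid
        else:
            hi = mid
    ly = muY + sigY * lo
    ys = [rng.gauss(muY, sigY) for _ in range(n)]
    return [y for y in ys if y <= ly], ly
```

As q increases toward 100, the truncation point moves far into the upper tail and effectively the full marginal sample of Y is retained, matching the limiting case described in the text.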
As q increases to 100, we essentially generate observations from the full marginal distribution of Y.

We calculate the maximum likelihood estimate that maximizes the joint likelihood function L_{PFQY}(θ; s, y) given the observations generated from the SPLD group and the truncated group (Y : Y ≤ Q_Y(q)). This procedure is repeated 100 times for each q value. Figure 3.2 plots the 100 maximum likelihood estimates of ρ for each q value. As q increases, the accuracy of the maximum likelihood estimate improves. When Q_Y(q) is the 100th percentile of Y, the maximum likelihood estimates of ρ are in a close neighbourhood of the true value. Figure 3.2 confirms our conjecture that a shoulder sample of Y is needed for the SPLD approach to estimate the parameter ρ properly.

To conclude, a shoulder sample of Y is needed to resolve the near non-identifiability of µY and ρ in the SPLD approach. More precisely, instead of proof loading all specimens, the specimens are assigned to one of two groups: the proof loaded group (i.e. the SPLD group) and the shoulder group of Y. The shoulder group of Y helps identify the parameters µY and σY. As a result, the near non-identifiability of µY and ρ is resolved (details in the next section).

3.3 The SPLD with a shoulder approach

We redesign the SPLD such that specimens in a random sample are assigned to one of two groups: the SPLD group and the shoulder group of Y. The specimens in the shoulder group of Y are tested to failure in the Y mode without any proof loading. This design is referred to as the SPLD with a shoulder. As mentioned previously, Amorim and Johnson (1986) also found such a design, which they call a hybrid design; they also address the optimal specimen allocation between the shoulder and the SPLD groups in their hybrid design.

The component of the joint likelihood function of θ given the single observation s = (z, u) from the SPLD group and the single observation y
from the shoulder group of Y is
\[ L_{PFY}(\theta; s, y) = L(\theta; s)\, L_Y(\mu_Y, \sigma_Y; y), \tag{3.4} \]
where L_Y(µY, σY; y) = exp{−(y − µY)²/(2σY²)}/(σY√(2π)) and L(θ; s) is given by Equation (3.3). Given a sample of observations s = {(z_1, u_1), (z_2, u_2), ..., (z_n, u_n)} collected from the SPLD experiment and a sample of observations y = {y_1, y_2, ..., y_{n_y}} collected from the shoulder group of Y, the likelihood function of θ is
\[ L_{PFY}(\theta; s, \boldsymbol{y}) = \left\{ \prod_{i=1}^{n} L(\theta; s_i) \right\} \left\{ \prod_{i=1}^{n_y} L_Y(\mu_Y, \sigma_Y; y_i) \right\}. \]
The parameter θ is estimated by the maximum likelihood method, where the MLE maximizes the joint likelihood function L_{PFY}(θ; s, y). We refer to this approach as the SPLD with a shoulder approach. Simulation studies in Section 3.3.1 demonstrate the finite sample performance of the MLE obtained by the SPLD with a shoulder approach. We also investigate the optimal specimen allocation between the shoulder group and the SPLD group, and the optimal proof load level specification for the SPLD group.

Figure 3.2: Maximum likelihood estimates that maximize the joint likelihood L_{PFQY}(θ; s, y) given the observations from the proof loaded group and the truncated group (Y : Y ≤ Q_Y(q)) for various q values. The solid lines indicate the true value of ρ. [Eight panels, one per q ∈ {5, 10, 30, 50, 75, 90, 95, 100}, each plotting the MLE of ρ against the replicate index 1-100.]

3.3.1 Simulation studies

Optimal allocation

The total number of specimens is N = 300. We assign 300 × pp of the specimens to the SPLD group and 300 − 300 × pp to the shoulder group. Since only the SPLD group data contain information about the parameter of most interest (the dependence parameter ρ), we do not consider cases where pp < 0.5.
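The joint likelihood (3.4) extends the SPLD likelihood by a product of marginal normal densities for the shoulder sample, so on the log scale the two contributions simply add. The sketch below, our own rendering with the normalizing constants written out, makes that structure explicit.

```python
import math

def Phi(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def spld_shoulder_loglik(theta, spld, shoulder, l):
    """Log of the joint likelihood L_PFY: SPLD observations (z, u) plus a
    shoulder sample of Y values tested to failure without proof loading.
    theta = (muX, sigX, muY, sigY, rho)."""
    muX, sigX, muY, sigY, rho = theta
    ll = 0.0
    for z, u in spld:
        if u == 1:  # failure below l: marginal density of X
            ll += (-math.log(sigX * math.sqrt(2 * math.pi))
                   - (z - muX) ** 2 / (2 * sigX ** 2))
        else:       # survivor broken in the Y mode
            a_l = ((l - muX - rho * (sigX / sigY) * (z - muY))
                   / (sigX * math.sqrt(1 - rho ** 2)))
            ll += (-math.log(sigY * math.sqrt(2 * math.pi))
                   - (z - muY) ** 2 / (2 * sigY ** 2)
                   + math.log(1.0 - Phi(a_l)))
    for y in shoulder:  # shoulder contribution: marginal density of Y
        ll += (-math.log(sigY * math.sqrt(2 * math.pi))
               - (y - muY) ** 2 / (2 * sigY ** 2))
    return ll
```

Because the shoulder terms depend only on µY and σY, they pin down the Y marginal directly, which is exactly why the shoulder group resolves the near non-identifiability of µY and ρ.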
The value pp is one of 0.5, 0.6, 0.7, 0.8, 0.9 and 1. The case pp = 1 corresponds to the SPLD experiment in which all specimens are allocated to the SPLD group. Changing the value of pp allows us to investigate how the specimen allocation between the shoulder group and the SPLD group affects the accuracy and precision of the maximum likelihood estimate.

The SPLD sample of size 300 × pp is generated from a bivariate normal distribution with parameters µX = 6.48, σX = 1.85, µY = 1.43, σY = 0.4 and ρ equal to either 0.2 or 0.8. The specimens are proof loaded in the X mode. The proof load level l is specified such that 50% of specimens fail below the load level. The shoulder group of size 300 − 300 × pp is generated from the univariate normal distribution with µY = 1.43 and σY = 0.4. We calculate the MLE value that maximizes the joint likelihood function L_{PFY}(θ; s, y) given the observations simulated for the shoulder group and the SPLD group. This procedure is repeated 10^4 times. Table 3.3 reports the averages and the standard deviations of the MLE values across the 10^4 data sets.

The above simulation study is conducted again, but with a larger sample size N = 600. The results are summarized in a similar fashion to Table 3.3, and are reported in Table 3.4. The maximum likelihood estimates of µX and σX perform in about the same way for both approaches: the SPLD approach and the SPLD with a shoulder approach. For the parameters µY and ρ, the maximum likelihood estimate based on the SPLD with a shoulder approach dramatically outperforms that based on the SPLD approach (pp = 1) in terms of accuracy and precision, especially when ρ is small.

Regarding the optimal allocation between the SPLD group and the shoulder group, the finite sample performance of the MLE of ρ deteriorates as the number of shoulder group observations of Y declines.
If we would like to choose an optimal group allocation based on the estimation performance for ρ, Tables 3.3 and 3.4 suggest that assigning about 50% of the specimens to the SPLD group and the rest to the shoulder group is reasonable for both the low and high correlation cases. Moreover, the estimates of ρ become essentially unbiased even for the smaller sample size N = 300. But unless N is greater than 600, the coefficient of variation of the ρ estimate is large when ρ is small. And of course in practice ρ is unknown, suggesting that an N of at least 600 is needed to guarantee reasonable estimator accuracy; at least enough to learn whether ρ is large enough to make the X−Y dependence sufficiently strong for practical purposes.

Table 3.3: Finite sample performance of the MLE that maximizes the joint likelihood function LPFY(θ; s, y) with N = 300. The value pp indicates the proportion of specimens assigned to the SPLD group. When pp = 1, all specimens are in the SPLD group. The simulation setting is described in the text. The MLE performance declines as the size of the shoulder group declines.

                         pp
                 0.5   0.6   0.7   0.8   0.9   1
µX = 6.48  EST  6.48  6.48  6.48  6.48  6.48  6.48
           SE   0.19  0.17  0.16  0.15  0.14  0.13
σX = 1.85  EST  1.84  1.84  1.84  1.84  1.84  1.84
           SE   0.17  0.15  0.14  0.13  0.12  0.12
µY = 1.43  EST  1.43  1.43  1.43  1.43  1.43  1.47
           SE   0.03  0.04  0.04  0.05  0.07  0.26
σY = 0.4   EST  0.40  0.40  0.40  0.40  0.40  0.47
           SE   0.02  0.02  0.02  0.02  0.03  0.06
ρ = 0.2    EST  0.20  0.19  0.19  0.19  0.18  0.07
           SE   0.17  0.17  0.17  0.19  0.24  0.64

µX = 6.48  EST  6.48  6.48  6.48  6.48  6.48  6.48
           SE   0.18  0.17  0.16  0.15  0.14  0.13
σX = 1.85  EST  1.84  1.84  1.84  1.84  1.84  1.84
           SE   0.16  0.15  0.14  0.13  0.12  0.12
µY = 1.43  EST  1.43  1.43  1.43  1.43  1.43  1.47
           SE   0.03  0.03  0.03  0.04  0.05  0.14
σY = 0.4   EST  0.40  0.40  0.40  0.40  0.40  0.39
           SE   0.02  0.02  0.02  0.03  0.03  0.05
ρ = 0.8    EST  0.80  0.79  0.79  0.79  0.78  0.64
           SE   0.08  0.08  0.08  0.08  0.11  0.41
Table 3.4: Finite sample performance of the MLE that maximizes the joint likelihood function LPFY(θ; s, y) with N = 600. The value pp indicates the proportion of specimens assigned to the SPLD group. When pp = 1, all specimens are in the SPLD group. The simulation setting is described in the text. The MLE performance declines as the size of the shoulder group declines.

                         pp
                 0.5   0.6   0.7   0.8   0.9   1
µX = 6.48  EST  6.48  6.48  6.48  6.48  6.48  6.48
           SE   0.13  0.12  0.11  0.11  0.10  0.09
σX = 1.85  EST  1.84  1.84  1.85  1.85  1.85  1.85
           SE   0.12  0.11  0.10  0.09  0.09  0.08
µY = 1.43  EST  1.43  1.43  1.43  1.43  1.43  1.49
           SE   0.02  0.03  0.03  0.04  0.05  0.24
σY = 0.4   EST  0.40  0.40  0.40  0.40  0.40  0.46
           SE   0.01  0.01  0.01  0.02  0.02  0.04
ρ = 0.2    EST  0.20  0.20  0.20  0.19  0.19  0.01
           SE   0.12  0.12  0.12  0.13  0.17  0.62

µX = 6.48  EST  6.48  6.48  6.48  6.48  6.48  6.48
           SE   0.13  0.12  0.11  0.10  0.10  0.09
σX = 1.85  EST  1.84  1.84  1.84  1.85  1.85  1.85
           SE   0.12  0.11  0.10  0.09  0.09  0.08
µY = 1.43  EST  1.43  1.43  1.43  1.43  1.43  1.45
           SE   0.02  0.02  0.02  0.03  0.04  0.09
σY = 0.4   EST  0.40  0.40  0.40  0.40  0.40  0.39
           SE   0.01  0.02  0.02  0.02  0.02  0.04
ρ = 0.8    EST  0.80  0.80  0.80  0.79  0.79  0.72
           SE   0.06  0.05  0.05  0.06  0.07  0.25

Optimal proof load level

The above simulation studies specify the proof load level of the SPLD group such that 50% of specimens are expected to fail below the load level. The proof load level could affect the finite sample performance of the MLE. An extreme case is that all specimens fail below the proof load level. The resulting data under this extreme case would carry no information about the correlation parameter ρ. Simulation studies are conducted to investigate how the MLE performance depends on the proof load level specification.

The total number of specimens is N = 600. We assign 300 (600 × 0.50) of the specimens to the SPLD group, and 300 (600 − 600 × 0.50) to the shoulder group. This allocation follows the previous simulation studies of optimal allocation.
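Since X is normal in these simulations, the proof load level that fails a target fraction p of specimens is simply the p-quantile of the X distribution. A minimal sketch, using the simulation parameters above (the function name is illustrative):

```python
from statistics import NormalDist

def proof_load_level(p, muX=6.48, sigX=1.85):
    """Proof load level l such that a fraction p of specimens is expected
    to fail below l when X ~ N(muX, sigX^2): the p-quantile of X."""
    return NormalDist(mu=muX, sigma=sigX).inv_cdf(p)

# Levels for the failure percentages used in the simulation study below.
levels = {p: proof_load_level(p) for p in (0.2, 0.4, 0.5, 0.6, 0.8)}
```

For example, a 50% target failure fraction puts the proof load level at the mean µX = 6.48, matching the setting of the allocation study above.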
The SPLD sample of size 300 is generated from a bivariate normal distribution. The parameters of the bivariate normal distribution are µX = 6.48, σX = 1.85, µY = 1.43, σY = 0.4, and ρ is either 0.2 or 0.8. The specimens are proof loaded in the X mode. The proof load level is specified such that 20%, 40%, 50%, 60% or 80% of specimens are expected to fail below the proof load level. Changing the proof load level allows us to investigate how the specification of the proof load level affects the accuracy and precision of the maximum likelihood estimate. The shoulder group of size 300 is generated from the univariate normal distribution with µY = 1.43 and σY = 0.4.

We calculate the MLE value based on the joint likelihood function LPFY(θ; s, y) given the observations simulated for the shoulder group and the SPLD group. This procedure is repeated 10^4 times. Table 3.5 reports the average and the standard deviation of the MLE values across the 10^4 data sets for each proof load level. We would like to find the proof load level at which the accuracy and precision of the maximum likelihood estimate of ρ are best. The simulation results presented in Table 3.5 suggest that specifying the proof load level such that about 80% of the specimens fail below the load level is optimal when ρ = 0.2. Notice that when ρ = 0.2 and the percentage of specimens failing below the proof load level is 80%, the approximate 95% confidence interval for ρ based on Table 3.5 is (0.004, 0.396), which makes ρ = 0 implausible. When ρ = 0.8, specifying the proof load level such that 60% to 80% of the specimens fail below the load level is optimal.

Table 3.5: Finite sample performance of the MLE that maximizes the joint likelihood function LPFY(θ; s, y) with N = 600. The value p indicates the percentage of specimens that fail below the proof load level. The simulation setting is described in the text.
The results suggest a failure percentage in proof loading of about 80%.

                          p
                 20%   40%   50%   60%   80%   90%
µX = 6.48  EST  6.48  6.48  6.48  6.48  6.48  6.48
           SE   0.26  0.15  0.13  0.12  0.11  0.11
σX = 1.85  EST  1.84  1.84  1.84  1.84  1.85  1.85
           SE   0.20  0.13  0.12  0.11  0.09  0.08
µY = 1.43  EST  1.43  1.43  1.43  1.43  1.43  1.43
           SE   0.02  0.02  0.02  0.02  0.02  0.02
σY = 0.4   EST  0.40  0.40  0.40  0.40  0.40  0.40
           SE   0.01  0.01  0.01  0.01  0.01  0.02
ρ = 0.2    EST  0.18  0.19  0.20  0.20  0.20  0.20
           SE   0.23  0.14  0.12  0.11  0.10  0.11

µX = 6.48  EST  6.48  6.48  6.48  6.48  6.48  6.48
           SE   0.25  0.15  0.13  0.12  0.11  0.11
σX = 1.85  EST  1.84  1.84  1.84  1.84  1.85  1.85
           SE   0.20  0.13  0.12  0.11  0.09  0.08
µY = 1.43  EST  1.43  1.43  1.43  1.43  1.43  1.43
           SE   0.02  0.02  0.02  0.02  0.02  0.02
σY = 0.4   EST  0.40  0.40  0.40  0.40  0.40  0.40
           SE   0.01  0.01  0.01  0.01  0.01  0.01
ρ = 0.8    EST  0.79  0.80  0.80  0.80  0.80  0.80
           SE   0.09  0.06  0.06  0.05  0.05  0.06

To conclude, the simulation studies presented in this section suggest not assigning all specimens to the SPLD group, which is the SPLD approach. Instead, for a more precise and more accurate maximum likelihood estimate of ρ, assigning 50% of the specimens to the SPLD group and the rest to the shoulder group of Y serves as a better alternative. Furthermore, the optimal proof load level for the SPLD group is a load level that is expected to break about 80% of the specimens. This percentage could be reduced when prior information suggests that ρ is reasonably large.

3.4 Discussion: other application

In addition to determining the relationship between destructive strength properties, the SPLD with a shoulder approach could also contribute to the estimation of duration–of–load effects for forest products.

The duration–of–load effect refers to the phenomenon where a wood product yields a higher strength when loaded quickly. For practical reasons, the forest products industry establishes the short–term strength of a population of wood products, and then reduces it to account for the duration–of–load effect.
Linking the short–term strength and the long–term strength (the duration–of–load adjustment) has long been one of the most important factors in engineering design for wood products. Here is what the SPLD with a shoulder approach could offer for estimating the duration–of–load effect. Specimens are assigned to one of two groups: a proof loaded via short–term test group and a long–term loading test group. In the proof loaded via short–term test group, specimens are loaded under a short–term test up to a pre–determined load level. If a specimen fails, we observe its short–term strength. Otherwise, the survivors are loaded under long–term testing until failure. In the long–term loading test group, specimens are loaded under long–term testing until failure. The observations collected from these two groups would allow us to determine the relationship between the short–term strength and the long–term strength, and thus suggest an appropriate duration–of–load adjustment.

3.5 Summary and conclusions

This chapter presents how to estimate the correlation between two normally distributed random variables X and Y that cannot be observed simultaneously. The single proof load design (SPLD) approach relies on the random variable Z = X I{X < l} + Y I{X > l} to determine the relationship between X and Y. To address the instability of the maximum likelihood estimate in the SPLD approach, we start by working with the penalized SPLD approach, which adds a penalty function to the likelihood function of the SPLD approach. The parameter is then estimated by the penalized maximum likelihood method. The penalized maximum likelihood estimator (PMLE) maximizes the penalized likelihood function.
The penalty function is motivated by an empirical Bayesian power prior, which regularizes the parameters such that the PMLE achieves desirable finite sample performance. However, simulation studies suggest that the finite sample performance of the PMLE relies on the pseudo–observations introduced at the empirical Bayes step. The pseudo–observations, working together with the likelihood observations, provide accurate information about the parameter µY. Closer inspection of the PMLE performance suggests that the finite sample performance of the maximum likelihood estimator (MLE) in the SPLD approach is much improved by redesigning the SPLD and adding a complementary shoulder sample of Y. The shoulder sample of Y strengthens the sample–to–estimator signal, and solves the near non–identifiability problem for the parameters µY and ρ, especially when ρ is small.

The SPLD with a shoulder experiment assigns specimens to one of two groups: the SPLD group, where specimens are proof loaded in the X mode, and the shoulder group of Y. The result is an accurate maximum likelihood estimate with greater precision. The optimal allocation and the optimal proof load level are determined based on the finite sample performance of the MLE of the parameter ρ. The optimal allocation assigns about 50% of the specimens to the SPLD group and the rest to the shoulder group of Y. The optimal proof load level for the SPLD group is expected to break 60% to 80% of specimens during the proof loading procedure. This percentage could be reduced when prior information suggests that ρ is reasonably large.

Chapter 4

Estimating the damage caused by proof loading lumber products

Chapter 3 presents the single proof load design (SPLD) with a shoulder approach to assess dependence between two random variables that cannot be observed simultaneously. One particular application is that of assessing the dependence between lumber destructive strength properties.
In a SPLD experiment, we proof load a specimen in one of these modes, denoted by X, up to a pre–determined load level l, and test the survivors to failure in the second mode Y. In Chapter 3, we assumed that no damage occurred to survivors or, equivalently, that we are able to observe [Y | X > l] on the survivors. The no–damage assumption may well be unrealistic in certain situations, where proof loading could damage the survivors, especially when the proof load level l is high. Our attention thus turns to assessing the severity of the damage accumulated in the strength of the proof load survivors during the proof loading procedure. More specifically, we may observe [Y* | X > l] instead of [Y | X > l] on the survivors, where Y* is the residual strength in the Y mode after proof loading in the X mode up to the load level l. It may be more realistic to take the damage effect into consideration when applying the SPLD with a shoulder approach for assessing the dependence between destructive strength properties.

In this chapter we investigate the damage accumulated in the strength of the proof load survivors, expressed as the relationship between the strength and the residual strength after proof loading. Three possible approaches to modelling the relationship between the strength and the residual strength are presented in the chapter. The first approach reviews theoretical models developed by wood scientists for assessing the duration–of–load effect (Section 4.2). Their damage accumulation models are based on years of accumulated experience and are usually characterized by differential equations. Analytical derivations from the accumulation models yield relationships between the strength and the residual strength. A more empirical approach is seen in Section 4.3. We introduce a SPLD experiment with a low proof load level. The low proof load level ensures that the damage accumulated in the strength of the survivors is negligible.
Using the sample collected from this particular SPLD experiment, we then implement the SPLD with a shoulder approach to estimate the theoretical quantiles of [Y | X > l]. The estimated theoretical quantiles of [Y | X > l] are compared with the empirical quantiles of [Y* | X > l]. The comparison between the estimated theoretical quantiles and the empirical quantiles reveals the damage caused by proof loading. The damage is described as the distributional relationship between [Y | X > l] and [Y* | X > l]. These two approaches to modelling the damage caused by proof loading lead naturally to questions about why we are not able to apply degradation modelling with its established and seemingly obvious approaches. For completeness, this chapter reviews that subject and then explains why the models developed there are not helpful for characterizing the damage accumulation in a piece of lumber under the loading history considered in this chapter (Section 4.1).

4.1 Degradation modelling

Reliability is often assessed by analyzing time–to–failure data. However, products may be expected to operate for thousands of hours without failure under normal operating conditions. Within the practical time constraints of a life testing experiment, we often face a situation where we end up with few or no failures at the end of the experiment. The lack of time–to–failure data has motivated the development of a practical approach called degradation modelling. Such a situation is faced in assessing the so–called duration–of–load (DOL) effect on forest products intended for use in structural engineering (see Barrett (1996) for a comprehensive review). The DOL effect is a phenomenon whereby a product made from wood can resist higher stresses when the loads are imposed for a shorter duration of time. In other words, wood is stronger when loaded quickly.
Under sustained load, lumber products may fail due to a phenomenon called “creep” (or deflection) even when the lumber is subjected only to load levels well below those needed to cause failure in short–term tests. Thus it appears superficially as though the degradation modelling approach might have a role to play here in estimating the DOL effect, leading us to explore that possibility before attempting to establish our own theory.

Degradation models do not observe the time–to–failure directly, but instead involve a surrogate degradation measurement that accurately reflects a specimen's status under operating conditions. The surrogate is monitored and recorded over time. When the level of degradation reaches a pre–determined threshold, the product is considered to have failed, despite the fact that it may still be operating. The degradation measurements are then used to estimate the time–to–failure distribution. An example of degradation data is seen in a monitoring program for train wheels of Brazilian railways (Limnios et al., 2010, Chapter 11). To avoid catastrophic train accidents, railways monitor the train wheels closely. One of the measurements inspected by the railways is the wheel's diameter, which reflects the wheel's performance. The diameter of a new wheel is 966 mm. At each inspection time, the wheel's diameter is measured. Once the wheel's diameter reaches 889 mm, the wheel is considered failed and is replaced by a new wheel. Choosing the pre–determined threshold is a key element of degradation modelling. Methods for choosing the pre–determined threshold are proposed by Tseng and Yu (1997), but the determination of the threshold remains controversial (Singpurwalla, 2006).

Degradation models can be classified into two categories: normal degradation models and accelerated degradation models.
For applications where degradation progresses slowly, accelerated degradation models enable inference about the time–to–failure based on degradation data obtained under accelerated operating conditions. Accelerated degradation models are reviewed in Nelson (2004). Here, we review normal degradation models, which are more closely related to our problem.

Lu and Meeker (1993) present the general degradation path model, a general nonlinear mixed–effects regression model with time as the predictor and the degradation measurement as the response. The dependence between repeated degradation measurements is accounted for by a random effect representing an individual subject's characteristics. In fact, both linear and nonlinear regression models are commonly applied in degradation path modelling (Liao, 2004). Yang and Xue (1996) present a random process degradation model, which fits a normal distribution to the degradation measurements collected for all specimens at each time point, and inspects how the estimated mean and variance change over time. The reliability, the survival function of the time–to–failure, is then a function of the estimated parameters and the pre–determined failure threshold. Since the normality assumption is not always valid, Zuo et al. (1999) propose selecting and fitting an appropriate distribution to represent the degradation measurements collected from the subjects at each time point, and then investigating how the estimated parameters change over time.
Degradation models with random loadings are also found in the literature, for example, the generalized stress–strength inference model presented by Huang and Askin (2004). Other commonly used degradation models include the Gamma degradation process in maintenance optimization, which models the degradation in terms of a time–dependent stochastic process (Noortwijk et al., 2007).

However, we have not been able to find a way of realizing the potential of degradation modelling in laying a foundation for modelling the damage accumulated in lumber strength. First and most importantly, lumber strength properties do not have the surrogate partners that we would need to associate with the time–to–failure under either long–term (creep) or short–term tests. Even in the short–term test, failure is not a simple thing: it can occur due to any one of several failure modes, leading to a complex lower left tail in the strength distribution (Liu, 2012; Kondo et al., 2014). Furthermore, in most contexts, failure is defined to have occurred when the specimen breaks completely. Even if a surrogate measurement were available for lumber products, finding the threshold is a challenging research issue in itself that would leave us well short of an implementable general foundation for assessing damage due to load under short–term or long–term tests.

Thus in the end, we have rejected this approach — it does not seem applicable to assessing damage in the context of lumber products, especially for modelling the DOL effect where it might seem to have a role to play. On the other hand, it is not really needed for short–term strength assessment, where the failure modes differ greatly from those due to creep. In the Summer–of–2011 experiment, most lumber specimens completed the proof loading procedure and then the destructive test within 10 minutes. Collecting the time–to–failure data is not an issue for us.
To sum up, the degradation models that have been developed do not seem helpful for characterizing the damage accumulation in a lumber specimen under the loading history considered in this chapter.

4.2 Damage accumulation models

4.2.1 Introduction

A product made from wood can resist higher stresses when the loads are imposed for a shorter duration of time. As noted above, this phenomenon is known as the DOL effect. Because of the DOL effect, wood strength cannot be defined without reference to its loading history. For practical reasons, the lumber industry only establishes the short–term strength of a population of wood products, and then downscales it to account for the DOL effect. Wood scientists have developed damage accumulation models for the DOL effect. The damage accumulation models link ramp–loading effects (short–term strength) with constant–loading effects (long–term strength) by taking into account the fact that wood strength is a function of the time span over which a load is imposed on the wood.

This section reviews the three most popular wood damage accumulation models: the Madison curve (Wood, 1951; Hendrickson et al., 1987), the U.S. model (Gerhards and Link, 1986; Gerhards, 1979) and the Canadian model (Foschi and Yao, 1986). Gerhards (1979) suggests that the theoretical effect of proof loading on wood strength and lifetime could be predicted from the damage accumulation models. After all, the same properties of the material govern the underlying mechanisms of the proof load effect and the DOL effect. For these reasons and for completeness, we discuss how the residual strength of a specimen after proof loading depends on its strength based on the wood scientists' damage accumulation models.
Exploring the wood scientists' damage accumulation models also provides a good starting point for tackling the damage problem caused by proof loading.

However, the reader is reminded that the loading history considered in the damage accumulation models differs from the one in our experiment. The Summer–of–2011 experiment addressed a more complicated phenomenon, where specimens were proof loaded in one strength mode and then the survivors were tested to failure in another mode. In comparison, the damage accumulation models are intended to address a loading history in a single strength mode.

4.2.2 Damage accumulation models

While the original Madison curve by Wood (1951) represents the strength tested under impact loading, ramp loading and long–term loading by one continuous curve, the U.S. and Canadian models are characterized by differential equations expressed as

    dα(t)/dt = g(α(t), σ(t), ν),

where α(t) is the damage accumulated by time t, g is a function to be specified, ν is a vector of parameters, and σ(t) is the applied load ratio at time t as defined below. The accumulated damage α(t) is a non–decreasing function of t, taking values between 0 and 1. At time 0, α(0) = 0, indicating no damage. At time T, the random breaking time, α(T) = 1, indicating that the wood product fails. The applied load ratio σ(t) is a dimensionless quantity defined as τ(t)/τs, where τ(t) is the applied load at time t and τs is the short–term strength that can be obtained under a ramp load test (details to follow) of short duration. Both the U.S. and Canadian models consider the short–term strength τs to be a random variable that varies from specimen to specimen. The U.S. and Canadian models propose different functional forms for g. The solution, α(t), depends on the functional form of g and on how the load is imposed on the specimen.

We present the solutions α(t) for the U.S. and Canadian models under the ramp load test with load control (Zhai, 2011).
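Damage ODEs of this general form can also be integrated numerically, which is useful when no closed form is available. The sketch below uses the U.S.-model form dα/dt = exp(−a + bσ(t)) under a ramp load σ(t) = kt/τs, with a calibrated so that the damage reaches 1 exactly when the ramp load reaches τs; the parameter values in the test are purely illustrative.

```python
import math

def ramp_damage_curve(b, k, tau_s, n_steps=200_000):
    """Numerically integrate a U.S.-model-style damage ODE,
        d(alpha)/dt = exp(-a + b * sigma(t)),   sigma(t) = k * t / tau_s,
    under a ramp load, with exp(-a) = b*k / (tau_s * (exp(b) - 1)) so that
    the damage reaches 1 exactly when the load reaches the short-term
    strength tau_s (i.e. at T = tau_s / k).  Returns alpha(T) computed by
    the trapezoidal rule."""
    exp_neg_a = b * k / (tau_s * (math.exp(b) - 1.0))
    T = tau_s / k
    dt = T / n_steps
    alpha = 0.0
    for i in range(n_steps):
        t0, t1 = i * dt, (i + 1) * dt
        f0 = exp_neg_a * math.exp(b * k * t0 / tau_s)
        f1 = exp_neg_a * math.exp(b * k * t1 / tau_s)
        alpha += 0.5 * (f0 + f1) * dt  # trapezoid over [t0, t1]
    return alpha
```

The same loop, with a different integrand, handles the Canadian model's threshold form; the closed-form solutions derived in the rest of this section make the numerical route unnecessary here, but it provides an independent check.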
During the ramp load test, the load increases linearly with time until the specimen fails. To carry out the ramp load test, the machine can be set up under load control or under displacement control. If the machine is set up under load control, the rate k > 0 at which the load increases per second is identical for all specimens. However, the load control test is hazardous. For safety reasons, the machine is usually set up under displacement control, where the machine's loading points (at the hydraulic jacks) move at a constant rate measured in inches per minute. Under displacement control, the load still increases linearly with time, but the rate at which the load increases per second depends on the specimen's modulus of elasticity (MOE) and so varies from specimen to specimen. Equivalently, under the ramp load test, σ(t) = kt/τs. The rate k at which the load increases per second is the same for all specimens under load control, and varies across specimens under displacement control. If the rate at which the load increases varies across specimens, we may not be able to track the relationship between the strength and the residual strength analytically by working with the wood scientists' damage models. As a result, we consider the ramp load test under the load control scenario. The reader is reminded that the Summer–of–2011 experiment was conducted under displacement control for safety reasons. The following analytical results may therefore not be directly applicable to our data sets. However, we believe the analysis offers insight and some theoretical support for the use of a linear damage model. Furthermore, the results will be directly applicable in future studies involving the ramp load test under the load control scenario.
The Madison curve

Serving as a basis of the National Design Specification for Wood Construction by the American Wood Council, Wood (1951) proposes the following relationship between strength properties and the duration of load:

    y*(t)/y × 100 = 108.4/t^0.04635 + 18.3,   (4.1)

where t > 0 is the duration of load, or time–to–failure, in seconds, y*(t) is the residual strength at time t, and y*(t)/y is the strength ratio expressed as a percentage of the standard test strength y. The standard test strength is the strength obtained with the ramp load setting such that we expect clear wood to fail at about 7.5 minutes. The Madison curve is estimated as an empirical hyperbolic curve passing through one point representing the strength ratio obtained with a ramp load test with a load duration of 0.015 second, the 100% strength ratio, and a point representing the strength ratio of 69% obtained with a ramp load test with a load duration of 3750 hours. Wood (1951) did not mention the unit of the constant 108.4. Since y*(t)/y and 18.3 in Equation 4.1 are unitless, the unit of 108.4 should be second^0.04635, so that 108.4/t^0.04635 is unitless. Based on how the curve is derived, y*(t) at any given time point and y are constants representing the population residual strength distribution and the population strength distribution, respectively. The strength ratio y*(t)/y in the Madison curve is also a fixed constant at any time point, and does not vary from piece to piece. Based on the Madison curve, Hendrickson et al. (1987) develop a damage accumulation model:

    dα(t)/dt = a(σ(t) − σ0)_+^b,

where a, b and σ0 are model parameters (constants) to be estimated, and (σ(t) − σ0)_+ = max(0, σ(t) − σ0). Taking the load ratio σ(t) = σ as a fixed constant, the damage accumulated by time t is

    α(t) = a(σ − σ0)_+^b t.

At the breaking time tc (a constant),

    α(tc) = a(σ − σ0)_+^b tc = 1.

Thus,

    tc = 1 / (a(σ − σ0)_+^b).
(4.2)

When σ0 = 0.1803, a = 1.516 × 10^4 day^−1 and b = 21.6, Equation 4.2 corresponds to the Madison curve (Equation 4.1). Notice that a, b, σ and σ0 are all fixed constants, so the breaking time tc is a fixed constant as well.

The proof loading procedure increases the load linearly over time to reach the pre–determined load level l. Thus, l = k t0, where t0 is the time in seconds needed to reach the proof load level l. At time t0 = l/k, when the proof loading is completed,

    y*(t0)/y × 100 = 108.4/t0^0.04635 + 18.3.

The residual strength at time t0 is y* = (1.084/t0^0.04635 + 0.183) y. Since (1.084/t0^0.04635 + 0.183) is a known constant, the Madison curve implies a linear, slope–only damage model for the proof load effect.

The U.S. model

The U.S. model (Gerhards and Link, 1986; Gerhards, 1979) is defined as

    dα(t)/dt = exp{−a + bσ(t)},   (4.3)

where a and b > 0 are model parameters (real constants) to be estimated. The model is not time–scale invariant: the unit of dα(t)/dt is the inverse of the time unit, while the exponential in Equation 4.3 cannot take units (Massa et al., 2011). Following the practice adopted in Chapter 2, we will, for expository simplicity, nondimensionalize time in this chapter in the spirit of Zhai (2011). In other words, t will now represent the number of standard time units rather than time itself, meaning that adjustments in the model would be needed were the time units to be changed.

For the ramp load test with σ(t) = kt/τs and k > 0, Equation 4.3 can be written as

    dα(t)/dt = exp{−a + bkt/τs}.   (4.4)

The solution of the differential equation 4.4 for α(t) is

    α(t) = (τs/(bk)) {exp(−a + bkt/τs) − exp(−a)}.   (4.5)

The parameter a depends on the parameter b. Following Foschi and Yao (1986)'s approach of solving Equation 4.5 for the random breaking time T based on the conditions τs = kT and α(T) = 1, we obtain

    α(T) = (τs/(bk)) {exp(−a + bkT/τs) − exp(−a)} = 1.   (4.6)

Equation 4.6 implicitly implies that exp(−a) = bk/(τs[exp(b) − 1]).
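Returning briefly to the Madison curve, Equation 4.1 can be checked numerically against two values stated in the text: the standard ramp test fails clear wood at about 7.5 minutes (so the strength ratio there should be near 100%), and a 3750-hour load duration corresponds to a ratio of about 69%. A minimal sketch, with t in seconds as the text specifies:

```python
def madison_strength_ratio(t_seconds):
    """Strength ratio (percent of the standard test strength) from the
    Madison curve, Equation 4.1, with the load duration t in seconds."""
    return 108.4 / t_seconds**0.04635 + 18.3

ratio_standard = madison_strength_ratio(7.5 * 60)    # ~100% at 7.5 minutes
ratio_long = madison_strength_ratio(3750 * 3600)     # ~69% at 3750 hours
```

Both calibration values are recovered to within a small fraction of a percent, confirming that the units of 108.4 are second^0.04635 as argued above.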
The term bk/(τs[exp(b) − 1]) is random because τs is random, but exp(−a) is not random. The parameter a is often treated as a constant for a given structural member but varies between members. However, this implies that all specimens of the same structural population have the same breaking time if they are subjected to the same loading history. This is not realistic, because the breaking time varies from specimen to specimen even when the specimens are from the same structural population. Treating the parameter a as random seems more plausible, just as Foschi and Yao (1986) treat their model parameters in the Canadian model (more details to follow).

At time t0 = l/k, when proof loading is completed, the accumulated damage is

    α(t0) = (τs/(bk)) {exp(−a + bkt0/τs) − exp(−a)}.   (4.7)

Now we take this proof loaded specimen and test it to failure at the second stage. Let Tf denote the residual time–to–failure at the second stage. Then

    1 − α(t0) = α(Tf).   (4.8)

Substituting Equation 4.5 into Equation 4.8 yields

    1 − (τs/(bk)) {exp(−a + bkt0/τs) − exp(−a)} = (τs/(bk)) {exp(−a + bkTf/τs) − exp(−a)}.   (4.9)

Solving Equation 4.9 for the residual time Tf yields

    Tf = (τs/(bk)) log[exp(b) − exp(bkt0/τs) + 1].   (4.10)

Thus, the residual strength Y* is

    Y* = Tf k = (τs/b) log[exp(b) − exp(bkt0/τs) + 1].   (4.11)

Notice that we use Y* instead of y*, as in the case of the Madison curve, to emphasize that the residual strength in the U.S. model is a random variable. Equation 4.11 provides a deterministic relationship between the residual strength Y* and the strength (i.e., the short–term strength τs). From the analytical form of Equation 4.11, the relationship between the residual strength Y* and the short–term strength τs is not obvious. We approximate the terms log[1 + exp(b) − exp(bkt0/τs)] and exp(bkt0/τs) by their first–order Taylor series expansions at 0. Then

    Y* ≈ (τs/b)[exp(b) − 1] − kt0.
(4.12)

Thus, the residual strength is approximately a linear function of the short–term strength, with slope (exp(b) − 1)/b and intercept −kt0.

The Canadian model

The Canadian model proposed by Foschi and Yao (1986) is

    ∂α(t)/∂t = a{τ(t) − σ0τs}_+^b + c{τ(t) − σ0τs}_+^n α(t),

where a, b, c, n and σ0 are model parameters, τ(t) is the applied load at time t, and {τ(t) − σ0τs}_+ = max{0, τ(t) − σ0τs}. The model parameters in the Madison curve and the U.S. model are constants. The Canadian model, however, considers the model parameters a, b, c, n and σ0 as random effects. The five parameters a, b, c, n and σ0 are constants for a given specimen, but vary randomly between specimens. Generally, the parameters a, b, c, n and σ0 are assumed to be independent log–normal random variables. No damage occurs while τ(t) is smaller than σ0τs; damage starts to accumulate when τ(t) > σ0τs. The Canadian model can be simplified by dropping the second term c{τ(t) − σ0τs}_+^n α(t) (Lam et al., 2003; Zhai, 2011). Equivalently,

    ∂α(t)/∂t ≈ a{τ(t) − σ0τs}_+^b.   (4.13)

Solving Equation 4.13 for α(t) under the ramp load test gives

    α(t) ≈ (a/(k(b + 1))) (kt − σ0τs)_+^(b+1).   (4.14)

The parameter a depends on the other model parameters b and σ0. Since α(T) = 1,

    α(T) = (a/(k(b + 1))) (kT − σ0τs)_+^(b+1) = 1,
    a = k(b + 1)/(τs − τsσ0)_+^(b+1).   (4.15)

At the end of proof loading, the accumulated damage at time t0 is

    α(t0) = (a/(k(b + 1))) (kt0 − σ0τs)_+^(b+1).   (4.16)

The residual damage capacity is 1 − α(t0). Suppose now that, at the second stage, we test the proof loaded specimen to failure. Then

    1 − α(t0) = α(Tf),   (4.17)

where Tf is the time–to–failure of the proof loaded specimen at the second stage. Substituting Equation 4.16 and Equation 4.14 evaluated at Tf into Equation 4.17 yields

    1 − (a/(k(b + 1))) (kt0 − σ0τs)_+^(b+1) = (a/(k(b + 1))) (kTf − σ0τs)_+^(b+1).
(4.18)

Solving Equation 4.18 for T_f yields
\[
T_f = \left\{\left[(\tau_s - \sigma_0\tau_s)_+^{b+1} - (kt_0 - \sigma_0\tau_s)_+^{b+1}\right]^{\frac{1}{b+1}} + \sigma_0\tau_s\right\}\frac{1}{k}.
\]
Thus, the residual strength Y* at time t_0 is T_f k, which is
\[
Y^* = \left[(\tau_s - \sigma_0\tau_s)_+^{b+1} - (kt_0 - \sigma_0\tau_s)_+^{b+1}\right]^{\frac{1}{b+1}} + \sigma_0\tau_s. \tag{4.19}
\]
Equation 4.19 provides a deterministic relationship between the residual strength Y* and the short-term strength τ_s. The analytical form of Y* is a complicated function of τ_s, and the random effects b and σ_0 are not independent of the residual strength Y*. As a result, the relationship between the residual strength Y* and the short-term strength τ_s is not obvious. One approach to identifying the relationship for a particular application is given in Lam et al. (2003). The random effects b, c, σ_0, and τ_s are assumed to be independent log-normal random variables. The parameters of their log-normal distributions are estimated from duration-of-load data. After estimating the parameters of the log-normal distributions and taking them as known, we can generate a sample from each estimated log-normal distribution. We then calculate the residual strength Y* from the generated samples of b, c, σ_0, and τ_s using Equation 4.19. Information about the degree of damage can be found by comparing the sample distribution of Y* with the estimated distribution of the short-term strength τ_s. As mentioned above, the Summer-of-2011 experiment is not designed for the standard DOL models, where load increases linearly against time with the same loading rate for each specimen. Thus we are unable to empirically assess the desired relationship implied by our analysis for the Canadian model.

4.2.3 Summary

We have derived the relationship between the strength and the residual strength based on the Madison curve, the U.S. model and the Canadian model under load control.
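The Monte Carlo comparison just described can be sketched in a few lines. This is an illustrative sketch only: the log-normal parameters below are arbitrary placeholders, not values estimated from duration-of-load data as in Lam et al. (2003), and the simplified model (with the random effect c dropped) is used.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Placeholder log-normal random effects (NOT fitted duration-of-load values).
b      = rng.lognormal(mean=3.0,  sigma=0.2, size=n)   # damage exponent
sigma0 = rng.lognormal(mean=-0.7, sigma=0.1, size=n)   # stress-ratio threshold
tau_s  = rng.lognormal(mean=2.0,  sigma=0.3, size=n)   # short-term strength

k = 1.0                              # ramp loading rate
t0 = 0.6 * np.median(tau_s) / k      # proof load up to ~60% of the median strength

pos = lambda z: np.maximum(z, 0.0)   # {z}_+ = max{0, z}

# Equation 4.19: residual strength of the proof load survivors.
inner = pos(tau_s - sigma0 * tau_s) ** (b + 1) - pos(k * t0 - sigma0 * tau_s) ** (b + 1)
survived = inner > 0                 # specimens failing during proof loading drop out
y_star = inner[survived] ** (1.0 / (b[survived] + 1)) + (sigma0 * tau_s)[survived]

# Compare the residual-strength sample with the short-term strength distribution,
# e.g. through lower quantiles, to gauge the degree of damage.
q05_strength = np.quantile(tau_s[survived], 0.05)
q05_residual = np.quantile(y_star, 0.05)
```

By Equation 4.19 the residual strength never exceeds the short-term strength, so comparing lower quantiles of the two samples isolates the proof load effect.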
The Madison curve suggests a linear relationship between a representative strength of the strength distribution and a representative strength of the residual strength distribution. The U.S. model suggests approximating the deterministic relationship between the strength and the residual strength by a linear model with slope [exp(b) − 1]/b and intercept −kt_0. Due to the random effects, the relationship between the strength and the residual strength is not analytically tractable under the Canadian model. Since the Summer-of-2011 experiment was conducted under displacement control, the relationships derived from the DOL models are not directly applicable to our data sets. But they serve the major purpose of suggesting a theoretical model for damage that will support more empirical work in the sequel.

4.3 Proof load effect: an empirical assessment

4.3.1 Wood literature review on proof load effect

Proof loading has been used to eliminate exceptionally weak manufactured products as a means of controlling quality and reducing consumers' risk. Studies have been done on achieving optimal proof load levels that balance the need to ensure quality against the need to reduce manufacturing costs (Shi et al., 2015). One common concern in such studies is the degree of damage that may result from proof loading, a subtle issue involved in optimizing the proof loading procedure.

Strickler et al. (1969) describe two experiments in proof loading structural end-jointed lumber in the MOR mode. The first experiment investigates whether significant damage accumulates in the joints. The second investigates whether proof loading in the MOR mode eliminates the specimens with the weakest tensile strength. The damage concern is addressed in the first experiment. Comparing the MOR averages of the proof loaded groups with that of the control group, whose specimens are tested to failure in the MOR mode without proof loading, reveals no significant difference between the group averages (the significance level is not given for the hypothesis test). The paper finds no reduction in lumber strength due to proof loading with a load up to 70 percent of the mean of the MOR distribution.

Madsen (1976) describes an experiment conducted with more than 1000 specimens of size 2 × 6, 12 feet long, from the "#2 and better" grade of the Hem-Fir species group. The specimens are randomly divided into six groups: two proof loaded groups that are tested to failure in the UTS mode in the same direction as the applied proof load (first category); two proof loaded groups that are tested to failure in the UTS mode in the opposite direction to the applied proof load (second category); one control group in which specimens' strong sides are tested to failure in the UTS mode; and one control group in which specimens' weak sides are tested to failure in the UTS mode. The proof load stress was taken to be 4000 psi. The percentages of specimens that failed below the proof load level are between 5% and 10%, depending on the proof loaded group. The paper visually compares the empirical cumulative distributions of the proof loaded group and the control group for each category. For the first category, a few of the surviving specimens failed below the proof load level during the follow-up test, indicating damage. However, the empirical cumulative distribution of the proof loaded group generally followed closely that of the control group, and no apparent damage to strength was observed as a result of proof loading. For the second category, many specimens failed below the proof load level.
Comparing the distributions of the proof loaded group and the control group indicates that some damage may have occurred during the proof loading procedure for the second category. However, the differences observed in the distributions could also be due to the test orientation, or to a mixed effect of damage and orientation.

Marin and Woeste (1981) describe a reverse proof loading experiment with two hundred specimens in the control group and two hundred specimens in the proof loaded group, where specimens were proof loaded in the MOR mode. The proof load level is the 5th percentile of the MOR distribution. The authors fitted Weibull distributions to the MOR measurements of the proof loaded group and of the control group, and compared the two fitted distributions. The proof loaded group has a right strength tail that resembles that of the control group, but a much higher 5th percentile. Only one proof loaded specimen failed below the proof load level. Any damage that did not result in failure below the proof load level is not detected.

Woeste et al. (1986) present the results of an investigation of various proof loading procedures for assessing lumber strength. The specimens were assigned to six groups: three bending groups and three tension groups. Specimens in the first bending group, the bending control group, were tested to failure in the MOR mode using random edge placement. Specimens in the second bending group, a single proof load sample, were proof loaded in the MOR mode using random edge placement; the survivors were then tested to failure in the MOR mode using the opposite edge. Specimens in the third bending group, a reverse proof load sample, were proof loaded in the MOR mode using both edges; the survivors of the reverse proof load bending group were tested to failure in the MOR mode.
The three tension groups were designed in a similar way to the bending groups, but in the UTS mode. The percentages of specimens that failed during the proof loading procedures ranged from 7.1% to 11.1%. All survivors subjected to reverse bending proof loading failed above the proof load level. Seven out of 197 specimens subjected to a single bending proof load and loaded to failure on the opposite edge failed below the proof load level. This does not necessarily imply that the specimens were damaged by proof loading; it could be because the bending strength for one edge is not identical to the bending strength for the opposite orientation. The authors conclude that there was no apparent damage due to proof loading in either the MOR or the UTS mode.

Katzengruber et al. (2006) present the results of an experiment with 4886 specimens proof loaded in the UTS mode to assure the quality of finger-jointed structural timber. The specimens were proof loaded to a first proof load level. Then, after a short relief period, the specimens were proof loaded again to a level slightly higher than the first. In total, 65 specimens failed during the double stress proof loading procedure: 37 failed during the first proof loading step and 28 failed at the second. Only two of the 28 specimens that failed at the second proof load step failed at a level below the first proof load level. The authors concluded that "within a double proof load procedure, 99.96% of all specimens could sustain higher stresses than at first time, indicating not being damaged" (1 − 2/4886 ≈ 99.96%).

4.3.2 Untangling the effects of dependence and damage

In this section, we consider an empirical approach to assessing the damage accumulated in the strength of the proof load survivors.
Here we consider a randomly selected lumber specimen i with two strength properties X_i and Y_i, realizations of a random pair X and Y, only one of which can be measured. In our application, the random pair will be the log-transformed UTS and the MOR, whose relationship is of considerable practical importance. Chapter 3 showed how the dependence between X and Y can be estimated in the absence of damage when the two random variables have a joint normal distribution. The ith specimen is first proof loaded in the X mode up to a pre-determined load level l. If X_i ≤ l, the specimen fails and X_i = x_i is observed. But if X_i > l, the specimen is a "survivor" and X_i is not observed. The survivor is then tested to failure in the Y mode. If the specimen does not fail during the proof loading procedure and the proof loading procedure does not damage the specimen, Y_i = y_i would then be observed. No damage in any survivor would put us back into the situation addressed in Chapter 3, which would be approximately the case if the load l were set at a relatively low level. On the other hand, damage may occur if l were set at a moderate to high level. In that case we observe Y*_i = y*_i instead of Y_i = y_i, where Y*_i is the ith specimen's residual strength in the Y mode after proof loading in the X mode.

The difficult problem faced in this section is assessing the degree of damage accumulated in the strength of the survivors due to proof loading. It would not be difficult if we had also observed the {Y_i} for each survivor, in which case a "reference set" of Y-values is readily assembled for comparison with the Y*-values. This leads us naturally to the problem of determining the appropriate reference set, and thus to the solution of our problem.
The damage assessment methods reviewed in Section 4.3.1 are not applicable to the SPLD situation considered in Chapter 3, as they would suggest comparing the distribution of Y* conditional on X > l with the marginal distribution of Y. This suggestion would not work in general; in fact, the appropriate reference set depends on the unknown correlation ρ in the bivariate normal distribution. The suggestion does work in the case ρ = 0: the size of Y would then bear no relation to the size of X, and in particular the unobserved Y_i corresponding to the Y*_i would just be a random sample from the marginal Y-distribution, as suggested above. At the other extreme, when ρ = 1 (approximately), then apart from a linear transformation, X and Y would be about equal. In other words, the unobserved Y_i of the survivors would be like the unobserved X_i, which are the largest values in the sample of specimens. If, for example, 40% of the specimens survived, we could regard these Y's (the correct reference set) as having been sampled from the truncated Y-distribution consisting only of values greater than the 60th percentile. The correct reference set at this extreme, when ρ = 1, would be the largest 40% of the values in a second shoulder group of Y-values, collected by testing a sizeable independent sample of specimens to failure in the Y mode. If no damage accumulated in the survivors, an empirical Q–Q plot of Y* against Y would lie close to the 45-degree line.

The question is what can be done in the more realistic case where ρ is unknown and 0 < ρ < 1. The answer is to first estimate ρ and then, as an approximation, treat it as known. We do this by running a second SPLD experiment with a proof load level set so low that little or no damage will occur in the survivors. Using the theory developed in Chapter 3, we may then estimate the parameters of the bivariate normal distribution of X and Y, in particular the correlation parameter ρ.
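The dependence of the correct reference set on ρ is easy to confirm numerically. The sketch below (with arbitrary illustrative parameter values) shows that the survivors' unobserved Y values match the marginal Y distribution when ρ = 0, but are shifted upward when ρ is large:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
mu_x, sd_x, mu_y, sd_y = 6.5, 1.8, 1.4, 0.4   # illustrative values only
l = mu_x                                       # proof load at the X median: ~half survive

def survivor_y_mean(rho):
    """Mean of the survivors' (unobserved) Y values for a given correlation rho."""
    x = rng.normal(mu_x, sd_x, n)
    y = (mu_y + rho * sd_y * (x - mu_x) / sd_x
         + sd_y * np.sqrt(1.0 - rho**2) * rng.normal(size=n))
    return y[x > l].mean()

m0 = survivor_y_mean(0.0)   # matches the marginal mean of Y
m8 = survivor_y_mean(0.8)   # shifted upward: marginal Y is the wrong reference set
```

With ρ = 0 the survivors' Y sample is indistinguishable from the marginal Y distribution, so the naive comparison is valid only in that special case.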
Once these parameters are estimated, we treat them as known. The known parameters in the bivariate normal distribution enable us to find the estimated conditional distribution of [Y|X > l] for any proof load level l, even when l is large enough that survivors could be damaged. To assess the damage, we can compare the estimated theoretical quantiles of [Y|X > l] with the empirical quantiles of [Y*|X > l]. The difference between the estimated theoretical quantile of [Y|X > l] and the empirical quantile of [Y*|X > l] provides information about the damage, if any has been sustained. In particular, if the estimated theoretical quantile is similar to the empirical quantile, then no damage has occurred as a result of proof loading. The most important aspect of this new approach is that it moves away from thinking about changes in individual specimens to thinking about changes in the subpopulation of all specimens that are stressed by a proof loading procedure. The approach considered in this section also has other potential applications. For example, it could be applied to relate strength properties measured by short-term and long-term tests, as in predicting the duration-of-load effect in a given population of lumber. The details now follow.

Suppose that X and Y have a bivariate normal distribution,
\[
\begin{pmatrix} X \\ Y \end{pmatrix} \sim N_2\!\left( \begin{bmatrix} \mu_X \\ \mu_Y \end{bmatrix}, \begin{bmatrix} \sigma_X^2 & \rho\sigma_X\sigma_Y \\ \rho\sigma_X\sigma_Y & \sigma_Y^2 \end{bmatrix} \right).
\]
Given Y = y, the conditional random variable [X|Y] follows a normal distribution with mean μ_X + ρσ_X(y − μ_Y)/σ_Y and variance (1 − ρ²)σ_X². The density function of the conditional variable [X|Y] is
\[
f_{[X|Y]}(x \mid y) = \frac{1}{\sqrt{2\pi}\sqrt{1-\rho^2}\,\sigma_X} \exp\left\{ -\frac{\left(x - \mu_X - \sigma_X\rho(y - \mu_Y)/\sigma_Y\right)^2}{2\sigma_X^2(1-\rho^2)} \right\}.
\]
In making the calculation in Equation 4.20 below, we use this property of the conditional normal distribution of a bivariate normal random variable.
The density function of [Y|X > l], with θ = (μ_X, σ_X, μ_Y, σ_Y, ρ)ᵀ, is
\[
\begin{aligned}
f_{[Y|X>l]}(y;\theta) &= \frac{f_{Y,X>l}(y, X>l;\theta)}{P(X>l;\mu_X,\sigma_X)} \\
&= \frac{1}{P(X>l;\mu_X,\sigma_X)} \int_l^\infty f_{Y,X}(y,x;\theta)\,dx \\
&= \frac{1}{P(X>l;\mu_X,\sigma_X)} \int_l^\infty f_Y(y;\mu_Y,\sigma_Y)\, f_{[X|Y]}(x \mid y;\theta)\,dx \\
&= \frac{1}{1-\Phi([l-\mu_X]/\sigma_X)} \cdot \frac{\exp\{-(y-\mu_Y)^2/(2\sigma_Y^2)\}}{\sigma_Y\sqrt{2\pi}} \int_l^\infty \frac{\exp\left\{-\frac{(x-\mu_X-\sigma_X\rho(y-\mu_Y)/\sigma_Y)^2}{2\sigma_X^2(1-\rho^2)}\right\}}{\sqrt{2\pi}\sqrt{1-\rho^2}\,\sigma_X}\,dx \\
&= \frac{\exp\{-(y-\mu_Y)^2/(2\sigma_Y^2)\}}{\sigma_Y\sqrt{2\pi}\,\left(1-\Phi([l-\mu_X]/\sigma_X)\right)} \left[1 - \Phi\!\left(\frac{l-\mu_X-\rho\sigma_X(y-\mu_Y)/\sigma_Y}{\sigma_X\sqrt{1-\rho^2}}\right)\right],
\end{aligned} \tag{4.20}
\]
where f_X(·) is the density function of X, f_{Y,X}(·) is the joint density function of X and Y, and f_Y(·) is the density function of Y.

The distribution function of [Y|X > l] is
\[
\begin{aligned}
F_{[Y|X>l]}(y;\theta) &= \int_{-\infty}^{y} f_{[Y|X>l]}(u;\theta)\,du \\
&= \int_{-\infty}^{y} \frac{\exp\{-(u-\mu_Y)^2/(2\sigma_Y^2)\}}{\sigma_Y\sqrt{2\pi}\,\left(1-\Phi([l-\mu_X]/\sigma_X)\right)} \left[1 - \Phi\!\left(\frac{l-\mu_X-\rho\sigma_X(u-\mu_Y)/\sigma_Y}{\sigma_X\sqrt{1-\rho^2}}\right)\right] du.
\end{aligned} \tag{4.21}
\]
The theoretical quantile function of [Y|X > l] is the inverse of F_{[Y|X>l]}(y; θ), denoted Q_{[Y|X>l]}(p; θ), where p is a constant with 0 < p < 1. We solve for the y such that F_{[Y|X>l]}(y; θ) = p by solving F_{[Y|X>l]}(y; θ) − p = 0 with the Newton-Raphson method. The quantile function evaluated at that p is then Q_{[Y|X>l]}(p; θ) = y.

Suppose we conduct a SPLD experiment that loads the specimens in the X mode up to a given proof load level and then tests the survivors to failure in the Y mode. The proof load level is low enough to ensure that the survivors suffer little or no damage; for example, the 5th percentile of the strength distribution is generally considered low enough to ensure insignificant damage to the strength of the proof load survivors. The SPLD with a shoulder approach of Chapter 3 then allows us to obtain the maximum likelihood estimate (MLE) of θ, denoted θ̂_MLE. For any proof load level l, the approximated quantile function is Q_{[Y|X>l]}(p; θ̂_MLE).

Suppose that a second SPLD experiment is conducted. The experiment proof loads the specimens in the X mode up to a proof load level l_2. The survivors are tested to failure in the Y mode.
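The quantile computation of Equations 4.20 and 4.21 can be sketched as follows, with illustrative parameter values; scipy's `brentq` root finder stands in for the Newton-Raphson step described above, and the integral in Equation 4.21 is evaluated numerically.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad
from scipy.optimize import brentq

def f_cond(y, l, mu_x, sd_x, mu_y, sd_y, rho):
    """Density of [Y | X > l], Equation 4.20."""
    tail = 1.0 - norm.cdf((l - mu_x) / sd_x)
    z = (l - mu_x - rho * sd_x * (y - mu_y) / sd_y) / (sd_x * np.sqrt(1.0 - rho**2))
    return norm.pdf(y, mu_y, sd_y) * (1.0 - norm.cdf(z)) / tail

def F_cond(y, *pars):
    """Distribution function of [Y | X > l], Equation 4.21, by numerical integration."""
    lo = pars[3] - 10.0 * pars[4]            # mu_Y - 10 sigma_Y: effectively -infinity
    return quad(f_cond, lo, y, args=pars)[0]

def Q_cond(p, *pars):
    """Quantile function Q(p): root of F(y) - p = 0 (brentq replaces Newton-Raphson)."""
    lo, hi = pars[3] - 10.0 * pars[4], pars[3] + 10.0 * pars[4]
    return brentq(lambda y: F_cond(y, *pars) - p, lo, hi)

# Illustrative values (l, mu_X, sigma_X, mu_Y, sigma_Y, rho);
# l = mu_X means roughly half the specimens survive proof loading.
pars = (6.48, 6.48, 1.85, 1.40, 0.4, 0.2)
q_median = Q_cond(0.5, *pars)
```

A convenient correctness check: with ρ = 0, [Y|X > l] reduces to the marginal Y distribution, so Q(0.5) returns μ_Y.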
We are concerned about the damage accumulated in the strength of the proof load survivors. The survivor observations are denoted {y*_i : i = 1, 2, ..., n_y*}. The empirical distribution p_i of each y*_i, with a continuity correction, is
\[
p_i = \frac{\operatorname{sum}\left(I\{y^*_j \le y^*_i : j = 1, 2, \ldots, n_{y^*}\}\right) - 0.5}{n_{y^*}}.
\]
The estimated theoretical quantile of [Y|X > l_2] is Q_{[Y|X>l_2]}(p_i; θ̂_MLE) for i = 1, 2, ..., n_y*. We plot the empirical quantile y*_i against the estimated theoretical quantile Q_{[Y|X>l_2]}(p_i; θ̂_MLE) for i = 1, 2, ..., n_y* (a Q–Q plot). If the proof loading procedure does not damage the survivors, the empirical quantiles of [Y*|X > l_2] should be similar to the estimated theoretical quantiles of [Y|X > l_2], and the Q–Q line is expected to stay close to the 45-degree line. Any departure from the 45-degree line is attributable to the damage caused by proof loading, since the simulation studies in Section 3.3 of Chapter 3 indicated that θ̂_MLE is accurate. Simulation studies are conducted to verify that the proposed Q–Q line reveals the damage as expected from the reasoning above, and the results are reported in the next section.

4.3.3 Simulation studies

Simulation studies in Parts I and II treat the parameter θ as known, and simulation studies in Part III treat θ as unknown. In all three parts, we consider a damage model [Y*|X > l_2] = [Y|X > l_2] − α/[Y|X > l_2], where α ≥ 0. This damage model reflects our intuition and preliminary analysis of the Summer-of-2011 experiment data on the damage accumulated in the strength of the proof load survivors: the damage is more severe for weaker specimens and less severe for stronger specimens.

Part I

A SPLD sample of size N is simulated from a bivariate normal distribution with μ_X = 6.48, σ_X = 1.85, μ_Y = 1.40, σ_Y = 0.4, and ρ = 0.2. The specimens are proof loaded in the X mode up to a load level l_2 specified so that 50% of the specimens fail below the load level.
We observe X_i if X_i is smaller than the proof load level l_2, and observe Y*_i = Y_i − α/Y_i if X_i > l_2. The observations on the survivors are {y*_1, y*_2, ..., y*_{n_y*}}, where n_y* = N/2. We calculate the empirical cumulative distribution function of Y* at each y*_i, denoted p_i. The theoretical quantile Q_{[Y|X>l_2]}(p_i; θ) is calculated for each p_i, with θ taken as the true value. We take θ as the true value because the goal of the simulation studies in Part I is to verify whether the proposed Q–Q line of Section 4.3.2 resembles the damage function. Note that in practice, good estimates of θ can be obtained by the SPLD with a shoulder approach presented in Chapter 3. The proposed Q–Q line for revealing the damage information is produced by plotting y*_i against Q_{[Y|X>l_2]}(p_i; θ) for i = 1, 2, ..., n_y*.

Figure 4.1 presents the Q–Q plots for the cases where α = 0 with N = 100, 250, 500, 1000. When α = 0, no damage occurs to the survivors, and so the empirical quantiles are expected to resemble the theoretical quantiles; in other words, the Q–Q line should lie close to the 45-degree line. In fact, each Q–Q line in Figure 4.1 lies close to the 45-degree line (the solid line), which agrees with the theory and confirms that the survivors have not been damaged. Similarly, Figure 4.2 presents the Q–Q plots for the cases when α = 0.1 with N = 100, 250, 500, 1000, and Figure 4.3 presents those for α = 0.5. The solid lines in Figures 4.2 and 4.3 represent the true damage model. The corresponding Q–Q lines resemble the true damage model quite well.

Figure 4.1: Q–Q plot: y*_i against Q_{[Y|X>l_2]}(p_i; θ) for α = 0 and N = 100, 250, 500, or 1000. The solid line is the true damage model Y* = Y. The points lie on the 45-degree line, as they should.
There is no damage. [Four panels: α = 0, N = 100, 250, 500, 1000; empirical quantile against theoretical quantile.]

Figure 4.2: Q–Q plot: y*_i against Q_{[Y|X>l_2]}(p_i; θ) for α = 0.1 and N = 100, 250, 500, or 1000. The solid line is the damage model Y* = Y − 0.1/Y. The points lie close to the true damage model. [Four panels: α = 0.1, N = 100, 250, 500, 1000.]

Figure 4.3: Q–Q plot: y*_i against Q_{[Y|X>l_2]}(p_i; θ) for α = 0.5 and N = 100, 250, 500, or 1000. The solid line is the true damage model Y* = Y − 0.5/Y. More severe damage occurs, and the plots correctly reflect the change in the Y values. [Four panels: α = 0.5, N = 100, 250, 500, 1000.]

Part II

Results in Part II differ from those in Part I because α is estimated, as we see below.

A SPLD sample of size N = 300 is simulated from a bivariate normal distribution with μ_X = 6.48, σ_X = 1.85, μ_Y = 1.40, σ_Y = 0.4, and ρ either 0.2 or 0.8. The specimens are proof loaded in the X mode up to a load level l_2, specified so that 50% of the specimens fail below the load level. We observe X_i if X_i is smaller than the proof load level l_2, and observe Y*_i = Y_i − α/Y_i if X_i > l_2.
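The simulation pipeline just described can be sketched end-to-end. Two simplifications are assumptions of this sketch, not the procedure above: the theoretical quantiles of [Y|X > l_2] are approximated by a large Monte Carlo reference sample rather than by the numerical quantile function, and α is recovered by the closed-form least squares minimizer of Σ(y* − Q + α/Q)².

```python
import numpy as np

rng = np.random.default_rng(3)
mu_x, sd_x, mu_y, sd_y, rho = 6.48, 1.85, 1.40, 0.4, 0.2
alpha_true, N = 0.1, 500
l2 = mu_x                          # 50% of specimens fail below this proof load level

def bvn_sample(n):
    """Draw n pairs from the bivariate normal distribution of (X, Y)."""
    x = rng.normal(mu_x, sd_x, n)
    y = (mu_y + rho * sd_y * (x - mu_x) / sd_x
         + sd_y * np.sqrt(1.0 - rho**2) * rng.normal(size=n))
    return x, y

# SPLD sample: survivors (X > l2) are observed with damage Y* = Y - alpha/Y.
x, y = bvn_sample(N)
y_surv = y[x > l2]
y_star = np.sort(y_surv - alpha_true / y_surv)
n_s = y_star.size
p = (np.arange(1, n_s + 1) - 0.5) / n_s     # empirical CDF with continuity correction

# Theoretical quantiles of [Y | X > l2], via a large Monte Carlo reference sample.
xr, yr = bvn_sample(2_000_000)
q_theo = np.quantile(yr[xr > l2], p)

# Least squares estimate of alpha: minimize sum (y* - Q + alpha/Q)^2 over alpha.
alpha_hat = -np.sum((y_star - q_theo) / q_theo) / np.sum(1.0 / q_theo**2)
```

With alpha_true = 0 the points (q_theo, y_star) hug the 45-degree line and alpha_hat sits near zero; with alpha_true > 0 the lower tail drops below the line and alpha_hat recovers the damage parameter.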
The specification of α is guided by the Summer-of-2011 experiment data. The minimum of the observed log-transformed UTS values is 0.51, from the R40 proof loaded group; it corresponds to the 1st empirical percentile of the log-transformed UTS distribution. Realistic values of α should be specified such that P(Y* > 0.51 | X > l_2) = P(Y − α/Y > 0.51 | X > l_2) is high (say 0.90) when ρ is large and positive. Therefore, the simulation studies specify α to be 0, 0.1, 0.3, 0.5 or 0.7 when ρ = 0.8. When ρ = 0.2, the correlation between X and Y is weak, so proof loading a specimen in the X mode should cause only slight or negligible damage to its strength in the Y mode. Thus, α is specified to be 0, 0.05, 0.10, 0.15 or 0.20 when ρ = 0.2. No damage occurs to the survivors when α = 0.

The simulated observations on the survivors are denoted {y*_1, y*_2, ..., y*_{n_y*}}, where n_y* = N/2 = 150. We calculate the empirical cumulative distribution function of Y* at each y*_i, denoted p_i. The theoretical quantile Q_{[Y|X>l_2]}(p_i; θ) is calculated for each p_i, with θ taken as the true value. The least squares estimate of α is found by minimizing the sum of squared errors,
\[
\sum_{i=1}^{n_{y^*}} \left( y^*_i - Q_{[Y|X>l_2]}(p_i;\theta) + \alpha / Q_{[Y|X>l_2]}(p_i;\theta) \right)^2.
\]
Thus, the least squares estimate of α is
\[
\hat\alpha_{\mathrm{LS}} = -\frac{\sum_{i=1}^{n_{y^*}} \left[ y^*_i - Q_{[Y|X>l_2]}(p_i;\theta) \right] / Q_{[Y|X>l_2]}(p_i;\theta)}{\sum_{i=1}^{n_{y^*}} \left[ 1 / Q_{[Y|X>l_2]}(p_i;\theta) \right]^2}.
\]
The above procedure is repeated 10⁴ times. Table 4.1 reports the average and the standard deviation of the 10⁴ least squares estimates of α. For each simulation case, the average of the least squares estimates stays in a close neighbourhood of the true value. The simulation results suggest that the least squares estimate of α is accurate.

Table 4.1: Known θ: performance of the least squares estimate of α.
The simulation setting is described in the text.

                 α = 0   α = 0.05   α = 0.10   α = 0.15   α = 0.20
ρ = 0.2   Est.    0.00     0.05       0.10       0.15       0.20
          SE      0.04     0.04       0.05       0.05       0.06

                 α = 0   α = 0.10   α = 0.30   α = 0.50   α = 0.70
ρ = 0.8   Est.    0.00     0.10       0.30       0.50       0.70
          SE      0.04     0.04       0.04       0.05       0.05

Part III

The simulation studies in Parts I and II treat the parameter θ as known. In Part III, the parameter θ is treated as unknown. One additional SPLD sample is generated to provide an estimate of θ, using the SPLD with a shoulder approach presented in Chapter 3.

A sample of size 1000 for the SPLD with a shoulder approach is simulated from a bivariate normal distribution with parameters μ_X = 6.48, σ_X = 1.85, μ_Y = 1.43, σ_Y = 0.40, and ρ either 0.2 or 0.8. Following the optimal allocation developed in Chapter 3, we assign 50% of the specimens to the SPLD group and the rest to the shoulder group. The shoulder group, of size 500 (1000 × 0.5), is simulated from the univariate normal distribution with μ_Y = 1.43 and σ_Y = 0.40. For the SPLD group of size 500, specimens are proof loaded in the X mode. The proof load level is specified such that 20% of the specimens fail below the load level. In practice, we would likewise use a low proof load level, so that the damage accumulated in the proof load survivors can safely be assumed negligible. The proof load survivors are tested to failure in the Y mode. We calculate the MLE of θ that maximizes the likelihood given the generated SPLD with a shoulder sample, and denote it θ̂_MLE.

Next, we generate a second SPLD sample from the same bivariate normal distribution. The sample size is 300. The specimens are proof loaded in the X mode up to a load level l_2 specified so that 50% of the specimens fail below the load level. We observe X_i if X_i is smaller than l_2, and observe Y*_i = Y_i − α/Y_i otherwise. The simulation studies specify α to be 0, 0.1, 0.3, 0.5 or 0.7 when ρ = 0.8.
When ρ = 0.2, α is specified to be 0, 0.05, 0.10, 0.15 or 0.20. No damage occurs to the survivors when α = 0. The generated observations on the survivors are denoted {y*_1, y*_2, ..., y*_{n_y*}}, where n_y* = 150 (300/2).

We calculate the empirical cumulative distribution function of [Y*|X > l_2] at each y*_i, denoted p_i. The estimated theoretical quantile Q_{[Y|X>l_2]}(p_i; θ̂_MLE) is calculated for each p_i. Given the generated observations y*_i and the estimated theoretical quantiles Q_{[Y|X>l_2]}(p_i; θ̂_MLE), we estimate α by the least squares method, minimizing the sum of squared errors
\[
\sum_{i=1}^{n_{y^*}} \left( y^*_i - Q_{[Y|X>l_2]}(p_i;\hat\theta_{\mathrm{MLE}}) + \alpha / Q_{[Y|X>l_2]}(p_i;\hat\theta_{\mathrm{MLE}}) \right)^2
\]
with respect to α. Thus, the approximate least squares estimate of α is
\[
\hat{\hat\alpha}_{\mathrm{LS}} = -\frac{\sum_{i=1}^{n_{y^*}} \left[ y^*_i - Q_{[Y|X>l_2]}(p_i;\hat\theta_{\mathrm{MLE}}) \right] / Q_{[Y|X>l_2]}(p_i;\hat\theta_{\mathrm{MLE}})}{\sum_{i=1}^{n_{y^*}} \left[ 1 / Q_{[Y|X>l_2]}(p_i;\hat\theta_{\mathrm{MLE}}) \right]^2}, \tag{4.22}
\]
where the double-hat notation indicates an estimate of an estimate, derived by plugging in θ̂_MLE for the unknown parameter θ.

The above procedure is repeated 10⁴ times. Table 4.2 reports the average and the standard deviation of the 10⁴ approximate least squares estimates of α. The average of the approximate least squares estimates stays in a close neighbourhood of the true value, which indicates that our empirical approach provides an accurate estimate of α. When ρ = 0.2, the coefficient of variation of the α estimate is large. The sample sizes required to detect significant damage when ρ is small could be investigated; however, assessing the proof load damage when X and Y are weakly correlated is not of practical interest. Table 4.2 also reports the average and the standard deviation of the 10⁴ MLEs of θ, demonstrating that the MLE of θ performs reasonably well.

Table 4.2: Unknown θ: performance of the least squares estimate of α.
The simulation setting is described in the text.

ρ = 0.2:
                        α = 0   α = 0.05   α = 0.10   α = 0.15   α = 0.20
α             Est.       0.00     0.05       0.10       0.15       0.20
              SE         0.07     0.08       0.09       0.11       0.13
μ_X = 6.48    Est.       6.48     6.48       6.48       6.48       6.48
              SE         0.20     0.20       0.20       0.20       0.20
σ_X = 1.85    Est.       1.84     1.84       1.84       1.84       1.84
              SE         0.16     0.16       0.16       0.16       0.16
μ_Y = 1.43    Est.       1.43     1.43       1.43       1.43       1.43
              SE         0.02     0.02       0.02       0.02       0.02
σ_Y = 0.4     Est.       0.40     0.40       0.40       0.40       0.40
              SE         0.01     0.01       0.01       0.01       0.01
ρ = 0.2       Est.       0.19     0.19       0.19       0.19       0.19
              SE         0.18     0.18       0.18       0.18       0.18

ρ = 0.8:
                        α = 0   α = 0.1    α = 0.3    α = 0.5    α = 0.7
α             Est.       0.00     0.10       0.30       0.50       0.70
              SE         0.06     0.06       0.07       0.07       0.08
μ_X = 6.48    Est.       6.48     6.48       6.48       6.48       6.48
              SE         0.20     0.20       0.20       0.20       0.20
σ_X = 1.85    Est.       1.84     1.84       1.84       1.84       1.84
              SE         0.15     0.15       0.15       0.15       0.15
μ_Y = 1.43    Est.       1.43     1.43       1.43       1.43       1.43
              SE         0.01     0.01       0.01       0.01       0.01
σ_Y = 0.4     Est.       0.39     0.39       0.39       0.39       0.39
              SE         0.01     0.01       0.01       0.01       0.01
ρ = 0.8       Est.       0.79     0.79       0.79       0.79       0.79
              SE         0.06     0.06       0.06       0.06       0.06

Summary of the simulation results

The simulation studies in Parts I to III verify that the empirical approach presented in Section 4.3.2 provides an accurate estimate of α. When ρ = 0.2, the standard deviation of the least squares estimates of α with θ unknown is about twice that with θ known. However, the differences in precision are not as large when ρ = 0.8.

4.3.4 Illustrative example: the Summer-of-2011 experiment

For this application, we take X to be the MOR, while the role of Y is played by log(UTS), so that the pair of variables has a more nearly bivariate normal distribution. The reader is reminded that the Summer-of-2011 experiment was set up to be symmetric: proof loading was conducted in the X mode in three samples, and in the Y mode in three others. The symmetric design needs to be taken into consideration in applying our theory, since in the latter case the roles of X and Y are interchanged. Among all the proof loaded groups (i.e.
SPLD groups) of the Summer-of-2011 experiment, the T20 group has the lowest observed proof load level, one that corresponds to the 16th empirical percentile of the Y-distribution. The survivors of the proof loaded group T20 are assumed to have no or negligible damage. To find the MLE of θ, we maximize the joint likelihood of θ given the T20, T100, and R100 samples. The groups T100 and R100 are the shoulder samples, in which 100% of the specimens are tested to failure without any proof loading. The total number of observations is 87 + 174 × 2 = 435: the proof loaded group T20 has 87 specimens, and each shoulder group has 174 specimens. Only the T20 sample contains information about the correlation between X and Y. Table 4.3 summarizes the MLEs of the parameters with their standard errors, where X and Y are MOR and log(UTS), respectively. We also let X* and Y* denote the residual MOR after proof loading in the UTS mode and the log-transformed residual UTS after proof loading in the MOR mode, respectively; note the interchanged roles of X and Y in the former case.

Table 4.3: Maximum likelihood estimates of the parameters of the bivariate normal distribution of X and Y, where X is MOR and Y is log(UTS).

          μ_X     σ_X     μ_Y     σ_Y     ρ
Est.      6.51    1.84    1.45    0.39    0.75
SE        0.13    0.10    0.03    0.02    0.20

Once the parameters of the joint distribution of X and Y are estimated, the damage caused by proof loading can be assessed for each proof loaded group (T60/40, R60/40/20). The proof load level of T60 is 1.59 in the Y mode (unitless). The survivors are tested to failure in the X mode. The number of survivor observations is 35, denoted {x*_1, x*_2, ..., x*_35}. The empirical distribution of each x*_i is
\[
p_i = \frac{\operatorname{sum}\left(I\{x^*_j \le x^*_i : j = 1, 2, \ldots, 35\}\right) - 0.5}{35}.
\]
The estimated theoretical quantile is Q_{[X|Y>1.59]}(p_i; θ̂_MLE), where θ̂_MLE = (μ̂_X = 6.51, σ̂_X = 1.84, μ̂_Y = 1.45, σ̂_Y = 0.39, ρ̂ = 0.75)ᵀ.
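For the T60 assessment, the roles of X and Y are interchanged, so the reference quantiles come from [X|Y > 1.59]. A sketch of this computation with the Table 4.3 estimates follows; the density mirrors Equation 4.20 with X and Y swapped, and scipy's bracketing root finder `brentq` stands in for Newton-Raphson.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad
from scipy.optimize import brentq

# Table 4.3 maximum likelihood estimates: X = MOR, Y = log(UTS).
mu_x, sd_x, mu_y, sd_y, rho = 6.51, 1.84, 1.45, 0.39, 0.75
l = 1.59                          # T60 proof load level, in the Y mode

def f_x_given_y_gt_l(x):
    """Density of [X | Y > l]: Equation 4.20 with the roles of X and Y interchanged."""
    tail = 1.0 - norm.cdf((l - mu_y) / sd_y)
    z = (l - mu_y - rho * sd_y * (x - mu_x) / sd_x) / (sd_y * np.sqrt(1.0 - rho**2))
    return norm.pdf(x, mu_x, sd_x) * (1.0 - norm.cdf(z)) / tail

def quantile(p):
    """Solve F(x) = p by bracketing root finding (in place of Newton-Raphson)."""
    lo, hi = mu_x - 10.0 * sd_x, mu_x + 10.0 * sd_x
    return brentq(lambda x: quad(f_x_given_y_gt_l, lo, x)[0] - p, lo, hi)

n = 35                                          # T60 survivors
p_i = (np.arange(1, n + 1) - 0.5) / n           # continuity-corrected plotting positions
q_theo = np.array([quantile(p) for p in p_i])   # reference quantiles for the Q-Q plot
```

Plotting the sorted survivor observations x*_i against q_theo then reproduces the Q–Q comparison; since ρ̂ = 0.75 is positive, conditioning on Y > 1.59 shifts the reference quantiles above the marginal X quantiles.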
Figure 4.4 plots the estimated theoretical quantile Q[X|Y>1.59](pi; θ̂MLE) against the empirical quantile x∗i for i = 1, 2, · · · , 35. Most points lie close to the 45–degree line. Surprisingly, no apparent damage is observed on the T60 survivors even though the proof load level is quite high. We return to this point below in the discussion.

Similarly, Figure 4.5 plots the empirical quantile of [X∗|Y > 1.38] against the estimated theoretical quantile of [X|Y > 1.38] for the proof loaded group T40. The number of observations is 49. The damage on the stronger specimens is not apparent. We observe some damage on the weaker specimens, as the left tail of the T40's Q–Q line is mostly below the 45–degree line. However, the very weakest specimens seem to be even stronger than the undamaged specimens. This result is counter-intuitive. But as the previous simulation studies show, the method does find the true damage. So this result seems likely to be an anomaly due to the small sample size. It is also possible that the failure mode for the very weakest specimens may have been different from the majority.

Figure 4.4: T60's Q–Q plot. The empirical quantile of [X∗|Y > 1.59] is plotted against the estimated theoretical quantile of [X|Y > 1.59]. The solid line is the 45–degree line. No apparent damage is observed.

Figure 4.5: T40's Q–Q plot. The empirical quantile of [X∗|Y > 1.38] is plotted against the estimated theoretical quantile of [X|Y > 1.38]. The solid line is the 45–degree line. No apparent damage in the strength of the strong specimens. Damage occurs on the weaker specimens.

Figure 4.6 plots the empirical quantile of [Y∗|X > 7.09] against the estimated theoretical quantile of [Y|X > 7.09] for the proof loaded group R60.
The number of observations is 33. Damage to the survivors is not apparent.

Figure 4.6: R60's Q–Q plot. The empirical quantile of [Y∗|X > 7.09] is plotted against the estimated theoretical quantile of [Y|X > 7.09]. The solid line is the 45–degree line. No apparent damage is observed.

Figure 4.7 plots the empirical quantile of [Y∗|X > 6.11] against the estimated theoretical quantile of [Y|X > 6.11] for the proof loaded group R40. The number of observations is 47. Damage to the stronger specimens is not apparent, but the weaker survivors are damaged.

Figure 4.7: R40's Q–Q plot. The empirical quantile of [Y∗|X > 6.11] is plotted against the estimated theoretical quantile of [Y|X > 6.11]. The solid line is the 45–degree line. No apparent damage in the strength of the strong specimens. Damage occurs on the weaker specimens.

Figure 4.8 plots the empirical quantile of [Y∗|X > 4.96] against the estimated theoretical quantile of [Y|X > 4.96] for the proof loaded group R20. The number of observations is 67. The damage on the survivors is not apparent.

Figure 4.8: R20's Q–Q plot. The empirical quantile of [Y∗|X > 4.96] is plotted against the estimated theoretical quantile of [Y|X > 4.96]. The solid line is the 45–degree line. No apparent damage is observed.

Based on Figures 4.4 to 4.8, we observe the following phenomena:

1. If the proof load level is high enough (e.g. the R60 and T60 groups), then the specimens either fail completely or survive without damage. Specimens that survive the proof loading procedure are so strong that they remain unharmed by the proof loading.
A contributing factor may be the wide gaps in the more extreme tails of a strength distribution, so that the strength of the weakest survivors is well separated from the strength of the strongest failure. Hence the survivors may be insulated from the impact of proof loading.

2. For intermediate proof load levels (e.g. T40 and R40), the stronger survivors remain undamaged. The damage occurs to the weaker survivors.

3. For low proof load levels (e.g. R20), no damage is seen to the survivors.

To conclude, we expect more severe damage on the weaker survivors and negligible damage to the strongest survivors.

Figures 4.4 to 4.8 are plotted based on the MLE of θ, and they ignore the variability in the MLE of θ, especially the MLE of ρ. The MLE of ρ given in Table 4.3 is 0.75 with a standard error of 0.20. Its standard error is high mainly because only the T20 sample contains information about ρ, but its sample size is small (87 specimens). The approximate 95% confidence interval for ρ is (0.36, 1.14), i.e. 0.75 ± 1.96 × 0.20. This approximate confidence interval seems unhelpful for two reasons. The first reason is that ρ cannot be greater than one. The second reason is that a ρ value as low as 0.36 is not plausible, because the dependence between MOR and the log–transformed UTS is expected to be strong. For these two reasons, we reproduce each of Figures 4.4 to 4.8 for ρ = 0.55, 0.65, 0.75 and 0.85. These figures are given in Appendix C, and suggest the same conclusion as the case where ρ is taken at its MLE (more severe damage on the weaker survivors and negligible damage to the strongest survivors), except in two cases. In these two cases, where ρ = 0.55 and ρ = 0.65 in Figure C.1, the empirical quantiles of the T60 proof loaded survivors [X∗|Y > 1.59] are larger than
The amount of damage accumulatedin the strength of the strongest survivors is not affected by the ρ value, butthe ρ value affects the amount of damage accumulated in the strength of theweaker specimens.Based on our empirical analysis, we postulate, on heuristic grounds andfor model simplicity (identifiability), the damage model[Y ∗|X > l2]d= [(Y − α/Y )|X > l2] = [Y |X > l2]− α/[Y |X > l2],whered= means equal in distribution, and α ≥ 0. We assume as an ap-proximation [Y |X > l2] > 0 because the probability of [Y |X > l2] be equalor smaller than zero is negligible in our application of the summer–of–2011experiment. In our application of the summer–of–2011 experiment, the sam-ple mean and the sample standard deviation of the MOR are 6.62 and 1.88,respectively. The sample mean and the sample standard deviation of thelog–transformed UTS are 1.43 and 0.40, respectively. We let the roles of Xand Y be played by the MOR and the log–transformed UTS, respectively.The marginal parameters µX and σX (µY and σY ) are specified as the sam-ple mean and the sample deviation of X (Y ), respectively. The correlationparameter is taken to be ρ = 0.2 or ρ = 0.8. Using Equation 4.21 with thosespecified parameter values, Table 4.4 provides the probabilities of [Y |X > l2]being equal to or smaller than zero for various proof load levels l2 when theroles of X and Y are played by the MOR and the log–transformed UTS.The probabilities of [Y |X > l2] being equal to or smaller than zero are negli-gible, especially when X and Y are highly correlated. Table 4.4 also reportsthe probabilities of [Y |X > l2] being equal to or smaller than zero when theroles of X and Y be played by the log–transformed UTS and the MOR,respectively, indicating the probabilities are negligible as well.Two random variables are said to be equal in distribution if they havethe same probability distribution. The proposed damage model links thetwo distributions (i.e. two populations) [Y |X > l2] and [Y ∗|X > l2]. 
When α = 0, no damage occurs on the survivors.

Table 4.4: Probability of [Y|X > l2] being equal to or smaller than 0. When p = 20%, 40%, 60%, or 80%, l2 is specified to be the 20th, 40th, 60th, or 80th quantile of the marginal distribution of X, respectively.

               X = log(UTS), Y = MOR      X = MOR, Y = log(UTS)
               µX = 1.43, σX = 0.40       µX = 6.62, σX = 1.88
               µY = 6.62, σY = 1.88       µY = 1.43, σY = 0.40
ρ = 0.2
   p = 20%     1.44 × 10^−4               1.16 × 10^−4
   p = 40%     1.09 × 10^−4               8.79 × 10^−5
   p = 60%     8.17 × 10^−5               6.56 × 10^−5
   p = 80%     5.57 × 10^−5               4.45 × 10^−5
ρ = 0.8
   p = 20%     5.87 × 10^−8               3.69 × 10^−8
   p = 40%     1.33 × 10^−9               7.86 × 10^−10
   p = 60%     2.90 × 10^−11              1.61 × 10^−11
   p = 80%     1.80 × 10^−13              9.38 × 10^−14

The deterministic relationship between [Y∗|X > l2] and [Y|X > l2] is not available based on the Q–Q plot. The Q–Q plot only compares the distributional relationship between two distributions, and cannot provide their deterministic relationship. Figure 4.9 illustrates the damage function for various α values. For any given value of α, the damage is always more severe for weaker survivors. For any given specimen, the damage becomes more severe as α increases.

We now turn to the implementation of our methodology, and in particular, the estimation of α. Without loss of generality, we consider a SPLD group where the specimens are proof loaded in the X mode with a proof load level l2. The survivors are tested to failure in the Y mode. Suppose that {y∗1, y∗2, · · · , y∗ny∗} are the residual strengths sampled from the conditional distribution of the survivors [Y∗|X > l2]. The empirical cumulative distribution function value of [Y∗|X > l2] at each y∗i is pi, where i = 1, 2, · · · , ny∗. The theoretical quantile of [Y|X > l2] is Q[Y|X>l2](pi; θ). We find the approximate least squares estimate of α by minimizing the sum of the squares of the errors

Σ_{i=1..ny∗} ( y∗i − Q[Y|X>l2](pi; θ̂MLE) + α/Q[Y|X>l2](pi; θ̂MLE) )²,
Figure 4.9: The function Y∗ = Y − α/Y for α = 0, 0.25, 0.5 and 0.75.

where θ̂MLE = (µ̂X = 6.51, σ̂X = 1.84, µ̂Y = 1.45, σ̂Y = 0.39, ρ̂ = 0.75)T. The approximate least squares estimate of α is

ˆˆαLS = − [ Σ_{i=1..ny∗} (y∗i − Q[Y|X>l2](pi; θ̂MLE)) / Q[Y|X>l2](pi; θ̂MLE) ] / [ Σ_{i=1..ny∗} (1/Q[Y|X>l2](pi; θ̂MLE))² ].   (4.23)

The estimated standard error of ˆˆαLS is

ŜE(ˆˆαLS) = sqrt[ Σ_{i=1..ny∗} (y∗i − Q[Y|X>l2](pi; θ̂MLE) + α̂/Q[Y|X>l2](pi; θ̂MLE))² / ( (ny∗ − 1) Σ_{i=1..ny∗} (1/Q[Y|X>l2](pi; θ̂MLE))² ) ].   (4.24)

Table 4.5 summarizes the estimates of α and the corresponding standard errors for each proof loaded group T60, T40, R60, R40 and R20. Notice that the estimated standard error of ˆˆαLS in Equation 4.24 ignores the variability in the maximum likelihood estimator of θ. An alternative way to calculate the standard error will be suggested later, after the illustration of the proposed empirical approach.

For the proof loaded group T60, the approximate least squares estimate is less than zero. We truncate the negative estimate at zero. The distributional relationship between the strength [Y|X > l2] and the residual strength [Y∗|X > l2] for each proof loaded group can be obtained from Table 4.5. For example, for the proof loaded group T40, the distributional relationship between [X∗|Y > 1.38] and [X|Y > 1.38] is [X∗|Y > 1.38] d= [(X − 1.55/X)|Y > 1.38] = [X|Y > 1.38] − 1.55/[X|Y > 1.38], where X is the MOR strength, Y is the log–transformed UTS strength, and [X∗|Y > 1.38] is the residual MOR strength after proof loading in the UTS mode.

Table 4.5: Approximate least squares estimates of α with estimated standard errors for the proof loaded groups T60, T40, R60, R40 and R20.

          T60     T40     R60     R40     R20
α EST     0       1.55    0.03    0.12    0.01
  SE      NA      0.29    0.02    0.02    0.01
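Equations 4.23 and 4.24 are closed-form, so the estimator is a few lines of code once the estimated theoretical quantiles are in hand. The sketch below is our own translation of those formulas, with the truncation at zero applied as in the T60 case; feeding the truncated estimate into the SE formula is our reading of the text.

```python
import numpy as np

def alpha_ls(y_star, q):
    """Approximate least squares estimate of alpha (Eq. 4.23) and its
    estimated standard error (Eq. 4.24).
    y_star: sorted residual strengths y*_i of the survivors;
    q:      estimated theoretical quantiles Q_[Y|X>l2](p_i; theta_hat),
            aligned with y_star."""
    y_star, q = np.asarray(y_star, float), np.asarray(q, float)
    n = len(y_star)
    denom = np.sum(1.0 / q**2)
    a_hat = -np.sum((y_star - q) / q) / denom
    a_hat = max(a_hat, 0.0)                 # alpha >= 0: truncate at zero
    resid = y_star - q + a_hat / q
    se = np.sqrt(np.sum(resid**2) / ((n - 1) * denom))
    return a_hat, se
```

With the T40 survivors' residual strengths and their matching theoretical quantiles, this would return the α estimate and standard error of the kind reported in Table 4.5.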
Figures 4.10 to 4.14 plot the fitted damage model for the proof loaded groups T60, T40, R60, R40, and R20, respectively. The fitted damage model explains the Q–Q line reasonably well.

Figure 4.10: T60: Q–Q plot with the fitted damage model [X∗|Y > 1.59] d= [X|Y > 1.59] (α = 0). The solid line is the fitted damage model.

Figure 4.11: T40: Q–Q plot with the fitted damage model [X∗|Y > 1.38] d= [(X − 1.55/X)|Y > 1.38] = [X|Y > 1.38] − 1.55/[X|Y > 1.38] (α = 1.55).

Figure 4.12: R60: Q–Q plot with the fitted damage model [Y∗|X > 7.09] d= [(Y − 0.03/Y)|X > 7.09] = [Y|X > 7.09] − 0.03/[Y|X > 7.09] (α = 0.03).

Figure 4.13: R40: Q–Q plot with the fitted damage model [Y∗|X > 6.11] d= [(Y − 0.12/Y)|X > 6.11] = [Y|X > 6.11] − 0.12/[Y|X > 6.11] (α = 0.12).

Figure 4.14: R20: Q–Q plot with the fitted damage model [Y∗|X > 4.96] d= [(Y − 0.01/Y)|X > 4.96] = [Y|X > 4.96] − 0.01/[Y|X > 4.96] (α = 0.01).

Bootstrap standard errors

The standard errors reported in Table 4.5 ignore the variability in the maximum likelihood estimator of θ. Based on the simulation studies in Section 4.3.3, the differences in the standard errors of the least squares estimates with known and unknown θ are not dramatically large when the correlation is strong.
However, a more appropriate standard error for the least squares estimate of α could be obtained by the bootstrap method with the following steps:

1. Create a bootstrap sample of size 435 by resampling 174 specimens from the T100 sample with replacement, resampling 87 specimens from the T20 sample with replacement, and resampling 174 specimens from the R100 sample with replacement.

2. Find the MLE of θ by optimizing the joint likelihood function given the bootstrap sample of the T100, T20 and R100 groups, and denote the MLE as θ̂MLE1.

3. Create a bootstrap sample of size ny∗ by resampling from the residual strengths {y∗1, y∗2, · · · , y∗ny∗} with replacement, and denote that bootstrap sample by {y∗(1)i : i = 1, 2, · · · , ny∗}. Calculate the empirical cumulative distribution function value for each resampled residual strength, and denote them by p(1)i for i = 1, 2, · · · , ny∗.

4. Calculate the estimated theoretical quantile Q[Y|X>l2](p(1)i; θ̂MLE1) for i = 1, 2, · · · , ny∗.

5. Calculate ˆˆαLS by plugging the Q[Y|X>l2](p(1)i; θ̂MLE1)'s and y∗(1)i's into Equation 4.23.

6. Repeat Steps 1 to 5 B times.

7. The standard deviation of the B least squares estimates is an estimate of the standard error of the least squares estimate of α.

However, the substantial amount of computing time that would be required to carry out this analysis forces us to leave its implementation to future work.

4.4 Summary and conclusions

This chapter presents three approaches to estimating the relationship between the strength and the residual strength after proof loading. Section 4.1 introduces degradation modelling, and explains why the models developed there are not helpful for exploring the relationship between the strength and the residual strength.

Section 4.2 reviews the wood scientists' accumulation models, and derives the relationship between the strength and the residual strength analytically based on the Madison curve, the U.S. model and the Canadian model under load control.
The Madison curve suggests a linear relationship between a representative strength of the strength distribution and a representative strength of the residual strength distribution. The U.S. model suggests a linear deterministic relationship between the strength and the residual strength. The analytical relationship between the strength and the residual strength is not available for the Canadian model due to its random effects.

Section 4.3 offers an empirical solution, which introduces a particular SPLD experiment with a low proof load level. Given the data collected from this particular SPLD experiment and a shoulder group, the SPLD with a shoulder approach presented in Chapter 3 enables us to estimate the theoretical quantiles of the undamaged strength distribution. The comparisons between the empirical quantiles of the residual strength distribution and the estimated theoretical quantiles of the undamaged strength distribution reveal the damage information. The proposed empirical approach is then illustrated with the Summer–of–2011 experiment data. Working with the proof loaded groups T60, T40, T20, R60, and R40, we propose the damage model [Y∗|X > l2] d= [(Y − α/Y)|X > l2] = [Y|X > l2] − α/[Y|X > l2], where l2 is the proof load level, [Y∗|X > l2] is the residual strength in the Y mode after proof loading in the X mode up to the load level l2, and [Y|X > l2] > 0 is the strength in the Y mode after proof loading in the X mode up to the load level l2. The roles of X and Y in the damage model are interchangeable. No apparent damage is found on the proof load survivors for the proof loaded groups T60, R60, and T20. For the proof loaded group T40, the damage model is [X∗|Y > 1.38] d= [(X − 1.55/X)|Y > 1.38] = [X|Y > 1.38] − 1.55/[X|Y > 1.38], where X is the MOR strength, Y is the log–transformed UTS strength, and [X∗|Y > 1.38] is the residual strength in the MOR mode after proof loading in the UTS mode up to 1.38.
For the proof loaded group R40, the damage model is [Y∗|X > 6.11] d= [(Y − 0.12/Y)|X > 6.11] = [Y|X > 6.11] − 0.12/[Y|X > 6.11], where [Y∗|X > 6.11] is the log–transformed residual strength in the UTS mode after proof loading in the MOR mode up to 6.11, and [Y|X > 6.11] is the log–transformed strength in the UTS mode after proof loading in the MOR mode up to 6.11.

This empirical approach is not restricted to applications where specimens are proof loaded in one mode and, if they survive, tested to failure in another mode. For example, it could be applied to situations where specimens are proof loaded in one mode and, if they survive, tested to failure in the same mode. The approach could also be used to characterize the damage in duration–of–load applications, provided sufficient data are available.

Chapter 5

Breaking the same board twice again with an application to the Summer–of–2011 Experiment

Chapter 3 presents the single proof load design (SPLD) with a shoulder approach for estimating dependence between a pair of normally distributed random variables that cannot be observed simultaneously. The approach can be applied to model the dependence between destructive lumber strength properties by applying the proof load technique. In Chapter 3, we assumed that no damage occurred on the proof load survivors. However, damage may have accumulated in the strength of the proof load survivors during the proof loading procedure. To address this concern, Chapter 4 explores the proof load effects on strength properties by investigating the relationship between the strength and the residual strength after proof loading. The empirical approach illustrated with the Summer–of–2011 experiment data in Section 4.3 indicates that the proof load survivors of the proof loaded groups T40 and R40 are damaged.
The damage accumulated in the strength of the T40/R40 proof load survivors is represented by a damage model that describes the distributional relationship between the strength and the residual strength after proof loading. Due to the damage accumulated in the strength of the survivors, the SPLD with a shoulder approach presented in Chapter 3 cannot be applied directly to the proof loaded groups T40 and R40 to assess the dependence between the destructive strength properties. To generalize the SPLD with a shoulder approach to include the damage case, this chapter demonstrates how the damage model is taken into consideration by the SPLD with a shoulder approach (referred to as the generalized SPLD with a shoulder approach hereafter). This chapter also illustrates the generalized SPLD with a shoulder approach with the Summer–of–2011 experiment data. The illustration based on those data provides an answer to one of the most important goals driving this thesis's development, which is to assess the dependence and thus to determine the relationship between the destructive strength properties.

5.1 The Generalized SPLD with a shoulder approach

One of our ultimate goals is to gain insight into the dependence between two random variables X and Y that cannot be observed simultaneously. Here, X and Y could be destructive lumber strength properties. The random variables X and Y are assumed to follow a bivariate normal distribution,

(X, Y)T ∼ N2( (µX, µY)T, [ σX², ρσXσY ; ρσXσY, σY² ] ).

The parameter of interest is the correlation parameter ρ, which represents the X–Y dependence.

Without loss of generality, we consider a SPLD experiment where specimens are proof loaded in the X mode up to a pre–determined proof load level l. The survivors are tested to failure in the Y mode.
The SPLD experiment enables us to observe two random variables Z and U, where

Z = X if X < l, and Z = Y if X > l;   U = 1 if X < l, and U = 0 if X > l.

The component of the likelihood function of θ = (µX, σX, µY, σY, ρ)T given a single observation s = (z, u) collected from the SPLD experiment, where specimens are proof loaded in the X mode up to the pre–determined load level l, is

L(θ; s) = {fX(x)}^u {f[Y|X>l](y) P(X > l)}^(1−u)
∝ [ exp{−(x − µX)²/(2σX²)} / σX ]^u [ (exp{−(y − µY)²/(2σY²)} / σY) ∫_{al}^{∞} e^{−o²/2} do ]^(1−u),

where fX(·) is the density of X, f[Y|X>l](·) is the conditional density of Y given X > l, and al = {l − µX − ρ(σX/σY)(y − µY)}/{σX √(1 − ρ²)}. Given a sample of observations s = {(z1, u1), (z2, u2), · · · , (zn, un)}, the joint likelihood function of θ is

L(θ; s) = ∏_{i=1..n} L(θ; si).

Chapter 3 shows that the SPLD needs a shoulder group of Y data in order to estimate the parameter θ properly when the sample is of a realistic size. The specimens of the shoulder group of Y are tested to failure in the Y mode. The component of the joint likelihood function of θ given the single observation s = (z, u) from the SPLD experiment and a single observation y from the shoulder group of Y is

LPFY(θ; s, y) = L(θ; s) LY(µY, σY; y),   (5.1)

where LY(µY, σY; y) = exp{−(y − µY)²/(2σY²)}/(σY √(2π)). Given a sample of observations s = {(z1, u1), (z2, u2), . . . , (zn, un)} collected from the SPLD experiment and a sample of observations y = {y1, y2, · · · , yns} collected from the shoulder group of Y, the likelihood function of θ is

LPFY(θ; s, y) = ( ∏_{i=1..n} L(θ; si) ) ( ∏_{i=1..ns} LY(µY, σY; yi) ).

The parameter θ is then estimated by the maximum likelihood method. The maximum likelihood estimate (MLE) of θ maximizes the joint likelihood LPFY(θ; s, y) with respect to θ.

However, when the proof load level is specified such that damage accumulates in the strength of the proof load survivors, say at level l2, we do not observe (Z, U) from the SPLD experiment.
Instead, we observe (Z∗, U), where

Z∗ = X if X < l2, and Z∗ = Y∗ if X > l2;   U = 1 if X < l2, and U = 0 if X > l2,

and Y∗ is the residual strength in the Y mode after proof loading in the X mode up to the load level l2. Given a single observation s∗ = (z∗, u) from the SPLD experiment and the single observation y from the shoulder group of Y, the component of the joint likelihood function of θ is

L∗PFY(θ; s∗, y) = L∗(θ; s∗) LY(µY, σY; y),

where L∗(θ; s∗) is the component of the likelihood function of θ given the single observation s∗ = (z∗, u). Thus,

L∗(θ; s∗) = {fX(x)}^u {f[Y∗|X>l2](y∗) P(X > l2)}^(1−u),

where f[Y∗|X>l2](·) is the conditional density function of Y∗ given X > l2. Given a sample of observations s∗ of size n and a sample of observations y of size ns from the shoulder group, the joint likelihood function of θ is

L∗PFY(θ; s∗, y) = ( ∏_{i=1..n} L∗(θ; s∗i) ) ( ∏_{i=1..ns} LY(µY, σY; yi) ).

The density function of [Y∗|X > l2] is unknown. So the likelihood function component L∗PFY(θ; s∗, y) is unknown when we observe s∗ = (z∗ = y∗, u = 0). If the deterministic relationship between the random variables [Y∗|X > l2] and [Y|X > l2] were available, we could derive the density function of [Y∗|X > l2] by implementing a change of variables on the density function of [Y|X > l2] using their deterministic relationship. However, only the distributional relationship between [Y∗|X > l2] and [Y|X > l2] is assessable. The relationship between [Y∗|X > l2] and [Y|X > l2] is found by comparing the empirical quantiles of [Y∗|X > l2] with the estimated theoretical quantiles of [Y|X > l2] in Chapter 4 (the Q–Q plot). The deterministic relationship between [Y∗|X > l2] and [Y|X > l2] is not available based on the Q–Q plot.
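For concreteness, the no-damage likelihood LPFY(θ; s, y) above can be written as a negative log-likelihood suitable for a numerical optimizer. The sketch below is our own translation of the formulas, not the thesis's implementation; the parameter ordering and choice of optimizer are our assumptions.

```python
import numpy as np
from scipy import stats, optimize

def nll(theta, z, u, y_shoulder, l):
    """Negative log-likelihood for the SPLD with a shoulder design.
    z holds X-mode failures (where u == 1) and survivors' Y-mode
    failures (where u == 0); y_shoulder holds the shoulder group."""
    mu_x, sd_x, mu_y, sd_y, rho = theta
    if sd_x <= 0 or sd_y <= 0 or not -1 < rho < 1:
        return np.inf
    z, u = np.asarray(z, float), np.asarray(u)
    x, y = z[u == 1], z[u == 0]
    ll = stats.norm.logpdf(x, mu_x, sd_x).sum()
    # Survivors contribute f_Y(y) * P(X > l | Y = y); a is the a_l of the text
    a = (l - mu_x - rho * (sd_x / sd_y) * (y - mu_y)) / (sd_x * np.sqrt(1 - rho**2))
    ll += stats.norm.logpdf(y, mu_y, sd_y).sum() + stats.norm.logsf(a).sum()
    ll += stats.norm.logpdf(y_shoulder, mu_y, sd_y).sum()
    return -ll

# e.g. theta_hat = optimize.minimize(nll, start, args=(z, u, ys, l),
#                                    method="Nelder-Mead").x
```

The survivor term uses the identity f[Y|X>l](y) P(X > l) = fY(y) P(X > l | Y = y), which is how al enters the likelihood in the text.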
The Q–Q plot only compares the distributional relationship between two distributions, and cannot provide their deterministic relationship.

In summary, when the proof load level is specified at a level such that damage accumulates in the strength of the proof load survivors, we observe s∗ for the unknown likelihood function component L∗PFY(θ; s∗, y), but we do not have the observation s whose corresponding likelihood function component LPFY(θ; s, y) is known and of interest. This chapter tackles this issue by the generalized SPLD with a shoulder approach, which generates the desired observation s from the observation s∗. More specifically, the desired sample from [Y|X > l2] is generated based on a sample from [Y∗|X > l2] using their distributional relationship. Once the desired observation s for the likelihood LPFY(θ; s, y) is available, we get the MLE of θ by maximizing the joint likelihood function LPFY(θ; s, y) with respect to θ. When no damage occurs on the proof load survivors, we have the desired observation s and no data generation is needed. In this case, the generalized SPLD with a shoulder approach is identical to the SPLD with a shoulder approach.

5.1.1 Generating s from s∗

This section illustrates how the generalized SPLD with a shoulder approach generates a desired observation s from an observation s∗ in case of damage. More specifically, this section illustrates how a sample from the distribution of [Y|X > l2] is generated based on a sample from the distribution of [Y∗|X > l2] using their distributional relationship. Suppose that the random variables [h(Y)|X > l2] and [Y∗|X > l2] are equal in distribution, where h(·) is a continuous and invertible function. In our lumber application, the function h(·) is specified as the functional form used in the empirical proof load damage model presented in Section 4.3, Chapter 4. So, [h(Y)|X > l2] = [(Y − α/Y)|X > l2] = [Y|X > l2] − α/[Y|X > l2], where α ≥ 0.
In our application of the Summer–of–2011 experiment, the probabilities of [Y|X > l2] being equal to or smaller than zero are negligible, based on Table 4.4, when the roles of X and Y are played by the MOR and the log–transformed UTS. We assume as an approximation that [Y|X > l2] > 0. If [h(Y)|X > l2] and [Y∗|X > l2] are equal in distribution, then

F[h(Y)|X>l2](t) = F[Y∗|X>l2](t) for all t,

where F[h(Y)|X>l2](·) and F[Y∗|X>l2](·) are the distribution functions of [h(Y)|X > l2] and [Y∗|X > l2], respectively. Both [h(Y)|X > l2] and [Y∗|X > l2] are continuous random variables, and thus their distribution functions are invertible.

Let s∗ = {x1, x2, · · · , xks, y∗1, y∗2, · · · , y∗n−ks} be the observed sample of size n from a SPLD experiment where specimens are proof loaded in the X mode up to the load level l2, and the survivors are tested to failure in the Y mode. The observations {x1, x2, · · · , xks} are collected from the ks specimens that fail below the proof load level l2, while {y∗1, y∗2, · · · , y∗n−ks} are the observations collected from the n − ks damaged proof load survivors. Since {y∗1, y∗2, · · · , y∗n−ks} is an observed sample from [Y∗|X > l2], the sample {ui = F[Y∗|X>l2](y∗i) : i = 1, 2, · · · , n − ks} is an observed sample from the uniform distribution with parameters 0 and 1. We generate observations {yi : i = 1, 2, · · · , n − ks} by specifying yi = h−1(F−1[Y∗|X>l2](ui)) = h−1(y∗i). The function h−1(·) is the inverse function of h(·), and h−1(y∗i) = (y∗i + √((y∗i)² + 4α))/2. Since

P(h−1(F−1[Y∗|X>l2](u)) ≤ y) = P(F−1[Y∗|X>l2](u) ≤ h(y))
= P(u ≤ F[Y∗|X>l2](h(y)))
= F[Y∗|X>l2](h(y))
= P(Y∗ ≤ h(y) | X > l2)
= P(h(Y) ≤ h(y) | X > l2)
= P(Y ≤ y | X > l2)
= F[Y|X>l2](y),

the generated sample {yi : i = 1, 2, · · · , n − ks} is an observed sample from the distribution of [Y|X > l2].
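The inversion h−1(y∗) = (y∗ + √(y∗² + 4α))/2 used above is simply the positive root of the quadratic y² − y∗y − α = 0 obtained from y∗ = y − α/y with y > 0. A small sketch of our own:

```python
import numpy as np

def undamage(y_star, alpha):
    """Map residual strengths y* back to pseudo-strengths y = h^{-1}(y*),
    where h(y) = y - alpha/y and y > 0, so y = (y* + sqrt(y*^2 + 4*alpha)) / 2."""
    y_star = np.asarray(y_star, float)
    return (y_star + np.sqrt(y_star**2 + 4.0 * alpha)) / 2.0
```

With α = 0 the map is the identity, matching the no-damage case where the generalized approach reduces to the SPLD with a shoulder approach.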
Based on this, we specify the desired sample s for the likelihood LPFY(θ; s, y) to be {x1, x2, · · · , xks, y1, y2, · · · , yn−ks}, where yi = (y∗i + √((y∗i)² + 4α))/2. The parameters of interest are estimated by the MLE that maximizes the joint likelihood function LPFY(θ; s, y) with respect to θ, given the generated sample s = {x1, x2, · · · , xks, y1, y2, · · · , yn−ks} and a sample y from the shoulder group of Y. Simulation studies in Section 5.1.2 are conducted to demonstrate the success of the generalized SPLD with a shoulder approach, since an analytical approach in this complex situation is not feasible.

5.1.2 Simulation studies

The simulation studies in this section investigate the performance of the MLE obtained by the generalized SPLD with a shoulder approach. The simulation studies contain two parts. In both parts, we take the distributional relationship between [Y∗|X > l2] and [Y|X > l2] to be the empirical damage model for the proof load effect presented in Chapter 4. The damage model is [Y∗|X > l2] d= [(Y − α/Y)|X > l2] = [Y|X > l2] − α/[Y|X > l2], where d= means equal in distribution. The first part of the simulation studies (Part I) treats the parameter α as known. The second part (Part II) treats the parameter α as unknown, and estimates α using the empirical approach to quantifying the proof load damage presented in Section 4.3, Chapter 4. For estimating α, this empirical approach needs the theoretical quantiles of Y conditional on X > l2. Those theoretical quantiles are estimated in Part II by generating an additional SPLD sample.

For each part of the simulation studies, the accuracy of the MLE obtained by the generalized SPLD with a shoulder approach is inspected. The precision of the MLE is also inspected, as additional uncertainty is introduced.

Part I

The simulation studies in Part I treat the parameter α as known.
Part I: data simulation

A SPLD with a shoulder sample is generated from a bivariate normal distribution. The parameters of the bivariate normal distribution are µX = 6.48, σX = 1.85, µY = 1.43, σY = 0.40, and ρ is either 0.5 or 0.8. Moderate to high correlations are considered because assessing the proof load damage and then applying the generalized SPLD with a shoulder approach is not of practical interest when X and Y are weakly correlated. The total sample size is N = 600. The specimen allocation between the SPLD group and the shoulder group is chosen based on the simulation studies of the optimal allocation in Section 3.3, Chapter 3. We assign 50% of the specimens to the SPLD group and the rest to the shoulder group. The proof load level of the SPLD group is chosen based on the simulation studies of the optimal proof load level in Section 3.3, Chapter 3. So the proof load level l2 of the SPLD group is specified such that 80% of the specimens fail below the load level. We generate the SPLD sample such that [(Y − α/Y)|X > l2] and [Y∗|X > l2] are equal in distribution but are not equal deterministically. The SPLD sample size is 300 (N × 0.5 = 300). The sample for the shoulder group is generated from the univariate normal distribution with µY = 1.43 and σY = 0.40. The shoulder sample of size 300 (N × 0.5 = 300) is denoted by y = {yi, i = 1, 2, · · · , 300}.

To simulate the SPLD sample s∗ such that [(Y − α/Y)|X > l2] and [Y∗|X > l2] are equal in distribution but are not equal deterministically, we simulate two SPLD samples of size 300 each and only take the desired observations from each SPLD sample (details below).

The first SPLD sample of size 300 is generated from the same bivariate normal distribution, where the parameters are µX = 6.48, σX = 1.85, µY = 1.43, σY = 0.40, and ρ is either 0.5 or 0.8.
The proof load level l2 is specified in the X mode such that 80% of the specimens fail below that load level. The generated sample of size 300 is {(x1i, y1i) : i = 1, 2, ..., 300}. An x observation is kept if it is smaller than the proof load level l2; that is, we keep {x1i : x1i < l2, i = 1, 2, ..., 300}, whose expected size is 240 (N × 0.5 × 0.8 = 240). All other generated observations, including the y observations, are discarded.

The second SPLD sample is generated from the same bivariate normal distribution, giving {(x2i, y2i) : i = 1, 2, ..., 300}. This time, we take all y observations whose corresponding x observation is greater than the proof load level l2; that is, we keep {y2j : x2j > l2, j = 1, 2, ..., 300}, whose expected size is 60 (N × 0.5 × 0.2 = 60). Each selected y is transformed to a residual strength y∗ by y∗ = y − α/y. The specification of α is guided by the data values from the Summer–of–2011 experiment. The minimum observed log-transformed UTS value is 0.51, from the R40 proof loaded group, which corresponds to the 1st empirical quantile of the log-transformed UTS distribution. Realistic values of α should be specified such that P(Y∗ > 0.51|X > l2) = P(Y − α/Y > 0.51|X > l2) is high (say 0.90) when the positive correlation between X and Y is strong. The simulation studies therefore specify α to be 0, 0.1, 0.3, 0.5 or 0.7; no damage occurs to the survivors when α = 0.

The simulated sample s∗ is the union of the two samples {x1i : x1i < l2, i = 1, 2, ..., 300} ∪ {y∗2j : x2j > l2, j = 1, 2, ..., 300}.
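The two-sample construction just described can be sketched as follows. This is our own illustration (seed, variable names, and the choice ρ = 0.8, α = 0.3 are illustrative): one bivariate sample supplies the failed x observations, a second independent sample supplies the surviving y observations, which are then damaged via y∗ = y − α/y.

```python
import numpy as np

rng = np.random.default_rng(2011)

# Parameter values taken from the text; rho and alpha fixed for illustration.
mu_x, sd_x, mu_y, sd_y, rho, alpha = 6.48, 1.85, 1.43, 0.40, 0.8, 0.3
cov = [[sd_x**2, rho * sd_x * sd_y], [rho * sd_x * sd_y, sd_y**2]]
n = 300
l2 = mu_x + sd_x * 0.8416  # 0.8416 ~ 80th percentile of N(0,1), so ~80% fail below l2

# First SPLD sample: keep only the x observations of the failed specimens.
x1, _ = rng.multivariate_normal([mu_x, mu_y], cov, size=n).T
x_failed = x1[x1 < l2]

# Second SPLD sample: keep only the y observations of the survivors,
# then damage each survivor via y* = y - alpha / y.
x2, y2 = rng.multivariate_normal([mu_x, mu_y], cov, size=n).T
y_surv = y2[x2 > l2]
y_star = y_surv - alpha / y_surv

# Shoulder sample: Y tested to failure, no proof loading.
y_shoulder = rng.normal(mu_y, sd_y, size=n)

print(len(x_failed), len(y_star))  # roughly 240 and 60
```

Because the survivors' y values come from an independent replicate, [Y∗|X > l2] matches [(Y − α/Y)|X > l2] in distribution without being a deterministic transform of the first sample, as the text requires.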
The sample {y∗2j : x2j > l2, j = 1, 2, ..., 300} satisfies the desired simulation setting in which [Y∗|X > l2] and [(Y − α/Y)|X > l2] are equal in distribution but not equal deterministically.

Up to this point, we have simulated a shoulder sample y and a SPLD sample s∗ in which the proof load survivors are damaged. The generated sample for investigating the generalized SPLD with a shoulder approach is {s∗ ∪ y}.

Part I: implementation of the simulated data

The desired sample s for the likelihood LPFY(θ; ·) is specified to be {x1i : x1i < l2, i = 1, 2, ..., 300} ∪ {y2j : x2j > l2, j = 1, 2, ..., 300}, where y2j = h⁻¹(y∗2j) = (y∗2j + √((y∗2j)² + 4α))/2. We calculate the MLE of θ that maximizes the joint likelihood LPFY(θ; s, y) given the generated observations y = {yi, i = 1, 2, ..., 300} and s = {x1i : x1i < l2, i = 1, 2, ..., 300} ∪ {y2j : x2j > l2, j = 1, 2, ..., 300}.

The above procedure is repeated 10⁴ times. The average and the standard deviation of the MLEs across the 10⁴ data sets are summarized in Table 5.1. The average of the MLEs stays in a close neighbourhood of the true value, indicating that the MLE of θ is an accurate estimate.

Table 5.1: Known α with a sample size N = 600: performance of the MLE obtained by the generalized SPLD with a shoulder approach. The damage model is [Y∗|X > l2] d= [(Y − α/Y)|X > l2].
Simulation details are described in the text.

True ρ = 0.50:
                   α = 0   α = 0.1   α = 0.3   α = 0.5   α = 0.7
µX = 6.48   EST     6.48      6.48      6.48      6.48      6.48
            SE      0.11      0.11      0.11      0.11      0.11
σX = 1.85   EST     1.85      1.85      1.85      1.85      1.85
            SE      0.09      0.09      0.09      0.09      0.09
µY = 1.43   EST     1.43      1.43      1.43      1.43      1.43
            SE      0.02      0.02      0.02      0.02      0.02
σY = 0.40   EST     0.40      0.40      0.40      0.40      0.40
            SE      0.01      0.01      0.01      0.01      0.01
ρ = 0.50    EST     0.50      0.50      0.50      0.50      0.50
            SE      0.09      0.09      0.09      0.09      0.09

True ρ = 0.80:
                   α = 0   α = 0.1   α = 0.3   α = 0.5   α = 0.7
µX = 6.48   EST     6.48      6.48      6.48      6.48      6.48
            SE      0.11      0.11      0.11      0.11      0.11
σX = 1.85   EST     1.85      1.85      1.85      1.85      1.85
            SE      0.09      0.09      0.09      0.09      0.09
µY = 1.43   EST     1.43      1.43      1.43      1.43      1.43
            SE      0.02      0.02      0.02      0.02      0.02
σY = 0.40   EST     0.40      0.40      0.40      0.40      0.40
            SE      0.01      0.01      0.01      0.01      0.01
ρ = 0.80    EST     0.80      0.80      0.80      0.80      0.80
            SE      0.05      0.05      0.05      0.05      0.05

We conduct the above simulation studies (data simulation and implementation of the simulated data) again, but with a smaller sample size N = 300. The results are summarized in the same fashion as Table 5.1 and are reported in Table 5.2. The MLE of θ remains accurate even for the smaller sample size N = 300.

Table 5.2: Known α with a sample size N = 300: performance of the MLE obtained by the generalized SPLD with a shoulder approach. The damage model is [Y∗|X > l2] d= [(Y − α/Y)|X > l2].
Simulation details are described in the text.

True ρ = 0.50:
                   α = 0   α = 0.1   α = 0.3   α = 0.5   α = 0.7
µX = 6.48   EST     6.48      6.48      6.48      6.48      6.48
            SE      0.16      0.16      0.16      0.16      0.16
σX = 1.85   EST     1.84      1.84      1.84      1.84      1.84
            SE      0.12      0.12      0.12      0.12      0.12
µY = 1.43   EST     1.43      1.43      1.43      1.43      1.43
            SE      0.03      0.03      0.03      0.03      0.03
σY = 0.40   EST     0.40      0.40      0.40      0.40      0.40
            SE      0.02      0.02      0.02      0.02      0.02
ρ = 0.50    EST     0.50      0.50      0.50      0.50      0.50
            SE      0.12      0.12      0.12      0.12      0.12

True ρ = 0.80:
                   α = 0   α = 0.1   α = 0.3   α = 0.5   α = 0.7
µX = 6.48   EST     6.48      6.48      6.48      6.48      6.48
            SE      0.16      0.16      0.16      0.16      0.16
σX = 1.85   EST     1.84      1.84      1.84      1.84      1.84
            SE      0.13      0.13      0.13      0.13      0.13
µY = 1.43   EST     1.43      1.43      1.43      1.43      1.43
            SE      0.03      0.03      0.03      0.03      0.03
σY = 0.40   EST     0.40      0.40      0.40      0.40      0.40
            SE      0.02      0.02      0.02      0.02      0.02
ρ = 0.80    EST     0.80      0.80      0.80      0.80      0.80
            SE      0.08      0.08      0.08      0.08      0.08

Part II

The simulation studies in Part II treat the parameter α as unknown, and estimate it by the empirical approach to quantifying proof load damage presented in Section 4.3, Chapter 4.

Part II: data simulation

A SPLD with a shoulder sample of size N1 = 1200 is simulated based on a bivariate normal distribution with parameters µX = 6.48, σX = 1.85, µY = 1.43, σY = 0.40, and ρ either 0.5 or 0.8. Following the optimal allocation developed in Chapter 3, we assign 50% of the specimens to the SPLD group and the rest to the shoulder group. The shoulder group of size 600 (N1 × 0.5) is simulated from the univariate normal distribution with µY = 1.43 and σY = 0.40. For the SPLD group of size 600 (N1 × 0.5), specimens are proof loaded in the X mode. The proof load level is specified such that 20% of the specimens fail below that load level; in practice, we would likewise use a low proof load level so that the damage accumulated in the proof load survivors can safely be assumed negligible. The survivors are tested to failure in the Y mode.
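The first-stage fit just described (proof load in X, survivors and shoulder tested to failure in Y, no damage assumed) can be sketched by maximizing a censored bivariate normal likelihood. This is our own reconstruction of the likelihood the text calls LPFY, under the stated bivariate normal assumptions: failed specimens contribute the marginal density of X, survivors contribute the joint density of Y and the event {X > l2}, and shoulder specimens contribute the marginal density of Y. Parameter values and the seed are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)

# Simulate a low-proof-load SPLD with a shoulder sample (values from the text).
mu_x, sd_x, mu_y, sd_y, rho = 6.48, 1.85, 1.43, 0.40, 0.8
cov = [[sd_x**2, rho * sd_x * sd_y], [rho * sd_x * sd_y, sd_y**2]]
l2 = mu_x + sd_x * norm.ppf(0.2)  # ~20% of specimens fail below l2
x, y = rng.multivariate_normal([mu_x, mu_y], cov, size=600).T
x_fail, y_surv = x[x < l2], y[x > l2]
y_sh = rng.normal(mu_y, sd_y, size=600)

def negloglik(theta):
    mx, sx, my, sy, r = theta
    if sx <= 0 or sy <= 0 or not -1 < r < 1:
        return np.inf
    # Failed specimens: marginal density of X at the observed x < l2.
    ll = norm.logpdf(x_fail, mx, sx).sum()
    # Survivors: f(y, X > l2) = phi_Y(y) * P(X > l2 | Y = y).
    cm = mx + r * sx * (y_surv - my) / sy   # conditional mean of X given Y
    cs = sx * np.sqrt(1 - r**2)             # conditional sd of X given Y
    ll += (norm.logpdf(y_surv, my, sy) + norm.logsf(l2, cm, cs)).sum()
    # Shoulder specimens: marginal density of Y.
    ll += norm.logpdf(y_sh, my, sy).sum()
    return -ll

fit = minimize(negloglik, x0=[6, 2, 1.5, 0.5, 0.5], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
print(np.round(fit.x, 2))  # close to (6.48, 1.85, 1.43, 0.40, 0.80)
```

The fitted vector plays the role of θ̂MLE in the next paragraph: it supplies the estimated theoretical quantiles of [Y|X > l2] used to quantify damage at higher proof load levels.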
We calculate the MLE of θ that maximizes the likelihood function LPFY(θ; ·) (Equation 5.1) given the generated SPLD with a shoulder sample, and denote it by θ̂MLE.

Next, we generate a second SPLD with a shoulder sample in which the proof load survivors are damaged. The sample size is N2 = 600. We assign 50% of the specimens to the SPLD group and the rest to the shoulder group of Y. The proof load level l2 of the SPLD group is specified such that we expect 80% of the specimens to fail below it. The SPLD sample s∗ is generated such that [(Y − α/Y)|X > l2] and [Y∗|X > l2] are equal in distribution but not equal deterministically. The shoulder sample of size 300 (N2 × 0.5) is generated from the univariate normal distribution with µY = 1.43 and σY = 0.40, and is denoted by y = {yi, i = 1, 2, ..., 300}.

To simulate the sample s∗ of size 300 such that [(Y − α/Y)|X > l2] and [Y∗|X > l2] are equal in distribution but not equal deterministically, we simulate two SPLD samples of size 300 each and take only the desired observations from each proof loaded sample (details below).

The first SPLD sample of size 300 is generated from the same bivariate normal distribution, with parameters µX = 6.48, σX = 1.85, µY = 1.43, σY = 0.40, and ρ either 0.5 or 0.8. The proof load level l2 is specified in the X mode such that 80% of the specimens fail below that load level. The generated sample of size 300 is {(x1i, y1i) : i = 1, 2, ..., 300}. An x observation is kept if it is smaller than the proof load level l2; that is, we keep {x1i : x1i < l2, i = 1, 2, ..., 300}, whose expected size is 240 (N2 × 0.5 × 0.8 = 240). All other generated observations, including the y observations, are discarded.

The second SPLD sample is generated from the same bivariate normal distribution.
The bivariate normal sample is {(x2i, y2i) : i = 1, 2, ..., 300}. This time, we take all y observations whose corresponding x observation is greater than the proof load level l2; that is, we keep {y2j : x2j > l2, j = 1, 2, ..., 300}, whose expected size is 60 (N2 × 0.5 × 0.2 = 60). Each selected y is transformed to a residual strength y∗ by y∗ = y − α/y. The simulation studies specify α to be 0, 0.1, 0.3, 0.5 or 0.7; no damage occurs to the survivors when α = 0.

The simulated sample s∗ is the union of the two samples {x1i : x1i < l2, i = 1, 2, ..., 300} ∪ {y∗2j : x2j > l2, j = 1, 2, ..., 300}. The simulated sample {y∗2j : x2j > l2, j = 1, 2, ..., 300} provides the desired simulation setting in which [Y∗|X > l2] and [(Y − α/Y)|X > l2] are equal in distribution but not equal deterministically.

Up to this point, we have simulated a SPLD sample s∗ in which the proof load survivors are damaged, and a shoulder sample y. The sample for the generalized SPLD with a shoulder approach is {s∗ ∪ y}.

Part II: implementation of the simulated data

The empirical quantile of [Y∗|X > l2] for each y∗2j is

pj = (sum{I{y∗2j̃ ≤ y∗2j} : j̃ = 1, 2, ..., 60} − 0.5) / 60.

The estimated theoretical quantile of [Y|X > l2] is Q[Y|X>l2](pj, θ̂MLE). The approximate least squares estimate of α is calculated by

α̂LS = − [Σⱼ₌₁⁶⁰ (y∗2j − Q[Y|X>l2](pj, θ̂MLE)) / Q[Y|X>l2](pj, θ̂MLE)] / [Σⱼ₌₁⁶⁰ (1/Q[Y|X>l2](pj, θ̂MLE))²].

The desired sample s for the likelihood LPFY(θ; s, y) is specified to be {x1i : x1i < l2, i = 1, 2, ..., 300} ∪ {y2j : x2j > l2, j = 1, 2, ..., 300}, where y2j = ĥ⁻¹(y∗2j) = (y∗2j + √((y∗2j)² + 4α̂LS))/2. Invertibility requires (y∗2j)² + 4α̂LS to be greater than or equal to 0 for all j = 1, 2, ..., 60. We specify N1 large enough that θ̂MLE is not very far from the true value.
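The plotting positions pj and the approximate least squares estimate of α can be sketched directly from the formulas above (a minimal sketch; function names are ours, and the quantile values below are hypothetical stand-ins for Q[Y|X>l2](pj, θ̂MLE)):

```python
import numpy as np

def empirical_p(y_star):
    """Plotting positions p_j = (#{y*_jtilde <= y*_j} - 0.5) / n, as in the text."""
    y_star = np.asarray(y_star)
    n = len(y_star)
    return ((y_star[:, None] >= y_star[None, :]).sum(axis=1) - 0.5) / n

def alpha_ls(y_star, q):
    """Approximate least squares estimate of alpha from survivor residual
    strengths y_star and matching estimated quantiles q of [Y | X > l2]:
    alpha_hat = -sum((y* - q)/q) / sum((1/q)^2)."""
    y_star, q = np.asarray(y_star), np.asarray(q)
    return -np.sum((y_star - q) / q) / np.sum((1.0 / q) ** 2)

# Sanity check with exact quantiles: if y* = q - alpha/q holds exactly,
# the estimator recovers alpha.
q = np.linspace(1.0, 2.0, 60)      # hypothetical quantiles of [Y | X > l2]
alpha_true = 0.3
y_star = q - alpha_true / q
print(round(alpha_ls(y_star, q), 6))  # 0.3
```

In the actual procedure, q would be obtained by evaluating the estimated conditional quantile function at p = empirical_p(y_star) using θ̂MLE from the low-proof-load sample.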
If θ̂MLE is very far from the true value, ĥ(·) may not be invertible.

We calculate the MLE of θ that maximizes the joint likelihood function LPFY(θ; s, y) given the observations y = {yi, i = 1, 2, ..., 300} and s = {x1i : x1i < l2, i = 1, 2, ..., 300} ∪ {y2j : x2j > l2, j = 1, 2, ..., 300}.

The above procedure is repeated 10⁴ times. Table 5.3 reports the average and the standard deviation of the MLEs of θ across the 10⁴ data sets. When ρ = 0.5 and α = 0, the damage function ĥ(·) is not invertible about six percent of the time; this percentage reduces to 3%, 0.5%, 0.06%, and 0 as α increases to 0.1, 0.3, 0.5 and 0.7, respectively. When ρ = 0.8 and α = 0, the damage function ĥ(·) is not invertible in 3 cases. The non-invertible cases are removed from the 10⁴ cases. The average of the MLEs stays in a close neighbourhood of the true value, suggesting that the MLE is an accurate estimate of θ. The mean and the standard deviation of the approximate least squares estimates of α are also reported in Table 5.3; their average likewise stays in a close neighbourhood of the true value, indicating that the approximate least squares estimate of α is accurate.

The above simulation studies (data simulation and implementation of the simulated data) are conducted again, but with different sample sizes (N1 = 1200, N2 = 300). When ρ = 0.5 and α = 0, the damage function ĥ(·) is not invertible about four percent of the time; this percentage reduces to 0.4% as α increases to 0.7. When ρ = 0.8 and α = 0 (α = 0.1), the damage

Table 5.3: Unknown α with sample sizes N1 = 1200 and N2 = 600: performance of the MLE obtained by the generalized SPLD with a shoulder approach. The sample size of the SPLD with a shoulder sample without proof load damage is N1; the sample size of the SPLD with a shoulder sample with proof load damage is N2. The damage model is [Y∗|X > l2] d= [(Y − α/Y)|X > l2].
Simulation details are described in the text.

True ρ = 0.50:
                   α = 0   α = 0.1   α = 0.3   α = 0.5   α = 0.7
µX = 6.48   EST     6.48      6.48      6.48      6.48      6.48
            SE      0.11      0.11      0.11      0.11      0.11
σX = 1.85   EST     1.84      1.84      1.84      1.84      1.84
            SE      0.09      0.09      0.09      0.09      0.09
µY = 1.43   EST     1.43      1.43      1.43      1.43      1.43
            SE      0.02      0.02      0.02      0.02      0.02
σY = 0.40   EST     0.40      0.40      0.40      0.40      0.40
            SE      0.01      0.01      0.01      0.01      0.01
ρ = 0.50    EST     0.51      0.50      0.50      0.49      0.49
            SE      0.11      0.11      0.12      0.12      0.12
α           EST     0.01      0.10      0.30      0.50      0.70
            SE      0.12      0.13      0.15      0.16      0.18

True ρ = 0.80:
                   α = 0   α = 0.1   α = 0.3   α = 0.5   α = 0.7
µX = 6.48   EST     6.48      6.48      6.48      6.48      6.48
            SE      0.11      0.11      0.11      0.11      0.11
σX = 1.85   EST     1.84      1.84      1.84      1.84      1.84
            SE      0.09      0.09      0.09      0.09      0.09
µY = 1.43   EST     1.43      1.43      1.43      1.43      1.43
            SE      0.02      0.02      0.02      0.02      0.02
σY = 0.40   EST     0.40      0.40      0.40      0.40      0.40
            SE      0.01      0.01      0.01      0.01      0.01
ρ = 0.80    EST     0.80      0.80      0.80      0.80      0.80
            SE      0.07      0.07      0.06      0.06      0.06
α           EST     0.00      0.10      0.30      0.50      0.70
            SE      0.11      0.11      0.12      0.13      0.14

function ĥ(·) is not invertible in 4 cases (1 case). The non-invertible cases are removed from the 10⁴ cases. The results are summarized in the same fashion as Table 5.3, and are reported in Table 5.4. The accuracy of the MLE of θ remains reasonably good.

5.2 Application: Summer–of–2011 experiment

The Summer–of–2011 experiment has two shoulder groups, T100 and R100. The specimens of the shoulder groups T100 and R100 are tested to failure in the UTS mode and in the MOR mode, respectively. The Summer–of–2011 experiment also has three proof loaded groups (R20, R40, and R60) in which the specimens are proof loaded in the MOR mode and the survivors are tested to failure in the UTS mode.
In addition, the experiment has three proof loaded groups (T20, T40, and T60) in which the specimens are proof loaded in the UTS mode and the survivors are tested to failure in the MOR mode.

Combining each R proof loaded group (R20/R40/R60) with the shoulder group T100 gives a sample for the generalized SPLD with a shoulder approach in which specimens are proof loaded in the MOR mode and the survivors are tested to failure in the UTS mode. Similarly, combining each proof loaded group T20/T40/T60 with the shoulder group R100 provides a sample for the generalized SPLD with a shoulder approach in which specimens are proof loaded in the UTS mode and the survivors are tested to failure in the MOR mode. The generalized SPLD with a shoulder approach is identical to the SPLD with a shoulder approach when no damage occurs to the proof load survivors.

Let X be the MOR and Y the log-transformed UTS. The joint distribution of X and Y is assumed to be bivariate normal, where the mean and the standard deviation of X (Y) are µX (µY) and σX (σY), respectively, and the correlation between X and Y is ρ. The parameter of interest is θ = (µX, σX, µY, σY, ρ)ᵀ. The following list summarizes the samples from the Summer–of–2011 experiment in the form of the generalized SPLD with a shoulder approach, together with their corresponding proof load damage models

Table 5.4: Unknown α with sample sizes N1 = 1200 and N2 = 300: performance of the MLE obtained by the generalized SPLD with a shoulder approach. The sample size of the SPLD with a shoulder sample without proof load damage is N1; the sample size of the SPLD with a shoulder sample with proof load damage is N2. The damage model is [Y∗|X > l2] d= [(Y − α/Y)|X > l2].
Simulation details are described in the text.

True ρ = 0.50:
                   α = 0   α = 0.1   α = 0.3   α = 0.5   α = 0.7
µX = 6.48   EST     6.48      6.48      6.48      6.48      6.48
            SE      0.16      0.16      0.16      0.16      0.16
σX = 1.85   EST     1.84      1.84      1.84      1.84      1.84
            SE      0.13      0.13      0.13      0.13      0.13
µY = 1.43   EST     1.43      1.43      1.43      1.43      1.43
            SE      0.03      0.03      0.03      0.03      0.03
σY = 0.40   EST     0.40      0.40      0.40      0.40      0.40
            SE      0.02      0.02      0.02      0.02      0.02
ρ = 0.50    EST     0.50      0.50      0.50      0.50      0.49
            SE      0.13      0.13      0.13      0.13      0.13
α           EST     0.00      0.10      0.30      0.50      0.70
            SE      0.14      0.15      0.17      0.18      0.20

True ρ = 0.80:
                   α = 0   α = 0.1   α = 0.3   α = 0.5   α = 0.7
µX = 6.48   EST     6.48      6.48      6.48      6.48      6.48
            SE      0.16      0.16      0.16      0.16      0.16
σX = 1.85   EST     1.84      1.84      1.84      1.84      1.84
            SE      0.13      0.13      0.13      0.13      0.13
µY = 1.43   EST     1.43      1.43      1.43      1.43      1.43
            SE      0.03      0.03      0.03      0.03      0.03
σY = 0.40   EST     0.40      0.40      0.40      0.40      0.40
            SE      0.02      0.02      0.02      0.02      0.02
ρ = 0.80    EST     0.80      0.80      0.80      0.80      0.80
            SE      0.08      0.08      0.08      0.08      0.08
α           EST     0.00      0.10      0.30      0.50      0.70
            SE      0.13      0.13      0.14      0.15      0.16

obtained in Chapter 4:

• Sample 1: T20, R100 and T100. Chapter 4 assumed no damage on the T20 proof load survivors.

• Sample 2: T40 and R100. The proof load survivors are damaged. The damage model is [X∗|Y > 1.38] d= [X|Y > 1.38] − 1.55/[X|Y > 1.38]. The desired likelihood sample s from the distribution of [X|Y > 1.38] is generated based on the observed sample s∗ from the distribution of [X∗|Y > 1.38].

• Sample 3: T60 and R100. No damage is observed on the proof load survivors.

• Sample 4: R20 and T100. No damage is observed on the proof load survivors.

• Sample 5: R40 and T100. The proof load survivors are damaged. The damage model is [Y∗|X > 6.11] d= [Y|X > 6.11] − 0.12/[Y|X > 6.11]. The desired likelihood sample s from the distribution of [Y|X > 6.11] is generated based on the observed sample s∗ from the distribution of [Y∗|X > 6.11].

• Sample 6: R60 and T100. No damage is observed on the proof load survivors.

In the cases of no damage (i.e. Samples 1, 3, 4, and 6), the SPLD with a shoulder approach is applied directly to obtain the MLE of θ. In the cases of damage (i.e.
Samples 2 and 5), the generalized SPLD with a shoulder approach is applied to obtain the MLE of θ. Equivalently, the desired sample s for the likelihood function LPFY(θ; s, y) is generated from the observed sample s∗; once the desired sample s is generated, the SPLD with a shoulder approach is applied to obtain the MLE of θ.

The MLEs of θ based on Samples 1 to 6 are reported in Table 5.5 with the corresponding standard errors; Table 5.5 also provides the size of each sample. The MLEs of ρ tell a consistent story: X and Y are highly correlated. The approximate 95% confidence intervals of ρ contain moderate to high correlation values. If there is a need to verify the strength properties, they need not both be tested: we can measure X and predict Y from it, and vice versa. Given X = x, we could predict Y by the conditional mean of Y given X = x, i.e. µY + σY ρ(x − µX)/σX. The estimated conditional mean of Y given X = x is

µ̂Y + σ̂Y ρ̂(x − µ̂X)/σ̂X,

where µ̂X, σ̂X, µ̂Y, σ̂Y and ρ̂ are the MLEs of µX, σX, µY, σY and ρ, respectively.

Table 5.5: Application: Summer–of–2011 experiment. The MLE of θ and its standard error are reported for each SPLD with a shoulder sample. The approximate 95% confidence intervals (CI) of ρ are also reported.
Groups            No.          µX     σX     µY     σY     ρ     CI of ρ
T20, R100, T100   435   MLE   6.51   1.84   1.45   0.39   0.75   (0.36, 1)
                        SE    0.13   0.10   0.03   0.02   0.20
R20, T100         261   MLE   6.11   1.59   1.43   0.39   0.77   (0.55, 0.99)
                        SE    0.34   0.29   0.03   0.02   0.11
R40, T100         261   MLE   6.26   1.73   1.43   0.40   0.68   (0.46, 0.90)
                        SE    0.23   0.21   0.03   0.02   0.11
R60, T100         261   MLE   6.46   1.84   1.43   0.40   0.70   (0.50, 0.90)
                        SE    0.21   0.19   0.03   0.02   0.10
T40, R100         261   MLE   6.56   1.86   1.53   0.55   0.85   (0.60, 1)
                        SE    0.14   0.10   0.09   0.07   0.13
T60, R100         261   MLE   6.64   1.87   1.47   0.43   0.79   (0.61, 0.97)
                        SE    0.13   0.09   0.05   0.05   0.09

5.3 Summary and conclusions

This chapter illustrates how to incorporate the damage model developed in Chapter 4 into the SPLD with a shoulder approach presented in Chapter 3 when damage occurs. The generalized SPLD with a shoulder approach generates the desired sample s from (Z, U) for the joint likelihood LPFY(θ; s, y), based on the observed sample s∗ from (Z∗, U), using the distributional relationship between [Y|X > l2] and [Y∗|X > l2]. The parameter θ is then estimated by the maximum likelihood method: the MLE of θ maximizes the joint likelihood LPFY(θ; s, y), where s is generated from s∗. Simulation studies indicate that the resulting MLE is accurate. The generalized SPLD with a shoulder approach is illustrated with the Summer–of–2011 experiment data by assessing the dependence between MOR and the log-transformed UTS. The high correlation found in the application has important implications for lumber monitoring programs: if there is a need to verify MOR and UTS, the high correlation between MOR and the log-transformed UTS suggests that we do not need to measure both. Instead, we reduce the cost of the lumber monitoring programs by measuring one of them and predicting the other from it.

Chapter 6

Conclusion

6.1 Summary

This thesis presents a novel approach to estimating the correlation between a pair of normally distributed random variables that cannot be observed simultaneously.
This approach is applied to estimate the relationship between one lumber strength property and another, where the two properties are both measured destructively. The lumber property relationship enables the long term monitoring programs in the North American lumber industry to reduce the monitoring cost, by measuring just one of the two properties and predicting the other from it.

In Chapter 2, we introduced lumber and its major strength properties, such as the modulus of rupture (MOR) and the ultimate tensile strength (UTS). We described the Summer–of–2011 experiment on manufactured lumber, which was designed to determine the relationship between lumber strength properties and was conducted in the summer of 2011. This experiment had:

• three groups of size 87 each with lumber proof loaded in the MOR mode (R60/40/20) and the proof load survivors tested in the UTS mode;

• three groups of size 87 with lumber proof loaded in the UTS mode (T60/40/20) and the proof load survivors tested in the MOR mode;

• one shoulder group of size 174 with lumber tested to failure in the MOR mode (R100); and

• one shoulder group of size 174 with lumber tested to failure in the UTS mode (T100).

The proof load levels for the proof loaded groups were specified to be well spread out, which enabled us to study the proof load effects on the strength of the proof load survivors in Chapter 4. The exploratory analysis of the data collected from the shoulder groups of the Summer–of–2011 experiment suggests a normal distribution for modelling MOE, a Weibull or log-normal distribution for modelling UTS, and a normal or Weibull distribution for modelling MOR.

In Chapter 3, we reviewed the single proof load design (SPLD) approach, which uses a sample from a SPLD experiment and the maximum likelihood method to estimate the correlation between a pair of normally distributed random variables that cannot be observed simultaneously.
We rediscovered the need to redesign the SPLD to resolve the near non-identifiability of the parameters in the SPLD approach at realistic sample sizes. The SPLD with a shoulder is suggested by the simulation studies of the penalized SPLD approach, which constructs a penalty function based on an empirically estimated Bayesian power prior to regularize the parameters. The SPLD with a shoulder experiment assigns specimens to one of two groups: the SPLD group and the shoulder group. The shoulder group in this new design strengthens the sample-to-estimate bridge, and thus resolves the near non-identifiability issue of the SPLD approach and leads to a more accurate, more precise maximum likelihood estimate. Based on simulation studies, the optimal allocation assigns 50% of the specimens to the SPLD group and the rest to the shoulder group. The optimal proof load level for the SPLD group is the level expected to fail about 80% of the specimens during the proof loading procedure; this percentage could be reduced when prior information suggests that ρ is reasonably large.

Chapter 3 assumed that no damage occurred to the proof load survivors. Chapter 4 investigated that assumption and modelled the proof load damage accumulated in the strength of the proof load survivors. We reviewed three damage accumulation models for the so-called duration of load effect (where sustained stress is expected to reduce the strength of lumber). Based on the three damage accumulation models, we derived the relationship between the strength and the residual strength after proof loading under load control. The first of these models is the Madison curve, which suggests an approximately linear relationship between a representative strength of the strength distribution and a representative strength of the residual strength distribution. The second, the U.S. model, also suggests an approximately linear deterministic relationship between the strength and the residual strength.
The third, the Canadian model, proves analytically intractable because of its random effects, leaving the relationship between the residual strength and the strength unclear. Since the Summer–of–2011 experiment was conducted under displacement control, the relationships derived from the duration-of-load models may not be directly applicable to our data sets.

In Chapter 4, we also described an empirical approach to quantifying the accumulated damage in the strength of the proof load survivors. Here we introduced a new approach based on the idea that we are interested in describing the new population of lumber represented by the proof load survivors, and in saying how its distribution may have changed due to the proof load damage. To do this, we used the SPLD with a shoulder approach to estimate theoretical quantiles of the strength distribution, given a sample generated by a SPLD with a shoulder experiment using a low proof load level. The low proof load level is set such that little or no damage will occur in the strength of the proof load survivors. The comparison between the estimated theoretical quantiles of the strength distribution and the empirical quantiles of the residual strength distribution reveals the damage accumulated in the strength of the proof load survivors. The damage is described as the distributional relationship between the strength and the residual strength. The illustration of this empirical approach with the data collected from the Summer–of–2011 experiment suggests that low and high proof load levels leave survivors undamaged, but intermediate proof load levels do damage the weaker survivors. To quantify the damage, we use the damage model [Y∗|X > l2] d= [Y|X > l2] − α/[Y|X > l2], where [Y∗|X > l2] is the residual strength in the Y mode after proof loading in the X mode, [Y|X > l2] is the strength in the Y mode after proof loading in the X mode, l2 is the proof load level, and α ≥ 0.

For the proof loaded group T40, the damage model is [X∗|Y > 1.38] d=
[X|Y > 1.38] − 1.55/[X|Y > 1.38], where X represents the MOR strength, Y represents the log-transformed UTS strength, and [X∗|Y > 1.38] represents the residual strength in the MOR mode after proof loading in the UTS mode up to 1.38. For the proof loaded group R40, the damage model is [Y∗|X > 6.11] d= [Y|X > 6.11] − 0.12/[Y|X > 6.11], where [Y∗|X > 6.11] represents the log-transformed residual strength in the UTS mode after proof loading in the MOR mode up to 6.11, and [Y|X > 6.11] represents the log-transformed strength in the UTS mode after proof loading in the MOR mode up to 6.11. No apparent damage is found on the proof load survivors of the proof loaded groups T60, R60, and T20. In short, the survivor populations under either severe or light proof loading will not differ much from their corresponding cousins from a non-damaged population.

Using the methods from Chapters 3 and 4, we could turn at last in Chapter 5 to the genesis of this thesis: how to break the same board twice to measure the correlation between two strength properties that can only be found by destructive testing. We presented the generalized SPLD with a shoulder approach, which incorporates the proof load damage into the SPLD with a shoulder approach. The key idea is to generate bootstrap versions of the desired (but unobserved) sample from the true strength distribution, based on a sample from the residual strength distribution, using their distributional relationship. This distributional relationship is available from the damage model proposed in Chapter 4. The simulation studies indicate that the generalized SPLD with a shoulder approach provides an accurate estimate of the correlation between a pair of normally distributed random variables that cannot be observed simultaneously.

Through the work presented in Chapters 2 to 5, we finally provide a method for estimating the relationship between two random variables that cannot be observed simultaneously.
The generalized SPLD with a shoulder approach was applied to the Summer–of–2011 experiment data to estimate the correlation between MOR and the log-transformed UTS. Our application found a high correlation between MOR and the log-transformed UTS. The finding suggests that if there is a need to verify the two strength properties, only one of them needs to be measured and the other can be predicted from the measured strength. Given a specimen's MOR, its log-transformed UTS is predicted by the conditional mean of the log-transformed UTS given its MOR, and vice versa.

6.2 Application of our work

The applications of the methods presented in this thesis are based on data collected from the Summer–of–2011 experiment. The results are promising and worth exploring for future application. However, our Summer–of–2011 sample is far from a representative sample of the global population of lumber, which contains a wider range of sizes, species, wood fibre characteristics, and production practices. The experiment contains two bundles of SPF grade-type 1650f-1.5E machine graded lumber and one bundle of SPF No.2 visually graded lumber (870 specimens in total) from one production facility; all were nominal 2-inch by 4-inch lumber. Readers are reminded that broad commercial acceptance of these methods will require studying them with data sets that contain the sources of variation typically found in representative global lumber samples, as well as other lumber sizes. Generalizing the results to the lumber populations would require a large scale study.

The empirical damage model developed in Chapter 4 is in agreement with intuition and with the goal of simplicity. Moreover, the extensive simulation studies show that the model is identifiable. In particular, the estimate of the damage model parameter α is shown to be reasonably close to the true values specified in the simulation studies.
But in application with our experimental data, the empirical damage models for the proof load levels introduced in the Summer–of–2011 experiment are based on samples of small size, due to resource limitations. For example, the sample sizes of the survivors from the proof loaded groups T60 and R60 are 33 and 35, respectively. The T60 and R60 samples are too small to give accurate estimates of their damage model parameters α. Fully assessing the uncertainties in the estimated α's is computationally demanding and beyond the resources available.

Furthermore, additional theory is needed to complete the development whose foundations are laid in this thesis. These extensions are given in the next section.

6.3 Future work

Modelling dependence using a bivariate Weibull distribution or copulas

Our work is developed under the bivariate normal distribution framework, which limits the domain of its applicability. The Weibull distribution is also commonly used in reliability theory; a project for the immediate future is using a bivariate Weibull distribution.

Another possibility for specifying the joint distribution of two lumber strength properties is via copulas. To construct a copula, one of the very first steps is specifying the marginal distributions of the strength properties. These marginal distributions need not be from the same family of distributions. For the MOR and the UTS in our Summer–of–2011 experiment, the marginal distributions could be specified as normal and Weibull, respectively. Once the marginal distributions of the strength properties are specified, we then specify their dependence structure by a bivariate function (a bivariate copula) that takes the distribution functions of the MOR and UTS as its input.
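As a sketch of this idea, a Gaussian copula can join a normal MOR marginal to a Weibull UTS marginal: sample correlated standard normals, push them through the normal CDF to uniforms, then through the inverse marginal CDFs. The marginal parameters and the copula correlation below are illustrative, not estimates from the experiment.

```python
import numpy as np
from scipy.stats import norm, weibull_min

rng = np.random.default_rng(1)

# Hypothetical marginals: normal MOR, Weibull UTS (parameters illustrative).
mor = norm(loc=6.48, scale=1.85)
uts = weibull_min(c=4.0, scale=5.0)
rho = 0.7  # dependence parameter of the Gaussian copula

# Gaussian copula sampling: correlated normals -> uniforms -> marginals.
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=10_000)
u = norm.cdf(z)                 # each column is Uniform(0, 1)
x_mor = mor.ppf(u[:, 0])        # normal marginal
y_uts = uts.ppf(u[:, 1])        # Weibull marginal

print(np.round(np.corrcoef(x_mor, y_uts)[0, 1], 2))  # strong positive dependence
```

The sampled pairs have exactly the specified marginals, while the copula carries all of the dependence; the same construction works with any other pair of continuous marginals.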
The construction of a bivariate copula for the MOR and the UTS is equivalent to specifying the joint distribution of the MOR and the UTS, but allows their marginal distributions to come from different families.

Asymptotic properties

Due to the intractability of the objectives addressed in this thesis, we have adopted the modern practice of using small sample simulation studies to characterize the properties of the methods we have developed. Future work will include the development of an asymptotic theory to shed additional light on the nature of these methods.

Inclusion of nondestructive variables

The thesis has not yet incorporated additional variables that are not measured destructively but are highly correlated with the destructive bivariate random variables. For example, in the lumber application context, we have not incorporated the modulus of elasticity (MOE), which is measured nondestructively and is highly correlated with both the UTS and the MOR. Since MOE is highly correlated with the UTS and the MOR, modelling MOE, UTS, and MOR together as a trivariate case could strengthen the dependence signal between the UTS and the MOR.

Multivariate case

Future work could also generalize the current SPLD with a shoulder approach to model dependence between multivariate random variables that cannot be observed simultaneously. For example, a trivariate case has three random variables that cannot be observed simultaneously; these could be the UTS of a lumber specimen at three different sizes ss1, ss2, and ss3, ranked from the largest to the smallest. The size of a lumber specimen includes its width, depth and length. The size (or volume) effect is a phenomenon where the strength of a lumber specimen depends on the stressed volume; it is often claimed that strength decreases as the size of the lumber specimen increases.
One possible way to verify and quantify the size effect on a lumber specimen is to proof load twice. The specimens of size ss1 are proof loaded in the UTS mode up to a pre–determined load l_ss1. If a specimen survives, it is cut and its size is reduced to ss2. The specimen in the reduced size is proof loaded in the UTS mode up to a pre–determined load l_ss2. Specimens that survive the second proof loading procedure are cut again, and their size is reduced to ss3. The specimens of size ss3 are then tested to failure in the UTS mode. The double proof load design may require two shoulder groups: one in which specimens of size ss2 are tested to failure in the UTS mode, and one in which specimens of size ss3 are tested to failure in the UTS mode. The specimens of the two shoulder groups are not proof loaded. Finding the optimal proof load levels for the proof loaded groups and the optimal specimen allocation among all groups will be two of the problems to be addressed.

Appendix A: Adjusted MOE

Section 2.3 provided a detailed description of the Summer–of–2011 experiment. This appendix explains how the modulus of elasticity (MOE) is adjusted for its moisture content (MC) and how we check whether the experimental groups are homogeneous in terms of the lumber's strength properties.

The MOE of each specimen is adjusted for its MC following a standard procedure, ASTM D1990. The adjustment for MC is conducted bundle by bundle. Let MC_ij be the MC of the jth specimen of the ith bundle, MC_i· be the average MC of the ith bundle, and MOE_ij be the MOE of the jth specimen of the ith bundle.
The adjusted MOE of the jth specimen of the ith bundle is calculated as

    Adjusted MOE_ij = MOE_ij × (1.857 − 0.0237 × MC_i·) / (1.857 − 0.0237 × MC_ij).

Table A.1 summarizes the sample mean and the sample standard deviation of the adjusted MOE's within each group for each bundle. The sample means and sample standard deviations of the adjusted MOE's across groups within each bundle are similar. For each group (T100/R100, T60/40/20, R60/40/20), we also combine all specimens from the three bundles and obtain the sample mean and the sample standard deviation of the adjusted MOE's of each group; these are also similar across groups. Since the adjusted MOE is known to be highly correlated with other lumber strength properties such as MOR and UTS, the groups R100/60/40/20 and T100/60/40/20 are safely assumed to be homogeneous in terms of the lumber's strength.

Table A.1: The sample mean and the sample standard deviation of the adjusted MOE's in million pounds per square inch (psi) for each experimental group within each bundle and across the three bundles. Bundles 1 and 2 are of grade–type SPF (Spruce, Pine, Fir) 1650f-1.5E. Bundle 3 is of type SPF No.2.

    Bundle        T20   T40   T60   T100  R20   R40   R60   R100
    1      Mean   1.66  1.66  1.66  1.66  1.66  1.67  1.67  1.67
           SD     0.23  0.24  0.24  0.24  0.23  0.23  0.24  0.24
    2      Mean   1.64  1.64  1.65  1.64  1.64  1.64  1.64  1.64
           SD     0.23  0.23  0.21  0.24  0.23  0.23  0.23  0.24
    3      Mean   1.43  1.43  1.43  1.43  1.44  1.44  1.44  1.45
           SD     0.26  0.26  0.26  0.27  0.27  0.27  0.28  0.28
    1-3    Mean   1.59  1.59  1.59  1.59  1.59  1.59  1.59  1.59
           SD     0.27  0.27  0.27  0.27  0.27  0.27  0.27  0.27

Appendix B: Estimating the proof load levels

The Summer–of–2011 experiment was designed such that the proof loading procedures of R60/40/20 were expected to break 60%, 40% and 20% of the specimens, respectively. As with bending, we expected 60%, 40% and 20% of the specimens to break during the proof loading procedures of T60/40/20, respectively.
To achieve this, we estimated the 60%, 40% and 20% quantiles of the distributions of the modulus of rupture (MOR) and the ultimate tensile strength (UTS). The goal was not to estimate the proof load levels precisely, but rather to obtain well separated proof load levels from which to estimate the proof load effects. Here we illustrate the quantile estimation procedure only for the groups proof loaded in the MOR mode, because the procedure for the groups proof loaded in the UTS mode is similar.

The Weibull distribution is commonly used to model lumber strength properties such as MOR and UTS. The density of the 2-parameter Weibull distribution is

    f_W(x; β, η) = (β/η)(x/η)^{β−1} exp{−(x/η)^β} for x ≥ 0,

where β > 0 and η > 0. The quantile function of the 2-parameter Weibull distribution is

    Q(p) = η(−ln(1 − p))^{1/β}, 0 ≤ p < 1.    (B.1)

The Weibull distribution is fitted to the MOR data collected from Group R100 of the Summer–of–2011 experiment by the maximum likelihood method. The maximum likelihood estimates of β and η and their standard errors are reported in Table B.1. The maximum likelihood estimates are plugged into Equation B.1 to obtain estimated quantiles Q̂(p) of the MOR distribution.

Table B.1: Maximum likelihood estimates of the Weibull parameters from fitting the R100 data with a Weibull distribution.

    Parameter   Est   SE
    Shape β     3.85  0.05
    Scale η     7.32  0.02

The proof load levels of R60/40/20 are Q̂(0.6), Q̂(0.4) and Q̂(0.2), respectively. For a given p, the standard error (SE) of Q̂(p) is obtained by the Delta method. The approximate 95% confidence interval of Q(p) is (Q̂(p) ± SE).
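Evaluating Equation B.1 at the Table B.1 estimates reproduces the point estimates reported in Table B.2 up to rounding; a minimal sketch:

```python
import math

# Maximum likelihood estimates from Table B.1: shape beta, scale eta.
beta_hat, eta_hat = 3.85, 7.32

def weibull_quantile(p, beta=beta_hat, eta=eta_hat):
    """Equation B.1: Q(p) = eta * (-ln(1 - p))^(1/beta)."""
    return eta * (-math.log(1.0 - p)) ** (1.0 / beta)

# Estimated proof load levels for R20, R40 and R60.
estimates = {p: round(weibull_quantile(p), 2) for p in (0.2, 0.4, 0.6)}
print(estimates)
```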
Table B.2 lists the estimated quantiles and their standard errors together with the approximate 95% confidence intervals.

Table B.2: Estimated quantiles of the MOR distribution with their standard errors (SE), and the approximate 95% confidence intervals (CI).

    Quantile  Est   SE    95% CI of quantile
    Q(0.2)    4.96  0.03  (4.62, 5.30)
    Q(0.4)    6.15  0.03  (5.84, 6.46)
    Q(0.6)    7.15  0.02  (6.85, 7.45)

We convert the estimated quantiles into the estimated numbers of specimens failed by the proof loading procedure, and convert the 95% confidence intervals of the quantiles into 95% confidence intervals for the number of failed specimens (Table B.3). The proof loading procedure starts with R60, then R40 and then R20. For R60, the proof loading procedure with the proof load level set to the estimated quantile Q̂(0.6) is expected to fail around 53 specimens (0.6 × 87 = 52.2). If the number of observed failed specimens at the end of the R60 proof loading procedure falls outside the 95% confidence interval of the number of failed pieces, (47, 58), we treat it as a trigger alert and consider revising our estimated proof load levels for R40 and R20.

Table B.3: Estimated number of failed specimens of R60/40/20 with approximate 95% confidence intervals.

    Group  Est. # of failed  95% CI
    R20    18                (14, 22)
    R40    35                (30, 41)
    R60    53                (47, 58)

We are aware that the approximate 95% confidence interval of the quantile based on the Delta method may be wide, and so we carry out an additional approach to estimating the quantile and its corresponding 95% confidence interval. Previous FPInnovations MOR sample sets from the same mills suggest that the Weibull shape parameter β is about 4. We therefore assume that MOR follows a Weibull distribution with scale parameter η and shape parameter 4, whose quantile function is

    η(−ln(1 − p))^{1/4}.

Then MOR^4 follows an exponential distribution with rate parameter λ, where λ = (1/η)^4.
The approximate 95% confidence interval of 1/λ is

    ( 2 × 174 / (λ̂ χ²(0.975, 2 × 174)),  2 × 174 / (λ̂ χ²(0.025, 2 × 174)) ),

where 174 is the number of specimens in R100, λ̂ is the maximum likelihood estimate of λ, and χ²(p, ν) denotes the p quantile of the chi-squared distribution with ν degrees of freedom. Since η is a function of λ, the 95% confidence interval of η is approximated by

    ( [2 × 174 / ((1/η̂)^4 χ²(0.975, 2 × 174))]^{1/4},  [2 × 174 / ((1/η̂)^4 χ²(0.025, 2 × 174))]^{1/4} ),

where η̂ = λ̂^{−1/4}. Thus the estimated quantile of MOR is

    η̂(−ln(1 − p))^{1/4},

and the approximate 95% confidence interval of the quantile is

    ( (−ln(1 − p))^{1/4} [2 × 174 / ((1/η̂)^4 χ²(0.975, 2 × 174))]^{1/4},  (−ln(1 − p))^{1/4} [2 × 174 / ((1/η̂)^4 χ²(0.025, 2 × 174))]^{1/4} ).

Table B.4 lists the estimated quantiles of the MOR distribution and the corresponding 95% confidence intervals. Table B.5 presents the results of Table B.4 in terms of the number of specimens failed by the proof loading procedure. The approximate 95% confidence intervals in Table B.5 are narrower than those in Table B.3, and so serve as a more conservative standard for monitoring the proof loading procedure.

Table B.4: Estimated quantiles of the MOR distribution with 95% confidence intervals under the assumption that the shape parameter is 4.

    Quantile  Est   95% CI of quantile
    Q(0.2)    5.05  (4.87, 5.25)
    Q(0.4)    6.21  (5.99, 6.46)
    Q(0.6)    7.19  (6.94, 7.47)

Table B.5: Estimated number of broken specimens of R60/40/20 with 95% confidence intervals.

    Group  Est. # of broken  95% CI
    R20    18                (16, 20)
    R40    35                (32, 40)
    R60    53                (48, 58)

Appendix C: Summer–of–2011 experiment Q–Q plots

In Section 4.3.4, we assumed that X and Y followed a bivariate normal distribution, where X and Y denoted the MOR and the log–transformed UTS, respectively. The parameter of the bivariate normal distribution was denoted θ = (μ_X, σ_X, μ_Y, σ_Y, ρ)^T, where ρ was the correlation between X and Y, and μ_X (μ_Y) and σ_X (σ_Y) were the mean and the standard deviation of X (Y), respectively.
The joint likelihood function of θ given the T20, T100 and R100 samples collected from the Summer–of–2011 experiment was optimized to find the maximum likelihood estimate of θ. Then the theoretical quantiles of the strength distribution were evaluated at the maximum likelihood estimate of θ and plotted against the empirical quantiles of the residual strength distribution (a Q–Q plot) for each proof loaded group T60/40 and R60/40/20. The Q–Q plots for T60/40 and R60/40/20 were given by Figures 4.4 to 4.8, respectively. Based on these Q–Q plots, we observed more severe damage to the weaker survivors and negligible damage to the strongest survivors. We postulated, on heuristic grounds and for simplicity, the damage model

    [Y*|X > l2] =_d [(Y − α/Y)|X > l2],

where =_d means equal in distribution, l2 is a proof load level, [Y|X > l2] is the strength in the Y mode after proof loading in the X mode up to the load level l2, [Y*|X > l2] is the residual strength in the Y mode after proof loading in the X mode up to the load level l2, and α ≥ 0. As an approximation, we take [Y|X > l2] > 0.

However, Figures 4.4 to 4.8 ignore the variability in the maximum likelihood estimator of θ, especially the variability in the maximum likelihood estimator of ρ. To address this issue, we reproduce the same Q–Q plot for each proof loaded group but with various ρ values, including the maximum likelihood estimate of ρ. These Q–Q plots are given by Figures C.1 to C.5 for Groups T60, T40, R60, R40 and R20, respectively. We observe the same phenomenon as when ρ is taken at its maximum likelihood estimate: more severe damage to the weaker survivors and negligible damage to the strongest survivors.
When ρ = 0.55 or ρ = 0.65, however, almost all empirical quantiles of [X*|Y > 1.59] appear to be stronger than the estimated theoretical quantiles of [X|Y > 1.59], indicating that a ρ value smaller than 0.65 may not be plausible.

Figure C.1: T60: The empirical quantile of [X*|Y > 1.59] is plotted against the estimated theoretical quantile of [X|Y > 1.59] for various ρ values. The solid line is the 45–degree line. (Four panels: Q–Q plots for ρ = 0.55, 0.65, 0.75, 0.85; axes: theoretical quantile vs. empirical quantile.)

Figure C.2: T40: The empirical quantile of [X*|Y > 1.38] is plotted against the estimated theoretical quantile of [X|Y > 1.38] for various ρ values. The solid line is the 45–degree line. (Four panels: Q–Q plots for ρ = 0.55, 0.65, 0.75, 0.85; axes: theoretical quantile vs. empirical quantile.)

Figure C.3: R60: The empirical quantile of [Y*|X > 7.09] is plotted against the estimated theoretical quantile of [Y|X > 7.09] for various ρ values. The solid line is the 45–degree line. (Four panels: Q–Q plots for ρ = 0.55, 0.65, 0.75, 0.85; axes: theoretical quantile vs. empirical quantile.)

Figure C.4: R40: The empirical quantile of [Y*|X > 6.11] is plotted against the estimated theoretical quantile of [Y|X > 6.11] for various ρ values.
The solid line is the 45–degree line. (Four panels: Q–Q plots for ρ = 0.55, 0.65, 0.75, 0.85; axes: theoretical quantile vs. empirical quantile.)

Figure C.5: R20: The empirical quantile of [Y*|X > 4.96] is plotted against the estimated theoretical quantile of [Y|X > 4.96] for various ρ values. The solid line is the 45–degree line. (Four panels: Q–Q plots for ρ = 0.55, 0.65, 0.75, 0.85; axes: theoretical quantile vs. empirical quantile.)
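To make the postulated damage model concrete, a small simulation (with illustrative parameter values only, not the thesis estimates) reproduces the pattern seen in the Q–Q plots: the quantile shift induced by Y* = Y − α/Y is largest for the weakest survivors and small for the strongest:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters only (not the thesis estimates):
# X = MOR, Y = log-transformed UTS, correlation rho.
mu_x, sd_x, mu_y, sd_y, rho = 6.6, 1.9, 1.6, 0.35, 0.75
alpha, l2 = 0.2, 6.0   # hypothetical damage parameter and proof load level

cov = [[sd_x**2, rho * sd_x * sd_y],
       [rho * sd_x * sd_y, sd_y**2]]
x, y = rng.multivariate_normal([mu_x, mu_y], cov, size=20000).T

# Survivors of proof loading in the X mode (keeping Y > 0 as in the text),
# then the postulated damage Y* = Y - alpha/Y applied to each survivor.
y_surv = y[(x > l2) & (y > 0)]
y_star = y_surv - alpha / y_surv

# Since y -> y - alpha/y is increasing (for y > 0), the quantile shift at
# level p is roughly alpha / Q_surv(p): large in the lower tail, small above.
shift_lower = np.quantile(y_surv, 0.05) - np.quantile(y_star, 0.05)
shift_upper = np.quantile(y_surv, 0.95) - np.quantile(y_star, 0.95)
print(shift_lower, shift_upper)
```

In a Q–Q plot of damaged against undamaged quantiles this asymmetric shift appears as points falling below the 45–degree line in the lower tail while hugging it in the upper tail, which is the qualitative behaviour reported for Figures C.1 to C.5.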