{"Affiliation":[{"label":"Affiliation","value":"Applied Science, Faculty of","attrs":{"lang":"en","ns":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","classmap":"vivo:EducationalProcess","property":"vivo:departmentOrSchool"},"iri":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","explain":"VIVO-ISF Ontology V1.6 Property; The department or school name within institution; Not intended to be an institution name."},{"label":"Affiliation","value":"Mechanical Engineering, Department of","attrs":{"lang":"en","ns":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","classmap":"vivo:EducationalProcess","property":"vivo:departmentOrSchool"},"iri":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","explain":"VIVO-ISF Ontology V1.6 Property; The department or school name within institution; Not intended to be an institution name."}],"AggregatedSourceRepository":[{"label":"AggregatedSourceRepository","value":"DSpace","attrs":{"lang":"en","ns":"http:\/\/www.europeana.eu\/schemas\/edm\/dataProvider","classmap":"ore:Aggregation","property":"edm:dataProvider"},"iri":"http:\/\/www.europeana.eu\/schemas\/edm\/dataProvider","explain":"A Europeana Data Model Property; The name or identifier of the organization who contributes data indirectly to an aggregation service (e.g. 
Europeana)"}],"Campus":[{"label":"Campus","value":"UBCV","attrs":{"lang":"en","ns":"https:\/\/open.library.ubc.ca\/terms#degreeCampus","classmap":"oc:ThesisDescription","property":"oc:degreeCampus"},"iri":"https:\/\/open.library.ubc.ca\/terms#degreeCampus","explain":"UBC Open Collections Metadata Components; Local Field; Identifies the name of the campus from which the graduate completed their degree."}],"Creator":[{"label":"Creator","value":"Noghondarian, Kazem","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/creator","classmap":"dpla:SourceResource","property":"dcterms:creator"},"iri":"http:\/\/purl.org\/dc\/terms\/creator","explain":"A Dublin Core Terms Property; An entity primarily responsible for making the resource.; Examples of a Creator include a person, an organization, or a service."}],"DateAvailable":[{"label":"DateAvailable","value":"2009-04-20T23:36:10Z","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/issued","classmap":"edm:WebResource","property":"dcterms:issued"},"iri":"http:\/\/purl.org\/dc\/terms\/issued","explain":"A Dublin Core Terms Property; Date of formal issuance (e.g., publication) of the resource."}],"DateIssued":[{"label":"DateIssued","value":"1997","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/issued","classmap":"oc:SourceResource","property":"dcterms:issued"},"iri":"http:\/\/purl.org\/dc\/terms\/issued","explain":"A Dublin Core Terms Property; Date of formal issuance (e.g., publication) of the resource."}],"Degree":[{"label":"Degree","value":"Doctor of Philosophy - PhD","attrs":{"lang":"en","ns":"http:\/\/vivoweb.org\/ontology\/core#relatedDegree","classmap":"vivo:ThesisDegree","property":"vivo:relatedDegree"},"iri":"http:\/\/vivoweb.org\/ontology\/core#relatedDegree","explain":"VIVO-ISF Ontology V1.6 Property; The thesis degree; Extended Property specified by UBC, as per 
https:\/\/wiki.duraspace.org\/display\/VIVO\/Ontology+Editor%27s+Guide"}],"DegreeGrantor":[{"label":"DegreeGrantor","value":"University of British Columbia","attrs":{"lang":"en","ns":"https:\/\/open.library.ubc.ca\/terms#degreeGrantor","classmap":"oc:ThesisDescription","property":"oc:degreeGrantor"},"iri":"https:\/\/open.library.ubc.ca\/terms#degreeGrantor","explain":"UBC Open Collections Metadata Components; Local Field; Indicates the institution where the thesis was granted."}],"Description":[{"label":"Description","value":"This research presents a new approach to the computations of control charts for non-\r\nNormal data and for those quality characteristics where the exact sampling distributions of\r\nstatistics for the process mean and standard deviation are not known. We use a class of\r\npower transformations due to Box and Cox (1964) to produce data that conform best to\r\nthe Normal distribution. A statistical test of significance to determine the presence of an\r\nadditional between-sample variation is introduced and an appropriate control chart to\r\ncontrol this extra variation is developed.\r\nThe Likelihood Ratio (LR) statistic, which has been found useful in areas such as\r\ntesting of hypotheses and estimation of confidence intervals, is used to design the control\r\ncharts in the original scale of measurements that are natural for the product. The major\r\nadvantage of the LR method is its relatively rapid convergence to its chi-square asymptote.\r\nWe present a specific application in the wood industry by constructing appropriate\r\ncontrol charts for the final Moisture Content (MC) of kiln-dried lumber.\r\nComparison with a previous study which used the original non-Normal MC data\r\nshowed the importance of an appropriate transformation and the inclusion of the\r\nadditional between-sample variation in the calculations of the control chart limits. 
Without\r\nthese necessary steps the control chart may lose its validity and falsely signal an out of\r\ncontrol situation.\r\nConfidence intervals and control charts for the process mean and standard\r\ndeviation are developed based on the LR statistic for the Weibull and Gumbel distributions. A control chart for the percentile of strength data to maintain a minimum\r\nstrength at a desired level is also presented.\r\nProbability plots to check the Normality assumption of the censored and truncated\r\ndata are presented. Appropriate control charts for the sample estimates of mean and\r\nstandard deviation for the non-Normal censored and truncated data are developed. A\r\nprocedure is given to re-express the control charts for the censored and truncated data in\r\nthe original scale of measurements.\r\nComplex calculations were performed without the need to program, using\r\nthe Mathcad\u2122 computer analysis package. This is a highly desirable property for the\r\nnon-statistically oriented user.","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/description","classmap":"dpla:SourceResource","property":"dcterms:description"},"iri":"http:\/\/purl.org\/dc\/terms\/description","explain":"A Dublin Core Terms Property; An account of the resource.; Description may include but is not limited to: an abstract, a table of contents, a graphical representation, or a free-text account of the resource."}],"DigitalResourceOriginalRecord":[{"label":"DigitalResourceOriginalRecord","value":"https:\/\/circle.library.ubc.ca\/rest\/handle\/2429\/7438?expand=metadata","attrs":{"lang":"en","ns":"http:\/\/www.europeana.eu\/schemas\/edm\/aggregatedCHO","classmap":"ore:Aggregation","property":"edm:aggregatedCHO"},"iri":"http:\/\/www.europeana.eu\/schemas\/edm\/aggregatedCHO","explain":"A Europeana Data Model Property; The identifier of the source object, e.g. the Mona Lisa itself. 
This could be a full linked open data URI or an internal identifier"}],"Extent":[{"label":"Extent","value":"4301594 bytes","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/extent","classmap":"dpla:SourceResource","property":"dcterms:extent"},"iri":"http:\/\/purl.org\/dc\/terms\/extent","explain":"A Dublin Core Terms Property; The size or duration of the resource."}],"FileFormat":[{"label":"FileFormat","value":"application\/pdf","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/elements\/1.1\/format","classmap":"edm:WebResource","property":"dc:format"},"iri":"http:\/\/purl.org\/dc\/elements\/1.1\/format","explain":"A Dublin Core Elements Property; The file format, physical medium, or dimensions of the resource.; Examples of dimensions include size and duration. Recommended best practice is to use a controlled vocabulary such as the list of Internet Media Types [MIME]."}],"FullText":[{"label":"FullText","value":"QUALITY CONTROL WITH NON-NORMAL, CENSORED AND TRUNCATED DATA by Kazem Noghondarian B.Sc., University of Nevada, USA; M.Sc., Arizona State University, USA. A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES MECHANICAL ENGINEERING We accept this thesis as conforming to the required standard THE UNIVERSITY OF BRITISH COLUMBIA October, 1997 \u00a9 Kazem Noghondarian, 1997 In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. 
Department of Mechanical Engineering The University of British Columbia Vancouver, Canada DE-6 (2\/88) ABSTRACT This research presents a new approach to the computations of control charts for non-Normal data and for those quality characteristics where the exact sampling distributions of statistics for the process mean and standard deviation are not known. We use a class of power transformations due to Box and Cox (1964) to produce data that conform best to the Normal distribution. A statistical test of significance to determine the presence of an additional between-sample variation is introduced and an appropriate control chart to control this extra variation is developed. The Likelihood Ratio (LR) statistic, which has been found useful in areas such as testing of hypotheses and estimation of confidence intervals, is used to design the control charts in the original scale of measurements that are natural for the product. The major advantage of the LR method is its relatively rapid convergence to its chi-square asymptote. We present a specific application in the wood industry by constructing appropriate control charts for the final Moisture Content (MC) of kiln-dried lumber. Comparison with a previous study which used the original non-Normal MC data showed the importance of an appropriate transformation and the inclusion of the additional between-sample variation in the calculations of the control chart limits. Without these necessary steps the control chart may lose its validity and falsely signal an out of control situation. Confidence intervals and control charts for the process mean and standard deviation are developed based on the LR statistic for the Weibull and Gumbel distributions. A control chart for the percentile of strength data to maintain a minimum strength at a desired level is also presented. Probability plots to check the Normality assumption of the censored and truncated data are presented. 
Appropriate control charts for the sample estimates of mean and standard deviation for the non-Normal censored and truncated data are developed. A procedure is given to re-express the control charts for the censored and truncated data in the original scale of measurements. Complex calculations were performed without the need to program, using the Mathcad\u2122 computer analysis package. This is a highly desirable property for the non-statistically oriented user. TABLE OF CONTENTS Abstract Table of Contents List of Tables List of Figures Acknowledgment Chapter 1 Introduction Overview Problem Definition Research Objective Chapter 2 Literature Review and Background Material Overview Non-Normal Distributions Censored and Truncated Samples Likelihood Ratio Statistic Chapter 3 Control Charts for Non-Normal Distributions Overview Final Moisture Content of Kiln-dried Lumber The Normality Assumption Data Transformation Control Chart for Sample Means Control Chart for Moving Standard Deviations Control Chart for Sample Standard Deviations Control Chart in the Original Scale of Measurements Summary Chapter 4 Control Charts with Censored Data Overview The Normality Assumption Estimation of the Process Parameters Confidence Intervals for the Process Parameters Control Chart for the Sample ML Estimates of Mean Control Chart for the Sample ML Estimates of Standard 
Deviations Control Chart in the Original Scale of Measurements Summary Chapter 5 Control Charts with Truncated Data Overview 66 The Normality Assumption 67 Confidence Intervals for the Process Parameters 72 Control Chart for the Sample ML Estimates of Mean 74 Control Chart for the Sample ML Estimates of Standard Deviations 76 Control Chart in the Original Scale of Measurements 77 Summary 82 Chapter 6 Control Charts with Material Strength Data Overview 84 Estimation of the Process Parameters 85 Weibull Probability Plot 88 Data Transformation 89 Confidence Intervals for the Process Parameters 91 Control Chart for Sample ML Estimates of Scale Parameter 94 Control Chart for the Sample ML Estimates of Location Parameter 96 Control Chart in the Original Scale of Measurements 97 Control Charts for the Percentiles of Strength Distribution 100 Summary 106 Chapter 7 Conclusions and Recommendations Overview 109 Conclusions 109 Recommendations for Further Research 112 Bibliography 114 LIST OF TABLES Table 3-1 Percentage moisture content of kiln-dried 4\/4 redwood uppers 16 Table 3-2 Sample means and moving standard deviations 27 Table 3-3 Sample means in the original scale of measurements 
34 Table 4-1 Measures of Pollutants in Air Quality Samples (in PPM) 44 Table 4-2 Control limits for the sample ML estimates of means 55 Table 4-3 Control limits for the sample ML estimates of standard deviations 57 Table 4-4 Control limits for the sample MLE of mean in the original scale 62 Table 5-1 Data from a truncated non-Normal distribution 67 Table 6-1 Rupture strength of a new engineering material 86 LIST OF FIGURES Figure 3-1 Probability plot for the data in Table 3-1 19 Figure 3-2 Probability plot for the log transformed data 20 Figure 3-3 Probability plot for the transformed data 23 Figure 3-4 The sample means chart for data in Table 3-1 27 Figure 3-5 Moving additional between-sample standard deviation chart 30 Figure 3-6 Standard deviation chart 31 Figure 3-7 LR curve and the line of chi-square 36 Figure 3-8 Control chart for sample means in the original scale of measurements 37 Figure 4-1 Probability plot for the data in Table 4-1 46 Figure 4-2 Normal probability plot for the transformed data 49 Figure 4-3 Likelihood ratio curve as a function of the process mean 52 Figure 4-4 Likelihood ratio curve as a function of the process standard deviation 53 Figure 4-5 Control chart for the sample ML estimates of mean 56 Figure 4-6 Control chart for the sample ML estimates of standard deviation 58 Figure 4-7 Likelihood ratio curve as a function of mean in the original scale 61 Figure 4-8 Control chart for the ML estimates of mean in the original scale 63 Figure 5-1 The probability plot of the original truncated data 69 Figure 5-2 Probability plot for the transformed data 71 Figure 5-3 Likelihood ratio curve as a function of the process mean 73 Figure 5-4 Likelihood ratio curve as a function of the process standard deviation 74 Figure 5-5 Control chart for the sample ML estimates of mean 76 Figure 5-6 Control chart for the sample ML estimates of standard deviation 78 Figure 5-7 Likelihood ratio curve as a function of the mean in the original scale 80 Figure 5-8 
Control chart for the sample estimates of mean in the original scale 81 Figure 6-1 Plot of Weibull pdf 87 Figure 6-2 Probability plot for data in Table 6-1 89 Figure 6-3 Plot of Gumbel probability distribution function 90 Figure 6-4 LR curve as a function of the location parameter \u03bc 92 Figure 6-5 Sample ML estimates of standard deviations chart 96 Figure 6-6 LR curve for the process mean in the original scale 99 Figure 6-7 Control chart for the sample means in the original scale 100 Figure 6-8 Control chart for the sample tenth percentiles 106 Acknowledgment I would like to offer my sincere thanks to my advisor, Dr. F. Sassani, for his constant supervision and guidance throughout this research. Special thanks to my supervisory committee, Dr. B. Dunwoody and Dr. T. Maness, for many valuable suggestions and comments. I wish to express my gratitude to Dr. Bury, who has made a substantial contribution to both my understanding and appreciation of statistics. I would also like to thank the Ministry of Higher Education, Islamic Republic of Iran for their invaluable scholarship during my Ph.D. program. Last but not least, I would like to thank my wife Fatemeh Ashraf Julaei and my children Iman, Ehsun and Maryam for their constant support and encouragement. DEDICATION Dedicated to the memory of the late Ayatollah Imam Khomaini, the great leader of the Islamic revolution and the founder of the Islamic Republic of Iran. A great man who devoted his entire blissful life to chastity, to self-purification, nearness to God, acquisition of knowledge and struggle for liberation of the oppressed masses from the shackles of the arrogant. A great leader, who led the Iranian nation to end the tyrannical, corrupt, and deviated monarchical regime and replaced it with an Islamic system. 
A great scholar, who presented the doctrine of wilayat-e faqih (The Guardianship of Jurisconsult) as the best form of government and set up a political system based on Divine foundation with Divine injunction. May Almighty God please his exalted soul and give us strength to follow his path. K. Noghondarian Oct. 10, 1997 CHAPTER 1 INTRODUCTION Overview In today's competitive global marketplace, the achievement of high-quality products has become the only solution for manufacturers to survive and prosper. As a result, manufacturers have to have an operational control system to assure the quality of their products. A manufacturer who only performs inspection on the final products, without being concerned about the intermediate stages of the manufacturing process, is by no means enhancing the quality of his products. The final inspection may give an estimation of the performance of the process, but does not prevent nonconforming products from reaching the customer. Studies have shown that only about 80% of nonconforming products are detected during any 100% inspection; see Ryan (1989, p. 9). Therefore, to ensure quality, emphasis should be placed on the intermediate stages of the manufacturing process and the use of a powerful tool known as Statistical Process Control (SPC). SPC, which uses the concepts of statistics to control the manufacturing process, was introduced decades ago. Unfortunately, it did not receive wide attention in North America and Europe until recent years. The essential idea of SPC is to improve the quality of products continuously through constant applications of statistical methods to process control. A major goal of SPC is to detect the occurrence of any assignable causes of disturbances in the process as soon as possible so that investigation of the process and corrective actions can be taken before many nonconforming products reach the final stage of the process. 
In an ideal manufacturing world, any production would turn out perfect products. No quality control would then be necessary because every item that comes off the production line would conform 100% with specification. Unfortunately, in the real world of manufacturing, many factors combine and interact to make each unit unique. Temperature, humidity, materials used, and machine settings all vary and affect the product. The parts of a machine are not fixed entities; they wear out, change dimensions, and lose their adjustment. Also, the people who run the machines differ in their behavior. A single operator forgets things over time and may fail to communicate with others, and when many operators are involved, opportunities to vary from the standard procedure are multiplied. Therefore, uniformity and stability over time are not something that may exist as a natural characteristic of a manufacturing process, but rather something that we must work hard to achieve. The Shewhart control chart (Montgomery, 1991) is a valuable tool in this regard. Its importance is the simple fact that successive observations are plotted in time order, so that time patterns of normal and abnormal behavior of the process can be clearly seen and analyzed by a viewer who is familiar with the process. To make it easier to assess what might and might not be normal process behavior, control limits are often placed at \u00b13\u03c3 about the process mean \u03bc. Since many processes tend to remain stable over short periods of time, a measure of this short-term standard deviation, \u03c3, is usually used to judge normal behavior. This standard deviation is determined from short lengths, called rational subgroups, of normal operation of the process. Control limits are determined by allowing a process to run untouched and then analyzing the results using a set of mathematical formulas. There are two kinds of variation. 
The first is that which results from many small causes: minor variations in the worker's ability, the clarity of procedures, the capability of the machinery and equipment, and so forth. These are \"common causes\" and can often only be changed by management. The other form of variation is usually easier to eliminate. When a machine malfunctions, or an untrained worker is put on the job, or a defective material arrives from a vendor, corrective actions can be taken rather easily. Deming (1986) calls these \"special causes.\" They show up on control charts as points outside the limits. The formula for the control limits is designed to provide an economic balance between searching too often for special causes where there are none (i.e., a false alarm, or Type I error), and not searching when a special cause may be present (i.e., failing to give an alarm, or Type II error). A system can best be improved when special causes have been eliminated and it has been brought into statistical control. At that point, management can work effectively on the system, looking for ways to reduce variation. Once a system is in control, control charts can be used for monitoring so as to immediately detect when something goes wrong. Line operators can record the data and take action, shutting down the line, if necessary. A point need not be outside the limits to indicate action. Abrupt shifts or distinct trends within limits are also signals for investigation. Generally speaking, there are two kinds of control charts: charts for variable data and charts for attribute data. Variable data control charts are useful when the parameter of interest can be conveniently measured numerically, for example, the measurement of the diameter of a cylindrical part, whereas attribute data control charts are useful when the parameter of interest cannot be conveniently measured numerically, for example, the inspection of the finished surface of a cylindrical part. 
X and R charts belong to the category of variable data charts, and p and c charts belong to the category of attribute data charts. Problem definition Conventional control charts for variables are based on the assumption that the quality characteristic of interest is approximately Normally distributed. If the process shows evidence of a significant departure from Normality, however, then the control limits calculated may be entirely inappropriate. Control chart techniques intended for Normally distributed processes can lead to serious errors when applied to data obtained from other processes. In such cases, it will usually be best to determine the control limits for the individual control chart based on the probabilities of the correct underlying distribution. These probabilities could be obtained from a probability distribution fit to the data. Another approach would be to transform the original variable to a new variable that is approximately Normally distributed, and then apply control charts to the new variable. However, working with the transformed data presents a significant problem in quality control work: the units of the transformed data are not comprehensible to the production workers who should use control charts to monitor their work. Therefore, a procedure is needed to re-express the control chart in the original scale of measurements and specify its control limits in the units that are natural for the product. A control chart is a graphical display of a quality characteristic, obtained from a sample, plotted against the sample number or time. Usually, quantities computed from the observations in the sample, such as the sample mean and sample standard deviation, are used to monitor the process mean and process standard deviation, respectively. 
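The conventional construction just described, control limits at three standard errors about the center line, can be sketched in a few lines. This is a minimal illustration with hypothetical subgroup data; the c4 bias correction for the averaged standard deviation is omitted for brevity, and a real chart would use the constants tabulated in SPC references.

```python
import math
import statistics

def xbar_chart_limits(subgroups):
    # Center line: grand mean of the rational-subgroup means.
    means = [statistics.mean(g) for g in subgroups]
    center = statistics.mean(means)
    # Short-term sigma estimated from the average within-subgroup
    # standard deviation (c4 bias correction omitted for brevity).
    s_bar = statistics.mean([statistics.stdev(g) for g in subgroups])
    n = len(subgroups[0])
    half_width = 3 * s_bar / math.sqrt(n)
    return center - half_width, center, center + half_width

# Hypothetical rational subgroups of size 4.
subgroups = [[9.8, 10.1, 10.0, 9.9],
             [10.2, 10.0, 9.7, 10.1],
             [9.9, 10.3, 10.0, 9.8]]
lcl, cl, ucl = xbar_chart_limits(subgroups)
```

A subgroup mean plotted outside (lcl, ucl) would then signal a possible special cause.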
Sample means obtained from Normal and Exponential distributions have known Normal and Gamma probability distributions, respectively, and as a result, control charts for sample means can easily be developed to monitor the process parameters. Sample means obtained from other distributions do not have known exact probability distributions. Therefore, a method is needed to develop control charts to monitor the process parameters. Research objective The Likelihood Ratio (LR), a statistic which has been found useful in areas such as testing of hypotheses and estimation of confidence intervals, can also be used to establish rather accurate control chart limits for quality characteristics where the exact sampling distributions of statistics for the process parameters are not known. The major advantage of the LR method is its relatively rapid convergence to its chi-square asymptote. However, in one way the LR method is disadvantageous: it requires more complex computations. The objective of this research is to present a new approach for the computations of control charts for non-Normal data and for those quality characteristics where the exact sampling distributions of statistics for the process mean and standard deviation are not known. This research uses a class of power transformations to produce data that conform best to the Normal distribution, and presents a method that enables the quality control engineer to design the control charts in the original units of measurements that are more natural for the product. Also, a specific application in the wood industry will be presented, constructing appropriate control charts for the moisture content data from kiln-dried wood. LR-based control charts for censored and truncated samples obtained from non-Normal distributions will also be dealt with. In both cases, the probability distributions of sample parameters are not known and the conventional methods for developing control charts cannot be used. 
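The class of power transformations referred to above (Box and Cox, 1964) can be sketched by choosing the exponent that maximizes the profile log-likelihood. The data below are hypothetical, and the coarse grid search stands in for a proper optimizer (the thesis itself performs such calculations in Mathcad).

```python
import math
import statistics

def boxcox(y, lam):
    # Box-Cox power transform; the log transform is the lam == 0 case.
    if lam == 0:
        return [math.log(v) for v in y]
    return [(v ** lam - 1) / lam for v in y]

def boxcox_loglik(y, lam):
    # Profile log-likelihood of lambda (Box and Cox, 1964).
    z = boxcox(y, lam)
    return (-0.5 * len(y) * math.log(statistics.pvariance(z))
            + (lam - 1) * sum(math.log(v) for v in y))

def best_lambda(y):
    # Coarse grid search over lambda in [-2, 2].
    grid = [i / 10 for i in range(-20, 21)]
    return max(grid, key=lambda lam: boxcox_loglik(y, lam))

# Hypothetical right-skewed measurements (must be positive).
y = [1.2, 0.8, 3.5, 2.1, 0.5, 6.3, 1.7, 0.9]
lam_hat = best_lambda(y)
```

The transformed values boxcox(y, lam_hat) are then checked for Normality (e.g., with a probability plot) before conventional limits are applied.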
The computations in this research are performed using the Mathcad (1996) computer analysis package, which is one of the most popular computational tools available today. This is a highly desirable property for non-statistically oriented users. The chapters are organized as follows. Chapter 2 reviews some of the early works on the analysis of non-Normal distributions, censored and truncated samples. Chapter 3 begins with a brief review of some of the basic concepts of testing the Normality assumption and transformation, then presents control charts for the transformed data and a control chart for sample means based on the original units of measurements. Chapter 4 discusses control charts for sample maximum likelihood estimates of means and standard deviations for censored non-Normal data. Chapter 5 presents control charts based on the LR statistics for quality characteristics with truncated distributions. Chapter 6 discusses control charts for sample means and percentiles of material strength test data with a Weibull distribution. Chapter 7 summarizes conclusions as well as research contributions and gives some further research directions. CHAPTER 2 LITERATURE REVIEW AND BACKGROUND MATERIAL Overview This chapter presents some of the early works on the analysis of non-Normal distributions, censored and truncated samples and the Likelihood Ratio method. Non-Normal distributions The quality characteristics of products sometimes do not follow the Normal distribution, and assuming Normality will result in inappropriate control limits. 
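The cost of wrongly assuming Normality can be made concrete. Under a correct Normal model, the two-sided false-alarm risk of 3-sigma limits is about 0.0027 per plotted point, while for a skewed distribution the upper-tail probability beyond the mean plus three standard deviations can be an order of magnitude larger. A small sketch, using the Exponential distribution purely as an illustrative skewed case:

```python
import math

def normal_tail_two_sided(k):
    # P(|Z| > k) for a standard Normal Z: 1 - erf(k / sqrt(2)).
    return 1 - math.erf(k / math.sqrt(2))

def exponential_tail_beyond(k):
    # P(X > mean + k * sd) for Exp(1), where mean = sd = 1,
    # is exp(-(1 + k)).
    return math.exp(-(1 + k))

alpha_normal = normal_tail_two_sided(3)       # about 0.0027
alpha_exponential = exponential_tail_beyond(3)  # about 0.0183
```

The Exponential upper tail beyond the 3-sigma point is several times the entire two-sided Normal risk, which is exactly why the limits computed under Normality become inappropriate.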
Several authors have studied the effects of non-Normality on the conventional X and R charts for different distributions like the Burr family of distributions discussed by Burr (1967), Gamma, uniform and Normal mixture distributions by Schilling and Nelson (1976), Normal mixture and double exponential distributions by Balakrishnan and Kocherlakota (1986), and Tukey's \u03bb-family of symmetric distributions by Chan, Hapuarachchi and Macpherson (1988). Robust versions of X and R charts have been discussed by Langenberg and Iglewicz (1986) and Rocke (1989). An alternative approach to those surveyed above may be the use of appropriate transformation so that the transformed data are approximately Normally distributed, and then use 3-sigma limits for the transformed variable. Various normalizing transformations have been suggested in the literature. Some of the better known are general transformations like the power transformations studied by Box and Cox (1964), or some transformations that are particularly structured for given distributions, like the arc-sin transformation for binomially distributed data or certain root transformations for Poisson variates [1]. Schneider, Hui, and Pruett (1992) used Box and Cox transformations in their study of the control charts for environmental data. Very few papers dealing with non-Normal distributions in quality control have appeared in the literature. Hosono, Ohta, and Kase (1981) developed single sampling plans for Weibull distributions. Hosono (1984) also introduced cumulative sum charts for the mean of extreme-value distributions with the assumption that the shape parameter of the corresponding Weibull distribution was known, with the scale parameter as the specified target value of the process. Variable sampling plans based on Weibull distribution with complete or Type I and II censoring were developed by Fertig and Mann (1980) and Nelson (1982). 
Schneider (1989) also developed Failure-Censored Variables Sampling Plans for Lognormal and Weibull distributions. Shewhart-type \"percentile charts\" for Weibull and Lognormal distributions based on Monte Carlo simulation were proposed by Padgett and Spurrier (1990). Censored and truncated samples Censoring is a well-known term in statistical literature and refers to the situation in which the values of some of the observations of the sample are unknown. Consider, for instance, a fatigue test, where the experimenter decides to withdraw some of the test specimens prior to their failure. In this case some of the lifetimes are not known, i.e. they are censored. On the other hand, truncated samples are those from which certain population values are entirely excluded. [1: See Ryan (1989) for a discussion of some of these transformations.] The estimation of the parameters of a censored sample taken from a Normal population and related statistical intervals have previously been considered by many researchers who have used different methods. In addition to the method of least squares and the method of maximum likelihood, many other methods have been used to obtain simple but highly efficient estimators (for a survey of some of these methods see, for example, Schneider, 1986, pp. 3 and 57). Herd (1960) used the term \"multicensored\" samples and suggested a probability plotting of multiply censored data much the same as the Kaplan-Meier (1958) method. Mann (1971) introduced best linear invariant estimators (BLIEs) for Weibull parameters under progressive censoring. Cohen (1991) and Nelson (1982), in their respective books, presented the asymptotic (i.e. large-sample) theory for maximum likelihood estimators and confidence limits. Hahn and Meeker (1991) demonstrated the practical relevance and construction of confidence intervals and discussed several practical issues in the analysis of progressively censored data. 
Viveros and Balakrishnan (1994) used a conditional method of inference to derive exact confidence intervals for several life characteristics, such as location, scale, quantiles, and reliability, when the data are Type II progressively censored.

Likelihood ratio statistic

The likelihood ratio is familiar in other branches of statistical analysis, but its use in quality control does not appear to have been exploited. The likelihood ratio statistic has been used by many researchers to test hypotheses (Brownless 1960; Kendall 1961; Keeping 1962) and to construct confidence intervals (Mohan 1979; Owen 1988; Chang 1989; Baxter 1993; Qin 1993, 1994; Murphy 1995). Chang (1989) presented a methodology based on the likelihood-ratio test for construction of confidence intervals for a Normal mean following a group sequential test. Owen (1988) introduced the empirical likelihood ratio method in nonparametric models. Qin (1994) used the likelihood ratio to construct confidence intervals in a semiparametric problem, in which one model is parametric and the other is nonparametric. Murphy (1995) considered binomial and Poisson extensions of the likelihood in an attempt to find meaningful likelihood ratio hypothesis tests and subsequent confidence intervals in a semiparametric setting. He defined confidence intervals for the survival function and the cumulative hazard function for failure time data.

Basic concepts

The likelihood ratio test is based on the ratio of the likelihood function for a sample of observations computed under the null hypothesis to the same likelihood function computed under the alternative hypothesis. This ratio cannot be greater than 1 and is positive, since it is a ratio of products of probability functions, which must always be positive (Brownless, 1960, p. 88). A small value of the likelihood ratio indicates that the likelihood computed under the null hypothesis is relatively small, and so we should reject the null hypothesis.
Conversely, a value of the ratio close to 1 indicates that the null hypothesis is very plausible and should be accepted. Consider the following ratio of likelihoods for a sample of observations:

LR(\theta) = \frac{L(\theta)}{L(\hat{\theta})}   (2-1)

The method of likelihood ratio (LR) is based on the fact that, under very general conditions as described by Kokoska and Nevison (1994), for large n, -2\ln LR(\theta) has approximately a chi-square distribution with degrees of freedom equal to the number of unknown parameters in the likelihood function L(θ). According to Keeping (1962), when the parent population is Normal, the chi-square distribution of -2\ln LR(\theta) holds exactly, even for sample size n = 2. L(θ̂) in (2-1) is the likelihood function with the unknown parameter replaced by its maximum likelihood estimator. See Kendall (1961) for the basic properties of the likelihood ratio statistic. Lawless (1982) has shown that the distribution of the likelihood ratio statistic approaches its limiting chi-square distribution considerably more rapidly than the distribution of the maximum likelihood estimator approaches its limiting Normal distribution. Therefore, using the LR statistic we can find more accurate confidence limits for small to moderately-sized samples. Given the likelihood function of an obtained sample and the maximum likelihood estimators of the parameters μ and σ, the endpoints of a (1-α)-level confidence interval on the process parameter μ are obtained from the condition

-2 \ln \frac{L(\mu, \hat{\sigma}(\mu))}{L(\hat{\mu}, \hat{\sigma})} \le \chi^2_{1,1-\alpha}   (2-2)

where χ²_{1,1-α} is the (1-α) percentile of the chi-square distribution with ν = 1 degree of freedom, and σ̂(μ) is the process standard deviation as a function of the process mean, defined by solving the maximum likelihood equation for σ in terms of μ. The upper and lower (1-α)-level confidence limits for the parameter μ are then obtained from (2-2). A similar procedure can be used to find a (1-α)-level confidence interval on the scale parameter σ.
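For a Normal sample, condition (2-2) can be profiled in closed form: substituting σ̂²(μ) = σ̂² + (x̄ − μ)² gives -2 ln LR(μ) = n·ln(1 + (x̄ − μ)²/σ̂²), so the interval endpoints solve a simple equation in (x̄ − μ)². The following Python sketch illustrates this special case only; the function name and the hard-coded 95% χ²₁ cutoff of 3.841 are our choices, not part of the thesis, which works in Mathcad.

```python
import math

def lr_confint_mean(x, chi2_1=3.841):
    """Likelihood-ratio confidence interval for a Normal mean.

    chi2_1 is the chi-square percentile with 1 degree of freedom
    (3.841 corresponds to a 95% interval).
    """
    n = len(x)
    xbar = sum(x) / n
    s2 = sum((v - xbar) ** 2 for v in x) / n  # ML (biased) variance
    # -2 ln LR(mu) = n * ln(1 + (xbar - mu)^2 / s2) <= chi2_1
    # => (xbar - mu)^2 <= s2 * (exp(chi2_1 / n) - 1)
    half = math.sqrt(s2 * (math.exp(chi2_1 / n) - 1.0))
    return xbar - half, xbar + half
```

Because the profile is monotone in (x̄ − μ)², the endpoints are symmetric about x̄; for non-Normal models the endpoints must instead be found numerically from (2-2).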
The endpoints of a confidence interval on the parameter σ are obtained from the condition

-2 \ln \frac{L(\hat{\mu}(\sigma), \sigma)}{L(\hat{\mu}, \hat{\sigma})} \le \chi^2_{1,1-\alpha}   (2-3)

where μ̂(σ) is the process mean as a function of the process standard deviation, defined by solving the maximum likelihood equation for μ in terms of σ. The endpoints of the (1-α)-level confidence limits for the parameter σ are then obtained from (2-3).

CHAPTER 3
CONTROL CHARTS FOR NON-NORMAL DISTRIBUTIONS

Overview

Conventional Shewhart control charts for variables are based on the assumption that the underlying distribution of the obtained data is Normal. However, there are numerous industrial situations where such an assumption is invalid and could lead to inaccurate quality control judgments. In this chapter, a specific quality characteristic with a non-Normal distribution will be investigated. The moisture content (MC) of kiln-dried lumber is a good example, as addressed in the following section.

Final moisture content of kiln-dried lumber

Variables control charts may be used as effective tools in the control of the final moisture content of kiln-dried lumber. The removal of moisture from lumber is usually accomplished by exposing the lumber to outdoor atmospheric conditions or to the higher temperatures of a dry kiln. There are two main reasons for kiln drying lumber:

1. To reduce its moisture content more rapidly than can be accomplished in air drying. While air drying usually requires several months or a season, kiln drying can be done in a few days. Rapid drying results in a more flexible operation and reduces capital tied up in yards. It also reduces insurance costs and taxes.

2. To reduce the moisture content of lumber below that attainable in air drying. For cases where wood is to be used under the driest conditions, lumber must be dried to a low moisture content so that little or no shrinkage will take place.

Improperly dried lumber causes problems for both the manufacturer and the user.
If lumber is under-dried, the end result can be warpage, shrinkage, cracks, splits, and related problems when the wood dries while in service. On the other hand, over-dried wood, which eventually equalizes to a higher moisture content, wastes energy (Rice and Shepard, 1993). Lumber must be dried to a prespecified, uniform moisture content to maintain dimensional stability in service and to improve machining operations. Drying lumber to a uniform moisture content significantly increases its durability and retains its usefulness and value. However, drying defects can often occur whenever lumber is over- or under-dried. Application of control charts to the final moisture content of kiln-dried lumber can help prevent both over-dried and wet boards and maintain a uniform moisture content. In the following, we demonstrate the construction of the appropriate control charts using an illustrative example.

Illustrative example

The data set in Table 3-1 consists of twenty samples, each containing five measurements of the final MC of kiln-dried redwood lumber, expressed as a percentage of the oven-dry weight. For the purpose of drying lumber, MC is the water contained in a sample of wood expressed as a percentage of the mass of dry wood of the sample, assuming that all water has been removed (small wood samples must be dried in a small-sample drying oven until they show no further loss of weight; they are then considered dry). For example, if the wet weight of a piece of sample is 25 g and its dry weight is 20 g, the MC would be

\frac{25 - 20}{20} \times 100 = 25\%.

Sample boards were obtained randomly from each kiln charge. Samples were cut about 1 ft from the end of the board, as the very end may be drier than the remaining sample board. The location in the charge from which each sample was obtained and the identity of the kiln charges were recorded.
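The MC definition above amounts to one line of arithmetic; a small Python helper (the function name is ours, not the thesis's) makes it explicit:

```python
def moisture_content(wet_g, dry_g):
    """Moisture content as a percentage of oven-dry mass:
    MC = (wet - dry) / dry * 100."""
    return (wet_g - dry_g) / dry_g * 100.0
```

With the example weights from the text, `moisture_content(25, 20)` gives 25%.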
Oven sections were cut from the sample boards and weighed as soon as possible after the samples had been chosen, so as to minimize errors. The MC of each sample, expressed as a percentage of its oven-dry weight, is calculated to the nearest tenth of a percent.

Table 3-1. Percentage moisture content of kiln-dried 4/4 redwood uppers.

Sample    Kiln and
number    charge number    Sample values
   1          1-1          8.0   7.5   9.2   8.2   7.0
   2          1-1          8.5   8.4   9.5   8.9   7.8
   3          2-1          7.6   7.6   8.7   8.3   8.3
   4          2-1          7.3   8.2   8.8   8.9   8.1
   5          1-2          8.1   7.7   8.1   8.4   8.1
   6          1-2          7.8   8.6   9.4   9.1   8.9
   7          2-2          8.4  10.0   9.5  12.1  12.0
   8          2-2          9.3   9.7  10.2  10.8  10.0
   9          1-3          8.9   8.8  10.4   8.9   8.0
  10          1-3          8.5  10.6   9.0   8.8   9.9
  11          2-3          7.7   9.2  10.3   9.0  10.8
  12          2-3          8.5   9.7   7.2   8.6   8.3
  13          1-4          8.9   9.0   8.0   8.0   9.4
  14          1-4          8.2   8.3   7.7   8.7   8.1
  15          2-4          8.4  10.8   9.9  13.1   8.5
  16          2-4         11.1  12.2   8.8  10.1   9.8
  17          1-5          9.0  10.1   8.5   9.1   8.6
  18          1-5          7.6   8.8   8.5   8.3   8.4
  19          2-5          7.0   9.8   8.1   9.4   8.6
  20          2-5          7.8   7.7   9.2   8.7   8.2

Enough samples to make two subgroups per kiln charge were obtained. The sample number, kiln and charge number, and the MC of each sample were recorded. These data were first reported by Pratt (1953), who used the conventional Shewhart control charts for variables without applying any transformation or checking the significance of the additional between-sample variation. These data are used here for the purpose of comparison and to show the significance of a more thorough analysis of the final MC data.

The Normality assumption

The quality characteristic of interest sometimes does not follow the Normal distribution, and assuming Normality will result in inappropriate control limits. Therefore, it is imperative to test this assumption before constructing the control charts. We can check the Normality assumption graphically with a probability plot. The probability plot indicates how the observations may deviate from what would be expected if they were Normally distributed.
The Normal probability plot is the cumulative frequency distribution (cfd), constructed by ranking the observed sample values from small to large, assigning each value a rank i, and calculating the plotting position on the probability scale. There are many choices of plotting position when plotting the sample cfd. Blom (1958), Mandel (1964), and Press et al. (1986) showed that the plotting position

p_i = \frac{i - 0.375}{N + 0.25}   (3-1)

closely approximates the order statistics of the Normal distribution. p_i is the proportion of samples that are less than or equal to x_(i), with a slight "continuity" correction. The Normal cfd as a function of the standard Normal deviate can be obtained by linear interpolation between any two sets of extreme values of the standard Normal deviate and its corresponding cfd (z, cfd). For example, interpolation between the two points (-2.5, 0.006) and (2.5, 0.994) results in the equation

y_i = 0.5 + 0.1975 \cdot z_i   (3-2)

Using y_i for the ordinate of the model plot causes it to appear as a straight line. The probabilities p_i are converted to standard Normal order scores z_i, and the ordinate of the data plot will also produce a straight line if the data came from a Normal distribution. Using Mathcad notation and functions, the data are read, sorted, and assigned plotting positions:

N := 100    i := 1 .. N    x := READ(data)    x := sort(x)    p_i := (i - 0.375)/(N + 0.25)   (3-3)
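The same plotting-position computation can be sketched in Python instead of Mathcad; `statistics.NormalDist` supplies the standard Normal order scores z_i, and the function name is our own:

```python
from statistics import NormalDist

def normal_plot_coords(x):
    """(z_i, x_(i)) pairs for a Normal probability plot, using the
    Blom-type plotting position p_i = (i - 0.375) / (N + 0.25)."""
    xs = sorted(x)
    n = len(xs)
    pts = []
    for i, v in enumerate(xs, start=1):
        p = (i - 0.375) / (n + 0.25)
        z = NormalDist().inv_cdf(p)   # standard Normal order score
        pts.append((z, v))
    return pts
```

Plotting the pairs (z_i, x_(i)) should give a roughly straight line when the data are Normal, which is exactly the visual check described above.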
A * \u00b0> for X > 0, (3-4) [ log X, A = 0, Here, the data values X are raised to some power X, then 1 is subtracted from the modified data and finally divided by X. These steps are fully explained in Box and Cox (1964), but two reasons are: (a) for negative values of X the transformed data (y), do not 20 reverse their order; and (b) the value X = 0 then corresponds to a logarithmic transformation. The logarithm is, therefore, a power transformation which fits naturally into the transformation sequence. We assume that for some unknown X, the transformed observations [(x, )A - l]y\/A satisfy the full Normal theory assumptions, i.e. are independently Normally distributed with constant variance. The probability density for the original observations, is obtained by multiplying the Normal density of the transformed variable by the Jacobian of the transformation. The Jacobian of the transformation is obtained by differentiating the transformed variable with respect to the original variable, d ( xx - 1 dx x A -(3-5) = X The likelihood function of the obtained sample is then (2-*)' \/ 2 . f r N \u2022II\" I - l 2a1 \u2022ii(*.r i - l (3-6) It is usually more convenient to work with log-likelihood, LL(n,o), function: LL(u,a) = - - ^ - l n ( 2 n)- \u00ab - l n ( < r ) - \u00a3 l-a' \u2022 + ( A - l ) - I l n U ) 0-7) 21 We will use the maximum likelihood estimator (MLE) approach to find our estimator for the particular transformation parameter X, then we will find the mean u, and standard deviation a of the transformed data. For normally distributed data, the ML estimators for u and a (biased) are: 1 N (x,)'-l 1 , \u00a3 ( * , ) ' -1 3 \\1 J-* N ft A (3-8) Substituting (3-8) in the log-likelihood function (3-7) will result in an equation in terms of X only. Differentiating this equation with respect to X and equating to zero, gives a maximum likelihood estimating equation for X. 
The solution of this equation is straightforward with Mathcad, although at first sight the numerical calculations appear rather involved. Starting from the guess λ := 0.1, Mathcad's root function applied to the derivative of the profile log-likelihood gives

\lambda := \mathrm{root}\!\left(\frac{d}{d\lambda} LL^{*}(\lambda),\, \lambda\right) = -2.168   (3-9)

where LL*(λ) is the log-likelihood (3-7) with μ and σ replaced by their ML estimators (3-8). Using the Box-Cox transformation (3-4) with λ = -2.168 results in an approximately Normal distribution for the MC data in Table 3-1. The probability plot of the transformed data is shown in Figure 3-3. The estimators of μ and σ (the unbiased divisor N − 1 is used for σ̂) are:

\hat{\mu} = \frac{1}{N}\sum_{i=1}^{N}\frac{x_i^{\lambda}-1}{\lambda} = 0.457, \qquad \hat{\sigma} = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(\frac{x_i^{\lambda}-1}{\lambda}-\hat{\mu}\right)^2} = 0.001   (3-10)

Figure 3-3. Probability plot for the transformed data.

The maximum likelihood estimator σ̂ reflects both between-sample and within-sample variability. If the sample means differ significantly, this will cause σ̂ to be too large and the process standard deviation to be overestimated. See Montgomery (1991, p. 215) for a discussion of these two sources of variation. In processes where only simple random variation exists, between-sample variation is not significant. The estimate of the within-sample variation alone should then be taken as the estimate of the process variation and used in calculating the control chart limits. For some processes, the variation of the sample means will be more than we would expect from random variation alone. The process mean may vary slightly, causing additional variation. In this case the between-sample variation becomes significant, and its extra value must be added to the within-sample variation before calculating the control chart limits. The estimate of the within-sample standard deviation s_w is the average standard deviation within samples (see Wetherill and Brown, 1991, p. 57):

s_w = \sqrt{\frac{1}{m}\sum_{i=1}^{m} s_i^2}   (3-11)

where

s_i^2 = \frac{1}{n-1}\sum_{j=1}^{n}\left(y_{ij} - \bar{y}_{i.}\right)^2, \qquad y_{ij} = \frac{x_{ij}^{\lambda}-1}{\lambda}, \qquad \bar{y}_{i.} = \frac{1}{n}\sum_{j=1}^{n} y_{ij}
= -\u2022 Y \u2014 n % A (subscript dot denotes summation over suffix j.) Between-sample standard deviation sb, is simply the standard deviation of the m sample means: 1 m \u2014 2 , \u2014 7 ' \u00a3 ( * \u00ab . - J \" ) \\m-l i = 1 v ' (3-12) when a process is in control, sw estimates the process standard deviation a, and sb estimates o\/4n. Therefore a formal F-test of whether or not there is a significant between-sample variation can be set up using the following ratio F = (3-13) ' See Wether i l l and B r o w n , 1991 P. 57. 24 For data in Table 3-1, and using formulas (3-11) and (3-12), the values of sb and sw are -4 (3-14) sb = 6.966 10\" sw = 8.665 10\"4 Using (3-13), the value of the F statistic is F= 3.232 (3-15) This value is greater than 1, indicating that there is some additional systematic variation between sample means. We can formally test the significance of this F value by comparing it with the critical F value at a = 0.05 with numerator degrees-of-freedom u = m - 1 = 19, and denominator degrees-of-freedom v = m{n -1) = 80. The 5% critical F value with 19 and 80 degrees-of-freedom is 1.718, computed as follows: \u00ab:= 19 v:=80 F(c): = 2 V ' 1 1 J \u00b0 V 7 fl - - I T \u2014 \u2022 v dt (3-16) c:=1.7 xc:= root(F(c)- 0.95,c) xc= 1.718 Since the sample F value is larger than the critical F value of xc, we conclude that the additional between-sample variation is significant. The standard deviation due to this variation is: S. = \\l Sh (3-17) 25 The additional Beftveen-sample variation can be controlled by means of a moving standard deviation chart based on sample means. This chart together with the control chart for sample means aids in detecting changes in the process mean. Control chart for sample means Sample mean X, is normally distributed with expectation equal to the process mean u. and standard deviation equal to the ratio of the process standard deviation a divided by the square root of the sample size. 
When between-sample variation has been shown to be significant, as in the case of the MC data given in Table 3-1, the extra between-sample variation should be added to the expected within-sample variation. Therefore the 3-sigma control limits for the sample means control chart become

UCL,\ LCL = \hat{\mu} \pm 3\sqrt{\frac{s_w^2}{n} + s_e^2}   (3-18)

The sample mean chart is shown in Figure 3-4. In this case, the process mean appears to be in control (as opposed to the control chart developed by Pratt, which showed points out of control), indicating the need for the transformation and for the inclusion of the additional between-sample variation.

Figure 3-4. Sample means chart for data in Table 3-1 (UCL = 0.459, CL = 0.4571, LCL = 0.4550).

Table 3-2. Sample means and moving additional between-sample standard deviations.

Sample mean, X̄    Moving additional between-sample
                   standard deviation, s_e
0.45606            -
0.45693            -
0.45632            4.383 × 10⁻⁴
0.45640            3.059 × 10⁻⁴
0.45633            6.922 × 10⁻⁵
0.45780            3.893 × 10⁻⁴
0.45824            9.594 × 10⁻⁴
0.45816            6.434 × 10⁻⁴
0.45729            5.210 × 10⁻⁴
0.45762            4.345 × 10⁻⁴
0.45756            1.518 × 10⁻⁴
0.45667            5.227 × 10⁻⁴
0.45697            4.418 × 10⁻⁴
0.45647            2.387 × 10⁻⁴
0.45799            7.710 × 10⁻⁴
0.45832            9.828 × 10⁻⁴
0.45739            4.629 × 10⁻⁴
0.45661            8.536 × 10⁻⁴
0.45673            4.097 × 10⁻⁴
0.45658            5.603 × 10⁻⁴

Control chart for moving standard deviations

We can control the additional between-sample variability by constructing a moving between-sample standard deviation chart. We chose subgroups of size k = 3 to calculate the moving additional between-sample standard deviations. Table 3-2 shows the results of these calculations. The first entry in column two is the additional between-sample standard deviation (3-17) based on samples 1 to 3, the next entry is based on samples 2 to 4, and so on. Since the sample means are Normally distributed, we can set up an s_e chart with probability limits.
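It is worth noting that with s_e² = s_b² − s_w²/n from (3-17), the chart variance s_w²/n + s_e² in (3-18) collapses back to s_b², so the limits amount to μ̂ ± 3·s_b. A minimal Python sketch, using the thesis values from (3-14) (the function name is ours):

```python
import math

def xbar_limits(mu_hat, s_w, s_b, n):
    """3-sigma limits (3-18) for the sample-means chart when between-sample
    variation is significant: sigma_xbar^2 = s_w^2/n + s_e^2 with
    s_e^2 = s_b^2 - s_w^2/n, which collapses to sigma_xbar = s_b."""
    se2 = max(s_b ** 2 - s_w ** 2 / n, 0.0)   # guard against a negative estimate
    sigma_xbar = math.sqrt(s_w ** 2 / n + se2)
    return mu_hat - 3 * sigma_xbar, mu_hat, mu_hat + 3 * sigma_xbar

# Thesis values: mu_hat = 0.4571, s_w = 8.665e-4, s_b = 6.966e-4, n = 5
```

With those inputs the function reproduces the chart limits shown in Figure 3-4 to the printed precision.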
Probability limits can be obtained using the chi-square distribution in conjunction with the relation

\frac{(k-1)\, s_e^2}{\sigma_e^2} \sim \chi^2_{k-1}   (3-19)

where k - 1 is the number of degrees of freedom. For a Type I error of α = 0.0027, it follows from (3-19) that

P\left[\chi^2_{k-1,\,\alpha/2} \le \frac{(k-1)\, s_e^2}{\sigma_e^2} \le \chi^2_{k-1,\,1-\alpha/2}\right] = 1-\alpha   (3-20)

and so

P\left[\frac{\sigma_e^2}{k-1}\,\chi^2_{k-1,\,\alpha/2} \le s_e^2 \le \frac{\sigma_e^2}{k-1}\,\chi^2_{k-1,\,1-\alpha/2}\right] = 1-\alpha   (3-21)

By taking square roots we obtain

\sigma_e\sqrt{\frac{\chi^2_{k-1,\,\alpha/2}}{k-1}} \le s_e \le \sigma_e\sqrt{\frac{\chi^2_{k-1,\,1-\alpha/2}}{k-1}}   (3-22)

Therefore, if the between-sample variability is in control at σ_e, then (1 - α) of the time the moving standard deviations s_e will fall between the endpoints of this interval. Using the estimate (3-17) of the additional between-sample standard deviation, the control limits become

LCL_{s_e} = 0.00002, \qquad UCL_{s_e} = 0.0015   (3-23)

and the centerline is s̄_e, the average of the s_e values. For α = 0.0027 (corresponding to 3-sigma limits), the α/2 percentage point of the chi-square distribution with k - 1 degrees of freedom is obtained with Mathcad's root function applied to the chi-square cumulative distribution function F(c):

\alpha := 0.0027 \quad k := 3 \quad v := k-1 \quad c := 0.002 \quad \chi^2_{k-1,\,\alpha/2} := \mathrm{root}(F(c) - \alpha/2,\, c) = 0.003   (3-24)

Also χ²_{k-1, 1-α/2} = 13.215. The moving additional between-sample standard deviations chart is given in Figure 3-5. In this case, this component of variability appears to be in control.

Figure 3-5. Moving additional between-sample standard deviations chart (UCL = 0.0015, LCL = 0.00002).

Control chart for sample standard deviations (s chart)

An s chart can be used for controlling the process variability. Following steps similar to those given for the moving additional between-sample standard deviations chart, it can be shown that the control limits for the s chart are

UCL_s = 0.0018

where α = 0.0027, n = 5, χ²_{n-1, α/2} = 0.106, and χ²_{n-1, 1-α/2} = 17.8. The s chart is given in Figure 3-6.
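For the k = 3 case used above, the chi-square percentiles in (3-22) need no root-finding at all: with 2 degrees of freedom the chi-square CDF is 1 − e^(−c/2), so the p-th percentile is simply c = −2·ln(1 − p). A Python sketch of the probability limits built on that closed form (the helper name is ours; σ_e is reconstructed from the thesis values in (3-14) via (3-17)):

```python
import math

def se_chart_limits(sigma_e, k=3, alpha=0.0027):
    """Probability limits (3-22) for the moving standard-deviation chart.
    For k = 3 the chi-square variable has 2 degrees of freedom, whose CDF
    is 1 - exp(-c/2); its percentiles have the closed form
    c = -2 * ln(1 - p), so no table lookup or root-finding is needed."""
    if k != 3:
        raise ValueError("closed-form chi-square percentiles coded for k = 3 only")
    df = k - 1
    lo = -2.0 * math.log(1.0 - alpha / 2.0)  # chi2_{2, alpha/2}   ~ 0.0027
    hi = -2.0 * math.log(alpha / 2.0)        # chi2_{2, 1-alpha/2} ~ 13.216
    return sigma_e * math.sqrt(lo / df), sigma_e * math.sqrt(hi / df)

# sigma_e from (3-17): s_e = sqrt(s_b^2 - s_w^2 / n),
# with s_b = 6.966e-4, s_w = 8.665e-4, n = 5
```

The two percentiles match the thesis's Mathcad values (0.003 and 13.215) to the printed precision, and the resulting limits agree with (3-23).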
The corresponding lower control limit is

LCL_s = 0.00014   (3-25)

Figure 3-6. Standard deviations control chart (UCL = 0.0018, CL = 0.0008, LCL = 0.00014).

The s chart should generally indicate control before the control chart for sample means is constructed. The reason is that unless the variability of the process is in a state of statistical control, we do not have a stable process with a single fixed mean. The control charts shown in Figures 3-4, 3-5, and 3-6 clearly indicate an in-control state for the kiln-drying operation. Both the process mean and the process variation remained constant during the period of study. These results are in sharp contrast with those previously obtained by Pratt (1953). In that study, as mentioned earlier, conventional Shewhart X̄ and R control charts were applied to the original data, and on both charts some points were found beyond the control limits. Although Pratt collected his data from an ongoing operation, and any assignable cause present could have been found and reported by the author, no such finding is given in the study. This leads us to believe that the out-of-control points were in fact false alarms and that no assignable cause had been present in the process. Had Pratt recognized the underlying distribution and made the proper transformation, he would have found the process to be in control. The conventional control charts as developed by Pratt were based on the assumption that the distribution of sample means was Normal. The probability plot and statistical test of fit indicated that this assumption was not realized for the samples of MC data used in the study, and as a result, the 3-sigma control limits could not have been the proper limits to use for the distribution of sample means.
The lack of Normality in the original data also could not have been the result of an assignable cause, since an assignable cause is related either to the process mean or to the process standard deviation, not to the shape of the distribution. Before any data are collected for the purpose of control charting, the quality control engineer should take proper action to fix any recognizable problem. Once the process is judged to be operating under its best possible conditions, the task of collecting data begins. For the kiln-drying operation of this chapter, it is natural to assume that these basic steps have been taken. Our control charts clearly confirm this assumption, and their control limits could be used to keep the process under control in the future.

Control chart in the original scale of measurements

So far we have constructed control charts for the transformed data. However, working with transformed data presents a significant problem in process quality control: the units of the transformed data are not comprehensible to the production workers who must use the control charts to monitor their work. We can construct a control chart in the original scale of measurements and specify its control limits in the units that are natural for the product. Considering the definition of the expected value of a variable, the mean in the original scale of measurements can be written as a function of the mean and variance in the transformed scale. Defining the transformed data as the variable y,

y = \frac{x^{\lambda}-1}{\lambda}   (3-26)

and solving for x, we find x = (λy + 1)^{1/λ}. The mean of the measurements in the original scale can then be defined as

\mu_{os}(\mu,\sigma) = \int_{0.38}^{0.461} (\lambda y + 1)^{1/\lambda} \cdot \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{y-\mu}{\sigma}\right)^2\right] dy = 8.908   (3-27)

The values 0.38 and 0.461 in the integral are the possible values of the transformed variable y. The expected value of MC in the original scale will then be used as the centerline of the control chart for sample means in the original scale.
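Equation (3-27) can be checked with elementary numerical integration. The sketch below uses a trapezoidal rule with the rounded estimates λ = -2.168, μ = 0.457, σ = 0.001 from (3-9) and (3-10); the helper names are ours, and because the inputs are rounded the result lands near, rather than exactly at, the thesis value 8.908:

```python
import math

LAM, MU, SIGMA = -2.168, 0.457, 0.001   # rounded ML estimates from the thesis
Y_LO, Y_HI = 0.38, 0.461                # range of the transformed variable y

def inv_boxcox(y, lam=LAM):
    """Invert (3-26): x = (lam * y + 1) ** (1 / lam)."""
    return (lam * y + 1.0) ** (1.0 / lam)

def mean_original_scale(mu=MU, sigma=SIGMA, steps=20000):
    """Trapezoidal approximation of (3-27): the expectation of the
    back-transformed variable under the fitted Normal density for y."""
    h = (Y_HI - Y_LO) / steps
    total = 0.0
    for k in range(steps + 1):
        y = Y_LO + k * h
        dens = math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        w = 0.5 if k in (0, steps) else 1.0   # trapezoid endpoint weights
        total += w * inv_boxcox(y) * dens
    return total * h
```

Note that the expectation exceeds the plain back-transform of the mean, inv_boxcox(0.457) ≈ 8.7, because x(y) is strongly convex near the upper end of the y range; this is exactly why the centerline is computed from the integral rather than by back-transforming μ directly.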
For each sample, we use (3-27) to find the mean in the original scale (x̄_os), evaluating the integral with the mean and standard deviation of that sample's transformed values. For the first sample (observations j := 1 .. 5) this calculation gives

\bar{x}_{os} = 7.991   (3-28)

For sample number 2 we only change the subscript j to 6 .. 10, and so on. The sample means in the original scale of measurements for all 20 samples are listed in Table 3-3.

Table 3-3. Sample means in the original scale of measurements.

Sample number    Mean (original scale)
      1                 7.991
      2                 8.750
      3                 8.195
      4                 8.338
      5                 8.202
      6                 8.909
      7                10.510
      8                10.372
      9                 9.138
     10                 9.560
     11                 9.476
     12                 8.499
     13                 8.794
     14                 8.323
     15                10.104
     16                10.641
     17                 9.261
     18                 8.443
     19                 8.553
     20                 8.414

The local standard error of the mean in the original scale (LSE_μos) can be found from the error propagation formula, evaluated at the ML estimates of the mean and standard deviation in the transformed scale and at the local standard errors of these parameters:

LSE_{\mu_{os}} = \sqrt{\left(\frac{\partial \mu_{os}}{\partial \mu}\right)^2 LSE_{\mu}^2 + \left(\frac{\partial \mu_{os}}{\partial \sigma}\right)^2 LSE_{\sigma}^2} = 0.115   (3-29)

where LSE_μ and LSE_σ are the local standard errors of μ̂ and σ̂. Since the exact sampling distribution of the estimate of the mean in the original scale is not known (as opposed to the exact sampling distribution of the estimate of the mean in the transformed scale, which is Normal), we use the likelihood ratio (LR) statistic to find an approximate confidence interval on the process mean in the original scale, and subsequently we obtain the related control limits for the sample means chart. Considering the log-likelihood function (3-7) and the ML estimates (3-10) of the parameters μ and σ, the endpoints of a (1-α)-level confidence interval on the process mean in the original scale (μ_os) are obtained from the condition

-2 \ln \frac{L(\mu, \sigma)}{L(\hat{\mu}, \hat{\sigma})} \le \chi^2_{1,1-\alpha}   (3-30)

Both parameters in the numerator of (3-30) are expressed in terms of the mean in the original scale.
This can be done by first defining the location parameter in terms of the mean in the original scale and the scale parameter b:

\mu(\mu_{os}, b) := \mathrm{root}\!\left(\int_{0.38}^{0.461} (\lambda y + 1)^{1/\lambda}\,\frac{1}{b\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{y-a}{b}\right)^2\right] dy - \mu_{os},\; a\right)   (3-31)