UBC Theses and Dissertations


Sensitivity analysis of the response characteristics of pattern search techniques applied to exponentially… Bitz, Brent William John 1972


Full Text

SENSITIVITY ANALYSIS OF THE RESPONSE CHARACTERISTICS OF PATTERN SEARCH TECHNIQUES APPLIED TO EXPONENTIALLY SMOOTHED FORECASTING MODELS

by

BRENT WILLIAM JOHN BITZ

B.Comm., University of British Columbia, 1970

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF BUSINESS ADMINISTRATION in the Faculty of Commerce and Business Administration

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
June, 1972

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Commerce and Business Administration
The University of British Columbia
Vancouver 8, Canada
Date: June 8, 1972

ABSTRACT

The purpose of this study was to undertake a sensitivity analysis of selected input parameters of the pattern search-exponential smoothing forecasting system. The inputs subjected to the analysis were: 1) maximum number of pattern moves, 2) minimum step size, 3) pattern search step size, 4) step size reduction factor, 5) exponential smoothing constants (A, B and C). As the values of these input parameters were changed during the course of the analysis, the resultant changes in certain criterion variables of the system were noted. These variables were: 1) forecast error standard deviation, 2) number of iterations (or pattern moves), 3) exponential smoothing constants (A, B and C).
The three separate time series that were used in this study were furnished by the Frazer Valley Milk Producer's Association. The data series are composed of unit sales of fluid milk segregated according to container size, butterfat content and channel of distribution. Each of the time series analysed represents a different type of trend factor: one each for rising, falling and stable trend factors. The three time series were subjected to identical analytical procedures. The results were then compared across the three time series in order to determine if the response patterns of the pattern search system were sensitive to changes in the series trend.

As measured by the response patterns of the criterion variables, the accuracy of the system is not influenced significantly by changes in the input parameters. Throughout the sensitivity analysis there developed a consistent pattern of minimal change in the forecast error standard deviation and the exponential smoothing constants. The search process was able to consistently reach very similar forecast error standard deviation values and exponential smoothing constant values, given the range of input values tested. The only dependent variable that experienced any marked change was the number of iterations. There do appear to be certain input values that minimize the number of iterations the pattern search system needs to arrive at solution values.

Neither the maximum number of pattern moves nor the minimum step size exerted much of an effect on the size of the forecast error standard deviation or the "optimum values" for the exponential smoothing constants. However, changes in the minimum step size do affect the number of iterations the pattern search system makes before reaching a minimum forecast error standard deviation.
If the minimum step size is decreased, the number of iterations is increased. The opposite is also true: if the minimum step size is increased, the number of iterations is decreased. Changes in the maximum number of pattern moves have no effect on the number of iterations.

The pattern search system also appears to be unresponsive to changes in the pattern search step size. Neither the forecast error standard deviation nor the exponential smoothing constant values can be improved through the use of different pattern search step sizes. The number of iterations is somewhat more responsive. Both large and small pattern search step sizes yield larger numbers of iterations than do middle values, i.e. .10-.20.

Like the other inputs, the step size reduction factor also does not elicit change in the results of the search process. Movements in the forecast error standard deviation and the exponential smoothing constants are small enough to be considered insignificant. Step size reduction factor values from .100 to .500 minimize the number of iterations, although within this interval there is little change. Larger values of the step size reduction factor tend to increase the number of iterations.

There is little responsiveness in the pattern search system to changes in the initial values for the exponential smoothing constants. Between the three time series used, there is little consistency with regard to the effects of changes in the initial constant values on the number of iterations. The rising series benefits most from small values, i.e. .250. The falling series benefited most with a middle value, i.e. .500. The stable series reacted opposite to the falling one and benefited most with values at the extremes, i.e. .250 and .750. One important finding is that most of the responsiveness of the pattern search system takes place before the first step size reduction.
The bulk of all improvement in the forecast error standard deviation and the majority of all change in the exponential smoothing constants occurs in this first set of pattern moves. This is an important result, as it explains the insensitivity of the search system to changes in the maximum number of pattern moves, the minimum step size and the step size reduction factor.

TABLE OF CONTENTS

FORWARD
CHAPTER I. General Discussion of Forecasting
    Introduction
    Forecasting Systems
    Descriptive Models
    Time Series Analysis
    Factor Listing
    Causal Techniques
    Leading Series
    Econometrics
    Summary
CHAPTER II. Exponential Smoothing
    Introduction
    Characteristics of Exponential Smoothing
    Exponential Smoothing Method
    Measures of Forecast Accuracy
    Mean Forecast Error
    Mean Absolute Error
    Error Variance
    Seasonal Adjustments
    Trend Adjustment
    Weighting Constants
CHAPTER III. Pattern Search Methodology
    Introduction
    Pattern Search Process
    Exploratory and Pattern Moves
    Step Size Reduction
    Direction Change
    General Discussion of Pattern Search
CHAPTER IV. Data: Source and Selection
    Data Source
    Dairy Industry
    Data Selection
CHAPTER V. Analytical Methodology
    Selection of Initial Values
    Error Measurement
    Standard Run Values
    Methodology
CHAPTER VI. Data Analysis and Conclusions
    Maximum Number of Pattern Moves and Minimum Step Size
    Pattern Search Step Size
    Step Size Reduction Factor
    Exponential Constants - A, B and C
    Conclusions
CHAPTER VII. Recommendations
BIBLIOGRAPHY
APPENDIX

LIST OF TABLES

TABLE I    Shipping Volume Distribution, 1966
TABLE II   Per Capita and Total Consumption of Dairy Products, 1965, 1967
and Projections for 1975 and 1980, Assuming Constant Real Prices
TABLE III  Cow Numbers, Yield Levels and Milk Sales for 1965 and Projections for 1975 and 1980
TABLE IV   Input Values on an Experimental Run Basis
TABLE V    List of Experimental Run Values for all Time Series
TABLE VI   Results from Changes in DELO
TABLE VII  Results from Changes in RHO
TABLE VIII Results from Changes in Exponential Constants (A, B and C)

LIST OF FIGURES

Figure 1   Different Values of the Smoothing Constant Give Different Weights to Past Data Items
Figure 2   Path of a Pattern Search Application
Figure 3   Exploratory and Pattern Moves
Figure 4   Pattern Search Flow Chart
Figure 5   Exploratory Move Flow Chart
Figure 6A  Pattern Search: Change of Direction
Figure 6B  Pattern Search: Change of Direction
Figure 7   Pattern Search Step Size and Associated Error Value
Figure 8   Pattern Search Step Size and Associated Number of Iterations
Figure 9   Step Size Reduction Factor and Associated Number of Iterations
Figure 10  Constant Value and Associated Number of Iterations

ACKNOWLEDGEMENT

The author of this paper wishes to thank Dr. Doyle Weiss for his invaluable assistance. Dr. Weiss' criticisms and comments were extremely useful in preparing this paper. I would also like to extend my thanks to the Frazer Valley Milk Producer's Association for their co-operation in supplying the data necessary to the study.

FORWARD

The focus of this study is essentially sensitivity analysis with regards to some of the inputs of the pattern search technique. This analysis is performed within the context of an exponential smoothing application. What I did was to change, in a systematic way, the initial values of the smoothing constants, and also the values of the maximum number of pattern moves (MAX), the pattern search step size (DELO), the step size reduction factor (RHO) and the minimum step size (D).
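The search routine behind these parameters is a Hooke-Jeeves-style pattern search. The sketch below is my own illustration, not the thesis's program: the parameter names MAX, DELO, RHO and D are taken from the Forward, but the default values, the coordinate-wise exploratory move and the stand-in objective are all assumptions.

```python
def pattern_search(f, x0, MAX=50, DELO=0.20, RHO=0.50, D=0.01):
    """Hooke-Jeeves-style pattern search (illustrative sketch only).
    f    -- objective to minimize (e.g. forecast error standard deviation)
    x0   -- initial point (e.g. initial smoothing constants)
    MAX  -- maximum number of pattern moves
    DELO -- initial pattern search step size
    RHO  -- step size reduction factor
    D    -- minimum step size (stopping criterion)
    """
    def explore(base, step):
        # Exploratory moves: perturb each coordinate by +/- step,
        # keeping any perturbation that lowers the objective.
        point = list(base)
        for i in range(len(point)):
            for delta in (step, -step):
                trial = list(point)
                trial[i] += delta
                if f(trial) < f(point):
                    point = trial
                    break
        return point

    best, step, moves = list(x0), DELO, 0
    while step > D and moves < MAX:
        candidate = explore(best, step)
        if f(candidate) < f(best):
            # Pattern move: extrapolate along the improving direction,
            # then explore again from the extrapolated point.
            pattern = [c + (c - b) for b, c in zip(best, candidate)]
            best = candidate
            improved = explore(pattern, step)
            if f(improved) < f(best):
                best = improved
            moves += 1
        else:
            step *= RHO  # no improvement: reduce the step size
    return best

# Example: minimize a simple quadratic from (0, 0); the optimum is (1, 2).
best = pattern_search(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2,
                      [0.0, 0.0], MAX=200)
```

The study's sensitivity analysis amounts to re-running such a routine while varying MAX, DELO, RHO, D and the starting point, and recording the final objective value and the number of moves taken.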
While doing this I noted the effect of these changes on the dependent variables: the standard deviation of the forecast error, the number of iterations or pattern moves that the search process takes in arriving at a particular solution, and the smoothing constants. The purpose is to see what effect the changes had and to determine if there are any values or ranges of initial values that are "better" than the rest. By "better" I mean any values that consistently minimize the standard deviation of the forecast error and the number of iterations. By determining this I hoped to develop some generalizable information about the response characteristics of the pattern search process as it is reflected in the values of the dependent variables.

Chapter I gives an overview of forecasting methodology in general and tries to place exponential smoothing in an overall context. It also discusses the relative advantages and disadvantages of the various approaches. Chapter II is an in-depth discussion of the exponential smoothing process. Chapter III discusses the pattern search technique and relates it to this application in exponential smoothing. Chapter IV deals with the characteristics of the data used in this study and gives an overview of the industry from which the data was taken. Chapter IV also goes into the selection of the data series that were actually used, from the larger number that were available to the study. Chapter V relates the specifics of the methodology used in the sensitivity analysis. Chapter VI discusses the results of the analysis and states the conclusions of the study. Chapter VII makes some recommendations as to areas of further research.

CHAPTER I

General Discussion of Forecasting

Introduction

The purpose of this chapter is to describe, in an overall sense, several approaches to forecasting sales.
By doing so, exponential smoothing, the technique that is the focus of this study, will be placed within a broader context. It is intended that this will help the reader understand the advantages and disadvantages of exponential smoothing and gain a better feeling for its applicability.

Any form of planning, be it business, personal or governmental, involves some expectations of the future. Such expectations may be optimistic, pessimistic, based on intuition, or simply a strong assumption that the future will be similar to the past. In some cases, the accuracy of future expectations is not of critical importance. If the forecast proves to be wrong, plans can be easily and quickly adjusted. However, on other occasions, when resources have to be irretrievably committed, accurate forecasting is essential to the future survival and success of the enterprise. It is under conditions such as these that a technique such as exponential smoothing becomes of value.

As mentioned earlier, forecasting is concerned with the future but is based imperfectly on information from the past and present. The future is uncertain and, until the crystal ball is perfected, must remain so. No matter what forecasting methodology or technique is developed in the years to come, the future will continue to be uncertain. As a result of this, forecasts are estimates of future states about which no one can be sure. How far the future is capable of being predicted depends on the extent to which it is related to the past and whether the relationships between the past and future can be discovered. Just because apples have fallen from trees for as long as man can remember (Newton not excepted), this is no proof that the phenomenon will continue tomorrow.
Moreover, I may be willing to risk my future on the continuance of this fact, but I am still taking a risk because it is uncertain. The view that economic events can be forecast depends upon the seemingly reasonable assumption that economic events have some continuity (1). If there were no relation between the past and the future, then there would be no rational basis on which forecasts could be made or on which planning could take place. The purpose of developing forecasting techniques is to exploit the relationships that are thought to exist between past and future states.

Although forecasts are sometimes made as though the variable to be forecast will, or will not, attain a particular value, complete accuracy is not to be expected. Ideally, a forecast should take the form of a distribution of the values that might be assumed by the variable to be forecast, along with an estimate of the probability that each will occur.

Throughout the study, I have taken a pragmatic point of view. I have assumed that the value of a forecasting technique lies in its applicability. Given this general line of reasoning, the criterion to be used in judging a forecast is whether it enables better decisions to be made. If this is true, then a business's decisions with regards to forecasting should include the following considerations (1). If the use of a forecasting technique increases the accuracy of assumptions made about the future, and as a consequence improves the quality of managerial decision making, then the use of forecasting in such circumstances may be said to be worthwhile. Of course, it will be worthwhile only if the increase in profits that results from improved foresight exceeds the costs of making the forecasts.
If, on the other hand, the forecasts result in profits no better than would have been realized without them, then there is no point in forecasting.

The reason that I make the above points is that exponential smoothing is an easily applied technique. As will be apparent when the various forecasting approaches are discussed, exponential smoothing does have some weaknesses. These weaknesses, however, do not detract from the fact that the method is quite accurate for a wide range of commercial situations and is quite inexpensive to use. I feel that these advantages outweigh the theoretical disadvantages that the system possesses.

I would now like to discuss some of the problems that a forecaster might face en route to producing the forecast itself. As the forecaster seeks to determine the interrelationships that are of importance to his problem, he will likely find that they change over time. He is also likely to find that for each fluctuation there are a number of possible explanations, all of which may be partially or wholly correct. The cause may change from time to time. More than one cause may be at work and their relative strengths may vary.

The forecaster is often faced with a great volume of data. Some of it may be late, some incomplete and some of questionable validity. In order to permit prompt conclusions, the data must be organized in some logical sequence. In addition, if one is to have faith in the conclusions or interpretations, the reliability and meaning of the data must also be determined. This organization and interpretation is, of course, only a necessary preliminary to the actual forecasting itself.

Forecasting Systems

I would now like to discuss a number of methods and techniques commonly used in forecasting. Although I have used the classification system delineated by Spencer et al.
(9), it should be remembered that the identification of specific methods does not mean that they must or even should be used alone. In fact, it is sometimes difficult to separate one method from another, and there are numerous conceivable classifications.

Descriptive Models

This approach can also be called naive methods. The basic idea is to project the current situation into the future. Such methods are usually distinguished from other forecasting methods in that they are mechanical and are not closely integrated with surrounding economic or business theory. If a forecaster decides to use a descriptive approach, the number of problems that he faces with regards to understanding is reduced. He does not have to know the cause-and-effect relationships. He simply looks for a method that will work without enquiring as to why it works. The forecaster may not know or have any judgement about why the method worked in the past, and consequently he must assume that it will work again in the future.

Time Series Analysis - It is to this classification that exponential smoothing belongs. At this point, however, I will describe the characteristics of the system as a whole, leaving to the next chapter a more in-depth discussion of exponential smoothing itself.

A time series is a sequence of values corresponding to particular periods of time. Data such as sales, production volume or prices, when ordered chronologically, are referred to as time series. The basic assumption for forecasting these series is that there will be a continuous development of the variable in question and that the size, rate and nature of historical change will continue into the future. As a quick look at most time series will show, they are subject to certain patterns of fluctuation.
Within economic time series these sources of variation can be classified into four types: trend, seasonal, cyclical and random forces. Trend represents the long-run growth or decline of the series. Seasonal variations, which are due to weather and custom, manifest themselves during the same approximate time periods each year, e.g. summer or Christmas. Seasonal variations could even be reduced to a weekly basis if the pattern of change in the time series moved in a seven-day cycle. Cyclical variations cover several years at a time and are a reflection of the position of the economy within the business cycle, e.g. recession. Finally, random forces such as strikes, wars and competitive influences are erratic in their influence on particular series.

Of these four forces affecting economic time series, the seasonal variation is fairly easy to measure and predict. All that is needed is a good set of data that covers the season in question for a number of years. The random factor, as the name implies, is unpredictable but can be mitigated by such smoothing-out processes as the moving average.

The method of trend projection is often used as a forecasting procedure in itself. This method assumes that the recent rate of change of the variable will continue into the future. On this basis, expectations are established by projecting past trends into the future. This is perhaps the most common method used by business firms. This method is used because economic series exhibit a persistent and characteristic rate of growth which can be approximated by a mathematical trend (9). The trend may consist of a simple unweighted projection or it may be weighted by attaching greatest importance to the most recent period and successively less importance to periods in the more distant past.
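The moving average mentioned above as a way of damping the random factor is simple enough to sketch directly. The illustration below is mine, not the thesis's; the window length and the sample figures are arbitrary.

```python
def moving_average(series, window=3):
    """Smooth a series by averaging each run of `window` consecutive values.
    Random fluctuations tend to cancel within each window, leaving the
    underlying trend and seasonal movement easier to see."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# A noisy series of seven observations yields five 3-period averages.
noisy = [10, 14, 9, 13, 11, 15, 10]
print(moving_average(noisy))  # -> [11.0, 12.0, 11.0, 13.0, 12.0]
```

Note that the smoothed series is shorter than the original and lags it; exponential smoothing, discussed in Chapter II, avoids storing a whole window of past values.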
Because economic time series show a persistent tendency to move in the same direction for a period of time, a forecaster using the method of trend projection will be right as to the direction of change more times than he will be wrong. In fact, he will be right in every forecast except those at turning points.

This points out some major weaknesses of trend projection, and indeed of all time series methods. Because forecasts are based entirely on data within the time series under consideration, factors outside the data series will not be looked at. In many cases factors outside the data series can be used to help predict the rate and direction of movement of the series. For example, business analysts have been using leading indicators (see Leading Series) for years. In these cases changes in one factor consistently precede changes in another and hence can be used as a predictive tool. Forecasting with time series methodology alone would preclude the use of such other tools.

Another weakness of time series methods exists because they assume the future to be a systematic extension of the past. In many cases, new changes in the environment that the time series operates in will cause changes in the rate and direction of movement of the series. If only historical data is used, these new changes will not be picked up and any forecasts made after that point will be in error.

On the other hand, time series methods do have some distinct advantages. To begin with, because the method uses only historical data, little new data collection has to be done for each forecast. If the time series is one of sales, then a firm would only have to keep a record of its sales in order to satisfy the requirements of the method. As will be discussed later, many other techniques are not nearly so simple in their data requirements. Another advantage is that time series forecasts usually involve the use of repetitive mathematical computations.
Because of this, the system lends itself to use in conjunction with electronic computers. If the number of forecasts being made is large, this is a great asset. In addition, because of this repetitiveness, costs per forecast can usually be kept low relative to other methods.

Factor Listing - This is a descriptive method of forecasting whereby the analyst simply enumerates the factors that he believes are influencing the dependent variable. From this list the forecaster draws a conclusion as to the likelihood of various events occurring. In its most basic form the list makes no provision for the quantitative evaluation of each of the factors and their role in influencing the variable under consideration.

Causal Techniques

If the forecaster decides to employ a causal approach, he is faced with a series of decisions. These decisions occur in the selection of the causal explanation of, for example, business movements. The decisions also occur in the selection, organization and interpretation of data. Whereas descriptive methods of forecasting, particularly time series analysis, imply that the future is an extension of the past, the use of causal techniques is based on the idea that knowledge of the interrelationships between independent and dependent variables enables accurate forecasts to be made. In many cases these interrelationships are based on happenings in the present.

To be more specific, barometric methods, of which leading series analysis is one, involve the use of statistical indicators, usually selected time series. When these indicators are used in conjunction with one another or when combined in certain ways, they provide an indication of the direction in which the dependent variable is moving. Thus the time series serve as barometers of change. The second technique to be discussed is econometrics.
This method works on the assumption that changes in a dependent variable, usually an economic one, can be explained by a set of mathematical relationships.

Leading Series - Within the field of forecasting, this method has received by far the most attention. The basic idea is rather simple. If a series or index could be discovered that showed leads of, say, six months with substantial regularity, it would successfully indicate the turns in the dependent variable. This being the case, the problems of forecasting would be largely solved.

In the early 1950's, Geoffrey Moore and his associates at the National Bureau of Economic Research did some extensive examination of time series (7). They found that a number of series exhibited quite a degree of consistency in either leading the "reference cycle," running coincident with it, or lagging behind it at the turning points.

At first sight, leading indicators would seem to provide a useful guide for prediction. Unfortunately, the method suffers from a number of limitations. First, the indicators are not always consistent in their tendency to lead. Frequently, some of the indices will signal what turns out to be a true change in the dependent variable, while the remaining indices either fail to signal at all or else signal too late to be of much value for predictive purposes. Secondly, it is not always possible to tell if an index is signalling an actual change in the dependent variable or whether it is merely exhibiting a random fluctuation of no real significance. Finally, even if the indices could consistently signal the true turning points of, say, the economy, they would still only indicate the direction of future change, while disclosing little or nothing about the magnitude of the change.
Since both factors are desired, it can easily be seen that the use of leading series is somewhat limited. Perhaps the chief reason is that the leading series are not causally related, in a functional sense, to the basic factors responsible for them. Nor are they weighted according to their intrinsic importance relative to the dependent variable. Instead, the indicators are selected because of their historical uniformity of performance and are usually given equal weights of performance.

Econometrics - This approach is based on the idea that changes in activity of a dependent variable can be explained by a set of interrelationships between variables. For example, it attempts to explain past economic activity and predict future economic activity by deriving mathematical equations that will express the most probable interrelationship between a set of economic variables. By combining the relevant variables into what seems to be the best mathematical arrangement, econometricians attempt to predict the future course of one or more of these variables on the basis of the established relationships. The "best mathematical arrangement" is thus a model which takes the form of an equation or system of equations that seems best to describe the past set of relationships. In other words, the model is a simplified abstraction of a real situation, expressed in equation form and employed as a prediction system that will yield numerical results.

In actuality, econometrics is quite heavily based on existing economic theory. Essentially it is an attempt to apply that theory in real situations. With regards to applications, two factors must be considered. The first has to do with accuracy of "fit," or how well the model describes reality.
As a rather simple generalization, one can say that the greater the degree of complexity and elegance the model possesses, the better will be the "fit". However, offsetting this is the problem of weighing the increment in research and construction costs against the increment in value of greater accuracy, in order to decide on just how complete a model to construct. The second point has to do with the fact that since the model is a replica of a dynamic situation, it must be revised periodically to allow for the changing weights of the constants or parameters in the equations.

Summary

As a summary and conclusion to the chapter, it should be noted that the major strength of time series analysis, i.e. exponential smoothing, relative to other forecasting methods is its applicability. As mentioned previously, it is fairly simple to use, quite cheap and quite accurate for a broad range of uses. It does not possess the theoretical sophistication and completeness of, say, econometrics, but it also does not possess the disadvantages with regards to data collection and the required understanding of the complex interrelationships of the system being looked at. It is a user's approach to forecasting.

CHAPTER II

Exponential Smoothing

Introduction

As was noted earlier, the purpose of this study is to investigate the response characteristics of the pattern search method as it applies to exponential smoothing. Consequently, the intent of this chapter is to provide the reader with a description of the exponential smoothing method of forecasting. The first section of this chapter will deal with the general characteristics and conceptual foundations of exponential smoothing, while the second section will give a detailed description of the technique itself.
It should be noted here that the content of this chapter relies heavily on the material presented in the article by Peter R. Winters, "Forecasting Sales by Exponentially Weighted Moving Averages" (11). This chapter integrates Dr. Winters' methodology and a number of ideas from other sources, which have also been referenced where appropriate.

Characteristics of Exponential Smoothing

As a generalization, the forecasting method discussed here deals with products whose "lives" may be assumed to continue indefinitely. Therefore, the forecasting procedure is concerned with generating routine forecasts that take into account random fluctuations, trends and the recurrent seasonal movements of sales. The needs of this type of routine forecast imply certain characteristics that should be present in the techniques used. Such a forecast must be made quickly, inexpensively and easily. To accomplish this, the technique used must be clearly spelled out, so that it can be followed routinely, either manually or by an electronic computer. The number of pieces of information required to make a single forecast must be kept at a minimum, or else the total amount of information required for all products will be expensive to store and expensive to maintain. Finally, the technique must be able to introduce the latest sales information easily and cheaply into new forecasts.

Exponential smoothing is a technique which meets all these requirements. The computational format consists of a few simple equations. These equations are easily programmed into an electronic computer or manipulated manually. Even in its most complete form, when seasonal and trend adjustments are included, the amount of information required for an individual forecast is quite small. By its very nature, the technique easily incorporates new information into the system.
Quite a few forecasting techniques, or systems, are available or could be developed for predicting item sales. The one discussed here does not "predict" with a behavioural model of sales, but uses an analysis of the sales time series taken out of context. That is, the only input to the forecasting system is the past history of item sales. The model does not consider such information as the market, the industry or the economy.

Various forms of the exponential model have been used for a wide variety of forecasting applications. Most applications have required little modification of the basic model. For example, monthly forecasts of cooking utensil sales for the Aluminum Company of America, and bi-monthly projections of paint sales for Pittsburgh Plate Glass, are two characteristic applications.

In its simplest form, the exponential system makes a forecast of expected sales in the next period from a weighted average of actual sales in the current period and forecasted sales for the current period (made during the previous period). In the same way, the forecast for the current period was constructed from a weighted average of actual sales for the previous period and the forecast of sales for that period made in the period before it. This recursive process continues back to the beginning of the sales data for the item. Thus a prediction made in any period is based on current sales information and all the previous sales data for the item. However, the forecast is constructed in such a way that only one number (the most recent forecast for the current period) must be retained to produce the next forecast. This scheme obviously has desirable characteristics for a forecasting method. Current sales information is easily introduced. The calculation of an individual forecast is fast. Only a limited amount of information must be kept and maintained.
For products with stable sales rates and little seasonal influence, the simple exponential smoothing model proves quite useful. Many products, however, have a marked trend in their sales, particularly when they are first introduced or when competing products are introduced. In addition, for many products there is also a substantial seasonal pattern to sales. Because of this, it is usually worthwhile to extend the exponential system to account for long-run trends and seasonal effects. These two factors are handled in much the same way as the simple exponential system. More information is required for this more complete model but, for most products, the accuracy of the forecast is substantially increased. The next section will give a more in-depth description of the system.

Exponential Smoothing Method

The advantages of exponentially weighted averages, discussed in the previous section, are obvious from the method's simple formulation:

Expected Sales for the Coming Period = A (Realized Sales during the Last Period) + (1 − A) (Expected Sales for the Last Period).

From this it can be seen that just three pieces of information are called for:
1) expected sales for the period just completed,
2) realized sales for the period just completed,
3) a weighting factor A, whose value lies between zero and one.

The role played by the weighting factor and the sales and forecast data can be demonstrated by rearranging the terms and expressing them in the notation introduced by Winters (11). Expected sales becomes:

S̄_t = S̄_(t−1) + A (S_t − S̄_(t−1)),

where
S_t = sales during period t,
S̄_t = sales forecast made in period t for the coming period,
S̄_(t−1) = sales forecast made in period t−1 for period t,
A = a number between zero and one inclusive.
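Expressed as code, the update above amounts to a one-line correction of the previous forecast. The following is a minimal sketch (Python; the sales figures are hypothetical and purely illustrative):

```python
def smooth(prev_forecast, actual, a):
    """One exponential smoothing update: S̄_t = S̄_(t-1) + A(S_t - S̄_(t-1))."""
    return prev_forecast + a * (actual - prev_forecast)

forecast = 100.0                 # initial expected sales
for actual in [110, 105, 120]:   # hypothetical realized sales
    forecast = smooth(forecast, actual, a=0.5)
# forecast is now 112.5: a weighted average of all past sales,
# with weights declining geometrically with age
```

Note that only the single number `forecast` is carried from one period to the next, which is exactly the storage economy claimed for the method.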
This formulation makes it clear that a forecast (S̄_t) is equal to the last sales forecast (S̄_(t−1)) adjusted by some fraction (A) of the difference between the sales actually achieved during the last period (S_t) and the sales forecast for that period (S̄_(t−1)). Therefore, if the last forecast was high, the new sales forecast will be equal to the last forecast less a fraction of the forecast error. If the last forecast was low, the new forecast will be equal to the last forecast plus some fraction of the forecast error.

Using the same notation, the new forecast may be interpreted as a simple weighted average of the sales realized and the sales that were forecast. That is:

S̄_t = A S_t + (1 − A) S̄_(t−1).

The smoothing constant A determines how much of an effect the past data will have on the estimate. R. G. Brown (3) presented a formula for finding the number of past time units of data that are being used in a smoothing system. This number is directly influenced by the value of A:

N = (2 − A) / A,

where N is the number of time units of data represented. For example, a low value of A, e.g. 0.2, gives little weight to the present data and considerable weight to past data; if this value is used, N = 9 time units of data are represented. If A = 0.5, where larger emphasis is placed on current data and consequently less emphasis on past data, N = 3. From this example it can easily be seen how the value of A effectively controls the distribution of emphasis placed on current and past data. Figure 1 graphically demonstrates the exponentially distributed weight given to each data piece in a time series for two different values of A.

Choosing an appropriate value for the smoothing constant is a non-trivial problem. The value of the constant should be low enough to give the system stability.
It does this by decreasing the emphasis on the current data and consequently reducing the influence of current random fluctuations. On the other hand, the value must be large enough to give the system sensitivity to systematic changes in the time series. It does this by placing considerable emphasis on current data. One approach to choosing a proper value for the smoothing constant uses a "grid of values." Each value on the grid is tried in the smoothing system and that value which results in the minimum forecast error* is used in future calculations.

* For a discussion of forecast error see: Measures of Forecast Accuracy.

[Figure 1: Different Values of the Smoothing Constant Give Different Weights to Past Data Items (source: D'Amico, 1971). With a smoothing constant of 0.5, 98.4% of the forecasted data is made up of data 5 periods old or less; with a smoothing constant of 0.2, 91.6% of the forecasted data is made up of data 10 periods old or less.]

Obviously this is an inefficient and cumbersome method. The pattern search method, which will be discussed in the next chapter, is an attempt to take a different and more efficient route to choosing the value of the smoothing constant.

Measures of Forecast Accuracy

Roberts and Whybark (8) present a number of ways of measuring forecast accuracy. Each measure is a function of the error in each forecast period. The forecast error in any period t, given by e_t, for which the forecast was F_t, can be written:

e_t = F_t − X_t,

where X_t is the actual series value for period t. With this measure of error, forecast accuracy may then be assessed by three measures:
A) mean forecast error,
B) mean absolute error (MAD),
C) error variance.

Mean Forecast Error

The mean forecast error assesses the bias of the forecasting technique.
If the system is leading or lagging in its forecasts, the resulting bias will be indicated by the mean forecast error, given by:

ē = Σ (t=1 to N) e_t / N,

where N denotes the number of forecast periods under investigation.

Mean Absolute Error

Another common and frequently used measure of forecast accuracy is the mean absolute error or mean absolute deviation (MAD). The MAD is defined as:

MAD = Σ (t=1 to N) |e_t| / N.

MAD is an indication of the variability in the forecast error.

Error Variance

An alternative to MAD is the forecast error variance. This measure, denoted by S_e², is defined as:

S_e² = Σ (t=1 to N) (e_t − ē)² / (N − 1).

This measure is useful because many decisions are influenced by knowledge of forecast reliability. If the variance is high, the interval within which the forecast is likely to fall is quite wide. Put in other words, the range of error size is quite wide. The decision, then, must reflect this knowledge of the variance in order to absorb the uncertainty of the forecast.

Ideally one would like a forecasting system which, in terms of forecast accuracy, yields all measures close to zero. Unfortunately, it is very difficult to determine a technique which simultaneously satisfies all criteria because these measures frequently conflict. As Roberts and Whybark (8) point out, one may be able to reduce to a minimum the variability of the forecast error by minimizing MAD. In doing so, however, he may increase the mean error in the forecast. The question of which type of criterion to minimize must be evaluated in light of the expected benefits from improved forecasts for each individual application.

Seasonal Adjustments

Seasonal adjustments of sales forecasts are useful and practical in those cases where recurring cycles in sales behaviour can be identified.
To seasonally adjust an exponentially smoothed sales forecast requires an increase in both the computations performed and the amount of data carried forward from one period to another. By considering the formulation of the procedure and the steps performed in obtaining a forecast, the workings of the added factors will become apparent. In doing so, it will be convenient to use an annual sales cycle and monthly sales data.

To calculate a sales forecast in one period for some future period, it is necessary to form the product of an estimate of the smoothed deseasonalized sales rate and an estimate of the seasonal factor for the forecast period. Again using Winters' notation, this estimate is:

S̄_(t,1) = S̄_t F_(t−L+1),

where
S̄_(t,1) = smoothed forecast of expected sales made in period t for period t + 1,
S̄_t = estimate of the smoothed expected deseasonalized rate of sales for period t,
F_(t−L+1) = expected ratio of smoothed seasonal sales to smoothed deseasonalized sales in the period for which the forecast is being made.

In making a forecast for the coming month, the ratio calculated one year ago for that month is used. The subscript bears this out, since L represents the number of periods in the cycle or season. If smoothed seasonal sales (S_t) are expected to be greater than expected smoothed deseasonalized sales (S̄_t), the seasonal ratio will be greater than one, and if the reverse is true, the ratio will be less than one.

The formula that yields an estimate of the smoothed expected deseasonalized sales is:

S̄_t = A (S_t / F_(t−L)) + (1 − A) S̄_(t−1).

Its similarity to the formula for simple exponential smoothing is easily observed. The current sales are deseasonalized by dividing by the appropriate seasonal factor. Otherwise the structure of the relationships is identical.
To compute the expected deseasonalized rate of sales (S̄_t) each month it is necessary to have available the following:
A) unit sales for the month (S_t),
B) the value of the seasonal factor applicable to the same month last cycle (F_(t−L)),
C) the weighting constant A,
D) the expected deseasonalized rate of sales computed last period (S̄_(t−1)).

To complete this part of the procedure, only one extra piece of information (the applicable seasonal sales ratio) and only one additional calculation (a division) are needed, relative to what would have been needed in the case where a correction for seasonal sales behaviour was not required. There is, however, the added problem of computing and storing values for the seasonal ratios:

F_t = B (S_t / S̄_t) + (1 − B) F_(t−L).

Again it may be observed that the formula is in a familiar form. The new weighting constant B (a number between zero and one inclusive) assigns the relative influence between the most recent seasonal ratio (S_t / S̄_t) and the relevant ratio calculated the year before. It should be noted that the seasonal ratio for each period is a new addition to the required data and computing requirements. Since an annual monthly cycle has been used to illustrate this method, twelve such figures would be stored and each employed once during the course of the year.

Trend Adjustment

So far, the problems of handling normal random and seasonal behaviour have been considered. The final adjustment in the exponential smoothing method of sales forecasting is to adjust the data to reflect trends. In the light of the preceding discussion it will not be surprising to find that the amount of computation increases but that the general format of the technique is not radically altered.
The exponentially smoothed trend and seasonally adjusted sales forecast for T periods in the future can be written:

S̄_(t,T) = (S̄_t + T R_t) F_(t−L+T).

Two new terms are introduced here: a revised estimate of the trend (R_t) and the number of periods in the future for which the forecast is being made (T). The above formulation is a straightforward extension of the simpler procedure. By adding the product of the current period's estimated trend (R_t) and the number of periods in the future for which the forecast is being made (T) to the smoothed and deseasonalized current rate of sales (S̄_t), an estimate of the expected level of smoothed and trend adjusted sales is obtained for the forecast period. This result is then multiplied by the appropriate seasonal factor (F_(t−L+T)) in order to obtain the desired sales forecast.

The expression used to obtain the seasonal ratio is unchanged in the process of adding a trend adjustment. The computation of the deseasonalized expected rate of sales, however, is slightly modified. It becomes:

S̄_t = A (S_t / F_(t−L)) + (1 − A) (S̄_(t−1) + R_(t−1)).

The effect of this is to update the previous deseasonalized sales forecast by adding to it the previous period's trend factor. Although this is a simple adjustment, it necessitates storing and carrying forward additional data.

To compute the value of the trend factor, a weighting constant C (again, a number between zero and one inclusive) must also be added to the list of required data. This is necessary since the trend factor must also be smoothed. The formula for smoothing the trend factor is:

R_t = C (S̄_t − S̄_(t−1)) + (1 − C) R_(t−1).

This expression smooths the variation in the trend from one period to the next. Again, the usual form of the smoothing relationship is preserved, and the closer the weighting constant C is to one, the greater the relative importance attached to the most recent deviations.
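Putting the level, trend, and seasonal updates together, one period's pass through the complete model can be sketched as follows. This is an illustrative sketch, not the author's program, and the sales figure, starting values and constants are hypothetical:

```python
def winters_update(sales, s_prev, r_prev, f_old, a, b, c):
    """One period's update of the seasonal and trend adjusted model.

    sales  : actual sales S_t for the period
    s_prev : deseasonalized rate S̄_(t-1) from last period
    r_prev : trend estimate R_(t-1) from last period
    f_old  : seasonal ratio F_(t-L) from the same period last cycle
    a,b,c  : the three smoothing constants
    """
    s_new = a * (sales / f_old) + (1 - a) * (s_prev + r_prev)   # level
    r_new = c * (s_new - s_prev) + (1 - c) * r_prev             # trend
    f_new = b * (sales / s_new) + (1 - b) * f_old               # seasonal ratio
    return s_new, r_new, f_new

def forecast(s_new, r_new, f_future, periods_ahead):
    """S̄_(t,T) = (S̄_t + T R_t) F_(t-L+T)."""
    return (s_new + periods_ahead * r_new) * f_future

# hypothetical month: sales of 120 units, previous level 100,
# previous trend 2, last year's seasonal ratio for this month 1.1
s, r, f = winters_update(120, 100.0, 2.0, 1.1, a=0.2, b=0.1, c=0.3)
next_month = forecast(s, r, f_future=1.05, periods_ahead=1)
```

Only the level, the trend, and the L seasonal ratios need to be carried forward, which is the modest data requirement described above.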
Weighting Constants

The foregoing discussion has highlighted many aspects of the flexibility inherent in the exponential smoothing method. Two other features that enhance flexibility are significant. The first of these is the assignment of initial values within the smoothing system and the second is the availability of separate weighting constants.

Initial values must be given to the forecast of expected sales, to the seasonal ratio for each period during the season, and to the trend factor. Ideally, these values would be based on historical sales data for the item itself or for an item whose sales behaviour is expected to be similar to the one under consideration. Lacking information of this character, informed judgements must be made on the facts at hand, no matter how fragmentary those facts may be. Regardless of the absolute accuracy of the initial conditions employed, this forecasting method permits the forecaster to incorporate into the forecasting mechanism all of the information at his command. Chapter V discusses this area of initial values in greater detail than is presented here; the purpose of these few lines is to give the reader a feeling for the system under consideration.

The second feature is the availability of three separate weighting constants. Each of these constants may be adjusted to allow for an item's expected sales behaviour. As was mentioned earlier, the constant A determines the relative influence that the most recent sales results will have. Relatively high values of this weighting constant will make a system responsive to current changes and will frequently be appropriate when a system is first introduced and little historical data is available.
When this condition no longer prevails, the value of A may be reduced to a "normal" level for the item. This can take place after sufficient time has passed to permit the system to establish a reasonably stable pattern of response. The second weighting constant performs the same smoothing function for the seasonal ratio that A performs for the unadjusted forecast. If B is high, the seasonal ratios are sensitive to the seasonal behaviour displayed in the previous cycle. If B is low, little weight is given to the seasonal behaviour that occurred last cycle and, as a consequence, the influence of seasonal behaviour extending further back in time becomes the principal determinant of the seasonal factor employed for forecasting purposes. In the case of the weighting constant C used to smooth the trend factors, the same is true: high values place more weight on recent trend size than do low values. Again, as in the discussion of the other initial values, I have raised the matter in order to place it in its context; Chapter VI discusses initial constant values in much greater depth.

CHAPTER III

Pattern Search Methodology

The purpose of this chapter is to present a detailed discussion of pattern search procedures as they apply to fitting smoothing constants to the exponential forecasting system. The chapter starts off with a short general introduction to the pattern search process. Following this there is an extensive description of the workings of the process, including a discussion of the two basic search moves: exploratory and pattern. Next is a discussion of the stopping rules as they relate to the accuracy of the procedure, while the last part of the description looks at the system's ability to change the direction of the search. Finally, the last section of the chapter discusses some of the strengths and weaknesses of such a system.
Introduction

While the pattern search technique in itself is a method with enough generality to be applied in a number of problem situations, I will deal with it only as it relates to the selection of parameters for the exponential smoothing model. The parameters under consideration are the smoothing constants A, B, and C (see Chapter II).

Berry and Bliemel (2) discuss three approaches to the problem of selecting the exponential weights (A, B, and C). First, there are some general guidelines for selecting smoothing constant values. These guidelines require an assessment of the extent to which the sales series average is subject to random variations and major shifts in its level. Second, there are dynamic techniques, mentioned but not discussed, for continuously adjusting smoothing constant values. These adaptive techniques are concerned with detecting important changes in the sales average and specifying the proper smoothing constant value to be applied. A third method of selecting the smoothing constant values conducts a simulation analysis, using past sales data, to evaluate the forecast error associated with alternative smoothing constant values. The quality of the smoothing constant values derived from this analysis depends on the sales history being a reliable description of future sales. It is this third method with which Berry and Bliemel were concerned. It is also the one which this chapter will discuss in detail. While the initial work in this area was done by Hooke and Jeeves (6), I have used the Berry and Bliemel paper (2) as my major reference source because of its topical relevance.

Pattern Search Process

The pattern search process utilizes a simple search strategy to maximize or minimize the value of a function. The strategy enables the procedure to move progressively towards "better" parameter values by building on what it has already learned from previous functional evaluations.
Pattern search is applied to problems where a minimum (or maximum) value for an expression is sought, i.e. an expression of the form:

y = f(X_1, X_2, X_3, ..., X_n).

As Van Wormer and Weiss (10) point out, the search technique views the function to be minimized or maximized as a black box. The box allows certain inputs to be set (trial values of the function's arguments) and responds with a single output value (the value of the function at that point). The search strategy itself consists of conducting a series of trial evaluations, in which the expression is evaluated successively with particular sets of X values. The strategy then uses these trial results to decide what to do next. The basic notion underlying this strategy involves moving from one set of trials to another; when desirable results are obtained, the moves towards "better" values are made in increasingly larger step sizes. Thus, the search for an improved solution is guided by the successes (or failures) obtained in previous function evaluations. In effect, the procedure is capable of crude learning.

In order to describe and illustrate the pattern search technique, I have lifted, out of context, the example used by Berry and Bliemel. They used the method in an application to the problem of selecting smoothing constant values (A and C) for the trend adjusted exponential forecasting model. Insofar as that model does not deal with the weighting constant B, which performs the smoothing function for the seasonal ratios, the description is incomplete. Fortunately, this does not detract from the generality of the search procedure itself. It should be noted that in my analysis of the technique I used the complete exponential smoothing model and considered all of the A, B and C parameters.
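In this application the black-box function is a simulation: given candidate constants, replay the sales history, forecast one period ahead each time, and return the forecast error standard deviation. The following minimal sketch (Python; the sales history is hypothetical, and simple smoothing stands in for the full model purely to keep the example short) shows such an evaluation, scored here over a coarse grid of the kind pattern search is meant to improve upon:

```python
import statistics

def error_std(a, sales, initial_forecast):
    """Score one candidate smoothing constant by simulating over history
    and returning the standard deviation of the one-step forecast errors."""
    forecast, errors = initial_forecast, []
    for actual in sales:
        errors.append(forecast - actual)              # e_t = F_t - X_t
        forecast = forecast + a * (actual - forecast) # smoothing update
    return statistics.stdev(errors)

history = [102, 98, 107, 111, 104, 116, 120, 113]     # hypothetical sales
# a coarse "grid of values" comparison over A = 0.1, 0.2, ..., 0.9
scores = {a / 10: error_std(a / 10, history, 100.0) for a in range(1, 10)}
best_a = min(scores, key=scores.get)
```

Pattern search replaces this exhaustive grid with a sequence of guided evaluations of the same black box.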
In selecting smoothing constant values, the expression to be evaluated involves a simulation using a sales history sequence and a specific set of smoothing constant values. These are used to obtain the estimate of the forecast error standard deviation (or an analogous criterion) associated with those values. For the trend adjusted model the expression to be evaluated is represented as:

S = f(A, C).

The pattern search technique begins by evaluating an initial set of smoothing constant values (e.g. A = .5 and C = .5). From this initial position subsequent moves are made towards parameter values which produce a smaller estimate of the forecast error standard deviation. To illustrate this procedure, Figure 2 shows the path taken by the search procedure in successive evaluations (trial solutions) for various smoothing constant values. It begins at the point (A = .5, C = .5) and moves rapidly "downhill" to smoothing constant values with successively smaller forecast error standard deviations, stopping temporarily at the point (A = .00, C = .00). At this point the search procedure backtracks slightly (to A = .07, C = .07) and changes direction, moving towards the terminal point (A = .02, C = .78). The search procedure terminates at this point because another set of smoothing constants having a smaller forecast error standard deviation cannot be located.

[Figure 2: Path of a Pattern Search Application]

There are several important features of the path followed by the pattern search technique in Figure 2 to be considered. First, as the path moves from the starting point (A = .5, C = .5) and more information is obtained about the response surface, longer steps are taken between successive evaluations. Second, when the trial evaluations reveal that the search is producing inferior solutions, e.g.
at (A = .00, C = .00), the next step in the procedure is a series of local explorations that result in establishing a new pattern in a new direction. Finally, the new pattern progresses in an arc-like manner, moving simultaneously in two directions towards the final point (A = .02, C = .78). These points, as well as the search logic, are illustrated in Figures 3, 6A and 6B, where smaller portions of the search path are shown in more detail.

Exploratory and Pattern Moves

Pattern search employs two types of moves in looking for improved parameter values: exploratory and pattern moves. The search procedure begins by making a series of exploratory moves to examine the nature of the response surface around an arbitrarily selected starting point, e.g. (A = .5, C = .5).* The moves, which are illustrated in Figure 3, include:

[Figure 3: Exploratory and Pattern Moves. Axes: Weighted Average Smoothing Constant (A) vs. Trend Smoothing Constant (C); × exploratory moves, o pattern moves, ( ) forecast error standard deviation.]

                         A Value     C Value     Forecast Error
                                                 Standard Deviation
Initial Evaluation       .5          .5          758.5
Exploratory Move No. 1   .5 + .01    .5          763.1
Exploratory Move No. 2   .5 − .01    .5          753.9
Exploratory Move No. 3   .49         .5 + .01    756.4
Exploratory Move No. 4   .49         .5 − .01    751.4

* The selection of this starting point will be discussed in detail later on, as it is one of the factors under consideration in this research.

The exploratory moves provide an evaluation of the alternative weighted average smoothing constant values (A) a short distance (.01) from the starting point. Although an increase in A to .51 results in a larger forecast error standard deviation, the smaller value of .49 leads to a reduction in this measure. Noting this, the procedure continues by evaluating the effect of small changes in C, with the new value of A = .49.
The changes in C result in reducing the forecast error standard deviation to 751.4 and the selection of a direction for future moves, i.e. from (A = .5, C = .5) to (A = .49, C = .49). In effect, the exploratory moves establish an estimate of the slope of the contour map around a given point. If one wanted to add the seasonal ratio constant B to the smoothing formula, the procedure would be extended by holding the A and C values constant while sweeping B through the same series of exploratory moves.

The next step in the search procedure involves making what is called a pattern move. The initial series of pattern moves is shown in Figure 3. Each of these moves covers a greater distance than its predecessor as long as the successive trial evaluations meet with continued success (i.e. progressively smaller forecast error standard deviations). The length of a pattern move is determined by multiplying the distance covered by the immediately preceding exploratory and/or pattern moves by a factor (K). To illustrate this, (X_1, X_2) are defined to be the best exploratory move co-ordinates in the last set of exploratory moves and (Y_1, Y_2) are the initial co-ordinates of the previous pattern move. As an example, for pattern moves one, two and three in Figure 3, the co-ordinates are:

Pattern Move   Best Exploratory Move     Previous Pattern Move
Number         Co-ordinate (X_1, X_2)    Initial Co-ordinate (Y_1, Y_2)
1              (.49, .49)                (Initial Point A = .5, C = .5)
2              (.47, .47)                (.49, .49)
3              (.44, .44)                (.47, .47)

Next the length of the pattern move is computed using the following equation:

New Co-ordinate (Z_1, Z_2) = (X_1, X_2) + K [(X_1, X_2) − (Y_1, Y_2)].

In this example the pattern move multiplier (K) is set equal to 1.0 and the resulting pattern moves are:

Pattern Move   New Co-ordinate
Number         (Z_1, Z_2)     (X_1, X_2)    (Y_1, Y_2)    K [(X_1, X_2) − (Y_1, Y_2)]
1              (.48, .48)     (.49, .49)    (.50, .50)    (−.01, −.01)
2              (.45, .45)     (.47, .47)    (.49, .49)    (−.02, −.02)
3              (.41, .41)     (.44, .44)    (.47, .47)    (−.03, −.03)

Thus as long as the search procedure continues to encounter successful trial evaluations, the distance covered by each of the pattern moves increases rapidly.

[Figure 4: Pattern Search Flow Chart. Key: A, C = starting values for A and C; S = forecast error standard deviation for point (A, C); A', C' = best exploratory move values for A and C; S' = forecast error standard deviation for point (A', C'). Source: Berry and Bliemel.]

[Figure 5: Exploratory Move Flow Chart (the route shown is carried out for each co-ordinate separately). Source: Hooke and Jeeves.]

In the smoothing model that was used in this study the pattern move multiplier was given a value of 2.0. This had the effect of doubling the distance between the best exploratory move co-ordinate and the previous pattern move initial co-ordinate and assigning that value to the next pattern move step size. Berry and Bliemel (2) experimented with multiplier values in the interval between 1.0 and 4.0 and found that the computing time and forecast error obtained were relatively insensitive to such changes. A value of 2.0, however, was found to be marginally superior with regard to minimizing the number of exploratory moves needed to obtain an optimum for the function. The exploratory and pattern moves are used in a repetitive manner to guide the search toward improved smoothing constant values.
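The exploratory and pattern move logic just described can be sketched as follows. This is a simplified variant of the Hooke and Jeeves procedure, not the author's program: a quadratic objective stands in for the forecast error simulation, and the multiplier (2.0) and step size reduction factor are taken from the surrounding text:

```python
def pattern_search(f, start, step=0.01, k=2.0, reduction=0.5, min_step=1e-4):
    """Minimize f by exploratory moves around a base point plus pattern moves.

    k is the pattern move multiplier; `reduction` is the step size
    reduction factor, a value in the open interval (0, 1).
    """
    def explore(point, value):
        point = list(point)
        for i in range(len(point)):
            for delta in (step, -step):        # try each co-ordinate +/- step
                trial = list(point)
                trial[i] += delta
                if f(trial) < value:           # keep a successful move
                    point, value = trial, f(trial)
                    break
        return point, value

    base, base_val = list(start), f(start)
    while step > min_step:
        new, new_val = explore(base, base_val)
        if new_val < base_val:                 # success: attempt a pattern move
            pattern = [x + k * (x - y) for x, y in zip(new, base)]
            pattern_val = f(pattern)
            base, base_val = new, new_val
            if pattern_val < base_val:
                base, base_val = pattern, pattern_val
        else:                                  # failure: reduce the step size
            step *= reduction
    return base, base_val

# stand-in objective whose minimum sits at the paper's terminal point
objective = lambda p: (p[0] - 0.02) ** 2 + (p[1] - 0.78) ** 2
best, best_val = pattern_search(objective, [0.5, 0.5])
```

Starting from (0.5, 0.5), the sketch walks "downhill" toward (0.02, 0.78) and halts once even the smallest step size fails to improve the objective, mirroring the stopping behaviour described next.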
A flow chart summarizing the pattern search procedure is presented in Figure 4 and a flow chart describing the exploratory moves is presented in Figure 5.

Step Size Reduction

As long as the exploratory moves continue to provide improvement in the forecast error standard deviation (S), pattern moves continue to be made. This process is represented by blocks 4 to 7 in Figure 4. The pattern moves continue until the direction of the search leads to unsatisfactory results. In this case, either the direction of the search is changed through a series of exploratory moves (block 2) or, if a better direction cannot be found, the step size for the exploratory move (.01 in the example shown in Figure 2) is reduced. The step size reduction is accomplished in the loop connecting blocks 2, 3, 8 and 9 in Figure 4. Should the reduced step size continue to produce unsatisfactory results (a higher forecast error standard deviation in this application), the search finally is terminated (block 10). In Figure 2, this occurs at A = .02, C = .78. The influence of the step size reduction factor on the selection of the smoothing constants is an area that will be considered later.

Generally speaking, the step size reduction factor for this application can be any number in the open interval between 0 and 1. The reason for this is that, when unsatisfactory results follow from a particular pattern move and subsequent exploratory moves do not yield better results, the step size reduction factor is multiplied by the pattern search step size to yield a new (smaller) pattern search step size. If the step size reduction factor were 0, the new pattern search step size would also be 0 and consequently the process would degenerate and stop.
Similarly, if the value were 1 then the new pattern search step size would be the same as the old one and the process would enter into an infinite series of iterations attempting to reduce the current step size.

Direction Change

The example shown in Figures 6A and 6B provides a good illustration of the pattern search procedure's ability to change the direction of the search.

[Figures 6A and 6B: Pattern Search: Change of Direction. Axes: Weighted Average Smoothing Constant (A) versus Trend Smoothing Constant (C). Legend: x---x pattern moves; x exploratory moves; ( ) forecast error standard deviation.]

These figures present the lower left hand portion of Figure 2 in more detail. In this case, the last pattern move leads the search to the point (A = .00, C = .00). The exploratory moves about this point (point #4 in Figure 6A) produce results that are inferior to those provided by the last set of exploratory moves (points #1-3). Thus the search returns to the best previous exploratory move (point #3 in Figures 6A and 6B). It then conducts a series of exploratory moves and establishes a new pattern move beginning at point #4 and ending at point #5 in Figure 6B. This new set of pattern moves leads rapidly away from point #3, tending towards slightly smaller A co-ordinates and much larger C co-ordinates. It is important to note that because a pattern move begins from the best exploratory move, the direction of the moves need not be in a straight line, but can be altered slightly to allow for directional changes.
As an example, the direction of the pattern move shifts slightly at point #7, taking advantage of the information provided by the last pattern move.

[Footnote: In cases where exploratory moves involve A and C values outside of the interval from 0 to 1, e.g. A = -.01 and C = .00, an arbitrarily high forecast error standard deviation value of 99999 is associated with such co-ordinates to drive the search back into the correct interval.]

General Discussion of Pattern Search

Hooke and Jeeves (6) report that in actual practice, pattern search has proved particularly successful in locating minima on hypersurfaces which contain "sharp valleys." On such surfaces classical procedures behave badly and can only be induced to reach the minimum slowly. Direct search procedures using only simple moves are forced into small step sizes in order to keep from moving out of the valley on each step. Consequently, though faster than classical techniques, such direct search procedures are not overly fast. Pattern search has the inherent potentiality of making pattern moves directly down the valley, and hence rapidly approaching the minimum (or maximum). Another advantage is that pattern search is well adapted to use on electronic computers, since it uses repeated identical arithmetic operations. Classical methods, developed for human use, often stress minimization of arithmetic by increased sophistication of logic, a goal which may not be desirable when a computer is used. Pattern search also provides an approximate solution, improving all the while, at all stages of calculation. This feature can be important when a tentative solution is needed before the calculations are completed. On the other hand, as Van Wormer and Weiss (10) point out, the technique does have some limitations.
It can only be used in deterministic rather than stochastic search problems. Other limitations are that a solution is not guaranteed in a finite time and any solutions discovered by the procedure are not guaranteed to be the minimum (or maximum). The nature of the search procedure is such that it may arrive and stop at a local minimum (or maximum) point.

This concludes the chapter on the workings of the pattern search method. As was mentioned earlier, this material has been presented to provide an aid to understanding the search system. In turn it is hoped that this understanding will aid the reader in evaluating the meaning and consequences of the analysis presented in this study. The following chapter is intended to provide a more specific type of information. It deals with the data that was actually used in the study.

CHAPTER IV

Data: Source and Selection

The purpose of this chapter is to give a description of the industry from which the data series used in this study's analysis were taken. By doing so it is hoped that the reader will gain a better understanding of the context within which the pattern search and exponential smoothing techniques were applied. I will also discuss the rationale behind the selection of the specific data series from the much larger set available to this study.

Data Source

The three data series that form the basis for the present study were selected from among a total of thirty-eight series made available by the Frazer Valley Milk Producer's Association. In an aggregate sense the series are composed of unit sales of fluid milk segregated according to container size, butterfat content and distribution channel. The data represent sales of fluid milk for the period January 1966 to December 1971 inclusive. Unit sales were used as opposed to dollar sales so as to eliminate the problem of price changes over time.
If dollar sales had been used and not adjusted to account for price increases there would be an upward bias in the data that would tend to distort the trend factor in the various series.

Dairy Industry

This section draws heavily on a paper prepared by the Federal Task Force on Agriculture (5). While the discussion deals with the dairy industry on a national level, the authors of that paper felt that many of the trends are general enough to be applicable to specific provinces. I will deal in added depth with the processing sector of the industry as this is the sector within which the Frazer Valley Milk Producer's Association operates.

The Canadian dairy industry has at least as many problems as most industries in the agricultural sector. Climatic conditions for milk production are less favourable in Canada than in most countries. The domestic per capita consumption of milk in all forms is falling. Even with a rapid increase in population, total Canadian consumption of milk in all forms is increasing only slightly. In addition, substitutes threaten to continue to erode the fluid milk producers' markets. Adjustments which should have occurred on farms and in processing plants have been slow in coming, partly because of government policies which have attempted to protect and maintain a type of agricultural production in which Canada has a marked disadvantage. Throughout the post-war years (1946-1972) federal dairy policies have supported manufacturing milk and cream prices by offers-to-purchase programs, by embargoes on virtually all dairy imports except specialty cheeses, and by other forms of subsidization. Support programs have provided seasonally stable prices, but the year-to-year changes in dairy programs have created uncertainties for investment in the entire industry. These economic conditions have retarded the rate of structural adjustment both in the producing and processing sectors.
At present there are about 110,000 manufacturing milk and cream shippers, of whom about 78,000 ship less than 100,000 pounds of milk (equivalents) annually, and about 21,000 fluid milk shippers, of whom nearly all ship over 100,000 pounds of milk. Except for those small scale producers who have little alternative use for the few resources they devote to dairying, and the largest scale producers (predominantly fluid shippers) who have obtained substantial economies of scale, primary dairy production in Canada is characterized by high costs. There is a similar situation in the composition of the processing-distributing sector. Of the close to 1,300 dairy factories, about two-fifths are relatively small with annual sales of less than $250,000. Both primary and secondary sectors are characterized by wide disparities in levels of technology and average costs. The smaller processors are expected to continue facing serious financial problems.

The composition of the British Columbia dairy industry can be seen in Table I. Traditionally B.C. has had a dairy industry heavily oriented to the fluid milk market. Approximately 70 percent of B.C.'s milk production is utilized in fluid sales as compared to the national average of 11 percent. On the other hand only 10 percent is utilized for manufactured dairy products compared to a national average of 36 percent. This utilization pattern is expected to continue through the 1970's.

TABLE I
Shipping Volume Distribution, 1966

Shipping Volume                     Cream   Mfg.   Fluid   Total
(lbs. per annum)                    (percent of all dairy farms)
British Columbia
  under 48,000                         18      4    (-)*      22
  48,000 - 95,999                       2      3      15      20
  96,000 and over                    (-)*      3      55      58
  Total                                20     10      70     100
Canada
  under 48,000                         43     12    (-)*      54
  48,000 - 95,999                       8     10       2      20
  96,000 and over                       3     14       9      26
  Total                                54     36      11     100

* less than .5 percent
Source: see Bibliography No. 5
The processing-distributing sector of the Canadian dairy industry comprises close to 1,300 factories or plants owned by nearly half as many companies. A large part of the sector is still made up of companies processing butter or cheese in small single plants. Large scale and multiproduct plants, operated by major corporations which sell a wide range of dairy products and have their own brand names, are integrating the sector across product lines. Through mergers and consolidations the degree of concentration in the sector is increasing markedly. Changes in technology and industrial structure have favoured large volume plants. New forms of packaging and merchandising as well as changes in the structure of competition have had a direct and equally important impact on the number and size of these processing firms. Condenseries, process cheese plants, and the larger ice cream plants, which typically have been operated by major corporations, are faced with the countervailing power of the retail chains. The impact of the retail chains on dairies has had far greater consequences. The retail chains have offered consumers lower prices for milk and other dairy items, and a greater choice of container sizes. More recently competition at the retail level has been heightened by the emergence of milk specialty stores in major cities, which by means of high volume sales and longer store hours, offer milk in 3 quart jugs at a lower price. The large capital requirements for modern pasteurizing and bottling plants and the need to meet the demand for diversified sizes and types of containers and types of products have put dairies in a weaker competitive position.
In addition, the bargaining strength of the supermarkets, which are accounting for an increased proportion of the dairies' sales, has put great pressure on the dairies to expand their businesses or to sell out to other distributors. According to the Federal Task Force paper (5) the trends in milk pasteurizing plants show an average annual closing of 25 plants, or about a 3 percent per year drop in the number of operating plants. Over the past eight years there has been no significant change in total employment. Sales per plant have risen by only about 5 percent annually. This change in structure is most pronounced in urban centers. The degree of industry concentration has increased significantly in the post-war period. The implications of these trends are that in the long run the processing-distributing sector will be completely integrated across product lines, and operated by a small number of large corporations and co-operatives. At this stage in the process and in the foreseeable future the degree of competition will be high and margins will be relatively low. As in the case of British Columbia where a provincial milk board administers retail prices, the board effectively determines the marketing margin for fluid milk products. Evidence suggests that these margins are set to cover the costs of the least efficient distributors and thus serve to reduce price competition. The existence of fixed margins for processing distribution provides considerable incentive for backward integration by chain stores into the processing field. Projections as to future consumption of milk in all forms depend upon the assumptions made and the type of analysis used. The Task Force paper notes two projections, one made by the Task Force itself and the other by the Department of Agriculture.
The Task Force study forecast a 9 percent growth in total Canadian consumption in the 15 years 1965 to 1980; the Department of Agriculture forecast a 14 percent growth in the same period. The main source of discrepancy between the two arises from differences in the treatment of 2 percent fluid milk. The Task Force estimates appear in Table II. They indicate that between 1965 and 1980 per capita consumption of milk in all forms will decline by 18 percent but that total consumption will rise by 9 percent because of population growth. While cheese consumption in 1980 is expected to be more than double that of 1965, total consumption of other milk products will fall.

TABLE II
Per Capita and Total Consumption of Dairy Products, 1965, 1967 and Projections for 1975 and 1980, Assuming Constant Real Prices

                                1965     1967     1975     1980   1980 as a % of 1965
Per Capita Consumption in Pounds of Products
  Fluid Milk                   275.0    267.5    246.0    233.0       84.7
  Butter                        18.5     16.9     14.0     13.1       70.8
  Cheese                         9.0      9.9     12.8     14.4      161.1
  Other Milk Products          114.4    114.5    102.8     99.6       89.4
Total Consumption in Milk Equivalents (000,000 pounds)
  Fluid Milk                   5,263    5,325    5,703    5,943      115.3
  Butter                       8,372    7,933    7,773    7,973       95.2
  Cheese                       1,714    1,971    2,949    3,653      213.1
  Other Milk Products          2,178    2,337    2,438    2,594      119.1
  Total                       17,230   17,149   17,809   18,831      109.3

On the supply side, Table III presents a portion of the Task Force projections for the changes in cow numbers, yield levels, and milk sales for the same time period.

TABLE III
Cow Numbers, Yield Levels and Milk Sales for 1965 and Projections for 1975 and 1980

            Cow Numbers (000's)    Sales per Cow (lbs)      Milk Sales (millions lbs)
             1965   1975   1980     1965    1975    1980      1965    1975    1980
B.C.
               86     77     82    9,536  11,100  11,700       805     855     842
Canada      2,822  2,213  2,048    5,909   7,909   8,722    16,672  17,502  17,863

As can be seen in the Table, the absolute number of cows, both in B.C. and Canada, will decline. This will be offset by marked increases in productivity per cow. While milk sales will be up over the period in question they will only increase by a little over 7 percent.

Data Selection

The purpose of this section is to describe the rationale that was used in selecting the three data series that were used from the larger number that were available. The first step was to select from the overall set those data series which were complete. By this I mean those series in which data was available for every monthly period from January 1966 to December 1971. Out of the 38 series in the initial set only 20 met this basic requirement. The purpose of this procedure was to ensure consistency between the series and thereby to allow comparisons to be made. By making sure that each data series used was composed of 72 periods, the problem of variability in the number of periods used to determine the constant value was eliminated. Also, by doing this each of the data series represented the same time period. The second step was to plot each series on a very coarse grid in order to get a rough feeling for the size and direction of the trend and the degree of random fluctuation between individual periods. The purpose of this was to try to select from the set a data series for each of the trend types - rising, falling and stable. There was an attempt in doing this to select a relatively clean series for each, i.e. one with fairly low random fluctuations.
The idea behind using three separate series for the various trial runs on the pattern search process was to see if there was any difference in the technique's ability to select constant values for various types of trends. The attempt to minimize the random factor was to increase the comparability of the three series by reducing extraneous influences. In each case the selection was done on a purely visual basis. Once the initial selection of 9 series was accomplished the next step was to more accurately plot the series in order to make a final selection. At this stage the PLOT routines compiled by the U.B.C. Computing Center were used. Again on a visual basis the same criteria of trend and degree of random fluctuation were used. The result of this process was the selection of the three data series listed below.

Rising Trend - (2 Quart Wholesale 2.0% Butterfat Carton Shipping)

[72 monthly unit sales values, January 1966 to December 1971; see Appendix A for a graph of the series.]

Falling Trend - (1 Quart Retail 3.2% Butterfat Carton)
[72 monthly unit sales values, January 1966 to December 1971; see Appendix A for a graph of the series.]

Stable Trend - (2 Quart Wholesale 3.8% Butterfat Carton)

[72 monthly unit sales values, January 1966 to December 1971; see Appendix A for a graph of the series.]

For a graph of these three series see Appendix A.

CHAPTER V

Analytical Methodology

The purpose of this chapter is to provide the reader with a discussion of the rationale and procedure used in analyzing the response characteristics of the pattern search - exponential smoothing forecasting system.
I will start off with a discussion of the approach used in setting the initial values for the sales average (BEGAVG), the trend estimate (BEGTND) and the seasonal factors (BSEAS(N)). Next I will present the reasoning behind the selection of the error measure used in this study. Thirdly will be a discussion of the standard values used as a comparison for the experimental run results. Lastly I will present the methodology that was used in the analysis.

Selection of Initial Values

The method used to determine the initial values of the sales average, the trend estimate and the seasonal factors remained the same for all three of the series tested. In addition, once the values were determined, they remained constant throughout all the trials. It was decided not to subject these three inputs to analysis. Due to the very nature of a weighted moving average method such as exponential smoothing, inputs such as these lose their impact as the time periods move forward. By the time the system reaches the fifteenth or twentieth period and is forecasting into the future, any changes in these initial values will have a negligible effect.

To begin with, the initial sales average was computed simply by determining the average value for the first year in the data series. This was accomplished by summing the values of the twelve monthly periods of the first year and then dividing this sum by twelve. A slight bias is introduced by this procedure if there is a trend in the data series. If the trend is positive then the value of the initial sales average will be biased upward. The degree of bias depends on the size of the trend. Of course, the opposite holds true for a negative trend.
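The BEGAVG computation, and the upward bias a rising trend introduces, can be illustrated as follows (BEGAVG is the thesis program's name for the value; the sample series is hypothetical):

```python
def initial_sales_average(series):
    """BEGAVG: the average of the first twelve monthly periods."""
    return sum(series[:12]) / 12.0

# A hypothetical 72-period series that starts at 100 units and rises
# by 10 units per month.
sales = [100 + 10 * t for t in range(72)]
avg = initial_sales_average(sales)
print(avg)  # 155.0 - pulled above the first month's level of 100 by the trend
```

With a falling series the same arithmetic would bias the starting average downward, as the text notes.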
I calculated the initial trend estimate by subtracting the first value of the data series from the last value and then dividing this difference by the number of periods in the data series. This gave me a mean value for the trend of the series. This mean value was then used as an estimate of the initial trend. By the time the series reached its later values the effect of the initial estimate will have been diluted by the smoothing procedure so as not to noticeably influence the results of the process. It is because of this that an estimate could be used for the initial value.

The third input determined was the initial seasonal factor for each month of the seasonal cycle. In this study the seasonal cycle was a year and hence a seasonal factor had to be developed for each month. The method used to develop a set of initial values was to sum the values of the months in a yearly cycle, then divide this sum by 12. This yields an average value for each year. I then divided each monthly value by the yearly average. This gave me the ratio that represents the seasonal factor. I repeated this process for the first two years of the data series, summed the two values for the individual months, then divided by two to determine the mean value for the two years. The twelve values realized from this procedure were then normalized so that they summed to twelve. The reason that I chose to use two years rather than three or more was simply one of convenience. Again, as with the other two values being determined, the size and direction of the trend of the series influences the calculation of these initial estimates. If the values of the series rise then the seasonal factors representing the months at the latter end of the cycle will be biased upwards relative to those months at the beginning of the cycle.
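The two initializations just described can be sketched as below (an illustrative sketch, not the thesis program; BEGTND and BSEAS are the program's names for these values, and the sample series is hypothetical):

```python
def initial_trend(series):
    """BEGTND: (last value - first value) / number of periods,
    i.e. the mean per-period trend of the whole series."""
    return (series[-1] - series[0]) / len(series)

def initial_seasonals(series):
    """BSEAS(N): divide each month of the first two years by that
    year's average, average the two ratios month by month, then
    normalize the twelve factors so they sum to twelve."""
    ratios = []
    for year in range(2):
        months = series[12 * year:12 * (year + 1)]
        year_avg = sum(months) / 12.0
        ratios.append([m / year_avg for m in months])
    means = [(a + b) / 2.0 for a, b in zip(ratios[0], ratios[1])]
    scale = 12.0 / sum(means)
    return [m * scale for m in means]

# Hypothetical flat series with a repeating June peak.
sales = ([100.0] * 5 + [160.0] + [100.0] * 6) * 6   # six identical years
factors = initial_seasonals(sales)
print(round(factors[5], 3), round(sum(factors), 3))
```

On this flat series the June factor is about 1.52 and the other months fall just below 1.0; a trending series would tilt the factors in the way the paragraph above describes.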
Error Measurement

As I mentioned earlier in my discussion of exponential smoothing, any of a number of measures can be used to track the accuracy of the smoothing process. The reason the standard deviation of the forecast error was chosen is that if a forecast system is to be used in a real life application then the method of error measurement should be linked to profit. If, as is true in many business situations, a large mistake is more than proportionally "costly" compared to a small one, then the error measurement of the forecast system should reflect this non-linearity. As an example, the mean absolute error, which is a linear measurement scale, can only appropriately be used where a 10 point inaccuracy in a forecast is only twice as "bad" as a 5 point error. The standard deviation of the forecast error is not linear and can appropriately be applied to situations where bigger errors are more than proportionally costly compared to small ones. Because most profit situations are not linear in their reaction to error I felt that the use of this measure would be more appropriate. It should be noted that I made no attempt to link my error measure with the actualities of the dairy industry.

Standard Run Values

As part of my methodology I developed a set of standard values which, when used as inputs into the pattern search process, yielded a series of benchmark values against which other experimental runs could be evaluated. In actuality there were two standard value sets. The first was used in analyzing changes in the pattern search parameters, e.g. pattern search step size, etc. The second, which resulted from my analysis of the parameter changes, was used in the analysis of the changes in the beginning values for the smoothing constants. The values that I used for the first set of standards can appropriately be described as "traditional" ones. They are values which are typically used in a pattern search application
The values are li s t e d belowt Maximum number of pattern moves (MAX) 50 Pattern search step size (DELO) .05 Step size reduction factor (RMO) .500 Minimum step size (D) .01 Starting Sales Average Smoothing Constant (A).500 Starting Trend Smoothing Constant (B) .500 Starting Seasonal Factor Smoothing Constant (C) .500 When applied to the three sets of data used in this study these values yielded! 1) minimum error value 2) total number of iterations used in arriving at the minimum error value 3) value for each of the smoothing constants -A, B and C. Subsequent changes were then compared against these figures to determine i f the change had resulted in improved results. As I have just mentioned the second set of standard values were derived from the analysis of changes in MAX, DELO, D, and RH0. After I had completed the analysis I determined which values had yielded the best results and used these values in my analysis of changes in A, B and C. The values I used were: Maximum number of pattern moves (MAX) 100 Pattern search step size (DELO) .1 Step size reduction factor (RHO) .5 Minimum Step Size (D) .001 Again as before, when these were applied to the data used,the computations resulted in a number of figures which I subsequently 63 compared a l l changes against. Methodology I intend to discuss in this section the approach taken for the sensitivity analysis of the smoothing systems parameters. Essentially the method involved a t r i a l and error approach to the selection of the input values. It should he noted here that a l l of the analysis performed was fa c i l i t a t e d by the use of a previously developed pattern search-exponential smoothing computer program. The computer program "evolved" in several stages over a period of three years. People associated with i t s development were* W. Berry - Purdue University J. Wilcox - Indiana University D. 
Given that I had developed a base level against which to make comparisons, I then proceeded to change the input values one at a time. I used a "one at a time" approach so that I could effectively isolate the results of any changes made and hence determine the cause-and-effect relationship.

The first change I made was to extend the length of the pattern search process. I did this by increasing the value of MAX (maximum number of pattern moves) to 100 and by decreasing D (minimum step size) to .001. Together these two parameters form the boundary that terminates the pattern search process. The purpose of this move was to get a more complete feeling for the response surface of the search process as it applied to my data series. I also wanted to find out if I was prematurely terminating the process before it arrived at the optimum constant values. I subsequently incorporated these changes into the rest of my trials.

The second series of changes that I made were changes in DELO (pattern search step size). I made five experimental runs, each with a different value. The values were: .01, .1, .15, .2, .3. The purpose of this series of changes was to see if the smoothing constant values, the minimum error value or the number of iterations were sensitive to search step sizes. As a result of these trials I selected a value of DELO = .1 as the most efficient value for use in all subsequent experimental runs.

The next series of changes made were on RHO (step size reduction factor). Again I tried five different values (.100, .250, .400, .600, .750). The reason for these experimental runs was the same as that above for DELO. I wanted to see the effect of these changes on the smoothing constant values, the minimum error value and the number of iterations. The results of these changes led me to select the value of RHO that had been used in the standard run (.500)
and use this value for the remainder of my analysis.

Up to this point I had been keeping the smoothing constant values unchanged. Now that I had developed what appeared to be the best set of pattern search parameter values, I applied the analysis to the smoothing constants. The previous analysis had given me values that I used as a standard for this second part of the analysis. These values were:

MAX = 100
DELO = .100
RHO = .500
D = .001
A = .500
B = .500
C = .500

Because I was working with three smoothing constants (A, B and C) I took a two-pronged approach. Firstly, I made two changes that were consistent for all three constants: A, B and C were initially all given values of .250 and then were all given values of .750. I used this consistent approach because in most cases of practical application the user is not aware of the range within which the best constant values might lie and consequently usually will give all three constants the same value. The second set of changes that were made were based on the consistency of the constant value results over all of the previous analysis. Because the A, B and C values remained almost constant over all the previous experimental runs that were made, it was decided to see what would happen if the initial constant values were put near the solution constant values. Given that the three data series each yielded different results I gave them different initial values. For the rising series I assigned the values A = .200, B = .600, C = .200 while for both the falling and stable series I assigned the values A = .800, B = .800 and C = .200 as initial values. Table IV gives a listing of the trial runs made and the values associated with each trial run.
TABLE IV
Input Values on an Experimental Run Basis

Run         MAX   DELO   RHO    D      A     B     C
Standard     50   .05    .500   .01    .500  .500  .500
2nd Run     100   .05    .500   .001   .500  .500  .500
3rd Run     100   .1     .500   .001   .500  .500  .500
4th Run     100   .01    .500   .001   .500  .500  .500
5th Run     100   .15    .500   .001   .500  .500  .500
6th Run     100   .2     .500   .001   .500  .500  .500
7th Run     100   .3     .500   .001   .500  .500  .500
8th Run     100   .1     .750   .001   .500  .500  .500
9th Run     100   .1     .250   .001   .500  .500  .500
10th Run    100   .1     .100   .001   .500  .500  .500
11th Run    100   .1     .400   .001   .500  .500  .500
12th Run    100   .1     .600   .001   .500  .500  .500
13th Run    100   .1     .500   .001   .250  .250  .250
14th Run    100   .1     .500   .001   .750  .750  .750
*15th Run   100   .1     .500   .001   .800  .800  .200
+16th Run   100   .1     .500   .001   .200  .600  .200
17th Run    100   .1     .500   .001   .500  .500  .200

*  These values were applied only to the stable and falling series.
+  These values were applied only to the rising series.
N.B. In each run the input under consideration is the one whose value was
     changed for that run.

As can be seen from both the Table and my description of the analytical process, I tried to base each change on the degree of success or failure that I had noted in the previous moves. Because I followed this rule-of-thumb procedure I only analyzed a rather narrow range of values when compared to the possibilities open to me. I did this on purpose. Given the usual constraints on time, computer resources, etc., I felt that the analysis had to be approached in some sort of logical manner. By following along what looked like good leads or directions, I attempted to maximize the results of the effort invested and minimize the production of results with little value. A more extensive "blanket" approach would have covered areas that were not worthwhile.
CHAPTER VI
Data Analysis and Conclusions

As has been mentioned several times before, the purpose of this study was to undertake a sensitivity analysis of a number of inputs into the pattern search technique as it applies to exponential smoothing. Consequently the focus of this chapter will be on an analysis of the data generated by the study. Each of the inputs that were changed is dealt with on a separate basis. The maximum number of pattern moves (MAX) and the minimum step size (D) are discussed first. Following this is the pattern search step size (DELO). Then the step size reduction factor (RHO) will be discussed. As in the methodology, this analysis will lead to some new standard values. Next, based on these standard values, the experimental runs involving the exponential constants will be discussed. Following this the final conclusions of the study will be drawn. At each stage of the analysis the three separate time series that were used will be compared for differences in results.

TABLE V
List of Experimental Run Values for all Time Series*

Experimental Run   MAX   DELO   RHO    D      A     B     C
Standard            50   .05    .500   .01
2nd                100   .05    .500   .001
3rd                100   .10    .500   .001
4th                100   .01    .500   .001
5th                100   .15    .500   .001
6th                100   .20    .500   .001
7th                100   .30    .500   .001
8th                100   .10    .750   .001
9th                100   .10    .250   .001
10th               100   .10    .100   .001
11th               100   .10    .400   .001
12th               100   .10    .600   .001
13th               100   .10    .500   .001   .250  .250  .250
14th               100   .10    .500   .001   .750  .750  .750
+15th              100   .10    .500   .001   .800  .800  .200
‡16th              100   .10    .500   .001   .200  .600  .200
**17th             100   .10    .500   .001   .500  .500  .200

For the 13th through 17th runs the following values were held as standards:
MAX = 100, DELO = .10, RHO = .500, D = .001.
+  Values used for the falling time series only
‡  Values used for the rising time series only
** Values used for the stable time series only
*  The value changed in each run was the input under consideration for that run.

Maximum Number of Pattern Moves and Minimum Step Size

Given that in the initial standard run MAX = 50 and D = .01, the first experimental run consisted of increasing MAX to 100 and decreasing D to .001. This had the effect of extending the terminal limits on the pattern search process. In each of the three time series that were used, the effect of the initial values of MAX and D was to have the search process terminate when the minimum step size was reached. MAX never effectively acted as an upper bound on the number of pattern moves permitted. On the first experimental run the same held true again. In each of the three cases the search was terminated by reaching the minimum step size.

Given that the computer printed the results of the search process in such a format that every pattern move and its associated error and smoothing constant values were available, it was decided to use the new values (MAX = 100, D = .001) for the remainder of the study. This was an efficiency oriented move. It enabled all subsequent moves to be compared both to the standard run and the second experimental run. For a complete listing of all of the experimental runs refer to Table V.

The effects of the changes in MAX and D are not very great. In the 2nd experimental run of the rising series this extension of the search process only reduced the error factor from a value of 43.611 in the standard run to 43.617. This is a virtually inconsequential change. In order to achieve this reduction the search process had to increase the number of pattern moves made from 16 in the standard run to 24 in the experimental run. This is a rather large increase given the small decrease in error. The values of the exponential constants also changed very little. The standard run yielded solution values of A = .187, B = .618 and C = .012. The experimental run solution values were A = .190, B = .619 and C = .011. Again, this is a virtually inconsequential change. This pattern of results held true for every other experimental run and was consistent across the time series.

Without going into the actual results of the remaining 16 runs, it can be said, without exception, that the error value and exponential constant values benefited very little from extending the search process to a minimum step size of .001. Figure 7 is illustrative of why the extensions resulted in such minor improvement. By far the largest portion of error value reduction and associated constant value change takes place within the pattern moves that are linked to the first two step sizes in the search process. In other words, the pattern moves associated with the initial step size value and the first reduction of it account for most of the changes in the results. Indeed, the bulk of this improvement in error and constant values occurs before any step size reduction takes place. Very little change takes place after these initial moves. As Figure 7 shows, the 8 pattern moves that are associated with a DELO of .05 reduced the error value from 97.404 to 44.304. The remaining 16 pattern moves, which account for the rest of the step size reduction, only further reduce the error value to 43.617.

An obvious conclusion is that an extension of the pattern search process is not a worthwhile move. Of course, such a statement must be taken within the context of the time series used, the importance of forecast accuracy and the costs incurred in extending the search process. The existence of this response characteristic of the pattern search process is quite important as it had a marked impact on many other changes that were made in the system during the various experimental runs. This area will be discussed in greater detail in the appropriate sections.
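The termination logic discussed in this section, where the search stops either after MAX pattern moves or once the step size falls below D, can be sketched as a generic Hooke-Jeeves loop. This is an illustrative reconstruction, not the thesis's actual FORTRAN program: the objective function f (forecast error as a function of the three smoothing constants) and every name other than the parameter labels MAX, D, DELO and RHO are stand-ins.

```python
def pattern_search(f, start, delo=0.05, rho=0.5, d=0.01, max_moves=50):
    """Minimize f over smoothing-constant vectors clipped to [0, 1].

    delo      -- initial exploratory step size (DELO)
    rho       -- step size reduction factor (RHO)
    d         -- minimum step size (D); reaching it ends the search
    max_moves -- maximum number of pattern moves (MAX)
    """
    clip = lambda x: min(1.0, max(0.0, x))

    def explore(base, step):
        # Perturb each constant by +/- step, keeping any improvement.
        point = list(base)
        for i in range(len(point)):
            for delta in (step, -step):
                trial = list(point)
                trial[i] = clip(trial[i] + delta)
                if f(trial) < f(point):
                    point = trial
                    break
        return point

    base, step, moves = list(start), delo, 0
    while step >= d and moves < max_moves:
        new = explore(base, step)
        if f(new) < f(base):
            moves += 1  # a successful pattern move
            # Extrapolate along the improving direction (2*new - base).
            pattern = [clip(2 * n - b) for b, n in zip(base, new)]
            base = pattern if f(pattern) < f(new) else new
        else:
            step *= rho  # no improvement at this step size: reduce it
    return base, moves
```

Under this structure the study's observation is easy to see: once the exploratory moves at the initial step size stop improving, each further pass only multiplies the step by RHO, so lowering D merely lengthens the tail of the search.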
[Figure 7: Pattern Search Step Size and Associated Error Value (Rising Time Series - 2nd Experimental Run). Y axis: error value; X axis: pattern search step size (DELO). The error value falls from about 97 to about 44 over the 8 pattern moves made at the initial step size, with only marginal reduction over the remaining moves at smaller step sizes.]

Pattern Search Step Size

This section deals with those changes that were made in the pattern search step size (DELO). As Table V shows, DELO was given a series of values ranging from .01 to .30. The purpose of making these changes was to see if any consistent patterns of response developed. One point that should be mentioned here is the interrelationship of the pattern search step size (DELO) and the step size reduction factor (RHO). As the search process proceeds, DELO is multiplied by RHO to yield a new DELO value for the following pattern and exploratory moves. This multiplication occurs each time a step size reduction is required. Changes in either value will affect the product of the two. This interrelationship was not explored in this study because it did not appear fruitful to do so. This will become clearer as the results of the changes in DELO and RHO are studied.

To start off with, changes in DELO produced very little change in either the minimum error value or the exponential constant values. As Table VI shows, the values for the exponential constants and the error factor remained very stable throughout the experimental runs. The pattern search process seems capable of consistently reaching the same (or very similar) end values no matter what step size was used.

TABLE VI
Results From Changes in DELO

Experi-   RISING SERIES                       FALLING SERIES                      STABLE SERIES
mental    Itera-  Minimum    Constant         Itera-  Minimum     Constant        Itera-  Minimum    Constant
Run       tions   Error      A    B    C      tions   Error       A    B    C     tions   Error      A    B    C
2nd        24     43.617   .190 .619 .011      18    3406.812   .822 .999 .001     20    709.172   .532 .474 .025
3rd        16     43.617   .190 .617 .011       8    3405.810   .821 .999 .000     17    709.173   .532 .473 .025
4th        32     43.617   .191 .618 .011      31    3406.617   .822 .999 .000     23    709.172   .533 .477 .025
5th        15     43.617   .190 .618 .012      16    3406.075   .822 .999 .000     18    709.170   .532 .474 .025
6th        14     43.617   .190 .617 .012       9    3405.807   .822 .999 .000     13    709.172   .532 .474 .015
7th        17     43.617   .191 .617 .012      14    3406.076   .822 .999 .000     18    709.170   .532 .474 .025

[Figure 8: Pattern Search Step Size and Associated Number of Iterations. X axis: DELO (.01 to .30); Y axis: number of iterations.]

The major difference observed between the various values of DELO was the number of iterations or pattern moves needed to arrive at the final values. This is a fairly important point because, as a generalization, the lower the number of necessary pattern moves the greater the efficiency of the system. Here efficiency refers primarily to computer time costs. There is not, however, a one-to-one relationship between number of iterations and computer time costs. This is primarily due to the existence of base levels of compilation, input and output costs associated with computer usage.

As Figure 8 shows, there is quite a bit of variation in the number of iterations as a result of changes in DELO. For example, the number of iterations for the falling time series ranges from 8 when DELO = .10 to 32 when DELO = .01. It is interesting to note the consistency between the time series. The fact that the three of them move in a fairly consistent pattern strengthens the conclusions with regards to generality. Although the data presented here represent only three different time series, and as such a very small sample, tentative conclusions can be drawn. It seems that the best values for DELO lie in the middle ground of the range that was looked at. Between .10 and .20 seems to be an area within which the number of iterations is minimized, given any particular level of error factor.
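The DELO-RHO interrelationship described at the start of this section amounts to a geometric schedule: each reduction multiplies the current step by RHO, so after k reductions the step is DELO x RHO^k, and the search ends once this falls below D. A short sketch (the function name is mine, for illustration only):

```python
def step_schedule(delo, rho, d):
    """List the step sizes the search will use before D terminates it."""
    steps, step = [], delo
    while step >= d:
        steps.append(step)
        step *= rho
    return steps

# The 2nd experimental run's settings (DELO = .05, RHO = .500, D = .001)
# yield six step sizes: .05, .025, .0125, .00625, .003125, .0015625.
```

This is why the study's extension of D from .01 to .001 added only a few extra (and, as shown above, largely unproductive) step sizes to the tail of the search.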
Given these results, a value of DELO = .10 was chosen to be used for the remaining experimental runs.

Step Size Reduction Factor

The focus of this section is on the results that were generated from changes in the step size reduction factor (RHO). As with the other experimental runs, the purpose of subjecting RHO to a series of orderly changes was to see what the effects of these changes would be on the error and exponential constant values and on the number of iterations.

As Table V shows, RHO was given a series of six separate values, all falling in the range from .100 to .750. Even though the spread of this range of values is quite large, the resulting changes in the minimum error value and the exponential constant values are very small. Indeed, the changes in these values are inconsequential. For an example, the minimum error value for the rising time series remained constant at 43.617 throughout the changes. Table VII presents a complete listing of the results of these experimental runs. As that table shows, the exponential constant values also remained very consistent. In virtually every case, the changes that did occur took place at the third decimal place. This is a pattern which turned out to be consistent across time series.

The reason for these results can be traced to the nature of the responsiveness of the pattern search system. As was discussed earlier in this chapter, and as Figure 7 shows, most of the changes in the error and constant values take place before RHO has a chance to have an effect. Most of the change takes place before the first step size reduction is implemented.

TABLE VII
Results From Changes in RHO

Experi-   RISING SERIES                       FALLING SERIES                      STABLE SERIES
mental    Itera-  Minimum    Constant         Itera-  Minimum     Constant        Itera-  Minimum    Constant
Run       tions   Error      A    B    C      tions   Error       A    B    C     tions   Error      A    B    C
3rd        16     43.617   .190 .617 .011       8    3405.810   .821 .999 .000     17    709.173   .532 .473 .025
8th        33     43.617   .191 .619 .011      21    3405.808   .823 .999 .000     39    709.172   .532 .474 .024
9th        18     43.617   .191 .617 .011      10    3405.808   .822 .999 .000     14    709.170   .532   -  .025
10th       13     43.617   .190 .619 .012      13    3405.807   .822 .999 .000     19    709.173   .531 .473 .025
11th       19     43.617   .190 .617 .011      10    3405.808   .822 .999 .000     19    709.170   .532 .474 .025
12th       22     43.617   .191 .620 .011      19    3405.807   .822 .999 .000     22    709.171   .532 .475 .025

The result of this is that the pattern search system, with regards to the error and constant values, is very insensitive to changes in RHO. Figure 9 depicts graphically the changes that occur in the number of iterations when the step size reduction factor is altered. The number of iterations is the only factor that is responsive to changes in RHO. As the graph shows, the most marked changes occur when RHO is assigned large values, i.e. .600 or .750. When these values are used the number of iterations rises sharply. The stable series almost doubled its normal number of iterations when RHO = .750. Looking towards the other end of the graph, it can be seen that the responsiveness to smaller values is more limited. For values between .100 and .500 the stable series falls between 14 and 19 iterations. Here too, the three time series demonstrate a close consistency.

The conclusion to be drawn is that the initial value of RHO is not of particular importance. As long as large values are avoided the system will perform well. For the remaining experimental runs a value of .500 was used for RHO. This value was chosen as it represents both a reasonable value based on the findings of this study and a value that is commonly used, i.e. "traditional."

Exponential Constants - A, B and C
The focus of this section is to determine the effects of changes in the initial values of the exponential constants.

[Figure 9: Step Size Reduction Factor and Associated Number of Iterations. X axis: RHO (.100 to .750); Y axis: number of iterations.]

TABLE VIII
Results From Changes in Exponential Constants (A, B, C)

Experi-   RISING SERIES                       FALLING SERIES                      STABLE SERIES
mental    Itera-  Minimum    Constant         Itera-  Minimum     Constant        Itera-  Minimum    Constant
Run       tions   Error      A    B    C      tions   Error       A    B    C     tions   Error      A    B    C
3rd        16     43.617   .190 .617 .011       8    3405.810   .821 .999 .000     17    709.173   .532 .473 .025
13th       14     43.617   .190 .617 .011      12    3405.808   .822 .999 .000     15    709.172   .532 .473 .025
14th       23     43.617   .190 .619 .011      15    3406.819   .822 .999 .000     12    709.171   .533 .476 .024
15th        -        -       -    -    -        6    3405.808   .822 .999 .000      -       -        -    -    -
16th       13     43.617   .190 .618 .011       -       -         -    -    -       -       -        -    -    -
17th        -        -       -    -    -        -       -         -    -    -      16    709.172   .531 .474 .025

A two-pronged approach was taken in this section of the analysis. The first part consisted of giving each of the three constants, A, B and C, the same value. This was done for three different values, i.e. .250, .500 and .750. The second part consisted of approaching each time series, and each constant value in that series, individually. Based on information already generated in the study, each constant was given an initial value that was close to the solution value. Table V presents the list of these changes. It should also be noted that a standard set of values was used for the other inputs into the system. These values are:

MAX = 100    DELO = .10    RHO = .500    D = .001

This set of values consists of the "best" values that the analysis has generated up to this point.
With regards to the first set of changes, Table VIII shows very little movement either in the minimum error value or the solution constant values as a function of the initial constant values. Experimental runs #3 (A = .500, B = .500, C = .500), #13 (A = .250, B = .250, C = .250) and #14 (A = .750, B = .750, C = .750) present a complete list of the results. Virtually the only changes that take place do so at the third decimal place. This pattern of negligible change is again present in runs #15 (A = .800, B = .800, C = .200), #16 (A = .200, B = .600, C = .200) and #17 (A = .500, B = .500, C = .200). Setting the initial constant value near the solution value has very little effect on either the solution value or the minimum error value. As a conclusion, it can be said that with regards to the minimum error value and the exponential constant values, the pattern search process is unresponsive to changes in the initial constant values.

With regards to the number of iterations, Figure 10 graphically depicts the responsiveness of the pattern search system in this area. The first thing to notice is that there is not very much consistency between the series. This lack of a systematic pattern weakens the generality of any conclusions that might be drawn. The rising series appears to be more appropriate for application of a set of low values. The falling series is rather inconsistent, with a middle value, i.e. .500, seeming to work better than either larger or smaller values. The opposite is true for the stable series. In this case both small (i.e. .250) and large (i.e. .750) values seem to work better than middle values. Because these statements each represent only one time series, the reader should refrain from making broad inferences based on this data.
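The three constants A, B and C correspond, in the Winters model cited in the bibliography, to the level, trend and seasonal smoothing weights. The update step below is a standard Winters multiplicative sketch, not the thesis's own program; the function and argument names are mine:

```python
def winters_update(level, trend, seasonal, obs, a, b, c):
    """One period's update of the Winters multiplicative model.

    a, b, c  -- smoothing constants for level, trend and seasonality
    seasonal -- list of factors, one per period in the season;
                seasonal[0] is the factor for the current observation
    """
    new_level = a * (obs / seasonal[0]) + (1 - a) * (level + trend)
    new_trend = b * (new_level - level) + (1 - b) * trend
    new_factor = c * (obs / new_level) + (1 - c) * seasonal[0]
    # Rotate the factors; the one just updated applies a full season later.
    factors = seasonal[1:] + [new_factor]
    forecast = (new_level + new_trend) * factors[0]  # one period ahead
    return new_level, new_trend, factors, forecast
```

A search routine would drive a candidate triple (A, B, C) through updates like this over the whole series, scoring it by accumulated forecast error; that accumulated error is the "minimum error value" reported in the tables above.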
Conclusions

This section is essentially a compilation of all the conclusions that have been made so far. To begin with, given the response patterns of the search process, the system is not influenced very much by changes in the input parameters.

[Figure 10: Constant Value and Associated Number of Iterations. X axis: initial constant value (.250, .500, .750); Y axis: number of iterations.]

Throughout the experimental runs there developed a consistent pattern of minimal change in the error value and the exponential constants. The search process appears to be able to consistently reach the same minimum error value and solution constant values no matter what the inputs. This statement does not encompass the use of such extreme values as 0 or 1, however. The only dependent variable that changes at all is the number of iterations or pattern moves. There do appear to be certain input values that minimize the number of iterations that the pattern search system needs to arrive at the solution values.

For the maximum number of pattern moves (MAX), 50 is arbitrarily chosen as a satisfactory number. In no case in the study was the search process terminated by running over MAX. Because of this it seems appropriate to set MAX at such a value as not to hinder the search process, yet still be able to stop it in cases where successive pattern moves are producing little improvement. The best way to do this is to set it at a value just beyond the number of iterations which one would expect the search process to run to. In the study, even under the most unfavourable value assignments, the number of iterations never reached beyond 39. Hence an appropriate value for MAX would be 50.

With regards to minimum step size (D), the small size (.001) that was used throughout the study was never needed. It could easily have been increased to .01 with only a negligible effect on the dependent variables.
At values smaller than .01 the step size becomes too small to yield much improvement. At this stage it becomes inefficient, with regards to computation time and the results received, to continue the search.

The pattern search system also appears to be unresponsive to changes in the pattern search step size (DELO). Neither the error value nor the constant values can be improved or changed through the use of different DELO values. The number of iterations is somewhat more responsive. Both large and small values yield larger numbers of iterations than do middle values, i.e. .10-.20. Hence it would be advisable to use a value of approximately .10 when setting up a search system.

Like the other inputs, the step size reduction factor (RHO) also does not elicit much change in the results of the search system. Movements in the error factor and the constant values are small enough to be insignificant. With regards to the number of iterations, RHO values of from .100 to .500 minimize the number. There is quite a bit of consistency within this range. Larger values than these should be avoided as they tend to increase the number of iterations.

There is little responsiveness in the system to changes in the exponential constant values. Even when the initial values are placed close to the solution value there is insignificant change in either the error or solution constant values. Between the time series there is little consistency with regards to the effects of constant changes on the number of iterations. The rising series benefited most from a small set of values, i.e. .250. The falling series was most efficient with a middle value, i.e. .500. The stable series reacted opposite to the falling one and reacted best to values at the extremes, i.e. .250 and .750.
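The parameter recommendations above can be collected into a single set of defaults. The dictionary and its comments are mine; only the parameter labels and values come from the study's conclusions:

```python
# Defaults suggested by the study's conclusions.
SEARCH_DEFAULTS = {
    "MAX":  50,    # never binding here; worst case observed was 39 moves
    "D":    0.01,  # .001 bought only negligible further improvement
    "DELO": 0.10,  # the .10-.20 range minimized iterations
    "RHO":  0.50,  # values above .500 sharply increased iterations
}
```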
Perhaps the most important finding of the study concerns the responsiveness of the search system in the series of exploratory and pattern moves that occur before the first step size reduction. By far the bulk of all improvement in the error value, and the bulk of all change in the constant values, takes place before the first step size reduction. This is important because it accounts for the insensitivity of the search system to changes in MAX, D and RHO.

CHAPTER VII
Recommendations

Whereas the last chapter dealt with the analysis and conclusions of the study, this chapter briefly deals with some recommendations based on the findings. Firstly, I would like to deal with the question of the generality of the findings. Although the results of the study showed some consistent response patterns within the pattern search system, over-generalization should be avoided. Due to the small sample size used, i.e. three time series, it might be more appropriate to call the findings "tentative conclusions." I do, however, feel that some significant response patterns have been brought to light by the study. Further investigation into some of these patterns might prove worthwhile.

One area of recommendation is the replication of this study using different time series. By doing this, information necessary to the determination of the reliability of the results of this study would be generated. As an important aspect of this replication, the amount of the error reduction before the first step size reduction move should be looked at with particular attention. It would be very useful to know if other time series display this pattern of response.

A second area that would yield fruitful findings would be a correlation study between number of iterations and computer costs.
While it is realized that a study of this nature would be highly machine dependent with regards to results, the findings would yield valuable information with regards to the costs involved in operating a pattern search-exponential smoothing forecasting system.

A third area of potentially valuable research would be to replicate the study using time series of varying length. Part of the reason for the insensitivity of the pattern search technique in this study may have been the fact that a large number of periods per time series was used, i.e. 72. By forcing the inputs of the search system through such a large number of time periods, the responsiveness of the system may have been smoothed out. As it stands now, the pattern search system appears capable of producing good solution results no matter what the values of the inputs are. If the number of data points used were smaller, then perhaps more responsiveness would be observed as the result of a reduced smoothing effect.

In closing, I would like to emphasize, once more, the applied nature of the approach taken in this study. While the use of a more complete range of experimental runs would have been more satisfying in a theoretical sense, it also would have been less efficient. The basis for the use of the exponential smoothing technique is that, for certain applications, it is an accurate forecasting method. This study tried to extend that feeling of utility by proceeding into areas where results seemed fruitful and by avoiding areas where results seemed repetitive or inconsequential.

BIBLIOGRAPHY

1. Bates, James and Parkinson, J.R., Business Economics, Oxford: Basil Blackwell, 1969.
2. Berry, William L. and Bliemel, Friedhelm W., "Selecting Exponential Smoothing Model Parameters: An Application of Pattern Search."
3. Brown, R.G., Statistical Forecasting for Inventory Control, New York: McGraw-Hill, 1959.
4. D'Amico, Peter, "Forecasting System Uses Modified Smoothing," Industrial Engineering, June 1971.
5. Federal Task Force on Agriculture, Dairy. Ottawa: Canadian Agricultural Congress, 1969.
6. Hooke, R. and Jeeves, T.A., "'Direct Search' Solution of Numerical and Statistical Problems," Journal of the Association for Computing Machinery, Vol. 8, April 1961.
7. Moore, Geoffrey, Statistical Indicators of Cyclical Revivals and Recessions, New York: National Bureau of Economic Research, 1950.
8. Roberts, Stephen D. and Whybark, D. Clay, "Adaptive Forecasting Techniques," Paper No. 335, Herman C. Krannert Graduate School of Industrial Administration, Purdue University, 1971.
9. Spencer, M.H., Mack, C.G. and Hoguet, P.W., Business and Economic Forecasting: An Econometric Approach, New York: Richard D. Irwin Inc., 1961.
10. Van Wormer, T.A. and Weiss, Doyle L., "Fitting Parameters to Complex Models by Direct Search," Journal of Marketing Research, Vol. VII, Nov. 1970.
11. Winters, Peter R., "Forecasting Sales by Exponentially Weighted Moving Averages," Management Science, Vol. VI, 1960.

APPENDIX A
Data Series Plots

[Plot of rising time series. X axis: time period in months; Y axis: time series data values.]

[Plot of falling time series. X axis: time period in months; Y axis: time series data values.]

[Plot of stable time series. X axis: time period in months; Y axis: time series data values.]
