CLASSIFICATION CAPABILITIES OF NEURAL NETWORKS: A COMPARATIVE STUDY USING STUDENT ACADEMIC PERFORMANCE

by

SANSERN ART PROMPIBALCHEEP
B.B.A., Thammasat University, 1988
M.B.A., University of Colorado at Boulder, 1992

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, Department of Management Information Systems

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April 1999
© Sansern Art Prompibalcheep, 1999

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Management Information Systems
The University of British Columbia
Vancouver, Canada
Date: April 30, 1999

Abstract

Among the emerging information technologies, neural networks have been increasingly recognized as a powerful method for classifying and predicting complex data. A number of neural network paradigms have been developed, each with specific features suited to particular tasks. The most popular neural network paradigm among users in the management area is backpropagation. This paradigm has been extensively tested and shown to outperform traditional techniques on several classification tasks. However, only a few studies have compared the capabilities of the backpropagation paradigm with those of other paradigms that are potentially applicable to the same tasks.

The main purpose of this thesis research is to investigate the capabilities and performance of two neural network paradigms: backpropagation and learning vector quantization (LVQ). The other purpose is to show that neural networks outperform a traditional technique, the ordered probit model, which is used as a performance benchmark. In this study, the two neural network paradigms and the ordered probit model are used to classify and predict the academic performance of UBC Commerce students. For each paradigm, a number of neural network models with distinct configurations are developed. The first investigation determines how well each paradigm performs in classifying and predicting academic success. The results from running those models on both training and validation samples show that the backpropagation paradigm performs significantly better than the LVQ paradigm in most instances. The second investigation compares the best performance of those paradigms with the performance of the ordered probit model. After using ANOVA to test the statistical significance of the differences in prediction performance, the findings show that both the backpropagation and LVQ paradigms achieve higher performance levels than the ordered probit model. However, the difference between the performance of backpropagation and the ordered probit model is significant at only the 90% confidence level, while the difference between the performances of LVQ and the ordered probit model is significant at the higher 95% level.
Essentially, the study has shown that the backpropagation paradigm, on the average, still outperforms the LVQ paradigm in classifying and predicting complex data. The study has also proven that both backpropagation and LVQ are significantly better prediction techniques than the ordered probit approach.

Table of Contents

Abstract  ii
Table of Contents  iv
List of Tables  vi
List of Figures  xii
Chapter One: Introduction
  A Study of Neural Networks: Importance and Justification  1
  The UBC Undergraduate Business Program: Its Student Recruitment Process  4
  General Issues Concerning the Prediction of Academic Success  8
  Neural Networks as the Alternative Prediction Technique  10
  Scope of this Study  11
Chapter Two: Literature Review
  Prediction of Academic Performance by Traditional Techniques  14
  Prediction of Academic Performance by Neural Network Approaches  20
  Applications of Neural Networks to Managerial and Operation Tasks  22
Chapter Three: Neural Networks
  Introduction to Neural Networks  28
  Feedforward Multilayer Perceptron  33
  Self-Organizing Map (SOM)  39
  Learning Vector Quantization (LVQ)  44
Chapter Four: Research Purpose, Procedures, and Methodologies
  Purposes and Objectives  48
  Description of Input and Output Variables  54
  Privacy of Data  58
  Data Samples and Data Collection  59
  Data Analysis Methodologies  62
  Performance Evaluation Criteria  76
Chapter Five: Research Results
  Descriptive Statistics and Ordered Probit Model's Parameters  79
  Classification and Prediction Capabilities between Backpropagation and Learning Vector Quantization  86
  Classification and Prediction Capabilities between Ordered Probit Model and Neural Networks  90
Chapter Six: Analysis and Interpretation
  Classification Power of Backpropagation and Learning Vector Quantization  96
  Observations from Descriptive Statistics  108
  Performance Comparison between Neural Networks and Ordered Probit Model  110
Chapter Seven: Conclusions
  Potential Contributions to Academic Circles  113
  Limitations and Conditions  115
  Deliverables to the B.Com. Program  117
  Suggestion for Future Research  119
  Concluding Remarks  121
Bibliography  124
Appendix I: Tables of Results in Detail  131

List of Tables

Table 4.1: The grade ranges, their corresponding categorized groups, and their corresponding grade letters  58
Table 4.2: Frequency and percentage of complete, incomplete, and total records, classified by specialization  61
Table 4.3: Descriptives of means, standard errors, and effects for MATH 100 and MATH 140  64
Table 4.4: Analysis of Variance for testing the significant difference between means of MATH 100 and MATH 140  64
Table 4.5: Descriptives of means, standard errors, and effects for MATH 101 and MATH 141  64
Table 4.6: Analysis of Variance for testing the significant difference between means of MATH 101 and MATH 141  65
Table 4.7: Descriptives of means, standard errors, and effects for ENGL 110, ENGL 111, ENGL 120, and ENGL 121  66
Table 4.8: Analysis of Variance for testing the significant difference among means of ENGL 110, ENGL 111, ENGL 120, and ENGL 121  66
Table 4.9: Total number of training records and the proportion of them within each group, separated by Math options of each specialization  68
Table 4.10: Total number of cross-validation records and the proportion of them within each group, separated by Math options of each specialization  69
Table 5.1: Means, standard deviations, and ranges of values of input and output variables for the Accounting with MATH 140 & 141 option  80
Table 5.2: Regression coefficients, standard errors, and z-values for the ordered probit model of Accounting with MATH 140 & 141 option  81
Table 5.3: Means, standard deviations, and ranges of values for input and output variables for the Accounting with MATH 100 & 101 option  81
Table 5.4: Regression coefficients, standard errors, and z-values for the ordered probit model of Accounting with MATH 100 & 101 option  82
Table 5.5: Means, standard deviations, and ranges of values for input and output variables for the Finance with MATH 140 & 141 option  82
Table 5.6: Regression coefficients, standard errors, and z-values for the ordered probit model of Finance with MATH 140 & 141 option  83
Table 5.7: Means, standard deviations, and ranges of values for input and output variables for the Finance with MATH 100 & 101 option  83
Table 5.8: Regression coefficients, standard errors, and z-values for the ordered probit model of Finance with MATH 100 & 101 option  84
Table 5.9: Means, standard deviations, and ranges of values for input and output variables for the Marketing with MATH 140 & 141 option  84
Table 5.10: Regression coefficients, standard errors, and z-values for the ordered probit model of Marketing with MATH 140 & 141 option  85
Table 5.11: Means, standard deviations, and ranges of values for input and output variables for the Marketing with MATH 100 & 101 option  85
Table 5.12: Regression coefficients, standard errors, and z-values for the ordered probit model of Marketing with MATH 100 & 101 option  86
Table 5.13: Average correct classification rates, both each group and total, of all developed neural network models  87
Table 5.14: Means of aggregate performance levels, measured in terms of the number of correct classified cases and the corresponding percentage, of each network paradigm within each specialization track  88
Table 5.15: F-Ratios and their probability levels resulting from the ANOVA test of a significant difference between performance levels of the backpropagation paradigm and those of the LVQ paradigm  89
Table 5.16: The correct classified cases and their correlated percentages, as of each group and of total, of the training data set among three different methods  91
Table 5.17: The correct classified cases and their correlated percentages, as of each group and of total, of the validation data set among three different methods  92
Table 5.18: Means of performance levels, in terms of the correct classification percentage, of each classification method  94
Table 5.19: Analysis of Variance for testing the significant difference between the mean of correct classification rates of ordered probit model and that of backpropagation  95
Table 5.20: Analysis of Variance for testing the significant difference between the mean of correct classification rates of ordered probit model and that of learning vector quantization (LVQ)  95
Table 6.1: Neural network configurations that produce the best total correct classification rates  97
Table 1.1: Means and standard deviations of input variables within each categorized group, for the Accounting with Math 140 & 141 option  131
Table 1.2: Means and standard deviations of input variables within each categorized group, for the Accounting with Math 100 & 101 option  131
Table 1.3: Means and standard deviations of input variables within each categorized group, for the Finance with Math 140 & 141 option  132
Table 1.4: Means and standard deviations of input variables within each categorized group, for the Finance with Math 100 & 101 option  132
Table 1.5: Means and standard deviations of input variables within each categorized group, for the Marketing with Math 140 & 141 option  132
Table 1.6: Means and standard deviations of input variables within each categorized group, for the Marketing with Math 100 & 101 option  133
Table 1.7: Classification and prediction performance of the backpropagation models on the training data set of the Accounting with Math 140 & 141 option  133
Table 1.8: Classification and prediction performance of the backpropagation models on the validation data set of the Accounting with Math 140 & 141 option  134
Table 1.9: Classification and prediction performance of the learning vector quantization models on the training data set of the Accounting with Math 140 & 141 option  134
Table 1.10: Classification and prediction performance of the learning vector quantization models on the validation data set of the Accounting with Math 140 & 141 option  135
Table 1.11: Classification and prediction performance of the backpropagation models on the training data set of the Accounting with Math 100 & 101 option
Table 1.12: Classification and prediction performance of the backpropagation models on the validation data set of the Accounting with Math 100 & 101 option
Table 1.13: Classification and prediction performance of the learning vector quantization models on the training data set of the Accounting with Math 100 & 101 option
Table 1.14: Classification and prediction performance of the learning vector quantization models on the validation data set of the Accounting with Math 100 & 101 option
Table 1.15: Classification and prediction performance of the backpropagation models on the training data set of the Finance with Math 140 & 141 option
Table 1.16: Classification and prediction performance of the backpropagation models on the validation data set of the Finance with Math 140 & 141 option
Table 1.17: Classification and prediction performance of the learning vector quantization models on the training data set of the Finance with Math 140 & 141 option
Table 1.18: Classification and prediction performance of the learning vector quantization models on the validation data set of the Finance with Math 140 & 141 option
Table 1.19: Classification and prediction performance of the backpropagation models on the training data set of the Finance with Math 100 & 101 option
Table 1.20: Classification and prediction performance of the backpropagation models on the validation data set of the Finance with Math 100 & 101 option
Table 1.21: Classification and prediction performance of the learning vector quantization models on the training data set of the Finance with Math 100 & 101 option
Table 1.22: Classification and prediction performance of the learning vector quantization models on the validation data set of the Finance with Math 100 & 101 option  141
Table 1.23: Classification and prediction performance of the backpropagation models on the training data set of the Marketing with Math 140 & 141 option  141
Table 1.24: Classification and prediction performance of the backpropagation models on the validation data set of the Marketing with Math 140 & 141 option  142
Table 1.25: Classification and prediction performance of the learning vector quantization models on the training data set of the Marketing with Math 140 & 141 option  142
Table 1.26: Classification and prediction performance of the learning vector quantization models on the validation data set of the Marketing with Math 140 & 141 option  142
Table 1.27: Classification and prediction performance of the backpropagation models on the training data set of the Marketing with Math 100 & 101 option  143
Table 1.28: Classification and prediction performance of the backpropagation models on the validation data set of the Marketing with Math 100 & 101 option  143
Table 1.29: Classification and prediction performance of the learning vector quantization models on the training data set of the Marketing with Math 100 & 101 option  144
Table 1.30: Classification and prediction performance of the learning vector quantization models on the validation data set of the Marketing with Math 100 & 101 option  144
Table 1.31: Analysis of Variance for the training data set of the Accounting with Math 140 & 141 option  144
Table 1.32: Analysis of Variance for the validation data set of the Accounting with Math 140 & 141 option  145
Table 1.33: Analysis of Variance for the training data set of the Accounting with Math 100 & 101 option  145
Table 1.34: Analysis of Variance for the validation data set of the Accounting with Math 100 & 101 option  145
Table 1.35: Analysis of Variance for the training data set of the Finance with Math 140 & 141 option  145
Table 1.36: Analysis of Variance for the validation data set of the Finance with Math 140 & 141 option  145
Table 1.37: Analysis of Variance for the training data set of the Finance with Math 100 & 101 option  146
Table 1.38: Analysis of Variance for the validation data set of the Finance with Math 100 & 101 option  146
Table 1.39: Analysis of Variance for the training data set of the Marketing with Math 140 & 141 option  146
Table 1.40: Analysis of Variance for the validation data set of the Marketing with Math 140 & 141 option  146
Table 1.41: Analysis of Variance for the training data set of the Marketing with Math 100 & 101 option  146
Table 1.42: Analysis of Variance for the validation data set of the Marketing with Math 100 & 101 option  147

List of Figures

Figure 1.1: Number of students applying to the B.Com. Program and the number offered admission, during the ten-year period (1987 - 1997)  6
Figure 3.1: Feedforward three-layer neural network  34
Figure 3.2: Single neuron with summation and transfer functions  35
Figure 3.3: Sigmoidal or logistic function  36
Figure 3.4: Self-organizing map neural network  41
Figure 3.5: Learning vector quantization neural network  46
Figure 4.1: The architecture of neural network models, showing the components within their input and output layers  70
Figure 6.1: Self-organizing maps of the training data set, separated by categorized groups, for the Accounting with MATH 140 & 141 option  103
Figure 6.2: Self-organizing maps of the training data set, separated by categorized groups, for the Accounting with MATH 100 & 101 option  104
Figure 6.3: Self-organizing maps of the training data set, separated by categorized groups, for the Finance with MATH 140 & 141 option  104
Figure 6.4: Self-organizing maps of the training data set, separated by categorized groups, for the Finance with MATH 100 & 101 option  104
Figure 6.5: Self-organizing maps of the training data set, separated by categorized groups, for the Marketing with MATH 140 & 141 option  104
Figure 6.6: Self-organizing maps of the training data set, separated by categorized groups, for the Marketing with MATH 100 & 101 option  105
Figure 6.7: Bar chart comparing the correct classification rates, measured in percentages, of both neural network paradigms when applying to either training or validation data set  108
Chapter One: Introduction

This chapter provides a general overview of the research topic. It first addresses the importance of, and the necessity for, a study of neural networks as a classification technique. Issues and difficulties concerning the prediction of the academic performance of students in general, and of University of British Columbia (UBC) Commerce students in particular, are then discussed. Next, it briefly explains why and how neural networks can help address those difficulties. The scope of the study is described in the last section.

A Study of Neural Networks: Importance and Justification

One common but critical task for people in the management area is to decide between two or more possible alternatives. This is, in theory and in practice, not an easy task, since several inter-related factors bearing on those alternatives must be considered carefully before a final decision is made. The cumulative knowledge and experience of decision-makers fundamentally influences their ability to select the most preferable alternative. In today's dynamic environments, however, making an accurate decision in a timely manner is also necessary. To achieve both accuracy and timeliness, decision-makers need support from effective tools or technologies that can apply their accumulated knowledge and experience, process all of the complicating factors, and suggest the best results within a short period of time.

A number of emerging information technologies have been developed to help support decision-making tasks. Among them, neural network technology has become more and more popular with users in both the academic and business worlds.
The abilities to recognize and learn complex patterns of data, to generalize the acquired knowledge to unseen observations, to create data abstraction for classification, and to effectively handle noises within a data set, have made the neural network a powerful classification and prediction technique. Several researchers suggested that the neural network would be a desirable approach in situations where the exact form of the regression equation of a given set of data is not known. The nonlinear regression within a neural network algorithm is superior to the traditional regression model when the dimension of data is high and the relationship among variables does not correspond well to the assumed equation (Venugopal & Baets, 1994; Jain & Nag, 1997; Wang, 1998). Further, the assumptions about the underlying distribution and covariation among groups of data have no impact on the performance Of neural network. Neural networks were mainly and heavily used for analyzing data within scientific areas, such as face recognition, finger print identification, interpretation of sonar traces, and vehicle navigation (Gorr et al., 1994). Subsequently, they have been increasingly used for solving a wide range of problems, such as prediction, classification, clustering, and error detecting and control, etc., within the social, managerial, and behavioral science fields, as well as for substituting traditional techniques. In their 1997 survey, Wong, 3 Bodnovich, and Selvi, reported that there were more than 200 studies, conducted between 1988 and 1995, about neural network applications in business, management, and other related areas (Wong et al., 1997). Those studies designed and developed various neural network models for solving semi-structured or unstructured problems, as well as for supporting decision-making at either management control or operational control levels. There are several existing neural network paradigms. Each paradigm presents its own distinct features and capabilities that are applicable to particular tasks. Unfortunately, our knowledge about those network paradigms and their possible applications within the management area is still in an infant stage. There are a lot of aspects concerning the applications of those neural networks that are not well understood, and, thus, not fully utilized. As potentially extensive users of this technology, we would have to continually and thoroughly study those aspects to ensure that we will be utilizing the optimal features of neural networks. Most research studies, applying the neural network technology to classify data patterns, focus on investigating the differences between the performance of a particular neural network paradigm - the backpropagation - and that of traditional techniques, such as multiple regression, discriminant analysis, and logistic regression (Salchenberger et al., 1992; Tarn & Kiang, 1992; Bansal et al., 1993; Hruschka, 1993; Jain & Nag, 1995; Lenard et al., 1995; Shanker et al., 1996; Zhang & Hu, 1998). There are also other neural network paradigms that have potential capabilities to solve the problems usually performed by the backpropagation. From the current knowledge of the author, however, 4 there are only a few studies that explore how well different neural network paradigms perform on the same specific task (Doumas et al., 1995; Orwig et al., 1996). The author believes that, for a particular problem, we should try implementing several but suitable paradigms to identify which one produces the optimal results. 
After extensive experimentation with other paradigms, we might be able to argue that backpropagation is not the only favorable neural network option for solving data classification problems.

The UBC Undergraduate Business Program: Its Student Recruitment Process

The student recruitment process is one of the most important tasks for any academic institution. Effective admission criteria, procedures, and tools help ensure that the institution recruits the right students with the desired qualifications. Generally, the mission of any school is to produce graduates with high levels of knowledge, skills, and competency who can serve the public at large. A significant factor in accomplishing that mission is the quality of the school's input resources: the entering students. To admit the most desirable students from the pool of applicants, schools should have hands-on knowledge of the influential indicators of academic success that can be used as recruitment criteria. At the same time, they should be equipped with an effective supporting system that enhances admissions decision-making.

The University of British Columbia has long been recognized for its strong focus on academic excellence. The university aims to become the best higher-education institution and to maintain its strong position among the leading academic institutions worldwide (Stanbury et al., 1998). All of the university's management teams have consistently pursued this goal. Dr. Martha Piper, the current president of UBC, noted that UBC is responsible for "preparing the future citizens of the world" (Piper, 1997). She suggested that the most critical concern that large, research-intensive universities like UBC must focus on in the next decade is "the purpose of the undergraduate educational experience." In her speech on September 25, 1997, Dr. Piper stressed the importance of creating a better, university-wide undergraduate learning experience.

As a well-known business school, the Faculty of Commerce and Business Administration has set its objective to be the best business school in Canada (Stanbury et al., 1998). The Faculty has earned a reputation for academic excellence in both its undergraduate and graduate programs. The undergraduate program (the B.Com. Program) is the largest program in the Faculty, producing about 450 graduates annually. These graduates directly or indirectly represent the Faculty whenever they make contact with outside communities, so public perception of the Faculty is strongly influenced by their performance and quality. To be recognized publicly as the most outstanding business school, the Faculty should focus its efforts on the continual improvement of the B.Com. Program.

Demand from applicants who would like to enter the B.Com. Program, both UBC first-year students and transfer students, has exceeded the capacity that the program can provide. According to Figure 1.1, the average number of applicants to the program was 1,976 persons annually, yet the program could accept only 526 persons, or about 25% of the total applicants. Limited admissions are necessary because the academic resources and services the program possesses cannot effectively satisfy the demand from all applicants. To use its limited resources efficiently, the program should restrict its services to individuals who clearly show academic potential for success.
[Figure 1.1: Number of students applying to the B.Com. Program and the number offered admission, during the ten-year period 1987 - 1997 (bar chart; series: Apply, Offered). The corresponding figures used to create the chart are excerpted from Exhibit 4-4 of the Interim Report of the Faculty of Commerce Undergraduate Program Review Committee, March 20, 1998 (Stanbury et al., 1998).]

Currently, the B.Com. Program considers the GPA that students earn before entering the program to be its major recruitment criterion. This criterion applies to 99% of all entering students (Stanbury et al., 1998). The majority of these entering students are those who have just finished their first year in Arts and Science at UBC. They are required to take courses in English, Economics, and Mathematics during their first year, and the grades in these courses are also considered, together with the first-year GPA, in the admissions decision. In the Winter Session of 1997, the GPA cutoff point was 68%. The same consideration is applied to second- and third-year students transferring from other colleges.

The above criterion reflects the belief that past academic performance is an indicator of future academic success. Several studies have identified a high school or first-year college GPA as a good predictor of the ultimate performance, i.e., the graduating GPA, of students at the university level (Domer & Johnson, Jr., 1982; Shaughnessy & Evans, 1986; Gramet & Terracina, 1988). Beyond the first-year GPA, the B.Com. Program also believes that performance in the relevant prerequisite courses - English, Economics, and Mathematics - should reflect the potential future performance of students within the Commerce and Business Administration disciplines. Proficiency in the content of those courses is necessary for students to be academically successful in the B.Com. Program.

However, the undergraduate program committee is still not satisfied with the existing admissions process. Even though the committee believes that the above factors can be used to predict academic success, their predictive power is quite inconsistent. Some students who did not perform particularly well before entering the program turned out to be very successful academically in the program, while other students whose first-year academic performance was outstanding could not perform successfully after they entered it. The likely reason is that the most appropriate recruitment formula has not yet been found. Nobody can say exactly what overall GPA, grades in individual subjects, or anything else would be required of an individual student to ensure that he or she will perform successfully in the program.

The other concern of the B.Com. Program is the belief that the Faculty and the university as a whole are not able to attract top-notch students from within and outside the province of B.C. (Stanbury et al., 1998). These students are identified as the ones who have excellent high school and/or college GPAs, can communicate effectively, or possess particular attributes "considered valuable in a classroom learning environment." The Faculty has worked on several alternatives for attracting more outstanding high school students to apply for admission to UBC. It is hoped that most of these students, if not all, will ultimately choose to enter the B.Com.
Program in the second year. However, once the Faculty has a desirable pool of potential candidates, there should be the right recruitment criteria, as well as supporting tools, that help the admission committee screen all applicants and recruit those who show potential success in the program. These criteria and tools would ensure that the right students are entering the program and help reduce the problem that the enrolled students do not perform satisfactorily as expected. General Issues Concerning the Prediction of Academic Success Researchers and management in business education, as well as in other disciplines, are still looking for the best solution(s) for dealing with the task of academic success 9 prediction. A number of studies have been conducted to find out what factors significantly influence the ultimate academic performance of students at the college level. Most of these studies were focused on particular professional disciplines, such as Medical Science, Engineering, Architecture, Computer Science, and Business Administration, which have been facing a great demand in various aspects from the public at large (Domer & Johnson, Jr., 1982; Campbell & McCabe, 1984; Eskew & Faley, 1988; Young, 1989). It is hoped that knowing the factors that strongly affect academic performance would lead to ways to improve recruitment efforts, teaching and learning processes, and the quality of graduates and the academic programs. However, the results from those studies were somewhat conflicting and unsatisfactory. There is no general agreement on which factor or set of factors will correctly predict the academic performance. Moreover, no one can tell which methods or approaches are the most suitable for the academic success prediction task. Particular factors were reported to be significant predictors in some studies, while regarded as insignificant indicators in the other studies. Among possible explanatory variables included within the developed prediction models, such as multiple regression equations, most of them did not substantially contribute to the variation of the predicted results. On the average, these independent variables can explain only 50% or less of the variation of the dependent variable - the academic performance (Alspaugh, 1972; Konvalina et al., 1983; Ho & Spinks, 1985; Gramet & Terracina, 1988; Eskew & Faley, 1988; Young, 1994). This implies that there are still some explanatory variables that 10 have been unidentified but would be able to explain the variation better than those factors identified. The other reason for the unsatisfactory results is, probably, attributed to the selected techniques that are not appropriate in predicting the patterns of a given set of data. For example, a linear regression analysis might generate an incorrect regression function from a given set of data with non-linear relationships. In this case, researchers would have to have a priori knowledge about the appropriate regression equation that the data represent. Identifying the correct data relationship and the correct choice of equation are somewhat difficult, i f not impossible. Moreover, within a set of data observations, researchers might have to worry about multicollinearity, which damages the predictive power of regression equations due to a high degree of error variance (Tracey et al., 1983). 
When using discriminant analysis, researchers have to assume that the observations within each categorized group are normally distributed and that the covariance matrices of the variables in each group are roughly identical; otherwise, the classification power of the discriminant function is drastically reduced. However, data pertaining to demographic and behavioral characteristics usually violate these a priori conditions. Such methodological problems could be to blame for the unsatisfactory results.

Neural Networks as the Alternative Prediction Technique

Neural network approaches have been selected to predict the academic success of Commerce students for two reasons. First, there might be numerous and diversified patterns within this data domain, and some data might contain significant noise that disguises the correct pattern. Essentially, the prediction of academic performance data "requires potentially complex and subtle modeling" that can handle those subtleties effectively (Gorr et al., 1994). Neural networks seem to possess features that meet this requirement. Second, some prior studies were not particularly successful in applying neural networks to predict the academic success of students. In spite of that unfavorable performance, researchers in this area remain positive about the predictive capability of neural networks, given their impressive performance in other areas. Further investigation of neural network paradigms other than backpropagation is therefore necessary and justifiable.

Scope of this Study

This research is aimed at classifying and predicting the performance of UBC Commerce students using neural network technology. Currently, the B.Com. Program is not fully satisfied with its recruitment procedures, since they cannot ensure that accepted students will perform successfully in the program. The B.Com. Program needs a decision support system that can enhance the prediction of the potential academic performance of individual students. Accurately predicted results should help the B.Com. Program committees make better decisions about whether to accept or reject a particular applicant.

This study is conducted with the majority group of entering students, those who finish their first-year studies at UBC. In addition, the subjects of this study are only the students who started their studies when or after the new four-year curriculum was first implemented. The study intends to cover most specializations offered in the B.Com. Program. Three specializations, Business Economics, General Management, and International Business, are not included, since their requirements differ substantially from the others and they have relatively few students.

The author is mainly interested in determining which neural network paradigm is the most effective in predicting the academic performance of UBC Commerce students. Two different neural network paradigms are adopted to perform this prediction task. The first is a feedforward multilayer perceptron with the backpropagation (BP) algorithm. The second is learning vector quantization (LVQ), a supervised-learning version of the self-organizing map, with the competitive learning algorithm.
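To make the competitive learning idea behind LVQ concrete, the sketch below shows the standard LVQ1 update rule, in which the codebook vector nearest to a training observation is pulled toward it when their class labels agree and pushed away when they disagree. This is a generic illustration only; the learning rate, codebook size, and data layout are assumptions and not the specific configuration or software used in this thesis.

```python
import numpy as np

def lvq1_train(X, y, codebooks, codebook_labels, lr=0.05, epochs=20):
    """Minimal LVQ1 sketch: for each training observation, find the nearest
    (winning) codebook vector and move it toward the observation if the labels
    match, or away from it if they do not."""
    W = codebooks.copy()
    labels = np.asarray(codebook_labels)
    for _ in range(epochs):
        for x, target in zip(X, y):
            winner = np.argmin(np.linalg.norm(W - x, axis=1))  # competitive step
            if labels[winner] == target:
                W[winner] += lr * (x - W[winner])   # reward: pull toward x
            else:
                W[winner] -= lr * (x - W[winner])   # punish: push away from x
    return W

def lvq1_predict(X, codebooks, codebook_labels):
    """Classify each observation by the label of its nearest codebook vector."""
    labels = np.asarray(codebook_labels)
    dists = np.linalg.norm(X[:, None, :] - codebooks[None, :, :], axis=2)
    return labels[np.argmin(dists, axis=1)]
```

Classification of an unseen observation then reduces to finding its nearest codebook vector, which is the sense in which LVQ acts as a supervised counterpart of the self-organizing map.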
The author also would like to determine capabilities and performance of neural networks when compared to those of a correlated traditional technique. Within this study, the ordered probit model is adopted as a performance benchmark, since its features and algorithms regarding data classification and prediction are comparable to those of neural networks. In the next chapter, related research studies concerning the prediction of academic achievement and the applications of neural networks are fully reviewed. This literature review provides reasons for the conduct of this study and justifications for the adoption of procedures, methods, and approaches within this study. Chapter three provides basic knowledge about concepts, theories, and applications of neural networks. It focuses 1 3 mainly on three different paradigms of neural network. After the "why" about this research study has been discussed, the "what and how" about the research study are then fully described within chapter four. The chapter four first identifies both purpose and objectives of the study. It then explains data description, data collection method, research procedures and methodologies for data analysis and results evaluation. Chapter five reports all corresponding results. These results include the descriptive statistics, ordered probit model's coefficients, classification and prediction rates, and parameters of A N O V A tests. Chapter six provides the discussion of data analyses and interpretations of results and findings. Finally, chapter seven gives conclusions, which include new findings about the applications of neural networks that can contribute to the body of knowledge, any existing limitations and conditions regarding the results of this study, and suggestion for possible future research topics. 14 Chapter Two Review of Literature This chapter provides an in-depth review of research studies related to this thesis research. It describes the "what, why, and how" of the conduct of those studies, and the ultimate results from them. The contents of this chapter are basically organized into three parts. The first part covers prior research studies mainly focusing on identifying factors that significantly influence academic performance in various academic disciplines. It also includes some studies that attempt to predict academic performance using various prediction techniques. The second part consists of prior studies dealing with the application of neural networks to predict academic performance. The research studies in the third part are those of neural networks adopted and applied to tasks other than the prediction of academic performance. Prediction of Academic Performance by Traditional Techniques A number of studies have been conducted to identify factors that significantly contribute to the variation of academic success of college students. Some studies just investigate the factors that affect academic performance at a college level in general, while the others focus on the performance within particular disciplines. These research studies assumed some possible factors, and used conventional techniques, such as multiple regression or discriminant analysis, to prove their hypotheses. 15 Several factors were identified as the predictive indicators of the academic success. These predictive factors can be categorized into three groups - demographic factors, past academic performance, or cognitive factors, and psychological factors. 
Academic success is typically either measured as final scores or grades, or classified into various academic standings. Most studies depend solely on the availability and easy accessibility of data items and on the ability to use them as predictive variables of future academic performance. These items are typically collected from students before they enter college, and usually consist of only demographic data and pre-college academic records.

The results from these studies are mixed. Some claim that demographic and academic backgrounds alone are capable of identifying the performance of students after matriculation (Mohammad & Almahmeed, 1988; Young, 1989). Other studies argue that demographic and past academic data are not sufficient to accurately predict performance at the college level. They strongly support the inclusion of psychological factors, such as motivation, attitude, and perception, in the prediction equations, since these factors can explain more of the variation in ultimate academic performance (Nisbet et al., 1982; Gramet & Terracina, 1988; Evans & Simkin, 1989).

There is a behavioral viewpoint arguing that past academic performance strongly and significantly reflects future performance. This argument has been restated and confirmed by a number of educational research studies in various disciplines (Fowler & Glorfeld, 1981; Touron, 1983; Shaughnessy & Evans, 1986; Eskew & Faley, 1988). Past academic performance is usually measured as the overall grade point average earned in the last year of high school or at the college level before entering a program at a higher level; it sometimes covers the grades of particular courses taken in the past. Some researchers argue that the closer the measurement of prior academic performance is to the point of prediction, the stronger that past performance will be as an indicator of future academic success. For example, grade point averages at both the high school and first-year college levels were claimed to be significant discriminators of success and failure among Architecture students (Domer & Johnson, Jr., 1982). Since admitted students would enter the Architecture program in their second year, Domer and Johnson, Jr. argued that the admissions decision should not be made until the end of the first year. They found that although both high school and first-year college GPAs were good predictors, the latter was much stronger, and they therefore did not recommend using pre-matriculation factors as the sole criteria for recruitment.

Post-matriculation factors as strong indicators of later academic achievement in college are also confirmed by other studies. In an attempt to predict the status of students after their freshman year, i.e., persistence, early withdrawal, or dropout, Pascarella, Duby, Miller, and Rasher adopted as many as 19 pre-college performance, demographic, and attitudinal variables plus two additional post-enrollment academic variables (Pascarella et al., 1981). Using multiple discriminant analysis (a popular classification technique that assigns a data observation to one of several distinct groups, depending on which group's range of values the result of the discriminant equation falls into) with this set of 19 pre-enrollment traits, only nine of the variables proved to be significant discriminators.
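To illustrate the parenthetical description of discriminant analysis above, the sketch below uses one common formulation: a group-specific classification (score) function for each academic-standing group, with an observation assigned to the group whose score is largest. The coefficients, variables, and group labels here are hypothetical placeholders, not values taken from Pascarella et al. or any other cited study.

```python
import numpy as np

# Hypothetical fitted classification functions for three standing groups;
# each row holds [intercept, coef_high_school_gpa, coef_first_quarter_gpa].
coefficients = np.array([
    [-12.0, 2.1, 3.4],   # group 0: persister
    [-9.5,  1.8, 2.6],   # group 1: early withdrawal
    [-6.0,  1.2, 1.5],   # group 2: dropout
])

def classify(high_school_gpa, first_quarter_gpa):
    """Assign an observation to the group with the largest discriminant score."""
    x = np.array([1.0, high_school_gpa, first_quarter_gpa])  # 1.0 for intercept
    scores = coefficients @ x
    return int(np.argmax(scores))

print(classify(high_school_gpa=3.6, first_quarter_gpa=3.2))  # -> group index
```

The single-equation form described in the text, where one discriminant value is compared against ranges set for each group, is equivalent in spirit: either way, the fitted function partitions the predictor space into regions, one per group.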
With those variables, the researchers could only distinguish the dropout students from the combined group of persistent and early-withdrawal students. However, after they included one of the two post-enrollment factors, the first-quarter GPA, in the discriminant equation along with the best five pre-enrollment variables, they were able to clearly distinguish between persisters and early withdrawals.

A further argument extends this viewpoint: past performance in particular subjects should be a good indicator of the later success of students pursuing similar or related fields. This is most evident in the fields of science, applied science, and medical science. For example, it has long been hypothesized that individuals who are proficient in Mathematics should be able to perform successfully in Computer Science, and this hypothesis was confirmed by several studies (Alspaugh, 1972; Fowler & Glorfeld, 1981; Konvalina et al., 1983; Campbell & McCabe, 1984; Butcher & Muth, 1985; Oman, 1986). Further, some of these studies found that exposure to computers during high school could identify success in college-level Computer Science courses. Eskew and Faley found that students who had pre-college exposure to accounting or bookkeeping courses performed significantly better in college-level introductory accounting courses than students who did not (Eskew & Faley, 1988).

Two studies identified gender as an explanatory factor (Campbell & McCabe, 1984; Cronan et al., 1989). These studies used discriminant functions to classify subjects into particular groups, and the gender of the individual was consistently indicated as a significant variable in their classification models, even though the researchers believed that gender was not an achievement indicator. However, gender did not enter the classification model in the study by Fowler and Glorfeld; rather, their model included age as a relevant variable (Fowler & Glorfeld, 1981). Ironically, like gender, age was considered to have only marginal importance to the classification models, and it also showed conflicting results: while Fowler and Glorfeld reported an inverse relationship between age and academic success, Konvalina, Stephens, and Wileman found that the older the students were, the higher the scores they obtained (Konvalina et al., 1983).

The results from those studies, however, were not conclusive in terms of significant relationships among variables. Some studies reported very low or no significant relationship at all between any independent variables and aptitude or academic success. In 1980, a study was conducted to investigate how well particular variables were able to predict individual programming skill (Mazlack, 1980). The researcher wanted to know whether academic discipline, gender, number of semesters in college, and the Programmer's Aptitude Test score were correlated with academic performance. He found that none of them was significantly correlated with success in the programming course, and concluded that it was not possible to predict success in programming on the basis of personal and behavioral attributes, or standardized written tests, when the subjects involved were at the college level.
19 In their attempt to predict individual mastering of computer concepts (Evans & Simkin, 1989), the researchers found that only a few variables they used were relatively strong enough to be good predictors. They developed six different linear regression equations to measure different aspects of computer proficiency. Neither of them had a strong explanatory power (their average R was less than 24%). However, the researchers suggested that other than the set of typical predictive variables, such as demographic, behavioral, and academic factors, the psychological factors could be important predictive factors of computer proficiency. Finally, there is another group of studies attempting to test the prediction capability among different traditional methods, using the same set of observations. The purpose behind these studies is to investigate other possible methods that would improve the accuracy of predicted results, and, it is hoped, to have them replace the generally used method. Tracey, Sedlacek, and Miars applied both standard least squares regression analysis and ridge regression with the same set of admissions variables to predict academic success (Tracey et al., 1983). The ridge regression was developed to overcome the weaknesses of the least squares regression regarding the multicollinearity. Disappointingly, the ridge regression only performed as good as or slightly better than the least squares regression. Although its findings were not favorable, this last study had indirectly opened the door of the quest for other viable alternatives. It implicitly suggested that there could be other methods that improve the accuracy of academic success prediction that the standard methods could not achieve. 20 Prediction of Academic Performance by Neural Network Approaches From the survey of literature relating to the topic of this section, the author has found only two studies. Both of these articles applied the backpropagation neural network to predict the academic performance of college students. One was conducted with undergraduate students, while the other with graduate students. The main purpose of these studies was to test the prediction capability of the backpropagation neural network, and to compare it with some traditional research methodologies. The first study (Gorr, et al., 1994) applied backpropagation neural network, linear regression, stepwise polynomial regression, and linear admission decision rule methods, to predict student GPAs at the College of Pharmacy of North Dakota State University. The dependent variable to be predicted was the total GPA of all courses taken in the last two years of the five-year program. The set of independent variables in this study was quite subjective. The researchers decided to adopt the same set of variables used by the admissions committee in the admission decision formula. However, they added three more parameters in their developed models. Predicted results from the models of the four methods were compared in terms of mean errors and mean absolute errors within 95% confidence intervals. The study found that although there were differences among the central values of both error terms of the prediction methods, these differences were not statistically significant. A l l prediction methods seemed to perform at the relatively same level, and the backpropagation neural network did not outperform any of them. To explain the results, the researchers provided two interpretations. 
First, they believed that no underlying structure of the data set within that domain could be detected. Second, 2 1 even if an underlying structure did exist, they might not have used the full capacity of that backpropagation neural network approach. The second study (Wilson & Hardgrave, 1995) was conducted at a major southwestern university to determine the capability of various classification techniques in predicting student success in the M B A program. The academic success was measured in terms of the first-year grade point average. The predicted GPAs were categorized into three groups: high risk, questionable, and no risk. The researchers identified eight independent variables, which were G M A T total, G M A T verbal, G M A T quantitative, undergraduate GPA, sex, attendance (full or part time), work experience, and age. Models from four different techniques, namely, least square multiple regression, discriminant analysis, logistic regression, and backpropagation neural network, were developed. The researchers selected the correct classification rate of the high-risk group as an indicator of prediction ability. The results showed that none of those approaches could accurately predict the students in the high-risk group. Moreover, the backpropagation neural network performed, on the average, at the same level as the other approaches. The researchers blamed the composition of data variables and the violation of statistical assumptions concerning the nature of data as the possible causes of the below-expected performance levels of all models. However, they stated their belief that the backpropagation neural network would be a promising alternative classification methodology. 22 The results from both studies implied that the prediction and classification capabilities of the backpropagation neural networks were not as impressive as had been claimed. Both studies suspected that some features of their research procedures, less-than-full-capacity neural networks, and the nature of data samples were the reasons for the unimpressive performance. Their reserved conclusions about the prediction capability of neural networks suggested a further investigation within this similar type of task would be beneficial. Applications of Neural Networks to Managerial and Operation Tasks According to their survey of journal articles published between 1988 and 1995, Wong, Bodnovich, and Selvi found that neural networks have been extensively used within Finance and Production, Operation, and Management Science areas (Wong et al., 1997). The most popular neural network paradigm for forecasting and classification tasks is the feedforward neural network with the backpropagation algorithm. Tarn and Kiang applied the backpropagation neural network to predict the bankruptcy of banks in Texas based on 19 financial ratios (Tarn & Kiang, 1992). They compared performance of neural network approach with linear discriminant, logistic regression11, kNN 1 1 1 , and ID3 algorithm'v I I Logistic Regression or logit model utilizes a nonlinear logistic function to identify which group an observation is assigned to. It is a technique that applies linear regression to data samples, which can be classified into one of only two groups. A result from the logistic function is compared with the cut-off point to determine the ultimate group of the observation. I I I k-Nearest-Neighbor (kNN) is a non-parametric method that classifies an observation into one of several groups based on some quantitative independent variables. 
An unseen observation is assigned to a group to which most of its k nearest neighbor (training) observations belong. i V The ID3 algorithm is simply a decision tree with a distinct classification algorithm. It employs a splitting procedure that repeatedly partitions a set of observations into some disjointed groups. 23 approaches. After applying the jackknife methodv to reduce biases within the data sets, they proved that the neural network provides better prediction than other approaches. Salchenberger, Cinar, and Lash developed a feedforward neural network model to predict the financial health of thrift institutions (Salchenberger et al., 1992). The predicted results from neural network were compared with those from the logit model. The neural network performed as well as or better than the logit model for all examined cases. Lenard, Alam, and Madey adopted a revised version of the backpropagation algorithm, called the GRG2 algorithm. They applied this GRG2 network, a typical backpropagation neural network, and a logit model to suggest which firms the auditor should issue modified reports indicating a going concern uncertainty (Lenard et al., 1995). The GRG2 model was the best performer, while the logit model was the worst performer. The superiority of backpropagtion neural network over traditional statistical techniques was also confirmed by a number of studies, applying those techniques to both real world and simulated data samples (Bansal et al., 1993; Hruschka, 1993; Subramanian et al., 1993; Yoon et al., 1993; Jain & Nag, 1997; Zhang & Hu, 1998). The superior prediction capability of a typical backpropagation network over the traditional techniques was not brought to light without suspicions. The typical backpropagation neural network sometimes does not significantly outperform some traditional techniques under some specific circumstances. Patuwo, Hu, and Hung, compared the backpropagation neural network with four other techniques - linear v The Jackknife method is a statistical procedure that helps produce the unbiased estimates of error. It allows a user to determine the uncertainties of estimators usually derived from a data set with small numbers of observations. 24 discriminant analysis, quadratic discriminnant analysis, k-nearest-neighbor (kNN), and linear programming - in classifying simulated data samples (Patuwo et al., 1993). They found that, although the backpropagation performed better than the other techniques on the training samples, it did slightly worse on the validation samples. However, they believe that when it comes to the real world samples, which usually violate several underlying statistical assumptions, the backpropagation should be seriously considered over the traditional techniques because of its "generality and flexibility." Wang discussed the drawbacks of backpropagation neural networks when applied to various managerial tasks (Wang, 1995). He pointed out that individual backpropagation neural network models might produce different classification results for a certain set of data. This variation just happened by chance and there was no clear explanation behind its occurrence. Curry and Morgan discussed their concern about the deficiencies of gradient descent methods utilized within the backpropagation algorithm (Curry & Morgan, 1997). They suggested some changes in the algorithm in order to improve the performance of backpropagation neural network. 
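For readers unfamiliar with the gradient descent step that these criticisms target, it is, at its core, a small move against the gradient of the error. The sketch below shows that update for a single linear output unit with a squared-error loss; the loss, unit, and learning rate are illustrative assumptions rather than the formulations examined by Curry and Morgan, and backpropagation extends the same idea through hidden layers via the chain rule.

```python
import numpy as np

def gradient_descent_step(W, x, target, lr=0.1):
    """One plain gradient descent update for a single linear output unit,
    using the squared-error loss E = 0.5 * (target - y) ** 2."""
    y = W @ x                  # forward pass
    error = target - y         # prediction error
    grad = -error * x          # dE/dW for the linear unit
    return W - lr * grad       # step against the gradient

# Tiny usage example with made-up numbers: the weights converge toward
# values that reproduce the target for this input.
W = np.zeros(3)
for _ in range(100):
    W = gradient_descent_step(W, x=np.array([1.0, 0.5, -0.2]), target=0.8)
```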
Other groups of researchers have investigated the usability of other neural network paradigms for managerial decision-making tasks. Some of them adopted the self-organizing map (SOM) neural network to handle data grouping problems. Orwig, Chen, and Nunamaker, Jr., applied a SOM neural network to the problem of classifying outputs from electronic brainstorming sessions [vi] (Orwig et al., 1997). They wanted to evaluate how well the SOM network would perform in that classification task, compared with a human expert and a Hopfield neural network [vii]. The results showed that the SOM network outperformed the Hopfield network in all aspects. However, the SOM network could, at best, only perform as well as the human expert on some aspects. The weakness of the SOM network is its lack of precision in producing distinct topics from the brainstorming sessions.

[vi] An electronic brainstorming session is a part of the electronic meeting setting. Electronic brainstorming is a technique that helps electronically collect information related to complicated problems. It is useful for situations where maximum, unstructured, and anonymous participation is strongly required.

[vii] The Hopfield model is one of the primary and primitive neural network models. It basically consists of a single layer of neurons with binary values. These neurons, in association with a recurrent network, can be used as associative memories. A recurrent neural network with associative memory has the capability to produce a full picture of output data when supplied with only a portion of the related input data.

Doumas, Mavroudakis, Gritzalis, and Katsikas investigated the possibility of including neural networks within computer security systems to recognize and detect computer viruses (Doumas et al., 1995). They developed two neural network structures, backpropagation and SOM, to learn and classify the behaviors of the viruses. To compare the classification performance of the two structures, Doumas and her colleagues set two criteria: the accurate classification rate and the computational efficiency. Both neural networks performed equally well in discriminating various patterns of computer viruses. However, the researchers recommended a backpropagation network for this task since it requires less computation time in both the training and validating phases than the SOM network does.

A study conducted in 1995 by Chen, Mangiameli, and West investigated the classification and clustering capabilities of the SOM network, and compared them with those of another seven clustering techniques (Chen et al., 1995). The purpose of this study was to determine how well each technique correctly clustered data within individual sets with different levels of data cluster dispersion. They found that, for all four simulated data sets with dispersion levels of very low, low, medium, and high, respectively, the SOM networks defined cluster memberships of the data with significantly high percentages of accuracy. In addition, the SOM network outperformed, on average, the other clustering techniques at every level of data cluster dispersion. For a real-world data set, the SOM networks also produced the lowest misclassification rate compared with that of the other techniques.

Hybrid neural networks are the result of the combination, in one way or another, of learning algorithms and of architectures from both supervised and unsupervised learning paradigms of neural networks.
The hybrid neural network has been proven able to solve various classification and prediction problems, but is still not quite so popular among researchers. The author has found only two studies implementing hybrid neural network models. The learning vector quantization (LVQ), a supervised learning version of the SOM network, was adopted by Gupta, Chen, and Murtaza to classify some industrial construction projects (Gupta et al., 1997). Due to the complicated, semi-structured, and risky nature of the problem, the advice of an expert had been the only possible method for classifying those projects. The researchers tried to prove that the LVQ neural network could be another likely option for this classification task. After comparing the classification results from the network with those from the expert, they found that there was no significant difference. This means that the LVQ neural network performs as well as the expert, and could thus be an alternative classification tool for those projects.

Huang and Kuh introduced another combined neural network structure (Huang & Kuh, 1992). They believed that a new network structure combining a self-organizing feature map with a multilayer perceptron (MLP) would yield more accurate results in recognizing isolated words than would either the SOM or the MLP individually. In their hybrid neural network, the SOM part served as a mapping function that transformed higher-dimension input signals into trajectories or clusters of lower-dimension metrics on the map. The MLP part, with the backpropagation algorithm, then classified the trajectories into the corresponding words.

Wong, Bodnovich, and Selvi also reported that there has been an increasing amount of research on neural networks conducted for a wide range of business activities (Wong et al., 1997). They believed that neural networks would play a more critical role in supporting managerial decision making. They also suggested, among other possible areas of investigation, that an evaluation of the performance of different paradigms, architectures, and training schemes of neural networks should be further conducted.

Chapter Three
Neural Networks

This chapter describes the concepts, theories, and learning algorithms concerning neural networks. The description covers the three different types of neural networks that are related to this study. The knowledge about each neural network type will be provided only at a fundamental level, which should be sufficient for readers to understand how the neural networks operate to solve the classification problems.

Introduction to Neural Networks

The primary concept and model of the neural network was first introduced back in 1943 by McCulloch and Pitts (Ritter et al., 1992). They suggested a primitive model of a single neuron with two possible states, active (excitatory) and silent (inhibitory). Since then, various neural network models have been developed by researchers from several disciplines. The main objective of those models was to solve problems or to handle tasks that could not be successfully accomplished by conventional techniques. Aleksander (1989) provided a definition of the neural network as follows:

[Neural network is a] cellular network that has a natural propensity for storing experiential knowledge. [The network model] bears a resemblance to the brain in the sense that knowledge is acquired through training rather than programming and is retained due to changes in neuron functions.
The knowledge takes the form of stable states or cycles of states in the operation of the network. A central property of such network is to recall these states or cycles in response to the presentation of cues.

Inspired by research studies of nervous systems, neural network models have been developed to imitate the information processing that happens within a human brain. All types of neural networks are generally composed of neurons, or groups of neurons, and connections. The strength of the connections is represented by their weight values. The direction of connections within a network can be one-way, two-way, or a combination of both. Distinct organizations of those components, along with distinct learning mechanisms, are developed for particular purposes or tasks. Most neural network models that have been created are either two-layer or multilayer networks.

Neural networks have been implemented to solve several complicated problems in both scientific and nonscientific tasks. Neural networks possess promising characteristics and properties that are suitable for those tasks. First, neural networks possess parallel processing capability. Each neuron within a network behaves like a single independent processor. However, all neurons can be operated at the same time, or in parallel. Parallel processing, therefore, enables a neural network to perform complex tasks much faster than traditional computation methods (Li, 1994; De Wilde, 1997). Second, neural networks behave like an associative memory. This is the ability to recognize, recall, and draw inferences or associations among various information items. In other words, if we provide a neural network with only a portion of the entire set of input data, it should give us back the related complete output by recalling past experiences or knowledge of the same type of data (Wasserman, 1989; Wang, 1993; De Wilde, 1997). Third, neural networks are fault-tolerant systems. The remarkable architecture of neural networks creates robust systems that effectively handle malfunctions of some neurons, incomplete data sets, and noise. Since the knowledge is evenly distributed over all individual storage elements and links, a loss of a few data items will cause only a small degradation in the performance quality of a neural network (Knight, 1990; Li, 1994).

To make neural network models work effectively for particular tasks, users have to make sure that the selected architecture, learning algorithm, and related parameters are appropriate. The users also need to consider the appropriate type and format of the input and output data (Caudill, 1991; Yale, 1997). All of these issues, if carefully planned, will greatly improve the performance of the neural network models and ensure more accurate results. At the same time, they will make the analysis and interpretation of the outcomes more meaningful to a decision-maker.

Learning within neural networks: The most remarkable characteristic of neural networks, hardly to be found within traditional computing systems, is their ability to learn from experience. A neural network basically consists of a collection of intercommunicating neurons. The knowledge a neural network has learned is stored in its individual neurons and in the connections among the neurons. A given learning algorithm trains a neural network to recognize patterns of data in any domain. This learning process technically changes the values of the parameters within each neuron and of its connected weights.
There are basically two distinct mechanisms of learning: supervised and unsupervised learning.

Supervised learning: This learning mechanism is sometimes referred to as learning with the help of a teacher. To learn within this mechanism, a neural network requires a pair of an input vector and a target output vector for each individual observation of the whole training data set. The target output vector represents the correct or desired results that the neural network is supposed to produce. At the initial stage, given an input vector, the network computes and produces its own output vector using the initial values of its parameters. The produced output vector is then compared with the corresponding target output vector. The difference, if any, between those two vectors is fed back to the network. During the feedback process, the values of the connected weights, and possibly the values of some parameters within the neurons, are adjusted. The changes in those values are made to minimize the error between the produced outputs and the target ones. The learning and adjustment process is applied sequentially to the individual training vectors over and over again, until the difference for the entire training set has reached an acceptable level.

Unsupervised learning: This learning mechanism has been recognized as a close resemblance of the actual learning mechanism and environment within the brain. A neural network with this learning mechanism does not require a target output vector for a particular input vector. The network learns to recognize patterns of input vectors by extracting their statistical properties, grouping the vectors of similar properties together, and then assigning them to a distinct class. It is expected that the network will produce the same pattern of outputs for a subset of similar or closely related input vectors. The outputs from this type of network, however, depend on the learning process, and their specific patterns are difficult to determine beforehand. The outputs generally need some transformation, visualization, and interpretation to make them more comprehensible and meaningful to users.

Input data for neural networks: Neural networks can handle various types of data, from simple linearly correlated to complex nonlinearly correlated data. However, researchers generally agree that the neural network approach will be a very efficient and useful technique for finding relationships within a set of data that cannot be handled successfully by other methods or techniques. To enhance the capability of neural networks, input data with appropriate formats and properties should be prepared. Yale suggested some guidelines for preparing the right data for training neural networks (Yale, 1997). The size of the sampled input data should be substantially large to provide a high level of confidence that a neural network will finally converge. A large sample size will also ensure that all possible patterns or scenarios of input data are provided for training the neural network (Burke, 1991). The range of measurable values should be kept as tight as possible. This tight range reduces the chances of getting stuck in a local minimum. In the case of categorical variables, the numerical values just behave like labels and do not convey any meaning. It is more efficient for neural networks to learn if the categorical data are represented as a set of binary values covering all corresponding possible categories. Finally, the training data set should be uniformly distributed; there should be a relatively equal number of input observations for each possible scenario.
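To make these data-preparation guidelines concrete, the following is a minimal sketch, in Python with NumPy, of how a numeric column might be rescaled into a tight range and how a categorical value might be expanded into a binary vector. The column values, category labels, and function names are illustrative placeholders only; they are not taken from the study's data set.

```python
import numpy as np

def min_max_scale(column):
    """Rescale a numeric column into the tight [0, 1] range, as the guidelines suggest."""
    column = np.asarray(column, dtype=float)
    lo, hi = column.min(), column.max()
    return (column - lo) / (hi - lo)

def one_hot(value, categories):
    """Represent one categorical value as a vector of binary indicators,
    one position per possible category."""
    vec = np.zeros(len(categories))
    vec[categories.index(value)] = 1.0
    return vec

# Illustrative use: a percentage grade column is scaled, and a categorical
# variable with three possible labels is expanded into three binary inputs.
grades = [55.0, 68.0, 81.0, 90.0]
print(min_max_scale(grades))          # [0.    0.371 0.743 1.   ]
print(one_hot("B", ["A", "B", "C"]))  # [0. 1. 0.]
```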
In the next sections, some learning algorithms and their corresponding neural network architectures are discussed. The intention is to provide general knowledge about the individual neural network paradigms that will be applied within this research. First, the author will discuss the multilayer perceptron network with the backpropagation learning algorithm, which is classified as a supervised learning mechanism. The self-organizing map network with a competitive learning algorithm, which is classified as an unsupervised learning mechanism, will be discussed next. Finally, the learning vector quantization, with its combination of supervised and unsupervised learning mechanisms, will be covered.

Feedforward Multilayer Perceptron [i]

[i] The concepts and theories of the feedforward multilayer perceptron are extracted and adjusted from the following sources: Wasserman, 1989; Tam & Kiang, 1992; Hruschka, 1993; Zahedi, 1993; Rumelhart et al., 1994; Rao & Rao, 1995; Gurney, 1997; Rogers, 1997.

Feedforward multilayer neural networks have been widely used to solve several decision-making problems. They have been proven able to perform complicated tasks that other simple neural networks cannot do. Several research studies also show that this type of neural network outperforms other conventional techniques and statistical tools within various domains. Their structure is designed to capture a complex mapping of inputs into desired outputs. Neurons in the middle layer are believed to function as pattern detectors. These neurons thus give the network the ability to make reasonable generalizations about unseen data.

A feedforward multilayer network is composed of neurons hierarchically organized into at least two layers, i.e., input and output layers. Links fully connect neurons in different but adjacent layers, and there are no connections between nonadjacent layers or within the same layer. It is sometimes desirable to provide a bias with a constant output value of 1.0 to particular neurons. The bias behaves like an intercept in regression functions. It shifts the transfer function (described in the next few sections) to the right of the horizontal axis, which permits the learning process to converge more rapidly. The widely adopted multilayer perceptron network has three layers, i.e., the input layer, the hidden layer, and the output layer (see figure 3.1). The number of neurons in the input layer equals the number of variables in the input set. The number of neurons in the output layer usually depends on the number of possible categories or classes of the input patterns. There are no accepted rules about how many neurons in the hidden layer will be optimal. The number usually comes out of trial and error, and will be different for different tasks with different data properties.

Figure 3.1: Feedforward three-layer neural network

All input and output signals generally flow in only one direction, from the input layer to the output layer. An individual neuron accepts input signals from the external environment or from other neurons in the previous layer. Within a neuron, all incoming inputs are combined with their corresponding connected weights.
The total net input to the neuron is calculated by the following function:

NetX_j = Σ_{i=1..n} w_ij x_ij,  if the neuron is in the hidden or output layer, or
NetX_j = x_j,  if the neuron is in the input layer,

where NetX_j is the total net input at neuron j, x_ij is the input signal from neuron i to neuron j, x_j is the value of the corresponding input variable at input neuron j, and w_ij is the connected weight from neuron i to neuron j. The net input is transformed into an output sent to the neurons in the next layer or to the external environment. Figure 3.2 illustrates the transformation process occurring at a neuron.

Figure 3.2: Single neuron with summation and transfer functions

There are several transfer functions, but the widely adopted one is the sigmoidal, or logistic, function (see figure 3.3). This function,

Out_j = 1 / (1 + e^(-NetX_j)),

produces a continuous value between 0 and 1 or, in some cases, between -1 and 1, for the output (Out_j) from neuron j.

Figure 3.3: Sigmoidal or logistic function

Backpropagation is one type of supervised learning algorithm, usually applied to a feedforward multilayer network. The goal of this learning mechanism is to minimize the sum of squared errors of the network outputs. The learning process starts when an input vector is presented at the input neurons. At each neuron, a net input value is computed, and an output signal is produced following the rules and formulas presented above. When the network produces an ultimate output vector at the output layer, each component within that vector is compared with the corresponding component within the target output vector. The difference between the target and the actual values is computed. Let us suppose that there is only one hidden layer. After the network calculates the difference at an output neuron, represented as

error_o = target_o - actual_o,

the error will be propagated back to adjust all associated connections. The generalized delta rule is used for adjusting the error that is propagated back through the network. This adjusted error is computed as

δ_o = [Out_o (1 - Out_o)] error_o,

where Out_o (or actual_o) is the output signal produced by the output neuron, and the [Out_o (1 - Out_o)] part is the derivative of the transfer function at the output neuron. This adjusted error is then used for calculating a change in weight value, following this equation:

Δw_ho = α δ_o Out_ho,

where w_ho is the weight from the hidden to the output layer, α is a learning rate with a value between 0.1 and 1.0, and Out_ho is the output from a neuron in the hidden layer to the neurons in the output layer. The calculated change in weight value (Δw_ho) is applied to the corresponding weight of the connection between these two layers.

The error propagation process then moves further back, from the neurons in the hidden layer to the neurons in the input layer. Since there are no target outputs for neurons in the hidden layer, it is not possible to compute the error at those neurons directly. The indirect way is to consider the influence that a particular hidden-layer neuron has on all output neurons. It can then be assumed that the error at that hidden-layer neuron is the sum of the weighted adjusted errors (δ_o) at the output neurons connected from that hidden-layer neuron. The error at the hidden neuron can then be computed by this equation:

error_h = Σ_{o=1..m} w_ho δ_o.

Again, this error has to be adjusted by the derivative of the transfer function at the hidden neuron.
The adjusted error at a hidden-layer neuron is now computed by the following equation:

δ_h = [Out_h (1 - Out_h)] error_h.

The change in weight value at the connections between input neurons and hidden-layer neurons is calculated as

Δw_ih = α δ_h Out_ih,

where Out_ih is the output from a neuron in the input layer to the neurons in the hidden layer. After all corresponding weights have been updated for that iteration, they are then used for computing the ultimate output of the network in the next iteration. This learning process keeps repeating for the rest of the training data set, and might start over again, until the network produces a mean square error below a predetermined level. The set of last updated weights is then stored for further generalization to unseen data vectors. At this point, the neural network has been completely trained.

The backpropagation algorithm is the most fundamental mechanism of this type of supervised learning. Several more complicated algorithms have been developed to improve the performance of feedforward multilayer networks. Those revised versions of backpropagation make the networks learn much faster, consume less computational memory, and increase the accuracy of the outputs (Demuth & Beale, 1998).
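To show how the equations above fit together, the following is a minimal sketch, in Python with NumPy, of one training iteration of a three-layer network with sigmoid units and the generalized delta rule. It is intended only as an illustration of the update equations described in this chapter; the layer sizes, learning rate, and variable names are arbitrary choices, not the configurations of the models developed later in this thesis, and a practical implementation would add biases, momentum, and a stopping rule based on the mean squared error.

```python
import numpy as np

def sigmoid(net):
    # Out = 1 / (1 + e^(-Net)), the logistic transfer function
    return 1.0 / (1.0 + np.exp(-net))

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 3, 2                   # arbitrary layer sizes
w_ih = rng.uniform(-0.5, 0.5, (n_in, n_hid))   # input-to-hidden weights
w_ho = rng.uniform(-0.5, 0.5, (n_hid, n_out))  # hidden-to-output weights
alpha = 0.5                                    # learning rate between 0.1 and 1.0

x = np.array([0.2, 0.7, 0.1, 0.9])             # one training input vector
target = np.array([1.0, 0.0])                  # its target output vector

# Forward pass: net input and sigmoid output at each layer.
out_h = sigmoid(x @ w_ih)                      # hidden-layer outputs
out_o = sigmoid(out_h @ w_ho)                  # output-layer outputs

# Backward pass: the generalized delta rule.
error_o = target - out_o                       # error_o = target_o - actual_o
delta_o = out_o * (1.0 - out_o) * error_o      # adjusted error at the output layer
error_h = w_ho @ delta_o                       # weighted adjusted errors fed back
delta_h = out_h * (1.0 - out_h) * error_h      # adjusted error at the hidden layer

# Weight changes: delta_w = alpha * delta * Out, applied to each connection.
w_ho += alpha * np.outer(out_h, delta_o)
w_ih += alpha * np.outer(x, delta_h)
```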
Self-Organizing Map (SOM) [ii]

[ii] The concepts and theories of the self-organizing map are extracted and adjusted from the following sources: Ritter & Kohonen, 1989; Kohonen, 1990; Hiotis, 1993; Chen et al., 1995; Kiang et al., 1995; Kohonen, 1995; Mazanec, 1995; Rao & Rao, 1995; Gurney, 1997; Orwig et al., 1997; Rogers, 1997.

The principle of the SOM network was first introduced in early 1981 by Teuvo Kohonen, based on the earlier work of Willshaw and Von Der Malsburg (Kiang et al., 1995). The network's structure and its learning mechanism were developed to closely resemble the topological organization and learning process of neurons within a human brain. The SOM network has a topology-preserving property, which captures an important aspect of the feature maps of the brain. The learning process of this network is based on the competitive learning, or winner-take-all, algorithm, which is one type of unsupervised learning mechanism. This type of learning, along with the network's properties, enables the SOM network to extract some hidden topological features of a data set without an outside teacher. Given only a set of input vectors, the network uses its learning algorithm to condense and map higher-dimensional input data into a lower-dimensional spatial representation. A simple visualization, which results from the reduction of the data dimensions, facilitates the interpretation of complicated relationships among data observations. The SOM network has been used as an alternative to other traditional neural networks for similar tasks such as pattern recognition, classification and clustering, and process control.

Typically, a SOM neural network consists of two layers, an input layer and a Kohonen layer (see figure 3.4). Neurons in the input layer behave like feeders. They accept input vectors from the external environment and simply pass them to the Kohonen layer without any alteration. Each input neuron corresponds to one variable within the input data set. Neurons in the Kohonen layer, or SOM neurons, are generally arranged in particular two-dimensional topologies, such as squares, hexagons, or random orderings. Each of these neurons is connected to every neuron in the input layer. The set of connections to a single SOM neuron is called a codebook, or weight vector (w_k ∈ R^n). The locations of the individual SOM neurons and their neighborhoods convey meanings and relationships among the input data vectors. Each SOM neuron is to become a representation of one homogeneous class, or category, of input data. Thus, adjacent neurons should represent similar classes. In other words, the greater the distance between two neurons on the SOM grid, the less similar the categories of data they represent.

Figure 3.4: Self-organizing map neural network

The primary competitive learning algorithm is a basic learning mechanism for the SOM and other similar networks. Its concept is to make the individual neurons in the Kohonen layer compete with one another to represent one of the cluster subgroups of similar input data. For each training step, only the neuron whose weight vector is close or very similar to the input vector is the winner and is allowed to activate, while the other neurons are inhibited. The degree of similarity between the weight vector of a particular neuron and an input vector is measured by the Euclidean distance (d_k), which is calculated by this equation:

d_k = √[ (x_1 - w_1k)² + (x_2 - w_2k)² + ... + (x_n - w_nk)² ],

where x_i, for i = 1, 2, ..., n, is the i-th component of an input vector, and w_ik is the corresponding weight connecting the i-th input neuron to the particular k-th Kohonen neuron.

For this primary competitive learning, the individual neurons are trained independently. Only the one winning neuron is allowed to learn a particular class, by adjusting its weight vector to be closer to the input signals that belong to that class. Thus, different neurons will learn different aspects of the input data. The order in which the neurons are assigned to capture different classes of input signals is, however, randomized, depending mostly upon the initial weight values. A neuron trained with this primary algorithm acts in the same manner as its counterpart in the backpropagation multilayer network, activating when an input data signal matches the group to which it is assigned.

The competitive learning algorithm of the SOM network is adjusted to include the neighborhood aspect, as well as the topological organization of the SOM neurons. At the beginning of a training period, not only the winning neuron but also its neighbors are allowed to tune themselves to similar input patterns. This tuning means that both the winning neuron and its neighbors adjust their weight vectors to be closer to those input vectors. After the network has been repeatedly presented with randomly selected input signals, each neuron gradually becomes a single prototype of a homogeneous set of data patterns, in an orderly fashion. Once the network is completely trained, a particular SOM neuron becomes a localized response to an input vector of a particular class. The position of that neuron within the SOM grid map will reflect the most important feature coordinates of that input signal. The expected result from the SOM network is, hence, a spatial arrangement of the training input signals: the ones that belong to a similar class will be clustered into a similar region. This fully trained SOM network can then be used to classify unseen observations into the corresponding regions on the map, based on their topological relationship with the prior training data set.

The learning algorithm within the SOM neural network can be summarized as follows.
First, the weight values of the connections between the input neurons and the SOM neurons are initialized to small random values. A learning rate and a neighborhood size (the number of neurons in the neighboring vicinity) are also initialized. The learning rate is usually between 0.1 and 1.0, and the initial neighborhood size should be nearly the size of the SOM layer, or the number of neurons in each dimension (Kiang et al., 1995).

Second, each input signal is presented to the network. The distance between that signal and the weight vector of each SOM neuron is computed based on the Euclidean distance function stated previously. The SOM neuron with the minimum distance value is the winner, as shown in the following equation:

|| x - w_c || = min_k { || x - w_k || },  for k = 1, 2, ..., n,

where x is an input vector, w_c is the weight vector of the winning neuron, and w_k is the weight vector of the k-th neuron within the SOM layer.

Finally, the weight vectors of both the winning neuron and the neighborhood neurons are adjusted to move closer to that input vector. The weight vectors of the other neurons, outside the neighborhood vicinity, are kept intact. The adjustments to each weight vector are calculated according to these equations:

w_k(t + 1) = w_k(t) + α(t) [x - w_k(t)],  if k ∈ N_c(t), or
w_k(t + 1) = w_k(t),  if k ∉ N_c(t),

for k = 1, 2, ..., n, where α(t) is the learning rate at time t, and N_c(t) is the neighborhood around the winning neuron at time t. The learning rate and the neighborhood size are decreased at each iteration. This learning process is repeated for the whole set of input data, and terminates when it reaches a predefined number of iterations set by the user.
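As an illustration of the three steps just summarized, the following is a minimal sketch, in Python with NumPy, of repeated SOM updates on a small one-dimensional row of Kohonen neurons. The grid size, learning-rate schedule, and neighborhood rule are simplified placeholders chosen for brevity, not the settings used in this study, and real SOM software typically uses a two-dimensional grid with a smoother neighborhood function.

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_som = 4, 10                                # 4 input variables, 10 Kohonen neurons in a row
weights = rng.uniform(0.0, 1.0, (n_som, n_inputs))     # small random codebook (weight) vectors

def som_update(x, weights, alpha, radius):
    """One SOM step: find the winner by Euclidean distance, then pull the
    winner and every neighbor within `radius` grid positions toward x."""
    dists = np.linalg.norm(weights - x, axis=1)        # d_k for every Kohonen neuron
    c = int(np.argmin(dists))                          # index of the winning neuron
    for k in range(len(weights)):
        if abs(k - c) <= radius:                       # k lies inside the neighborhood N_c
            weights[k] += alpha * (x - weights[k])     # move toward the input vector
    return c

# Training loop: learning rate and neighborhood size shrink over the iterations.
data = rng.uniform(0.0, 1.0, (100, n_inputs))          # illustrative input vectors
for t in range(200):
    x = data[rng.integers(len(data))]                  # randomly selected input signal
    alpha = 0.5 * (1.0 - t / 200)                      # decreasing learning rate
    radius = max(0, int(round(4 * (1.0 - t / 200))))   # shrinking neighborhood
    som_update(x, weights, alpha, radius)
```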
Learning Vector Quantization (LVQ) [iii]

[iii] The concepts and theories of learning vector quantization are extracted and adjusted from the following sources: Kohonen, 1990; Kohonen, 1995; Gupta et al., 1997; Demuth & Beale, 1998.

The above SOM network with its unsupervised learning method might not be quite efficient for classification tasks. Within the SOM method, users have no influence over which SOM neurons represent which particular classes of input data. It is thus somewhat difficult to interpret and evaluate whether the neuron responding to a particular input vector really represents the intended class to which the input belongs. By allowing neurons in the SOM network to compete freely with each other, two input vectors that are supposed to belong to different classes may be put into the same region just because the distance between them is small. To make the classification task more accurate, the SOM networks should be trained in a supervised manner.

A learning vector quantization (LVQ) neural network is a self-organizing map network that is trained with the supervised version of the competitive learning mechanism. The LVQ network can be viewed as a feedforward three-layer network that combines the Kohonen layer within its structure. It can therefore be used for pattern recognition and classification tasks. Since the input signals for these tasks are to be classified into a finite number of classes, subsets of the corresponding connected weights are created to represent those classes. All of these connected weights have the same characteristics as the ones that learn the patterns of data in the SOM networks. The LVQ network consists of three layers, i.e., the input layer, the Kohonen or competitive layer, and the output layer (see figure 3.5). The input and Kohonen layers are basically the same as those of the SOM network in terms of their physical layouts and computational functions. Input neurons are fully connected to every neuron in the Kohonen layer. The output layer is composed of neurons, each of which represents a particular class or category. A relatively equal number of neurons in the Kohonen layer is assigned to each class. As shown in the network structure (see also figure 3.5), the assignment is reflected by the fact that each output neuron is connected to only some of the neurons in the Kohonen layer. It is the neurons in the Kohonen layer that learn the patterns of input data and perform the classification. The outputs from this Kohonen layer are then fine-tuned to belong to the correct class at the output layer.

Figure 3.5: Learning vector quantization neural network

The learning algorithm of this type of network is based on the SOM's competitive learning. Before the learning process begins, each possible subset of weight vectors is initially assigned to a class on a random basis. The initial random values of those weight vectors should roughly correspond to the probability density function of the input data. The training data set consists of both the input signals and their corresponding target classes. During the learning period, the class regions in the input space are defined by the nearest-neighbor comparison method. This method measures the distance between an input vector (x) and the weight vectors (w_k). The neuron with the weight vector closest to that input signal is then selected as the winner. The procedures for measuring the distance (the Euclidean distance d_k) and declaring a winning neuron are the same as those of the SOM network.

The class of the winning neuron is compared with the target class of the training input signal. If the classes are the same, the weights of the winning neuron are adjusted in the direction that moves them closer to the input vector. However, if the class of the winning neuron is different from that of the input signal, two sets of weights, belonging to two different neurons, are adjusted. The weights of the winning neuron are moved further away from that training input signal. At the same time, the weights of the closest non-winning neuron that belongs to the same class as the input signal are also adjusted. This adjustment moves that same-class neuron closer to the input signal and increases the chance that it becomes the winner. Let w_c be the weight vector of the winning neuron. After each iteration, w_c is updated following these equations:

w_c(t + 1) = w_c(t) + α(t) [x(t) - w_c(t)],
  if the winning neuron is in the same class as the input vector x; or

w_c(t + 1) = w_c(t) - α(t) [x(t) - w_c(t)], and
w_k(t + 1) = w_k(t) + α(t) [x(t) - w_k(t)],
  if the winning neuron is not in the same class as the input vector x, where w_k is the weight vector of the k-th neuron, the closest neuron to the input vector that belongs to the same class as the input vector; or

w_i(t + 1) = w_i(t),
  if the i-th neuron is not one of the neurons being adjusted.

α(t) is the learning rate at time t. The training process keeps repeating until there are no misclassifications or it reaches the number of iterations set by the user.
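A minimal sketch of the supervised competitive update just described, again in Python with NumPy, is given below. It shows only the core codebook update and the nearest-codebook classification rule; the codebook size, class assignment, learning-rate schedule, and synthetic data are illustrative placeholders, not the configurations of the LVQ models developed later in this thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
n_inputs, n_codebook, n_classes = 4, 9, 3
codebook = rng.uniform(0.0, 1.0, (n_codebook, n_inputs))   # Kohonen-layer weight vectors
codebook_class = np.repeat(np.arange(n_classes), 3)        # 3 codebook vectors assigned per class

def lvq_update(x, target_class, codebook, codebook_class, alpha):
    """One LVQ step: pick the winner by Euclidean distance; move it toward x
    if its class matches the target, otherwise push it away from x while
    pulling in the nearest codebook vector of the correct class."""
    dists = np.linalg.norm(codebook - x, axis=1)
    c = int(np.argmin(dists))                               # winning neuron
    if codebook_class[c] == target_class:
        codebook[c] += alpha * (x - codebook[c])
    else:
        codebook[c] -= alpha * (x - codebook[c])
        same_class = np.where(codebook_class == target_class)[0]
        k = same_class[np.argmin(dists[same_class])]        # closest correct-class neuron
        codebook[k] += alpha * (x - codebook[k])

def lvq_classify(x, codebook, codebook_class):
    """Assign an unseen observation to the class of its nearest codebook vector."""
    return codebook_class[int(np.argmin(np.linalg.norm(codebook - x, axis=1)))]

# Illustrative training pass over a small labelled sample.
X = rng.uniform(0.0, 1.0, (60, n_inputs))
y = rng.integers(0, n_classes, 60)
for t in range(300):
    i = rng.integers(len(X))
    lvq_update(X[i], y[i], codebook, codebook_class, alpha=0.3 * (1 - t / 300))
```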
Chapter Four
Research Purposes, Procedures, and Methodologies

This chapter outlines two main aspects: what goals this research study is trying to achieve and how to achieve them. The first part of the chapter defines the main purpose and objectives of this study. The rest of the chapter describes the research procedures and methodologies, as the means to achieve that purpose and those objectives. This part includes the description of the data, the approaches used to collect and manage the data, and the tools and techniques for analyzing the data and evaluating the results.

Purposes and Objectives

This research study is focused on the prediction of the academic performance of students in the B.Com. Program at UBC, using neural networks. The main purpose of this research is to investigate the capabilities and performance of neural networks when applied to a managerial decision-making task. As mentioned previously, neural network technology has increasingly assumed an important role in solving managerial problems. Numerous insights about this technology, in terms of its applications within the management area, are still waiting to be uncovered. Specifically, the research is aimed at achieving the following objectives.

• To compare the classification and prediction capabilities of two neural network paradigms, which utilize different learning mechanisms.
• To prove that neural networks are more capable of handling complicated classification tasks than a comparable traditional method, which is the ordered probit model.
• To suggest the most appropriate method, the one that produces the most optimal predicted results, to be embedded within the decision support system developed for the B.Com. Program.

This research mainly investigates the classification and prediction capabilities of three different approaches - two neural network paradigms and an ordered probit model. The first neural network paradigm is a feedforward multilayer network. It typically adopts the backpropagation learning algorithm. The second one is a supervised-learning version of the self-organizing map (SOM) network, the so-called learning vector quantization (LVQ). This network paradigm utilizes the competitive learning algorithm. Most researchers still prefer using the backpropagation paradigm for their classification tasks. However, the LVQ network has promising potential for solving classification problems. Burke believed that the hybrid learning mechanism could improve the prediction accuracy, as well as other computational performance, of a neural network model (Burke, 1991). She argued that if the unsupervised learning portion of a neural network can recognize a good clustering of data, the rest of the network can then be further trained for the associations of groups, or combinations of groups, with the categories of interest. The hybrid network first finds the hidden structures or relationships within a data set by its competitive learning method.
Despite its impressive performance in most situations, the backpropagation was quite unsuccessful in achieving the anticipated performance levels in the prediction of student academic success, as addressed in the corresponding studies of the backpropagation neural network application (Gorr et al., 1994; Wilson & Hardgrave, 1995). It could be that there are other neural network paradigms that are more capable of handling this academic success prediction task. The other reason could be that the procedures used to implement the backpropagation neural network are not quite perfect, as also mentioned by the researchers of those studies. The procedures utilized in those studies are likely to abort a chance of obtaining the backpropagation neural networks that have full classification and prediction capabilities. Within each of those studies, only one neural network with a particular configuration is applied on different sets of re-sampling data. It is, thus, quite questionable whether that particular neural network could perform optimally on every set of those data, even when they are from the same population. 51 Unlike the studies of backpropagation, two studies utilizing the hybrid neural networks did not clearly prove that their developed hybrid networks were better than or at lease as good as other traditional techniques, which can perform the similar tasks. Nobody can fully argue that those hybrid neural networks are the most promising techniques for handling the tasks under study. Thus, the remaining question concerning the capabilities of the hybrid networks is how well the hybrid networks can compete with other rival techniques in performing any related tasks. Both backpropagation and L V Q networks possess features and capabilities that are capable of handling the data classification and prediction. The major difference between them is in their learning algorithms, which adopt different concepts of pattern detection and recognition. The backpropagation algorithm adopts the gradient descent method that calculates the derivative of transfer function to adjust connected weights within the network. This attempt is aimed at minimizing the ultimate squared errors of outputs from the network. The L V Q algorithm adopts the Euclidean distance method that diminishes the higher dimension of data to the lower dimension, usually one or two, of data. This lower dimensional data are much easier for the fine-tuned classification process of the algorithm to assign them into the right groups. Even though in the prior studies the backpropagation neural networks did not perform satisfactorily in classifying and predicting academic success, it would be reasonable to try experimenting with them again. The first proposition for utilizing backpropagation neural networks in this study is that we can use them as a performance benchmark to 52 compare with the L V Q neural networks. The second proposition is that the procedure, which will be used in this study for applying backpropagation neural networks, is somewhat different from the procedures utilized in the prior studies. This adopted procedure, as will be mentioned in later chapters, is likely to facilitate the attainment of better performing backpropagation neural networks, and, it is hoped, to help generate more preferable prediction results than the procedures used in the prior studies. Why Select Ordered Probit Model Multiple regression analysis is a statistical technique that was commonly used in past studies of student academic performance. 
This regression technique is quite useful and appropriate for situations where continuous dependent variables, such as raw or actual scores, are predicted from sets of independent variables. However, within this study, the author is interested in classifying a student with particular academic and demographic backgrounds into one of several categorized and ordered groups. The phenomena of academic performance prediction in this study are discrete rather than continuous. Multiple regression, therefore, might not be the suitable approach for this prediction task. Moreover, the predicted results from multiple regression, which are continuous, cannot be directly compared with the categorical results produced by the two neural network paradigms.

There are several statistical methods that determine and estimate dependent variables with discrete values. One of them is the ordered probit model. This estimation technique is used to find relationships between an ordinal dependent variable and a set of independent variables. The ordinal variable is composed of categorical values with orders, such as rating or ranking scores. None of the categorical values conveys a quantitative meaning; they just represent and signify the existence of an order. The ordered probit model is, hence, more appropriate for the task of academic performance prediction within this research study. It can be used to predict how well an individual student performs academically by directly identifying the academic standing group to which he or she belongs. Predicted outputs from the ordered probit model are also readily comparable to those predicted by the neural networks. Moreover, to the author's knowledge, no past studies have compared the classification performance of neural networks with that of the ordered probit model.

The predicted outcomes from this ordered probit technique will be used as a performance benchmark for comparison with those from the neural network models. It is rather convincing to argue that neural networks are the better alternative if we can show that they produce better results when compared with a comparable traditional technique, such as the ordered probit model. It is also interesting to find out whether, by utilizing the different procedure, the backpropagation paradigm would yield results similar to those of the past academic success prediction studies. If the results from the backpropagation are, again, not quite impressive, we can then determine whether the LVQ paradigm could possibly yield better results. This finding would suggest which neural network paradigm is more appropriate for classifying and predicting academic-related data, which are naturally complicated. The question of whether there are any other neural network paradigms that outperform the backpropagation for the classification task could be partly answered here. Finally, it is also possible that neither of them performs satisfactorily compared with the adopted traditional method. This possibility would encourage future research on other potential neural network paradigms, or even on other innovative techniques.
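For readers who wish to see how such a benchmark could be estimated, the sketch below fits an ordered probit model in Python, assuming a reasonably recent version of statsmodels that provides the OrderedModel class. The predictor names and the synthetic data are placeholders only; they do not correspond to the study's variables, and the thesis itself does not prescribe any particular software for this step.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Synthetic, illustrative data: two predictors and an ordered outcome coded 1-3,
# standing in for the kind of grade-group variable described later in this chapter.
rng = np.random.default_rng(3)
n = 300
X = pd.DataFrame({"first_year_gpa": rng.normal(75, 8, n),
                  "econ_grade": rng.normal(72, 10, n)})
latent = 0.05 * X["first_year_gpa"] + 0.03 * X["econ_grade"] + rng.normal(0, 1, n)
y = pd.cut(latent, bins=3, labels=[1, 2, 3]).astype(int)   # ordered groups 1 < 2 < 3

# Ordered probit: distr="probit" selects the normal link for the latent variable.
model = OrderedModel(y, X, distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())

# Predicted class = the group with the highest predicted probability, which is
# directly comparable to the neural networks' categorical outputs.
probs = np.asarray(result.predict(X))       # one probability column per group
predicted_group = probs.argmax(axis=1) + 1
```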
Description of Input and Output Variables

Applicants to the B.Com. Program can be Arts and Science students who finish their first-year studies at UBC, transfer second- and third-year students from other colleges, or mature students with relevant work experience (UBC Calendar, 1998). Since students in the first group make up the biggest pool of applicants and their data are quite conveniently retrievable, they will be the focal subjects of this research study. The eligible first-year student applicants must have completed at least 30 credits on a full-time basis. They must also have taken the core requirements of English, Economics, and Mathematics during their first year. According to the above admission requirements, as well as the findings from prior research studies about student academic performance, the author has come up with the following list of input variables. The selection of these variable items also partly depends upon their availability and accessibility within the database at the Registrar's Office.

1. Gender
2. Age
3. First-Year GPA
4. Grade of an Economics course - ECON 100
5. Grade of English courses - ENGL 112, ENGL 110, ENGL 111, ENGL 120, ENGL 121
6. Grade of Math courses - MATH 100 & 101, MATH 120 & 121, or MATH 140 & 141

The output variable is the level of academic performance of a student in a particular specialization. This is measured by calculating the grade point average of the five core courses individual students are required to take in the specialization they choose. The B.Com. Program currently offers 10 specializations - Accounting, Commerce and Economics, Finance, General Business Management, Industrial Relations Management, International Business, Management Information Systems, Marketing, Transportation and Logistics, and Urban Land Economics. Each specialization, except Commerce and Economics, General Business Management, and International Business, requires its students to take at least five core courses from the provided course list. The following is a list of the core courses provided within each specialization.

Accounting: COMM 353, 354, 450 (Mandatory); two courses from COMM 452, 453, 454, 455, 459 (Elective)

Finance: COMM 371, 374 (Mandatory); one course from COMM 376, 377, 378, 379 (Elective); two courses from COMM 471, 472, 475, 478 (Elective)

Industrial Relations Management: COMM 327, 328, 421, 425, 428 (Mandatory)

Management Information Systems: COMM 335, 436, 437, 438, 439 (Mandatory)

Marketing: COMM 362, 363, 365, 468 (Mandatory); one course from COMM 460, 461, 462, 463, 464, 466, 467, 469 (Elective)

Transportation and Logistics: COMM 349, 399, 441, 449 (Mandatory); one course from COMM 444, 445, 447 (Elective)

Urban Land Economics: COMM 307, 309, 407, 408 (Mandatory); one course from COMM 406, 409 (Elective)

The calculated grade point average of those core courses is classified into one of three groups, depending on which grade range it falls into. The author adopted the UBC grading system and adjusted it to create the three grade ranges. To make the comparison among the different methods easier and more interpretable, the classified groups are represented by an index of 1, 2, or 3. However, due to the particular architecture of the neural network models, each categorized group is also shown as a three-column vector consisting of binary values of 0 or 1. Therefore, group 1 is represented by 1 0 0, group 2 by 0 1 0, and group 3 by 0 0 1.
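The mapping from a core-course average to a group index, and from a group index to the three-column binary target vector just described, can be written down compactly. The short Python sketch below is only an illustration of that encoding, using the grade ranges that are defined in the next section and summarized in table 4.1 below; the function names are illustrative and are not part of the study's software.

```python
import numpy as np

def grade_group(core_gpa_percent):
    """Map a core-course average (in percent) to the group index of table 4.1."""
    if core_gpa_percent >= 80:
        return 1            # A+, A, A-   : good performance
    elif core_gpa_percent >= 68:
        return 2            # B+, B, B-   : fair or average performance
    else:
        return 3            # C+, C, C-, D: poor performance (passing range 50-67)

def group_vector(group_index):
    """Represent a group index as the three-column binary target vector."""
    vec = np.zeros(3)
    vec[group_index - 1] = 1.0
    return vec

print(grade_group(83), group_vector(grade_group(83)))   # 1 [1. 0. 0.]
print(grade_group(71), group_vector(grade_group(71)))   # 2 [0. 1. 0.]
print(grade_group(60), group_vector(grade_group(60)))   # 3 [0. 0. 1.]
```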
The following reasons explain why the author decided to have three categorized groups. First, prior studies concerning the classification of data observations generally implement a small number of categorized groups. Although the largest number of categorized groups found was four, the majority of those studies implement either two-group or three-group categories. It is quite evident that the more groups a category has, the harder and more complicated the interpretation of the results becomes. Second, the three-group category seems to reflect the typical human assessment - good, fair, or poor - of the performance level of any subject or entity. It also seems consistent with the philosophy behind the UBC grading system, which converts percentage grades into letter grades. The author perceives all A-letter grades, A+, A, and A-, or grades above 80%, as an indication of good performance. All B-letter grades, B+, B, and B-, are considered fair or average performance, since they cover the middle range of the passing grade continuum, which runs from 50% to 100%. Grades below 67%, or about two-thirds of the full 100%, should be considered poor performance. This performance level is represented by the letter grades of C+, C, C-, and D. The details of the grades, both letter and percentage, in each categorized group are shown in table 4.1 below.

Group    Grade Range (%)    Corresponding Letter Grades
1        80 - 100           A+, A, A-
2        68 - 79            B+, B, B-
3        50 - 67            C+, C, C-, D

Table 4.1: The grade ranges, their corresponding categorized groups, and their corresponding letter grades

The author is not interested in predicting the exact grade point average of a particular student. That manner of prediction does not provide sufficiently significant meaning, given the time and effort spent on coming up with the exact figure. Besides, the results from this study might later be used by the management of the B.Com. Program. Thus, summarized figures, which show the big picture, would be more useful for making a quick but accurate decision than would detailed figures. This point of view is supported by the study of Wilson and Hardgrave. They proposed the classification approach as an ultimate approach for predicting the academic performance of students. They argued that a decision-maker would benefit more from the prediction of relative success/failure or good/poor performance of individual students than from the prediction of an actual GPA (Wilson & Hardgrave, 1995).

Privacy of Data

The privacy of individuals and the confidentiality of their information are of utmost importance in this research study. Individuals' personal information is fully protected by the Freedom of Information and Protection of Privacy Act (under section 35). A disclosure of personal information is possible only if the released information is used solely for research or statistical purposes. To prevent the individual students from incurring unintentional or accidental damages due to the disclosure and usage of their personal data, the following procedures are applied. The data items, such as name, social insurance number (SIN), and student number, that directly reveal the identity of individual students will not be included in this study. Even though these data items are excluded, someone might argue that it would still be possible to identify an individual. Putting together the values of the remaining data items, as defined within the description of variables, could possibly create a unique set of values, which, in turn, could be traced back to the owner.
This possibility is, to some extent, reduced by the adoption of the average grade of the core courses instead of the individual grades. The average value makes a unique combination of values less likely.

Data Samples and Data Collection

The raw input data are collected from the student database at the UBC Registrar's Office. Some of these data are used as training samples for building the models, while the others are used as cross-validation samples for testing the predictive power of the models. Theoretically, a large sample of data is required to ensure that effective and powerful classification models are created. For a problem as complex as this academic performance classification task, a big sample is quite necessary to make the separation curves among the classes of data clearly defined (Patuwo et al., 1993).

A set of data samples that covers essentially all possible scenarios or patterns cannot be achieved by a traditional data collection method alone, say, a survey of data from voluntary participants. Since some of the required data items are the final grades students receive in particular Commerce courses, only those who would like to reveal their grades would fully participate. Students who do not perform quite well in those courses, or those who do not care about this research project, would not participate. Mazlack identified that, by distributing questionnaires to the target group of students, only a tiny portion of the questionnaires will be returned. This fairly low return rate "makes the representativeness of the sample open to questions due to student self-selection. For example, 'did only those who did well return the questionnaire, or did only those who liked the instructor return the questionnaire?'" (Mazlack, 1980). A set of data acquired by this means is theoretically biased, since it represents only some portions of the entire population. In other words, it consists only of academic performance data from students who are willing to reveal their personal information. This set of data cannot be used for training the neural networks, since it provides incomplete knowledge of the data domain.

For the first data collection attempt, the author collected a sample of 1,729 records. These records belong to students who entered the program during 1991 to 1996. Less than 50% of these records are complete records. Records are considered incomplete if they have one or more missing values. Details of all related figures regarding the first set of raw data are shown in table 4.2 below.

Specialization            Complete Records    Incomplete Records    Total Records
                          Freq     %          Freq     %            Freq     %
Accounting                202      40%        305      60%          507      100%
Finance                   149      37%        255      63%          404      100%
Marketing                 95       35%        178      65%          273      100%
Urban Land Economics      46       27%        124      73%          170      100%
Transportation            17       29%        41       71%          58       100%
Industrial Relations      0        0%         109      100%         109      100%
MIS                       0        0%         69       100%         69       100%
Others                    N/A      N/A        N/A      N/A          139      100%

Table 4.2: Frequency and percentage of complete, incomplete, and total records, classified by specialization

The first complete set of data for each individual specialization is used as a training set for generating the corresponding neural network models. It is also used for building an ordered probit model corresponding to that specialization. Given the number of available complete records, the author decided to analyze the student data of Accounting, Finance, and Marketing.
Three other specializations - Business Economics, General Management, and International Business - require students to take more than five core courses. This requirement creates a diversification of courses that is quite difficult to handle, especially in terms of extracting all of the corresponding grades from the database. Further, only a small number of students are in those specializations. The author, hence, decided to discard them. Since Industrial Relations Management and Management Information Systems do not have any complete records, they are automatically excluded from this study. Urban Land Economics and Transportation and Logistics do not have a sufficient number of complete records, only 46 and 17 respectively. The author believes that these small sample sizes would not yield any statistically significant results. Thus, these two specializations were taken out of this study as well.

The second set of data is required for testing the developed models. Since there were no more complete records left, a second collection attempt was conducted to obtain more data. This time, the data were collected further back, from 1987 to 1990. This data collection added another 469 records to the first data set. Unfortunately, no more complete data records were found in this additional set. Only a few records, taken out during the first data cleaning-up process because of some invalid values, were recovered and put back into the complete set. The third data collection attempt was conducted with the same group of students who entered the program during 1991 to 1996, one semester after the first attempt. This collection attempt also included the other group of students who had just entered in 1997 or 1998. This time, the author did collect some more complete records, which, combined with the leftover complete records from the second attempt, created a total of 58 complete records.

Data Analysis Methodologies

At the meeting with the director of the B.Com. Program, we agreed that the academic performance of students in different specializations would be considered separately. This separation means that there will be prediction models developed particularly for each individual specialization. There are 10 specializations available in the B.Com. Program. Some specializations, however, have been excluded from the study for the technical reasons defined above. Consequently, the models will be developed for only the traditional specializations of Accounting, Finance, and Marketing.

According to the admission requirements, the applicants must have taken MATH 140 and MATH 141 before entering the program. However, the applicants can substitute the mathematics requirement with either MATH 100 and MATH 101 or MATH 120 and MATH 121 (UBC Calendar, 1998). These Math options make up two variables within the input variable set. The author decided to exclude students who take MATH 120 and MATH 121, since they account for only 12 out of 2,198 cases. Further, the MATH 100 series and the MATH 140 series cannot be treated as the same courses because they are at different levels of difficulty. Therefore, for each finalized specialization, there are two distinct sets of models: one for students who take MATH 100 and MATH 101, and the other for students who take MATH 140 and MATH 141. The average grade of MATH 100 is 77.7, while the average grade of its matched MATH 140 is 81.92.
At the same time, the average grade of MATH 101 is 76.59, and the average grade of MATH 141 is 82.74. The claim that the two remaining Math options differ from each other in difficulty should be statistically confirmed using an analysis of variance (ANOVA) test. The test indicates whether the means of the matched Math courses - MATH 100 versus MATH 140, and MATH 101 versus MATH 141 - are significantly different at the alpha level of 0.05. The results of the ANOVA tests, as well as some descriptive statistics, are shown in the tables below.

Term        Count    Mean       Standard Error   Effect
All         1,520    80.69276                    79.80966
MATH 100    442      77.6991    0.545166         -2.110564
MATH 140    1,078    81.92022   0.3490845        2.110564

Table 4.3: Descriptives of means, standard errors, and effects for MATH 100 and MATH 140

Source Term        Sum of Squares   df     Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups     5585.401         1      5585.401      42.52     0.000000*           0.999997
Within Groups      199412.1         1518   131.365
Total (Adjusted)   204997.5         1519
(* Significant at alpha = 0.05)

Table 4.4: Analysis of variance for testing the significant difference between the means of MATH 100 and MATH 140

According to Table 4.4, the F-ratio from the test of the first pair - MATH 100 and MATH 140 - is 42.52, which is much higher than the critical F at the alpha level of 0.05. Thus, the hypothesis that there is no significant difference between the means of MATH 100 and MATH 140 is rejected. In other words, MATH 100 is significantly different from MATH 140 and, therefore, cannot be treated as the same input variable.

Term        Count    Mean       Standard Error   Effect
All         1,502    81.12317                    79.66138
MATH 101    394      76.5863    0.606204         -3.075084
MATH 141    1,108    82.73647   0.3614906        3.075084

Table 4.5: Descriptives of means, standard errors, and effects for MATH 101 and MATH 141

Source Term        Sum of Squares   df     Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups     10993.6          1      10993.6       75.93     0.000000*           1.000000
Within Groups      217182.6         1500   144.7884
Total (Adjusted)   228176.2         1501
(* Significant at alpha = 0.05)

Table 4.6: Analysis of variance for testing the significant difference between the means of MATH 101 and MATH 141

According to Table 4.6, the F-ratio from the test of the second pair - MATH 101 and MATH 141 - is 75.93, which is again much higher than the critical F at the alpha level of 0.05. Thus, the hypothesis that there is no significant difference between the means of MATH 101 and MATH 141 is also rejected. In other words, MATH 101 is significantly different from MATH 141, so these two courses cannot be treated as the same input variable either.
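The thesis's own tests were produced with a standard statistics package; the one-way ANOVA they report can be reproduced in a few lines. Below is a minimal sketch assuming scipy is available; the two grade arrays are placeholders standing in for the 442 MATH 100 grades and the 1,078 MATH 140 grades drawn from the registrar data.

```python
import numpy as np
from scipy import stats

# Placeholder grade vectors; in the study these would be the actual course grades.
math100 = np.array([77, 82, 71, 80, 75], dtype=float)
math140 = np.array([85, 79, 88, 83, 81], dtype=float)

# One-way ANOVA: H0 says the two course means are equal.
f_stat, p_value = stats.f_oneway(math100, math140)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# H0 is rejected at alpha = 0.05 when p < 0.05, i.e. the two courses
# cannot be treated as the same input variable.
```

The same call accepts more than two groups, which is how the four elective English courses discussed next can be compared in a single test.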
The English requirement states that applicants must have taken ENGL 112 and one of the elective English courses - ENGL 110, ENGL 111, ENGL 120, or ENGL 121 (UBC Calendar, 1998). Since the number of students taking each of these elective English courses varies widely (see Table 4.7), the author decided to combine them into a single input variable called 'Elec ENGL'. Before this merger can be justified, however, the author has to show that there is no significant difference among these English courses. Again, an analysis of variance, this time across more than two groups, was applied, under the hypothesis that there is no significant difference among the means of ENGL 110, ENGL 111, ENGL 120, and ENGL 121. The results are shown in the following tables.

Term        Count   Mean       Standard Error   Effect
All         820     71.33781                    72.8422
ENGL 110    645     71.43566   0.3047501        -1.406539
ENGL 111    166     70.78313   0.6007167        -2.059065
ENGL 120    5       74.4       3.461296         1.557802
ENGL 121    4       74.75      3.869846         1.907802

Table 4.7: Descriptives of means, standard errors, and effects for ENGL 110, ENGL 111, ENGL 120, and ENGL 121

Source Term        Sum of Squares   df    Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups     150.7054         3     50.23515      0.84      0.472857            0.233318
Within Groups      48880.72         816   59.90285
Total (Adjusted)   49031.43         819

Table 4.8: Analysis of variance for testing the significant difference among the means of ENGL 110, ENGL 111, ENGL 120, and ENGL 121

The average grades of ENGL 110, ENGL 111, ENGL 120, and ENGL 121 are 71.44, 70.78, 74.40, and 74.75, respectively. According to the ANOVA table (Table 4.8), the calculated F-ratio is 0.84, which is lower than the critical F at the alpha level of 0.05. The above hypothesis, hence, could not be rejected; in other words, there is no significant difference among the means of these elective English courses, and they can be treated in a similar manner.

To train the neural network models, as well as to build the ordered probit models, training sets of data are needed. Since there are three separate specializations, and each specialization has two separate models based on the Math options, there are six different training sets. Each individual training set consists of two subsets - the training input subset and the target output subset.

As mentioned above, all of the complete records found during the first data collection process are used as training sets. However, the number of observations belonging to each group within the training sets is not distributed equally. This uneven distribution contradicts the practice of some prior studies, which argue that a training set should have an equal proportion of all categorized groups (Shaun, 1995; Jain & Nag, 1997). Tam and Kiang, on the other hand, argue that a training set with equal proportions constitutes only a small portion of the data and does not reflect the real distribution of the entire population, and that the matching process used to enforce an equal distribution of all categorized groups might itself introduce biases into the models (Tam & Kiang, 1992). For this study, if the number of cases in each categorized group had to be equal, the total number of cases within any individual training set would be reduced significantly. The resulting small sample sizes could tarnish the predictive power of the models, since the samples might not represent all possible patterns of the data domain. The author, hence, decided to proceed with the available number of complete records as the training cases. The following table shows the number of records and the proportion of records in each group, for each specialization track.
Specialization   Math Option        Number of Records   Proportion of Samples in each Categorized Group (Group1:Group2:Group3)
Accounting       MATH 140 & 141     137                 40:74:23 (29%:54%:17%)
Accounting       MATH 100 & 101     60                  24:30:6 (40%:50%:10%)
Finance          MATH 140 & 141     93                  42:49:2 (45%:53%:2%)
Finance          MATH 100 & 101     52                  26:24:2 (50%:46%:4%)
Marketing        MATH 140 & 141     79                  6:70:3 (7%:89%:4%)
Marketing        MATH 100 & 101     15                  2:13:0 (13%:87%:0%)

Table 4.9: Total number of training records and the proportion of records within each group, separated by Math option for each specialization

After the models have been developed and trained, their generalization power has to be proved on unseen cases of data. Thus, a second, cross-validation set of data comes into play. This validation set consists of records that are not included in the training set; these records come from the second and third data collection attempts. Unlike the training set, the validation set should have a proportion of observations in each categorized group that represents the actual composition of the entire population (Markham & Ragsdale, 1995; Jain & Nag, 1997). Testing the prediction power of the models with a data set that has an equal proportion of observations in each group would result in a misevaluation of a model's performance. For each specialization track, the number of validation records is about 10% or more of the number of training records. Table 4.10 below shows the corresponding figures for the validation sets.

Specialization   Math Option        Number of Records   Proportion of Samples in each Categorized Group (Group1:Group2:Group3)
Accounting       MATH 140 & 141     19                  4:11:4 (21%:58%:21%)
Accounting       MATH 100 & 101     6                   4:2:0 (67%:33%:0%)
Finance          MATH 140 & 141     16                  5:8:3 (31%:50%:19%)
Finance          MATH 100 & 101     4                   2:2:0 (50%:50%:0%)
Marketing        MATH 140 & 141     12                  1:11:0 (8%:92%:0%)
Marketing        MATH 100 & 101     1                   0:1:0 (0%:100%:0%)

Table 4.10: Total number of cross-validation records and the proportion of records within each group, separated by Math option for each specialization

There are eight input variables entering each particular model. The first variable is gender, which is represented by 0 if the student is female and by 1 if the student is male. The second variable is the student's age when entering the program, calculated by subtracting the birth year from the entering year. The next four variables are first-year GPA, ECON 100, ENGL 112, and Elec ENGL. The last two variables depend on which Math option the model belongs to: either MATH 100 and MATH 101, or MATH 140 and MATH 141. The values of the third to eighth variables are percentages on a 100-point scale.

Design of neural network models: The architectures of the two network paradigms are quite similar, except at their middle layers, which have different components and connections. The input layers have eight input neurons; each neuron corresponds to a particular input variable, as mentioned above. The output layers consist of three distinct neurons; each output neuron represents one of the three academic standing groups, as also mentioned above.
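As a concrete illustration of this encoding, the sketch below builds one training pair: an eight-element input vector and a three-element target vector with a single 1 marking the student's academic standing group. The helper names and the sample values are fabricated for illustration only and are not drawn from the real student data.

```python
import numpy as np

def encode_record(gender, age, gpa1, econ100, engl112, elec_engl, math_a, math_b):
    """Eight input variables: gender coded 0/1, age in years, the rest percentages."""
    return np.array([gender, age, gpa1, econ100, engl112, elec_engl, math_a, math_b],
                    dtype=float)

def encode_group(group):
    """Three output neurons; the neuron for the student's group (1, 2, or 3) is set to 1."""
    target = np.zeros(3)
    target[group - 1] = 1.0
    return target

# One illustrative training pair (made-up values, not real registrar data).
x = encode_record(gender=1, age=18, gpa1=77.0, econ100=80.0,
                  engl112=70.0, elec_engl=72.0, math_a=83.0, math_b=85.0)
t = encode_group(2)   # the student belongs to the middle academic standing group
```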
Figure 4.1 illustrates the architecture of these neural networks.

Figure 4.1: The architecture of the neural network models, showing the components within their input and output layers (eight input neurons for Gender, Age, GPA 1, ECON 100, ENGL 112, Elec ENGL, MATH 1?0, and MATH 1?1; a middle layer; and three output neurons for Groups 1, 2, and 3)

The first paradigm is a multilayer feedforward neural network. Most research studies on neural network applications agree that a three-layer neural network (with one hidden layer) is sufficient for effectively handling any complicated classification task (Caudill, 1991; Salchenberger et al., 1992; Tam & Kiang, 1992; Subramanian et al., 1993; Patuwo et al., 1993; Jain & Nag, 1995). However, there are no rules of thumb stating which value each associated parameter should take for any particular task. To keep things simple, the author followed what was done and suggested in prior studies, along with some trial-and-error experimental values. The model configuration that needs to be determined consists of the number of neurons in the hidden layer, the learning rate, the number of iterations (epochs), and the performance goal.

The number of hidden neurons is the most critical factor affecting the prediction capability of a backpropagation neural network. Most researchers devise their own formulas for identifying a suitable number of hidden neurons, but they generally agree that the number should be neither too large nor too small (Salchenberger et al., 1992; Subramanian et al., 1993; Patuwo et al., 1993; Lenard et al., 1995; Jain & Nag, 1997; Zhang & Hu, 1998). A network with too many hidden neurons creates an overfitting problem: an overfitted network will accurately classify the training data, but at the expense of losing its predictive power when it encounters unseen validation data. On the other hand, a network with too few hidden neurons might not possess sufficient ability to learn all possible patterns in the data. Since the hidden neurons behave like feature detectors, having too few of them forces individual neurons to lump together patterns that should remain distinct and separable. After reviewing the prior studies, the author decided to try numbers of hidden neurons between 4 and 16 (between 50% and 200% of the number of input neurons), increasing by 2 neurons. In other words, the possible numbers of hidden neurons are 4, 6, 8, 10, 12, 14, and 16.

According to some researchers, the learning rate should be set neither too high nor too low (Green & Choi, 1997; Demuth & Beale, 1998). A high learning rate makes the network overlearn the data patterns and, at the same time, keeps its performance oscillating; this instability tends to reduce the generalization capability of the network. A low learning rate, on the other hand, lengthens the learning time of the network before it can converge. Since the learning rates adopted in past studies ranged from as low as 0.1 to as high as 0.9, the author decided to take those two values at the extremes and to include the middle value of 0.5 as the possible learning rates.
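Crossing the seven hidden-layer sizes with the three learning rates gives the 21 configurations per paradigm referred to later in the performance comparison. A one-line enumeration, sketched below, makes this experimental grid explicit; the variable names are arbitrary.

```python
from itertools import product

hidden_sizes = [4, 6, 8, 10, 12, 14, 16]   # 50% to 200% of the eight input neurons
learning_rates = [0.1, 0.5, 0.9]

configurations = list(product(hidden_sizes, learning_rates))
print(len(configurations))   # 21 configurations per network paradigm
```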
This algorithm can learn the complicated patterns of data much faster since it can approach the second-order training speed without computing the Hessian matrix (Demuth & Beale, 1998). Individual backpropagation network models will keep iterating their learning process for 5,000 epochs. The performance goal is set at 0.05 of the mean squared error. Each training run will stop when the performance goal has been met, the gradient has reached the minimum value, or the number of iterations has been completed. Neurons in the hidden layer have the 'tansig' transfer function. This function accepts any input values, from - oo to + co, and then produces the output values between -1 and +1. Output neurons contain the 'logsig' transfer function, which transfers any real values into the values between 0 and +1. Since a categorized group is represented by the combination of three digits with binary values of 0 and 1, all three output values produced from the output neurons would have to be converted into either 0 or 1. The conditions for conversion are as follow. 73 • The strongest output value will be assigned the value of 1. • The two remaining lower output values will be replaced with the value of 0. The converted output vector will then identify which group the network model has predicted. Performance of backpropagation networks mainly depends upon not only the optimal number of hidden neurons but also the right set of initial weights. Basically, the poor performance of the network is a result of the occasion when the network is stuck in the local minima instead of a global minimum of the error curve. One way to reduce that chance is to make multiple training runs with different sets of randomly initial weights (Lippmann, 1987; Caudill, 1991; Demuth & Beale, 1998). For this study, each model with its specific configuration will be run five times. The averaged performance from those five runs of every possible model will then be compared to determine the best performer. The second paradigm is a learning vector quantization (LVQ) neural network. There are only a few research studies regarding the application of this network paradigm. The guidelines for setting the values of all correlated parameters are even in the mist. Since there are no evident rules for setting up the appropriate configurations for the L V Q models, the author would have to apply the same set of configurations for the backpropagation models to the L V Q models. Applying the same set of configurations to both paradigms would make it easy and consistent for the comparison of performance. However, due to some specifications of L V Q algorithms, the L V Q models developed for 74 the Marketing specialization in both Math options cannot follow all configurations that are applied to their backpropagation counterparts. Because of the highly unequal proportion of training records in each group (see table 4.9 for details), the possible number of middle or Kohonen neurons for the Marketing with M A T H 140 & 141 option would have to start from 8 instead of 4 neurons. For the same reason, the number of middle neurons for the Marketing with M A T H 100 & 101 option will vary from 13 to 16 neurons. The author added the 13- and 15-neuron configurations in order to increase the number of observed results. Brief Description of Ordered Probit Model: The ordered probit model1 is used as a traditional benchmark for the performance comparison with neural networks. Its estimation method of maximum likelihood is applied to the data of training sets. 
Brief Description of the Ordered Probit Model: The ordered probit model(1) is used as a traditional benchmark for the performance comparison with the neural networks. Its maximum likelihood estimation method is applied to the data of the training sets. The result from running the ordered probit model is a set of coefficient values. These values are then used to generate an estimated linear equation for a predicted score (S_j). The sum of the predicted score and a random error is used to calculate the probability that an observed outcome belongs to a particular categorized group. In other words, a predicted output from the ordered probit model is the probability that the predicted score (S_j) plus the random error (u_j) lies between a pair of adjacent cut points (k_(i-1) and k_i). The following equation shows the general form of the ordered probit model:

Pr(output_j = i) = Pr( k_(i-1) < β1*Gender + β2*Age + β3*GPA1 + β4*ECON100 + β5*ENGL112 + β6*ElecENGL + β7*MATH1?0 + β8*MATH1?1 + u_j <= k_i )

where u_j is a random error, assumed to be normally distributed; i is a number representing one of the three categorized groups (within this study, i can be 1, 2, or 3); k_(i-1) and k_i are two adjacent cut points; β1, β2, ..., β8 are the coefficients of the input variables; and Pr(output_j = i) is the probability that the output from the j-th set of input items belongs to the i-th categorized group.

(1) The concepts of the ordered probit model are extracted and adjusted from the following sources: Greene, 1997; StataCorp, 1997.

The procedure used to decide which academic standing group is the most likely one predicted by the ordered probit model is quite similar to that used for the backpropagation neural networks. For each set of input items, i.e., each input record, the ordered probit model produces three probability values; each value represents the probability that the outcome belongs to the corresponding group. The ultimate predicted group is the group with the highest probability. In other words, a particular group is selected as the predicted result from the ordered probit model if the corresponding outcome has its highest probability of belonging to that group. All predicted groups are kept for later comparison with the actual categorized groups and with the predicted groups from the neural network models.

The Neural Network Toolbox within the MATLAB® software package is selected to generate and run the various neural network models of both paradigms. There are numerous neural network packages, both commercial and freeware, available on the market. Sexton and his colleagues stated that they tried most of the commercial packages, such as NeuralWorks Professional II/Plus, BrainMaker, and MATLAB, and found no difference among their performance (Sexton et al., 1998). The author adopts MATLAB since it provides all of the neural network features required for this research. For running the ordered probit model, Stata™ software is adopted; it provides all of the important features, procedures, and methods necessary for this probit model technique.
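Although the estimation itself is done in Stata, the prediction rule described above - pick the group whose interval is most likely to contain S_j + u_j - can be sketched in a few lines. The following assumes a standard-normal error and uses fabricated coefficient and cut-point values purely for illustration; it is not the fitted Stata output.

```python
import numpy as np
from scipy.stats import norm

def ordered_probit_probs(x, beta, cuts):
    """Probabilities of the three ordered groups given the predicted score S_j = x . beta.

    cuts = (k1, k2) are the two interior cut points; u_j ~ N(0, 1)."""
    s = float(np.dot(x, beta))
    k1, k2 = cuts
    p1 = norm.cdf(k1 - s)                        # Pr(S_j + u_j <= k1)
    p2 = norm.cdf(k2 - s) - norm.cdf(k1 - s)     # Pr(k1 < S_j + u_j <= k2)
    p3 = 1.0 - norm.cdf(k2 - s)                  # Pr(S_j + u_j > k2)
    return np.array([p1, p2, p3])

# Fabricated coefficients, cut points, and input record (illustration only).
beta = np.array([-0.3, 0.03, -0.11, -0.01, 0.0, 0.01, 0.006, -0.03])
cuts = (-9.5, -7.0)
x = np.array([1, 18, 77.0, 80.0, 70.0, 72.0, 83.0, 85.0])

probs = ordered_probit_probs(x, beta, cuts)
predicted_group = int(np.argmax(probs)) + 1      # the group with the highest probability
```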
Performance Evaluation Criteria

The most critical concern in the task of academic success prediction is how to correctly identify the ultimate performance of individual students. By developing a model that represents the patterns of data within the domain under consideration, we expect the model to correctly predict the results for unseen observations. Within the context of this study, the author focuses on the correct classification rate within each categorized group, as well as the total correct classification rate, to determine which models are the good performers. Good performing models should consistently predict well across every group.

To identify which neural network paradigm is better at classifying and predicting academic performance, the author compares the mean of the aggregate correct classification rates produced by each paradigm. However, to ensure that any difference between the performance levels of the two network paradigms does not occur by chance, the author applies a one-way analysis of variance (ANOVA) test. The total correct classification rate, measured as the percentage of correctly classified cases, from each configuration of each network paradigm is included in the test. Since there are 21 different model configurations per paradigm, the total number of observations is 42. This number of observations should give the ANOVA test enough statistical weight to support the validity of the conclusions that are drawn.

The next step is to consider whether the neural networks are superior to the ordered probit model in this classification task. The author compares the correct classification results of each neural network paradigm with those of the ordered probit model. Again, the one-way ANOVA test is used to assess the statistical significance of these comparisons. There are, however, some procedural issues regarding the comparisons that need to be addressed and resolved.

The first issue is how to determine and compare classification performance. The performance of neural networks is quite dynamic: several neural network models with various configurations can be developed for the same set of data, so the neural networks can produce classification results ranging from excellent to very poor on the same data set. An ordered probit model, on the other hand, developed and tested with a given set of data, produces only a single, fixed set of results. Comparing the classification results of all of the good, fair, and poor performing neural networks with those of a statically performing ordered probit model could mislead the interpretation of the comparison. To avoid this problem, the author selects, out of the pool of network models, the single best performing neural network of each paradigm and compares its performance with that of the ordered probit model.

The second issue is the number of observations (observed correct classification results) generated by each classification approach. Since the best model of each approach produces a single total correct classification rate for each set of data, comparing that single rate from one approach with the single rate of the other approach is not possible with an ANOVA test. Even if the classification results from all six specialization tracks are pooled, the six observations from each approach might not be statistically sufficient. To increase the number of observations and the degrees of freedom, the author decided to use both the correct classification rates of the individual groups and the aggregate correct classification rates as the observed results. This inclusion increases the number of observations from 6 to 19 cases, which, in turn, adds statistical weight to the test results.

Chapter Five
Research Results

This chapter mainly summarizes the outcomes from applying the neural networks and the ordered probit model.
The main purpose of this chapter is to objectively show the actual results, as well as to summarize and address some interesting points of the outcomes. No analysis, interpretation, or opinion from the author will be presented in this chapter. The chapter is basically divided into three parts. The first part illustrates descriptive statistics of the data sets, such as mean and standard deviation. It also shows some important parameters, such as coefficient, and standard error, concerning the ordered probit models. The second part reports the classification performance of both backpropagation and learning vector quantization neural networks. It then shows the results from the performance comparisons between the two network paradigms using the A N O V A test. In the last part, the best performance of each neural network paradigm is selected and compared with that of ordered probit model. The A N O V A tests of significant difference between ordered probit model and each neural network are presented as well. Descriptive Statistics and Ordered Probit Model's Parameters The following tables show some descriptive statistics of input (independent) and output (dependent) variables. These figures correspond to the data of training sets. There are 80 basically six separate sets of data. Each set belongs to each specialization Finance, or Marketing - with one of two different options of Math courses. - Accounting, • Accounting specialization with M A T H 140 & 141: There are 137 training samples for this track. Variable Mean Standard Deviation Range o " Values Min Value Max Value Age 18.29 0.5809 17 20 First-Year GPA 77.18 5.2078 66 91 ECON 100 77.23 7.5608 61 96 ENGL 112 71.20 6.9323 45 88 Elective ENGL 71.72 6.5099 53 90 MATH 140 83.44 8.4826 61 99 MATH 141 86.26 8.4656 63 99 Gender Number of Male = 52 cases Number of Female = 85 cases Average Grade 75.01 7.2440 56.83 88.14 Table 5.1: Means, standard deviations, and ranges of values of input and output variables for the Accounting with M A T H 140 & 141 option The following table shows the coefficient values of all eight independent variables within the regression equation of the ordered probit model for the Accounting with M A T H 140 & 141 option. Only the coefficient of the first-year GPA (GPA1) is significantly different from zero. It can be argued that the first-year GPA is the only explanatory variable that is significantly related to the dependent variable. The pseudo R is 0.177, which means that 17.7% the variation in the predicted output (categorized group) is explained by the variation of the independent variables. 81 Independent Var iable Regression Coefficient Standard E r r o r z-Value ( H 0 : Bi = 0) Probabi l i ty Level Gender -0.3090 0.2309 -1.338 0.181 Age 0.0293 0.1760 0.167 0.868 GPA1 -0.1141 0.0456 -2.502* 0.012 ECON100 -0.0135 0.0217 -0.622 0.534 ENGL112 0.0002 0.0181 0.013 0.990 ElecENGL 0.0134 0.0176 0.764 0.445 MATH 140 0.0064 0.0170 0.376 0.707 MATH141 -0.0255 0.0161 -1.583 0.113 (* reject H 0 at p < 0.05) Pseudo R 2 0.1770 Table 5.2: Regression coefficients, standard errors, and z-values for the ordered probit model of Accounting with M A T H 140 & 141 option • Accounting specialization with M A T H 100 & 101: There are 60 training samples for this track. 
Var iable Mean Standard Deviat ion Range ol ' Values M i n Value M a x Value Age 18.40 0.5585 17 20 First-Year GPA 78.20 5.4704 64 90 ECON 100 79.78 7.7396 61 97 ENGL 112 71.22 6.4705 57 83 Elective ENGL 70.38 6.3675 55 86 MATH 100 82.90 9.3713 58 99 MATH 101 79.77 11.0888 50 98 Gender Number of Male = 29 cases Number of Female = 31 cases Average Grade 77.04 8.3080 53.50 93.80 Table 5.3: Means, standard deviations, and ranges of values for input and output variables for the Accounting with M A T H 100 & 101 option 82 Independent Regression Standard z-Value Probability Variable Coefficient Error (H„: Bi = 0) Level Gender 0.4674 0.3666 1.275 0.202 Age 0.4597 0.3098 1.484 0.138 GPA1 -0.0847 0.0810 -1.045 0.296 ECON100 -0.0498 0.0368 -1.353 0.176 ENGL112 -0.0063 0.0291 -0.218 0.828 ElecENGL -0.0338 0.0313 -1.081 0.280 MATH 100 -0.0210 0.0285 -0.736 0.462 MATH 101 0.0170 0.0237 0.715 0.475 Pseudo R 2 0.2527 Table 5.4: Regression coefficients, standard errors, and z-values for the ordered probit model of Accounting with M A T H 100 & 101 option For this track, none of the coefficients of independent variables are significantly different from zero. The pseudo R is 0.2527, which means that 25.27% the variation in the predicted output is explained by the variation of the independent input variables. • Finance specialization with M A T H 140 & 141: There are 93 training samples for this track. Variable Mean Standard Deviation Range o ' Values Min Value Max Value Age 18.57 0.9136 17 23 First-Year GPA 77.98 5.5794 55 89 ECON 100 78.34 8.0629 54 95 ENGL 112 72.26 8.3743 52 90 Elective ENGL 72.48 7.3892 55 91 MATH 140 83.24 10.9648 50 99 MATH 141 86.49 9.4451 50 99 Gender Number of Male = 52 cases Number of Female = 41 cases Average Grade 78.50 6.0007 63.33 92.80 Table 5.5: Means, standard deviations, and ranges of values for input and output variables for the Finance with M A T H 140 & 141 option 8 3 Independent Variable Regression Coefficient Standard Error z-Value (H 0 : Bi = 0) Probability Level Gender -0.2609 0.2851 -0.915 0.360 Age 0.1400 0.1466 0.952 0.341 GPA1 -0.0084 0.0550 -0.152 0.879 ECON100 -0.0594 0.0258 -2.299* 0.021 ENGL 112 0.0176 0.0205 0.856 0.392 ElecENGL -0.0130 0.0226 -0.575 0.565 MATH 140 , 0.0098 0.0177 0.554 0.579 MATH141 -0.0046 0.0210 -0.217 0.828 (* reject H 0 at p < 0.05) Pseudo R2 0.1084 Table 5.6: Regression coefficients, standard errors, and z-values for the ordered probit model of Finance with M A T H 140 & 141 option The coefficient of ECON100 is the only value that is significantly different from zero. When considering the pseudo R 2 of 0.1084, we can see that only a small portion of variation in output is explainable with the variation of input variables. • Finance specialization with M A T H 100 & 101: There are 52 training samples for this track. Variable Mean Standard Deviation Range o ' Values Min Value Max Value Age. 
18.48 0.8282 17 21 First-Year GPA 78.29 4.7665 71 90 ECON 100 80.17 7.4326 66 97 ENGL 112 70.87 6.1867 59 84 Elective ENGL 68.06 7.0944 50 85 MATH 100 82.94 8.8260 63 98 MATH 101 82.60 8.9624 64 99 Gender Number of Male = 22 cases Number of Female = 30 cases Average Grade 78.78 5.7117 65.17 89.33 Table 5.7: Means, standard deviations, and ranges of values for input and output variables for the Finance with M A T H 100 & 101 option 84 Independent Variable Regression Coefficient Standard Error z-Value (H„: Bi = 0) Probability Level Gender 0.4132 0.4105 1.007 0.314 Age 0.3324 0.2486 1.337 0.181 GPA1 -0.3039 0.1160 -2.620* 0.009 ECON100 0.0040 0.0447 0.090 0.928 ENGL112 0.0016 0.0390 0.041 0.967 ElecENGL 0.0813 0.0380 2.139* 0.032 MATH 100 -0.0253 0.0333 -0.760 0.447 MATH101 0.0877 0.0344 2.546* 0.011 (* reject H 0 at p < 0.05) Pseudo R2 0.2871 Table 5.8: Regression coefficients, standard errors, and z-values for the ordered probit model of Finance with M A T H 100 & 101 option The coefficient of first-year GPA is, again, a significant factor for this track. Further, the coefficients of both Elective English and Math 101 are also significant from zero. The pseudo R 2 of 0.2871 is quite higher than that of the previous Finance track. • Marketing specialization with M A T H 140 & 141: There are 79 training samples for this track. Variable Mean Standard Deviation Range o ' Values Min Value Max Value Age 18.48 0.7656 18 23 First-Year GPA 75.11 3.9775 64 86 ECON 100 72.15 6.8726 55 89 ENGL 112 71.47 6.2652 57 85 Elective ENGL 72.44 7.3480 53 94 MATH 140 78.52 9.8928 51 97 MATH 141 81.23 9.9999 50 98 Gender Number of Male = 27 cases Number of Female = 52 cases Average Grade 74.68 4.0416 60.40 , 86.83 Table 5.9: Means, standard deviations, and ranges of values for input and output variables for the Marketing with M A T H 140 & 141 option 85 Independent Regression Standard z-Value Probability Variable Coefficient Error ( H „ : B i = 0) Level Gender 0.1503 0.4561 0.330 0.742 Age 0.2170 0.2416 0.898 0.369 GPA1 -0.1306 0.1012 -1.290 0.197 ECON100 -0.0549 0.0406 -1.352 0.176 ENGL112 0.0430 0.0428 1.006 0.314 Elec ENGL -0.0214 0.0340 -0.630 0.529 MATH 140 0.0428 0.0249 1.715 0.086 MATH141 0.0109 0.0274 0.396 0.692 Pseudo R2 0.2520 Table 5.10: Regression coefficients, standard errors, and z-values for the ordered probit model of Marketing with M A T H 140 & 141 option For this track, nothing is significantly different from zero. The pseudo R z is 0.2520, which is also somewhat low. In other words, only 25.2% of variation in output are explainable with the variation of input variables. • Marketing specialization with M A T H 100 & 101: There are only 15 records available as the training samples for this track. 
Variable Mean Standard Deviation Range ol ' Values Min Value Max Value Age 18.38 0.6450 18 20 First-Year GPA 74.32 4.9846 66 81 ECON 100 75.87 7.7447 63 95 ENGL 112 70.07 4.5586 62 78 Elective ENGL 69.80 6.5814 60 81 MATH 100 77.60 8.2445 65 96 MATH 101 72.33 8.5329 57 85 Gender Number of Male = 8 cases Number of Female = 7 cases Average Grade 74.62 3.9423 68.20 82.80 Table 5.11: Means, standard deviations, and ranges of values for input and output variables for the Marketing with M A T H 100 & 101 option 86 I n d e p e n d e n t R e g r e s s i o n S t a n d a r d z - V a l u e P r o b a b i l i t y V a r i a b l e C o e f f i c i e n t E r r o r ( H 0 : B i = 0) L e v e l Gender 3.5221 0.000 0.000 1.000 Age -1.7657 0.000 -0.000 1.000 GPA1 0.0986 0.000 0.000 1.000 ECON100 -0.1333 0.000 -0.000 1.000 ENGL 112 -1.8121 0.000 -0.000 1.000 ElecENGL 0.0320 8821857 0.000 1.000 MATH 100 0.3278 0.000 0.000 1.000 MATH 101 0.3766 0.000 0.000 1.000 Pseudo R 2 1.0000 Table 5.12: Regression coefficients, standard errors, and z-values for the ordered probit model of Marketing with M A T H 100 & 101 option The ordered probit model has a problem in running with the data of this Marketing track. Since there are only 15 observations within the data set, the standard errors are very closed to zero, except for that of Elec ENGL. None of the coefficients are significant from zero. Moreover, this small sample size makes the pseudo R 2 a perfect 1.00. Due to the small number of both training and validation samples, the ordered probit model could not generate any prediction of output from these samples. There will be no classification and prediction results for this Marketing track. Thus, the performance consideration of neural networks and ordered probit model will be on only the first five specialization tracks. Classification and Prediction Capabilities between Backpropagation and Learning Vector Quantization Correct Classification Performance: Each neural network model with the distinct number of hidden neurons and learning rate is repeatedly run for five times. The results from those runs, measured in terms of both the number and percentage of correct 87 classified cases, are then averaged. The following table reports only the average correct classification rates, in percentages, for each specialization track. The results in detail are illustrated in tables within appendix I. S p e c i a l i z a t i o n T r a c k D a t a Set N e u r a l N e t w o r k P a r a d i g m C o r r e c t C l a s s i f i c a t i o n R a t e (%) G r o u p 1 G r o u p 2 G r o u p 3 T o t a l Accounting with MATH 140 & 141 Training BP 68 92 50 78 LVQ 29 52 25 41 Validation UP 33 •73 ' 12 52 LVQ 25 53 :": 23 41 Accounting with MATH 100 & 101 Training BP 80 91 52 83 LVQ 39 52 15 43 Validation BP 63 64 N/A 63 LVQ 39 35 N/A . 37 Finance with MATH 140 & 141 Training BP 78 86 10 80 LVQ 41 51 8 45 Validation BP 61 59 2 49 LVQ 40 54 ', 13 ' 42 Finance with MATH 100 & 101 Training BP 85 83 42 83 LVQ 51 37 17 43 Validation BP 65 33 • -N/A 49 LVQ 40 44- • N/A 42 Marketing with MATH 140 & 141 Training BP 50 99 40 93 LVQ 4 86 7 76 Validation BP • 1 . 88 , N / A ' - " 81 LVQ 3 .87- • - • N / A 80 Marketing with MATH 100 & 101 Training BP 69 99 N/A 95 LVQ 31 97 N/A 88 Validation BP N/A 98 N/A 98 LVQ N/A .. 
92 ', N/A 92 Table 5.13: Average correct classification developed neural network models rates, both each group and tola of all Test of a significant difference in performance: This section reports the results from the test of a significant difference between two sets of performance levels of the two neural network paradigms. The A N O V A test has been performed on both the training and validation sets of each specialization track. There are 21 different classification results for each paradigm within the Accounting and Finance tracks. There are also 21 different classification results for the backpropagation paradigm within both Marketing tracks. 88 However, due to the extremely unequal group proportion within those two Marketing tracks, the L V Q paradigm could generate only 12 and 15 different correct classification rates for the M A T H 140 & 141 and M A T H 100 & 101 options, respectively. The following table illustrates, for every instance, the mean values of total correct classified cases, as well as the corresponding percentages in parentheses. It should be noticed that a gap between the performance levels of these two network paradigms is quite wide on the training set. However, this performance gap is greatly reduced when the validation set is applied. This reduction is basically due to the decrease in performance of the backpropagation neural network from the training to validation set. The performance of the L V Q neural network seems to be relatively stable when migrating from the training to validation set. Specialization Training Set Validation Set BP LVQ BP L V Q Accounting: w/ MATH 140 & 141 106.44 55.58 9.89 7.80 (78%) (41%) (52%) (41%) w/MATH 100 & 101 49.62 25.89 3.78 2.23 (83%) (43%) (63%) (37%) Finance: w/ MATH 140 & 141 74.80 42.23 7.84 6.70 (80%) (45%) (49%) (42%) w/MATH 100 & 101 42.97 22.45 1.96 1.68 (83%) (43%) (49%) (42%) Marketing: w/MATH 140 & 141 73.24 60.38 9.71 9.58 (93%) (76%) (81%) (80%) w/MATH 100 & 101 14.30 13.16 0.98 0.92 (95%) (88%) (98%) (92%) Table 5.14: Means of aggregate performance levels, measured in terms of the number of correct classified cases and the corresponding percentage, of each network paradigm within each specialization track 89 The following table reports the F-Ratios from the A N O V A tests. The F-Ratio identifies whether the difference between the performance levels of the backpropagation and those of the L V Q is significant. The table also reports at which confidence level the significance exists. In most instances, the significant difference does exist at both an alpha level of 0.05 and an alpha level of 0.01. There is one instance of the Finance with M A T H 100 & 101 option that the difference between the performance levels on the validation samples is significant at the alpha level of 0.05 but not at the level of 0.01. Further, for both Math options of the Marketing specialization, the differences between the performance levels on the validation samples are not at all significant at the level of 0.05. A l l detailed figures corresponding to the A N O V A tests are illustrated in tables within appendix I. 
S p e c i a l i z a t i o n M a t h O p t i o n D a t a S a m p l e s F - R a t i o P r o b a b i l i t y L e v e l Accounting MATH 140 & 141 Training 634.33 0.000* Validation 70.15 0.000* MATH 100 & 101 Training 373.45 0.000* Validation 64.82 0.000* Finance MATH 140 & 141 Training 348.38 0.000* Validation 9.87 0.003* MATH 100 & 101 Training 429.81 0.000* Validation 4.32 0.044* Marketing MATH 140 & 141 Training 133.60 0.000* Validation 0.22 0.640 MATH 100 & 101 Training 25.38 0.000* Validation 2.94 0.096 (* Significant at alpha = 0.05) Table 5.15: F-Ratios and their probability levels resulting from the A N O V A test of a significant difference between performance levels of the backpropagation paradigm and those of the L V Q paradigm 90 Classification and Prediction Capabilities between Ordered Probit Model and Neural Networks The Best Scenario Correct Classification Performance: This section basically compares the classification performance of the three prediction approaches - ordered probit model, backpropagation, and learning vector quantization. Figures in the following tables are the classification results of the best scenario of each approach. Among various levels of performance of both neural network paradigms, the highest correct classification rate is selected. It will then be compared with the correct classification rate of ordered probit model. Classification results on the training samples are presented in the first of the following tables (table 5.16), while classification results on the validation samples are in the second table (table 5.17). 91 M e t h o d B e s t S c e n a r i o G r o u p 1 G r o u p 2 G r o u p 3 T o t a l Ordered Probit Model • Accounting: w/ MATH 140 & 141 25 62 2 88 (63%) (84%) (9%) (64%) w/MATH 100 & 101 16 25 1 42 (67%) (83%) (17%) (70%) • Finance: w/ MATH 140 & 141 25 38 0 63 (60%) (78%) (0%) (68%) w/MATH 100 & 101 19 18 0 37 (73%) (75%) (0%) (71%) • Marketing: w/MATH 140 & 141 1 70 0 71 (17%) (100%) (0%) (90%) w/MATH 100 & 101 N/A N/A N/A N/A (N/A) (N/A) (N/A) (N/A) Backpropagation 134 • Accounting: w/ MATH 140 & 141 38 74 22 (95%) (100%) (96%) (98%) w/MATH 100 & 101 24 30 6 60 (100%) (100%) (100%) (100%) • Finance: w/ MATH 140 & 141 42 49 1 92 (100%) (100%) (50%) (99%) w/MATH 100 & 101 26 24 2 52 (100%) (100%) (100%) (100%) • Marketing: w/MATH 140 & 141 6 70 3 79 (100%) (100%) (100%) (100%) w/MATH 100 & 101 2 13 N/A 15 (100%) (100%) (N/A) (100%) Learning Vector Quantization • Accounting: w/ MATH 140 & 141 31 49 1 81 (78%) (66%) (4%) (59%) w/MATH 100 & 101 17 17 1 35 (71%) (57%) (17%) (58%) • Finance: w/MATH 140 & 141 29 28 0 57 (69%) (57%) (0%) (61%) w/MATH 100 & 101 19 13 1 33 (73%) (54%) (50%) (64%) • Marketing: w/ MATH 140 & 141 1 69 0 70 (17%) (99%) (0%) (89%) w/MATH 100 & 101 2 13 N/A 15 (100%) (100%) (N/A) (100%) Table 5.16: The correct classified cases and their correlated percentages, as of each group and of total, of the training data set among three different methods 92 Best Scenario Method Group 1 Group 2 Group 3 Total Ordered Probit Model • Accounting: w/ MATH 140 & 141 1 9 2 12 (25%) (82%) (50%) (63%) w/MATH 100 & 101 3 2 N/A 5 (75%) (100%) (N/A) (83%) • Finance: w/ MATH 140 & 141 4 4 0 8 (80%) (50%) (0%) (50%) w/MATH 100 & 101 1 0 N/A 1 (50%) (0%) (N/A) (25%) • Marketing: w/MATH 140 & 141 0 11 N/A 11 (0%) (100%) (N/A) (92%) w/MATH 100 & 101 N/A N/A N/A N/A (N/A) (N/A) (N/A) (N/A) Backpropagation 13 • Accounting: w/ MATH 140 & 141 2 10 1 (50%) (91%) (25%) (68%) w/MATH 100 & 101 4 2 N/A 6 (100%) (100%) (N/A) (100%) • Finance: w/MATH 
140 & 141 5 6 0 11 (100%) (75%) (0%) (69%) w/MATH 100 & 101 2 2 N/A 4 (100%) (100%) (N/A) (100%) • Marketing: w/ MATH 140 & 141 0 11 N/A 11 (0%) (100%) (N/A) (92%) w/MATH 100 & 101 N/A 1 N/A 1 (N/A) (100%) (N/A) (100%) Learning Vector Quantization • Accounting: w/ MATH 140 & 141 3 7 3 13 (75%) (64%) (75%) (68%) w/MATH 100 & 101 4 2 N/A 6 (100%) (100%) (N/A) (100%) • Finance: w/ MATH 140 & 141 3 7 1 11 (60%) (88%) (33%) (69%) w/MATH 100 & 101 2 2 N/A 4 (100%) (100%) (N/A) (100%) • Marketing: w/MATH 140 & 141 1 11 N/A 12 (100%) (100%) (N/A) (100%) w/MATH 100 & 101 N/A 1 N/A 1 (N/A) (100%) (N/A) (100%) Table 5.17: The correct classified cases and their correlated percentages, as of each group and of total, of the validation data set among three different methods Table 5.17 shows the numbers and percentages of correctly classified validation samples generated by these three approaches. It can be seen that both backpropagation and L V Q 93 models outperform ordered probit models in almost every instance. Moreover, the aggregate correct classification rates of both backpropagation and L V Q models are higher than two-thirds (66%), while some of the classification rates of ordered probit models are below this level. Test of a significant difference in performance: This section reports the results from the test of a significant difference between the best performance levels of each neural network paradigm and the only performance levels of ordered probit model. The test has been performed solely on the validation sets, since the author would like to know how well each trained method performs on the unseen observations. The ordered probit model could not predict results for the Marketing with Math 100 & 101 track, thus, the classification rates produced by other methods of this track are also taken out from the comparison. Further, since both correct classification rates in each group and aggregate correct classification rates are included in the A N O V A test, there are ultimately 17 different correct classification observations from each method. Only the classification results measured in percentages are used as the observations for the comparison. The actual or head-counted numbers of correct classified cases cannot be used for comparison since each number is based on different sample sizes. Identifying any significant difference in performance using these head-counted numbers is incorrect, and definitely misleads the interpretation of results and conclusions. 94 Table 5.18 illustrates the means of the 17 observed correct classification percentages of each classification method, as mentioned above. It is quite interesting to see that while the average performance level of the best backpropagation decreases from the training set to validation set, the average performance level of the best L V Q , instead, increases. Further, on the training sets, the backpropagation has the highest average performance among other methods, while, on the validation sets, the L V Q has the highest performance among others. Moreover, it should be noticed that the mean figures of both backpropagation and L V Q within this table 5.18 are not similar to those in table 5.14. The mean figures in table 5.18 are calculated from only the classification rates, both by group and aggregate, of the best performing neural network models from the six specialization tracks, not from the classification rates of all developed neural network models. 
The reason for this different calculation was stated in chapter 4 and concerns the fairness and appropriateness of the performance comparison between each neural network paradigm and the ordered probit model.

Method                                Mean (Training Set)   Mean (Validation Set)
Ordered Probit Model                  54.45%                54.41%
Backpropagation (BP)                  96.90%                74.71%
Learning Vector Quantization (LVQ)    52.15%                84.24%

Table 5.18: Means of performance levels, in terms of the correct classification percentage, of each classification method

The following ANOVA tables report the results of the performance comparison between each neural network paradigm and the ordered probit model. The tables report both F-ratios and probability levels, and specify whether the observed differences in means are significant at the alpha level of 0.05 or 0.10.

Source Term        Sum of Squares   df   Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups     3500.735         1    3500.735      2.87      0.099796**          0.376218
Within Groups      38993.65         32   1218.552
Total (Adjusted)   42494.38         33
(** Significant at alpha = 0.10)

Table 5.19: Analysis of variance for testing the significant difference between the mean of the correct classification rates of the ordered probit model and that of backpropagation

Source Term        Sum of Squares   df   Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups     7560.265         1    7560.265      9.48      0.004248*           0.847285
Within Groups      25529.18         32   797.7867
Total (Adjusted)   33089.44         33
(* Significant at alpha = 0.05)

Table 5.20: Analysis of variance for testing the significant difference between the mean of the correct classification rates of the ordered probit model and that of learning vector quantization (LVQ)

The performance comparison between the ordered probit model and backpropagation shows that the difference in their classification performance exists at the 90% confidence level. At the same time, the difference between the classification performance of the ordered probit model and that of the LVQ model is significant at the higher confidence level of 95%.

Chapter Six
Analysis and Interpretation of Results

The main focus of this chapter is to answer the questions implied by the objectives of this research, as addressed in the previous chapter. To do so, the results are first thoroughly analyzed, and the findings from the analysis are then interpreted. This chapter also discusses the knowledge gained from the findings, as well as other unexpected but interesting outcomes.

Classification Power of Backpropagation and Learning Vector Quantization

The application of neural networks to classification tasks is still, to some extent, a matter of trial and error. There are no accepted rules or standards stating which combination of parameters and values will generate the optimal neural network model for the task under study. The general practice is to try several models with different configurations and then determine which model creates the best results. Within this study, the author has developed a number of models; each model consists of a particular number of hidden neurons, ranging from 4 to 16, and one of three learning rates, 0.1, 0.5, or 0.9. The author implemented all possible configurations, where applicable, for both the backpropagation and LVQ paradigms. The classification results from these models are reported in the previous chapter, as well as in appendix I. In the next two sections, the author discusses some interesting findings concerning the number of hidden neurons and the learning rate.
The following table identifies the model configurations that produce the highest correct classification rates. 97 Specialization Track Data Set Neural Network Paradigm #of Hidden Neurons Learning Rate Correct Classification Rate (%) Group 1 Group 2 Group 3 Total Accounting with MATH 140 & 141 Training BP 10 0.1 90 94 54 86 LVQ 6 0.1 38 56 33 47 Validation BP 14 0.5 30 85 15 59p; LVQ 10 .0.1. 25 ' 71 15 ,', 50 Accounting with MATH 100 & 101 Training BP 14 0.1 97 99 67 95 LVQ 14 0.1 63 56 7 54 Validation BP 16 0.5 75 90 N/A 80 LVQ ' 4 . . 0.5 .50 80 N/A 60 Finance with MATH 140 & 141 Training BP 14 0.1 93 95 20 92 LVQ 16 0.1 71 48 0 58 Validation BP . 12 0.1,0.5 64 75 7 59 LVQ 16 0.1 60, 68 7 • 54 Finance with MATH 100 & 101 Training BP 14 0.1 98 99 60 97 LVQ 16 0.5 56 46 10 50 Validation BP 12 0.1,0.5 70 60 N/A 65 3 LVQ 14 0.1 50 90 N/A • .70 Marketing with MATH 140 & 141 Training BP 16 0.1 77 100 73 97 LVQ 16 0.5 7 95 0 85 Validation BP 14 0.5, 0.9 0 100 N/A' 92 LVQ 14 0.1 20 95 N/A 88 Marketing with MATH 100 & 101 Training BP 4, 14 0.1,0.9 100 100 N/A 100 LVQ 16 0.1 80 100 N/A 97 Validation BP 4- 16 0.1 -0.9 N/A 100 N/A 100 LVQ 8- 16.. 0.1 -0.9 "N/A 100 N/A 100 Table 6.1: Neural network configurations that produce the best total classi ication results Optimal Number-(s) of Hidden Neurons: According to the table 6.1, most of the best performing backpropagation models have either 12, 14 or 16 hidden neurons. The 4-, 6-and 8-hidden-neuron models are the rarest cases that create the best performance (only in the models for the Marketing with M A T H 100 & 101 option). The finding from this study is quite contrary to the findings of some previous studies. Those studies argued that the optimal number of hidden neurons would be around 75% of the number of input neurons (Salchenberger et al., 1992; Jain & Nag, 1995; Lenard et al., 1995; Gupta et al., 1997). Besides, for this study, the number should be around 150 to 200% of the number of input neurons. Even though some researchers argue that so many neurons in the 98 hidden layer could jeopardize the generalization ability of the network (Patuwo et al., 1993; Subramanian et al., 1993; Lenard et al., 1995; Zhang & Hu, 1998), it seems not to be the case for this study. The pattern of the optimal numbers of Kohonen (hidden) neurons of the L V Q networks is quite similar to that of backpropagation networks on both training and validation sets. The best performing L V Q networks could have any numbers of hidden neurons, but most of them have 14 or 16 hidden neurons (175, or 200%, of the number of input neurons). Classification performance levels of neural network models, both backpropagation and L V Q , within the Marketing with M A T H 100 & 101 option are quite exceptional. Almost every model performs near perfect on the training set, and does so perfectly on the validation set. The main reason for this impressive performance is due to the rather small number of observations. The training set consists of 15 observations; two of them classified into the first group, and 13 of them classified into the second group. Only 1 observation, classified into the second group, is available for validating the models. It should not be difficult for all neural network models to correctly classify those observations into their right groups. Even just to simply guess every observation into the second group seems not to deteriorate the correct classification rate so much. 
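The observation that simply guessing every Marketing with MATH 100 & 101 observation into the second group would do little harm is just the majority-class baseline, which can be quantified directly from the group counts in Tables 4.9 and 4.10. The short sketch below does so; the lists simply restate those counts.

```python
from collections import Counter

# Group memberships for the Marketing / MATH 100 & 101 track (Tables 4.9 and 4.10):
# 2 training cases in group 1 and 13 in group 2; the single validation case is in group 2.
train_groups = [1] * 2 + [2] * 13
valid_groups = [2]

majority_group, _ = Counter(train_groups).most_common(1)[0]

train_baseline = sum(g == majority_group for g in train_groups) / len(train_groups)
valid_baseline = sum(g == majority_group for g in valid_groups) / len(valid_groups)
print(majority_group, round(train_baseline, 2), round(valid_baseline, 2))  # 2 0.87 1.0
```

Any model for this track therefore has to beat roughly 87% on the training set, and 100% on the single validation case, before its accuracy says anything about genuine pattern learning.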
According to the above findings, the backpropagation neural network tends to produce more accurate prediction results when it has quite a large number of hidden neurons, more than 100% of the number of input neurons. The likely interpretation of this 99 phenomenon is that there are many abstract patterns of data to be detected within this domain of academic performance. These various patterns might result from the complex non-linear interactions among the input data items. In his tutorial article, Hiotis explained the role of hidden neurons within the backpropagation neural networks (Hiotis, 1993). He stated that an individual hidden neuron behaves like a feature or pattern detector by concentrating and comprehending "particular information containing in the input signals." The distinct combination, to some extent, of these features identifies a possible output signal that is associated with a given input vector. In other words, a particular output signal could be generated by various combinations of possible features of input data1. Since there are eight input variables in this study, the interaction among some or all of them could create quite a large numbers of features or patterns, which, subsequently, influence ultimate academic performance. As mentioned earlier, within this domain, we find that some students with similar academic backgrounds and demography often end up with much different academic achievement levels in their later studies. To effectively recognize and detect these several distinct features, as well as to be able to predict accurate outcomes, hence, the large number of hidden neurons is quite necessary for the backpropagation models. Perhaps it might not be worth trying to make a good interpretation out of the findings about the optimal numbers of Kohonen neurons. To the author's knowledge, there has been no indication of the impacts of too many or too few Kohonen neurons toward the 1 It might be helpful for readers to understand the feature detection and interaction aspects by determining the general architecture of neurons and their connected weights within a backpropagation neural network, as shown in chapter 3 or other materials elsewhere. 100 ultimate classification and prediction performance of an L V Q neural network. Besides, Kohonen argued that it is the right number of those Kohonen neurons assigned to each class that has real impacts on the achievement of prediction accuracy (Kohonen, 1995). According to the competitive learning algorithm utilized within the M A T L A B package, the codebook vectors are initially assigned with random values. They are then randomly selected to represent any classes of input data at the beginning of the training phase. The values of the codebook vectors are adjusted during the training process to make the vectors represent the correct classes. Having too many or too few Kohonen neurons should not create any significant difference in prediction. However, since the assignment of particular Kohonen neurons to a particular class is unchangeable, the performance of L V Q networks would depend on whether or not the utilized algorithm is generating the optimal assignment. Optimal Learning Rate(s): It has been suggested that a learning rate that is not too high or too low should be used to train a neural network. The high learning rate would make the neural network learn data patterns quite fast. 
However, at the same time, the high learning rate makes the learning process greatly fluctuate, and creates a difficulty for the neural network to converge. The small rate, on the other hand, enables the neural network to converge at the lowest point of the error curve, but takes a longer time before it can converge (Green & Choi, 1997; Demuth & Beale, 1998). The actual results tend to be consistent with the above assertion. Most of the best performing neural network models implement a learning rate of either 0.1 or 0.5. Only 1 0 1 two other backpropagation network models with a learning rate of 0.9 did have the best performance. However, other corresponding models within the same tracks with a learning rate of 0.1 or 0.5 also accompany the 0.9 learning-rate models. Finally, because of a very small size for the validation set within the Marketing with M A T H 100 & 101 option, various learning rates seem not to show distinct impacts on the classification performance. Significant Difference in Correct Classification Performance: The A N O V A test is used to determine which network paradigm performs better in classifying and predicting academic success. The A N O V A test identifies whether there exists a significant difference between correct classification results of these neural network paradigms. Although the author reports both total and each group's correct classification rates, the comparison of performance will be focused only on the total correct classification rates. Table 5.14 in the previous chapter shows that, in every instance, the average aggregate correct classification rate of the backpropagation models is higher than that of L V Q models. The A N O V A test confirms the superiority of backpropagation models over L V Q models by proving that, in most instances, the differences between correct classification rates of these two paradigms are significant at the 95% confidence level. From the test results, we could argue that the backpropagation algorithm is more powerful than the L V Q algorithm in recognizing the patterns of the given data and in predicting the right patterns of unseen data within this domain. However, on the validation samples within both tracks of Marketing, the differences between classification 102 performance levels of these two paradigms are not statistically significant. In the next sections, the author will further discuss these exceptional test results. On the training data sets, the backpropagation models are able to fit most or all of the possible data patterns, while the L V Q models recognize only some data patterns. A backpropagation model uses the gradient descent method in adjusting its parameters to produce the minimum output errors. Once the backpropagation model has reached the global minimum of the error curve, it can effectively recognize all possible data patterns and produce the outcomes accordingly. The only problem of this gradient descent method is that it could possibly make the model get stuck into one of several local minima. This situation usually occurs when a set of initial random weights is not the right one. The local minima problem generally deters the model from learning data patterns, and makes the model produce quite low correct classification rates. To repeat running the same backpropagation model several times could help find the right set of initial weights that will not put a network in any local minima (Zahedi, 1993). An L V Q model adopts the Euclidean distance method in adjusting its weights. 
An LVQ model adopts the Euclidean distance method in adjusting its weights. The ultimate goal of this method is to make one or more codebook vectors, i.e., sets of connected weights, closely resemble input vectors of a similar pattern. The fully trained LVQ model has particular sets of interconnected weights and middle neurons that represent each distinct class of similar input data. Within this study, the LVQ models seem unable to learn most or all of the data patterns in the training samples. The main concept behind the LVQ learning algorithm is to cluster data with similar patterns into the same class. Unfortunately, this algorithm cannot work effectively when it comes across a set of complicated data with inconsistent and fluctuating patterns. For example, the data set might consist of several observations that are categorized into the same group but are somewhat different from one another in most or all variable dimensions. It is quite impossible for the LVQ model first to learn these data patterns in an unsupervised manner and then to be trained, in a supervised manner, to classify those pre-categorized data into the designated groups.

To further address this issue, the author ran all training sets through the unsupervised part of the LVQ model to see how the training observations are clustered in a two-dimensional plane. Since there is no direct way to perform this task, the author adopted the self-organizing map (SOM) model, which is usually used for clustering data without any supervision. The results from running the SOM neural network with the training sets are illustrated in the figures below.

Figure 6.1: Self-organizing maps of the training data set, separated by categorized groups, for the Accounting with MATH 140 & 141 option

Figure 6.2: Self-organizing maps of the training data set, separated by categorized groups, for the Accounting with MATH 100 & 101 option

Figure 6.3: Self-organizing maps of the training data set, separated by categorized groups, for the Finance with MATH 140 & 141 option

Figure 6.4: Self-organizing maps of the training data set, separated by categorized groups, for the Finance with MATH 100 & 101 option

Figure 6.5: Self-organizing maps of the training data set, separated by categorized groups, for the Marketing with MATH 140 & 141 option

Figure 6.6: Self-organizing maps of the training data set, separated by categorized groups, for the Marketing with MATH 100 & 101 option

It can be seen that, for every track of Accounting and Marketing, the observations in the first group do cluster in particular areas, which are quite distinct from those occupied by the observations in the third group. The observations in the second group, however, scatter all over the plane, including most areas occupied by either the first or the third group. In the case of the Finance tracks, the observations in both the first and the second groups scatter and occupy most of the plane, including areas taken by the observations in the third group. This haphazard distribution on the plane makes it difficult for the supervised part of the LVQ models to classify all data observations into the right groups. Since the observations in the second group heavily populate the plane for all Accounting and Marketing tracks, the LVQ models tend to classify most data observations into this group.
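The maps above were produced with the SOM facilities of the neural network package. Purely as an illustration of the underlying step, the following self-contained Python sketch trains a small square SOM grid (6 x 6 here, chosen arbitrarily) and reads off each observation's winning map position; the names, grid size, and decay schedule are assumptions, not the study's actual code.

```python
import numpy as np

def train_som(x, grid=6, epochs=1000, lr0=0.5, radius0=3.0, seed=0):
    """Minimal SOM: map each input vector onto a grid x grid plane."""
    rng = np.random.RandomState(seed)
    # One weight vector per map unit, initialized at random within the data range.
    w = rng.uniform(x.min(axis=0), x.max(axis=0), size=(grid * grid, x.shape[1]))
    # Pre-compute the (row, col) coordinate of every map unit.
    coords = np.array([(r, c) for r in range(grid) for c in range(grid)], float)

    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                    # decaying learning rate
        radius = max(1.0, radius0 * (1.0 - t / epochs))  # shrinking neighbourhood
        xi = x[rng.randint(len(x))]                      # pick a random observation
        winner = np.argmin(np.linalg.norm(w - xi, axis=1))
        # Neighbourhood function: units near the winner move the most.
        dist = np.linalg.norm(coords - coords[winner], axis=1)
        h = np.exp(-(dist ** 2) / (2.0 * radius ** 2))
        w += lr * h[:, None] * (xi - w)
    return w, coords

def map_positions(x, w, coords):
    """Winning map coordinate for each observation (for plotting by group)."""
    winners = np.argmin(np.linalg.norm(w[None, :, :] - x[:, None, :], axis=2), axis=1)
    return coords[winners]
```

Plotting the positions returned by map_positions separately for each achievement group would yield displays in the spirit of Figures 6.1 to 6.6.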
This tendency makes the correct classification rates of the second group the highest among the three groups. The situation is also true for the Finance tracks: both the first and the second groups heavily populate the plane, their correct classification rates are roughly the same, and both are much higher than the correct classification rate of the third group.

On the validation samples, the aggregate performance of the backpropagation models is not significantly higher than that of the LVQ models in every instance; the exception is in both Marketing tracks. There, the backpropagation models perform slightly better than the LVQ models, but their performance levels are not significantly different from each other at the 95% confidence level. A possible explanation for this insignificance is that the proportion of observations in each group is greatly uneven, as mentioned previously. There is only one validation case, which falls into the second group, for the MATH 100 & 101 option. Within the set of 12 validation cases for the MATH 140 & 141 option, eleven are in the second group. It is thus not difficult for both paradigms, trained with data sets concentrated in the second group, to correctly predict the results for those validation observations, most of which are also in the second group. The author believes that the classification and prediction results of these Marketing tracks would be similar to those of the other specialization tracks if more samples of the first and the third groups could be found.

Performance Degradation: By comparing the aggregate performance of the two network paradigms, it can be seen that backpropagation models are much superior to LVQ models in learning the patterns of data on any training set. On average, most backpropagation models produce correct classification rates of about 80 to 90% in the training phase. Most LVQ models, on the other hand, produce rates of only about 40%, except for the Marketing tracks, where the rates go up to around 80%. When applying both neural network paradigms to the validation sets, the gaps between their performance levels become smaller than those in the training phase. The average performance of the backpropagation models fluctuates considerably across the specialization tracks, ranging from 50% to almost 100%. The average performance of the LVQ models, on the other hand, fluctuates less, clustering at the 40% level for both Accounting and Finance and at the 80 to 90% levels for Marketing. Considering each paradigm individually, the performance of the backpropagation models degrades substantially when migrating from the training to the validation sets, while the performance of the LVQ models seems to be rather stable.

The performance degradation of the backpropagation models could be explained by the problem of overfitting (Shaun, 1995; Demuth & Beale, 1998). The high classification rates of the backpropagation models on the training sets imply overtraining. Not surprisingly, the classification rates drop significantly when running the models on the validation sets because of their insufficient generalization capability. The other possible reason for the performance degradation of the backpropagation models is the small size of the validation samples compared to the size of the training samples. The number of validation samples is about 10% of the number of training samples.
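The split and the degradation measure discussed here can be expressed in a few lines. The sketch below is illustrative only; the 10% fraction, the names, and the prediction interface are assumptions rather than the study's actual data handling.

```python
import numpy as np

def holdout_split(x, y, val_fraction=0.10, seed=0):
    """Random hold-out split: roughly 90% training, 10% validation."""
    rng = np.random.RandomState(seed)
    order = rng.permutation(len(y))
    n_val = max(1, int(round(val_fraction * len(y))))
    val, train = order[:n_val], order[n_val:]
    return (x[train], y[train]), (x[val], y[val])

def correct_rate(y_true, y_pred):
    """Aggregate correct classification rate, in percent."""
    return 100.0 * float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def degradation(train_true, train_pred, val_true, val_pred):
    """Drop in the correct classification rate from training to validation;
    a large positive value is a symptom of overtraining."""
    return correct_rate(train_true, train_pred) - correct_rate(val_true, val_pred)
```

With only a handful of validation cases, a single misclassification shifts correct_rate by several percentage points, which is the sensitivity discussed next.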
The validation set might consist mostly of observations with patterns that are not well recognized and learnt by the trained backpropagation models. Within such a small validation set, misclassifying only one or a few observations could cause a severe drop in the correct classification rate.

It is quite difficult, though, to find a good explanation for the consistent performance of the LVQ models from the training to the cross-validation session. The most viable explanation would be that the LVQ models are not, in most instances, overtrained. This non-overtraining scenario should enable the LVQ models to maintain their generalization power when applied to the validation sets. The following figure shows the comparative classification rates of these two paradigms.

Figure 6.7: Bar chart comparing the correct classification rates, measured in percentages, of both neural network paradigms when applied to either the training or the validation data set

Observations from Descriptive Statistics

Within the set of 426 training records, there exist particular data patterns and characteristics that are quite interesting. When considering performance in the first-year courses, on average, Commerce students in all three disciplines tend to perform well academically in the quantitative courses - Mathematics and Economics. Their performance in the qualitative (English) courses is somewhat lower than in the quantitative ones. The average first-year GPA, which is currently used as the major admission factor, for all students in the training samples is 78.96%. On the other hand, the average performance of these students in the five core courses is 78.20%, which is not different from their first-year performance. This finding could support the past argument that the high school or first-year college GPA substantially reflects future academic performance, especially in terms of cumulative GPA or third- and fourth-year GPAs (Fowler & Glorfeld, 1981; Touron, 1983; Shaughnessy & Evans, 1986; Eskew & Faley, 1988).

We should also expect consistent patterns between the input and output data across the three categorized groups. In other words, we should expect students who are classified into the first group to possess better past academic records than their cohorts in the second group. At the same time, we should expect the same pattern between the students in the second group and the students in the third group. When considering the average grades of the first-year courses, as well as the first-year GPA, the author found that the above expectation was true for almost every specialization track. However, there is an exception among the students within the Finance with MATH 140 & 141 option. Within this track, the students classified in the third group have higher average grades in all first-year courses and higher first-year GPAs than their counterparts classified in the first or second group. This oddity might result from the fact that there are only two samples in this third group, making the average figures irregularly high.
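The group-wise means and standard deviations referred to here (and tabulated in Appendix I) can be reproduced with a few lines of code. The sketch below, with assumed array names, is illustrative only and is not the statistical software used in the study.

```python
import numpy as np

def group_stats(x, y, small_group_warning=5):
    """Per-group means and standard deviations of each input variable.
    x: (n_students, n_variables) array; y: group label (1, 2 or 3) per student."""
    stats = {}
    for g in np.unique(y):
        members = x[y == g]
        stats[int(g)] = {
            "n": len(members),
            "mean": members.mean(axis=0),
            "sd": members.std(axis=0, ddof=1),
        }
        if len(members) < small_group_warning:
            # Averages from very small groups (e.g. n = 2) can look irregular.
            print(f"group {g}: only {len(members)} cases - interpret with care")
    return stats
```

The warning branch simply flags the kind of tiny group (n = 2) that produced the irregular averages noted above.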
The performance paradox also occurs for the elective English course within the Finance with MATH 100 & 101 option, the MATH 140 course within the Marketing with MATH 140 & 141 option, and the Economics and MATH 100 courses within the Marketing with MATH 100 & 101 option. It seems that the unequal proportion of students in each group is to blame for this paradox, but we cannot argue this firmly, since the paradox does not always appear within other tracks that also have an unequal proportion.

Performance Comparison between Neural Networks and Ordered Probit Model

Unlike the neural networks, the ordered probit model produces only one set of results for each specialization track. Prior studies comparing the performance of neural networks and traditional classification methods adopted different procedures for arriving at a particular number of observed results on which to base the comparison. Both the study by Wilson and Hardgrave and the study by Gorr, Nagin, and Szczypula implemented quite similar procedures in testing and comparing the classification performance of neural networks and the traditional methods. Those studies determined and selected only one particular configuration of a backpropagation neural network model at the beginning stage. That selected model was then run with several sets of re-sampled data. It is, however, clearly shown by the results in the previous section of this study that the best performing neural network of each specialization track possesses a configuration distinct from those of the best performers in the other specialization tracks. In other words, a neural network with a particular configuration would perform optimally on only one or some particular sets of data, but not on every set. Running the same network model with all the different sets of data, therefore, would not create the most optimal result for each of them, and would level off the overall performance. Further, the researchers of both studies argued that the neural network configurations they selected might not represent the fullest prediction capability; any conclusions they made were quite reserved, referring to the lower bound of a neural network's potential performance. The above explanation and argument would justify the procedures adopted in this study regarding the selection of the best performing neural network for each set of data and the comparison between the performance levels of the best performing neural networks and the ordered probit models.

All three classification approaches processed both the training and validation sets. However, the author considers only their performance on the validation sets. The correct classification rates produced by the individual approaches on the validation samples show how powerful their generalization capabilities are. The performance on the training samples would not provide much knowledge about the differences among the prediction capabilities of these three approaches: since we can continuously train a neural network model until it has fitted the entire training data set perfectly, it will always be true that neural networks outperform the ordered probit model on the training data. According to table 5.18 in the previous chapter, the average performance of the best performing backpropagation models is 74.71%, while the average performance of the ordered probit models is 54.41%. The ANOVA test reveals that, although the average performance of the backpropagation models is higher than that of the ordered probit models, the difference is not significant at the 95% confidence level.
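A minimal sketch of how such a one-way ANOVA comparison of correct classification rates can be set up is shown below; the rate arrays are hypothetical placeholders (not the study's figures), and scipy.stats.f_oneway is used purely for illustration rather than the statistical package used in this study.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical correct classification rates (%) on the validation sets,
# one value per specialization track, for the two approaches being compared.
neural_net_rates = np.array([70.0, 80.0, 60.0, 60.0, 90.0, 90.0])   # placeholders
probit_rates     = np.array([55.0, 55.0, 45.0, 40.0, 65.0, 65.0])   # placeholders

# One-way ANOVA: is the mean rate of one approach significantly different
# from that of the other?  f_oneway returns the F-ratio and its p-value.
f_ratio, p_value = f_oneway(neural_net_rates, probit_rates)
print(f"F = {f_ratio:.2f}, p = {p_value:.3f}")
# A p-value below 0.05 (or 0.10) corresponds to significance at the
# 95% (or 90%) confidence level discussed in the text.
```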
However, the corresponding F-ratio of 2.87 is significant at the alpha level of 0.10. We can, thus, still argue that the backpropagation models significantly outperform the ordered probit models with a confidence level of 90%. 112 Unlike the backpropagation models, the best L V Q models outperform the ordered probit models at a much higher confidence level. The average performance of the best performing L V Q models is 84.24%, which is also higher than the average of the ordered probit models. The corresponding F-ratio of 9.48, which is significant at the alpha level of 0.05, proves that the L V Q models strongly and significantly outperform the ordered probit models. The final remarks for the performance comparisons within this study are twofold. First, it is possible that the developed neural network models of both paradigms might still be in the low or medium levels of their potential capacities. As mentioned earlier, there are no standards or rules of thumb that identify which configurations of those neural network paradigms will provide the most optimal results. Although the author has tried developing neural networks with a wide range of configurations, he cannot assure that the existing best performing models are ultimately the best models that can be developed. The best models that were found in this study might possess capabilities that are close to -or still far away from - the greatest capabilities of neural networks for the application under study. Second, the number of validation samples used to test the prediction performance might be too small. This small sample size could increase a chance of misclassification of each method, but, probably, with different degrees. Thus, the results from the performance comparisons within this study might not reflect the ultimate picture of the superiority of one approach over the other approach. 113 Chapter Seven Conclusion This chapter concludes what has been found from this research and how the findings add value to the knowledge of neural network applications. Moreover, it also discusses some limitations that, to some extent, prevent us from making ultimate statements about neural network applications. A l l possible factors and conditions that needed to be fully considered before evaluating and generalizing the results are explained. Finally, further investigation within this area that would result in possible future research studies is also addressed. Potential Contributions to Academic Circles Neural networks have a great potential in enhancing the decision making process, in which a decision-maker is dealing with complicated phenomena. Neural networks are superior in recognizing and handling the complex data patterns, which usually violate statistical assumptions generally required by traditional statistical techniques (Wilson & Hardgrave, 1995). The task of accurately predicting the academic performance of students has shown to be a difficult one. The ordered probit model, utilized in this study, could not develop strong relationships between a set of explanatory variables and a predicted variable. A l l pseudo R 2s produced from the probit models are less than 30%. The results from running several configurations of backpropagation and L V Q neural networks indicate that the backpropagation paradigm is better than the L V Q paradigm in 114 classifying and predicting academic performance. 
The author first hypothesized that the L V Q networks would outperform the backpropagation networks in classifying complex data, and could be used as the alternative prediction technique. However, it is evident from this study that, for the task of solely classifying data of a given set into the right groups, the backpropagation network is a more suitable technique. The hypothesis is also rejected when coming to the prediction of unseen observations. The performance levels of backpropagation models are still higher than those of L V Q models, although some of them are not significantly different from each other at the 95% confidence level. Since there have been only a few studies applying the L V Q neural network to the classification task, the full potential of the L V Q neural network as a predictive model might not have been uncovered yet. Moreover, the small performance gap between backpropagation and L V Q on the validation samples implies that the L V Q approach has the promising potential to be an alternative for the task of predicting complex data. According to the prior studies applying neural networks to predict academic success, it was shown that neural networks did not significantly outperform other traditional techniques. The results from this study, however, provide quite a different story. Both backpropagation and L V Q neural networks haye a higher average correct classification rate than ordered probit model in every specialization track. Although the difference between the performance of backpropagation and ordered probit model is not significant at the confidence level of 95%, we can still make a conclusion with some reservations that backpropagation neural network has significantly higher performance than ordered probit model at the somewhat lower confidence level of 90%. 115 The author has also proposed another procedure for testing the performance of neural networks and comparing it with that of ordered probit model. Instead of first deciding which network configurations would be applied to different sets of data, the author just experimented numerous network configurations with every set of data, and later determined which ones were the most favorable performers. This practice created a lot more chance of finding the best performing neural network model for each individual data set. The author believes that, by comparing the best performing models of one approach to those of the other approach, we can generate a more accurate and appropriate interpretation of the results. Limitations and Conditions Similar to other research studies, this study possesses several limitations that need to be addressed before making any ultimate arguments or conclusions. Those limitations concern the implication of the results and findings to other settings (external validity or generalization), the number of samples in the experiments (internal validity), and the neural network configurations (internal validity). Prior studies of neural network applications indicated that neural networks generally outperform other traditional techniques in various tasks within various domains. The results from this study seem to support those findings of the superiority of neural networks over traditional techniques. As mentioned above, the findings from this study contradict those of other similar studies in the academic success prediction area, and, they 116 should therefore be carefully considered before making any generalization to other settings. 
The findings might be applicable to the settings where correlated factors and environments are relatively similar to those of UBC's B.Com. Program. For example, they might be suitable for undergraduate business programs at other institutions in B.C. or in Canada. Moreover, since this study just covers only the three specializations of Accounting, Finance, and Marketing, it would not be useful and justifiable to apply or to generalize the study's results to other specializations. Since this study was conducted on rather small sample sizes, especially the sets of validation samples, we might not be able to make conclusive statements about the findings. If we can collect more observations to be used for validation, we could make stronger arguments about the classification and prediction capabilities of those approaches. At this point, what we can conclude is that there is a trend that neural networks are better than ordered probit model in classifying and predicting complex data. Further, we can state with confidence that backpropagation models are superior to L V Q models in recognizing patterns of a given set of data, thanks to sufficient numbers of training cases. However, due to the small numbers of validation cases, the conclusion that backpropagation models have greater generalization powers than L V Q models should be made with some cautions. It is possible that none of the neural network configurations employed within this study represent the full classification and prediction capabilities. Gorr and his colleagues believed that there could be more complex models that would improve the prediction 117 performance within their study (Gorr et al., 1994). In this study, the author only determines the variation of two parameters, i.e., number of hidden nodes, and learning rate, that would influence the performance of neural networks. Perhaps there exist other factors the author is not aware of or does not focus on that could improve performance. Moreover, other than realizing what factors are really influential to performance, selecting the right values for those factors is also important, but, at the same time, is hard to achieve. Kohonen argued that to accomplish the most accurate prediction within any tasks to which neural network models are applied depends upon several factors. However, such appropriate values can only be found by trial-and-error, as well as by extensive experience (Kohonen, 1995). There are no cookbooks that help find the right values of those factors easily. There are ongoing experiments and research attempting to improve the learning algorithm of each neural network paradigm. The learning algorithms implemented in this study are just the earlier versions available within the neural network application package. Newly improved versions, as well as newly related techniques, that could increase the prediction accuracy of neural networks are still to come. Deliverables to the B.Com. Program It has been shown, within this study and elsewhere, that neural networks are more powerful than traditional methods. It is, thus, justifiable to utilize neural networks within the student recruitment process of the B.Com. Program. Neural networks would be an 118 effective and powerful technique for predicting ultimate academic performance of U B C Commerce students. By applying neural networks to recognize patterns of past academic records in association with some demographic factors, and to predict future academic success, users (B.Com. 
Program's committees) do not need to greatly concern about the natures of data set they are dealing with. The data set could violate some statistical assumptions generally required by traditional statistical techniques, or could contain missing items or noise. However, these data defects seem not to significantly deteriorate classification and prediction power of neural networks. Given a set of input variables currently used as the admission criteria by the B.Com. Program, neural networks can predict the academic performance with a rather high degree of accuracy, compared to the traditional technique of probit model. These input variables, along with the other two demographic variables, as adopted in this study, might not be the most influential explanatory factors that strongly impact the variation of the academic performance. However, neural networks seem not to have any difficulty in finding the appropriate relationships among them and in coming up with the substantially favorable and accurate results. Fundamentally, these data variables are readily available within UBC's database and are conveniently accessible. There is no need for the B.Com. Program to attempt to acquire other explanatory variables that are more related and influential to the variation of academic success. The costs and efforts that would have to 119 be spent in finding the right set of explanatory factors could outweigh marginal benefits the B.Com. Program might receive from having more accurate results. After the neural network models have been trained, they can be used to predict the academic performance of students. For each specialization track, the best performing model will be selected. This set of models will then be included in the decision support system developed for the undergraduate program. The B.Com. Program can inquire of this system to predict the possible performance of particular students in any particular specialization, given their set of past academic and demographic data. For example, the system will predict how well a student will perform in Accounting, and, using the same set of data, how well he or she will do in Marketing. Suggestion for Future Research This research study could be repeated in the future when more complete data become available. It would be interesting to see whether the results will be similar to or share the same trend as the current results. Another study should also be conducted with the remaining specializations - MIS, Urban Land Economics, Industrial Relations, etc., again, when the number of complete records becomes more substantial. The results from these future studies might strengthen or weaken what the author has found from the first three specializations within this study. This would enable us to make general arguments regarding the prediction of academic performance within the Business Administration discipline using neural networks. 120 Only directly entered or non-transfer students who just finish their first year at U B C are the subjects of this research study. The results from this study should provide some guidelines for conducting further investigations that include second-year and third-year transfer students from other colleges or universities, and mature students with work experience. The admission requirements of these groups of students are quite different from those of the group of first-year U B C applicants (UBC Calendar, 1998). 
These students might be able to substitute some prerequisite courses required at the first year at U B C with equivalent courses the students took at their prior institutions. This substitutability could be an explanation for the missing of some prerequisite courses' grades within the incomplete records. We can utilize neural networks to predict the academic performance of these groups of students. However, we would have to consider the appropriate and valid ways to deal with the missing values, before applying the neural networks. For example, those missing first-year prerequisite courses' grades could be replaced with the grades of corresponding substituted courses. Otherwise, different sets of input variables that correspond to different natures of academic records of these students might have to be implemented. Other than predicting student academic performance into one of three academic standing groups, the backpropagation neural networks can be used to predict the performance in terms of actual grades (with continuous percentage values). A possible focus of the further study could be a prediction of student performance in various disciplines of second-year core Commerce courses, given the input data of past academic records and demographic factors, as have been used in this study. On the other hand, the academic 121 performance of students in those core Commerce courses can be treated as independent variables to predict the average grade of five core specialization courses. It should be very useful to determine how well the backpropagation neural networks will perform when the output variable is continuous rather than categorical. Furthermore, although the data items used in the study are basically from the student database, it would be more interesting to investigate other potential explanatory factors currently not available in the database. For example, related training, work experience, extra-curricular activities, or even psychological factors could possibly be relevant indicators of academic success. To collect these data items, a questionnaire could be distributed to students who are about to enter the program. Finally, other than attempting to predict academic success, the undergraduate program, in cooperation with the career center, can investigate the possibility of using these data to predict future job prospects of students. The models from this further study could help the B.Com. Program and the career center guide their students on which career path they should pursue. Students would have a chance to be fully equipped with the skills necessary for their prospective career before they enter the job market. Concluding Remarks This research study was conducted because of two anticipated benefits. The first and foremost benefit is that the study would provide more insights and knowledge about the application of neural networks to the classification and prediction of data within the 122 management area. The author would like to show that, other than traditional techniques, there are newly emerging methods and technologies that effectively enhance the managerial decision making process. The second benefit is that the findings and results from this study would be useful for a development of the decision support system for the admissions process. This support system can help the B.Com. Program in its student recruitment efforts by predicting the future academic performance of individual applicants. Having this information, the B.Com. 
Program can determine whether they should accept particular applicants. At the same time, the system would also help the B.Com. Program to better advise entering students of which specialization might be the most suitable for them. There are still so many aspects about neural networks and their applications that are not fully understood. The current findings, explanations, and evaluations of characteristics and capabilities of neural networks might be wrong in the future when more insights about them are discovered. For example, despite being recognized by their satisfactory performance in classification and prediction, the backpropagation neural networks have been consistently commented on and questioned by some researchers in terms of their traditional learning algorithms (Wang 1995; Curry & Morgan, 1997). They believe that the gradient descent learning algorithms have some weaknesses and might not be able to create optimal performance. They have suggested new methods to improve the learning algorithms, which, in turn, should increase their performance. From this example, we can see that new effective techniques and methods regarding neural networks and their applications have been continually unveiled. Consequently, even at this point, it might 123 not be easy for anyone to make ultimate conclusions about particular aspects of neural networks, given the current knowledge we have about them. 124 Bibliography Aleksander, Igor. Neural Computing Architectures: The Design of Brain-Like Machines. London, U K : North Oxford Academic Publishers Ltd., 1989. Alspaugh, Carol Ann. "Identification of Some Components of Computer Programming Aptitude," Journal for Research in Mathematics Education. 3(2) (Mar 1972): 89-98. Arlin, Marshall. Analysis of Variance for Educational Research. Unpublished Manuscript, Vancouver, BC: Faculty of Education, University of British Columbia, 1997. Bansal, Arun, Robert J. Kauffman, and Rob R. Weitz. "Comparing the Modeling Performance of Regression and Neural Networks as Data Quality Varies: A Business Value Approach," Journal of Management Information Systems. 10(1) (Summer 1993): 11-32. Brancheau, James C. "Completing Your Masters Thesis in Information Systems: Guidelines and Suggestions," (1995-97), http://www.colorado.edu/infs/icb/isthesis.html. Burke, Laura Ignizio. "Introduction to Artificial Neural Systems for Pattern Recognition," Computers and Operations Research. 18(2) (1991): 211-220. Butcher, D. F., and W. A . Muth. "Predicting Performance in an Introductory Computer Science Course," Communications of the A C M . 28(3) (Mar 1985): 263-268. Campbell, Patricia F., and George P. McCabe. "Predicting the Success of Freshmen in a Computer Science Major," Communications of the A C M . 27(11) (Nov 1984): 1108-1113. Caudill, Maureen. "Using Neural Nets: Representing Knowledge," A l Expert. 4(12) (Dec 1989): 34-41. Caudill, Maureen. "Neural Network Training Tips and Techniques," A l Expert. 6(1) (Jan 1991): 56-62. Chen, S. K. , P. Mangiameli, and D. West. "The Comparative Ability of Self-Organizing Neural Networks to Define Cluster Structure," Omega, International Journal of Management Science. 23(3) (1995): 271-279. Cohen, Jacob, and Patricia Cohen. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. Hillsdale, NJ: Lawrence Erlbaum Associates, 1983. Cronan, Timothy P., Phillip R. Embry, and Steven D. White. 
"Identifying Factors that Influence Performance of Non-Computing Majors in the Business Computer Information Systems Course," Journal of Research on Computing in Education. 21(4) (Summer 1989): 431-443. 125 Curry, B., and P. Morgan. "Neural Networks: A Need for Caution," Omega, International Journal of Management Science. 25(1) (1997): 123-133. Daniel, Wayne W., and James C. Terrell. Business Statistics: For Management and Economics. Boston, M A : Houghton Mifflin Company: 1992. Davis, Gordon Bitter. Writing the Doctoral Dissertation: A Systematic Approach. Hauppauge, N Y : Barron's, 1997. Demuth, Howard, and Mark Beale. Neural Network Toolbox User's Guide. Natick, M A : The MathWorks, 1998. De Wilde, Philippe. Neural Network Models: Theory and Projects. London, U K : Springer-Verlag, 1997. Domer, D. E., and A . E. Johnson, Jr. "Selective Admissions and Academic Success: An Admissions Model for Architecture Students," College and University. 58(1) (Fall 1982): 19-30. Doumas, Anastasia, Konstantinos Mavroudakis, Dimitris Gritzalis, and Sokratis Katsikas. "Design of a Neural Network for Recognition and Classification of Computer Viruses," Computers and Security. 14(5) (1995): 435-448. Eskew, Robert K. , and Robert H. Faley. "Some Determinants of Student Performance in the First College-Level Financial Accounting Course," The Accounting Review. 63(1) (January 1988): 137-147. Evans, Gerald E., Mark G. Simkin. "What Best Predicts Computer Proficiency?," Communications of the A C M . 32(11) (Nov 1989): 1322. Fowler, George C , Louis W. Glorfeld. "Predicting Aptitude in Introductory Computing: A Classification Model," AEDS Journal. 14(2) (Winter 1981): 96-109. Fox, John. Applied Regression Analysis, Linear Models, and Related Methods. Thousand Oaks, C A : Sage, 1997. Gorr, Wilpen L., Daniel Nagin, and Janusz Szczypula. "Comparative Study of Artificial Neural Network and Statistical Models for Predicting Student Grade Point Averages," International Journal of Forecasting. 10 (1994): 17-34. Gramet, Pamela, and Lorraine Terracina. "Qualitative and Quantitative Variables in a Selective Admissions Process," College and University. 63(4) (Summer 1988): 368-373. Green, Brian Patrick, and Jae Hwa Choi. "Assessing the Risk of Management Fraud through Neural Network Technology," Auditing: A Journal of Practice & Theory. 16(1) (Spring 1997): 14-28. 126 Greene, William H . Econometric Analysis. Upper Saddle River, NJ: Prentice Hall, 1997. Gupta, V . K. , J. G. Chen, and M . B. Murtaza. " A Learning Vector Quantization Neural Network Model for the Classification of Industrial Construction Projects," Omega, International Journal of Management Science. 25(6) (1997): 715-727. Gurney, Kevin. An Introduction to Neural Networks. London, U K : U C L Press, 1997. Hart, Anna. "Using Neural Networks for Classification Tasks - Some Experiments on Datasets and Practical Advice," Journal of the Operational Research Society. 43(3) (1992): 215-226. Hiotis, Andre. "Inside a Self-Organizing Map: Two-Dimensional Map for Experimenting with Neural-Network Paradigm," AI Expert. 8(4) (Apr 1993), 38-41. Ho, David Y . F., and John A. Spinks. "Multivariate Prediction of Academic Performance by Hong Kong University Students," Contemporary Educational Psychology. 10 (1985): 249-259. Huang, Zezhen, and Anthony Kuh. " A Combined Self-Organizing Feature Map and Multilayer Perceptron for Isolated Word Recognition," IEEE Transactions on Signal Processing. 40(11) (Nov 1992): 2651-2657. Hruschka, Harald. 
"Determining Market Response Functions by Neural Network Modeling: A Comparison to Econometric Techniques." European Journal of Operational Research. 66 (1993): 27-35. Jain, Bharat A. , and Barin N . Nag. "Artificial Neural Network Models for Pricing Initial Public Offerings," Decision Sciences. 26(3) (1995): 283-299. Jain, Bharat A. , and Barin N . Nag. "Performance Evaluation of Neural Network Decision Models," Journal of Management Information Systems. 14(2) (Fall 1997): 201-216. Kiang, Melody Y. , Uday R. Kulkarni, and Kar Yan Tarn. "Self-Organizing Map Network as an Interactive Clustering Tool - An Application to Group Technology," Decision Support Systems. 15(4) (1995): 351-374. Knight, Kevin. "Connectionist Ideas and Algorithms," Communications of the A C M . 33(11) (Nov 1990): 59-74. Kohonen, Teuvo. "The Self-Organizing Map," Proceeding of the IEEE. 78(9) (Sep 1990): 1464-1480. Kohonen, Teuvo. Self-Organizing Maps. Berlin Heidelberg: Springer-Verlag, 1995. 127 Konvalina, John, Larry Stephens, and Stanley Wileman. "Identifying Factors Influencing Computer Science Aptitude and Achievement," AEDS Journal. 16(2) (Winter 1983): 106-112. Lenard, Mary Jane, Pervaiz Alam, and Gregory R. Madey. "The Application of Neural Networks and a Qualitative Response Model to the Auditor's Going Concern Uncertainty Decision," Decision Sciences. 26(2) (1995): 209-226. L i , Eldon Y. "Artificial Neural Networks and their Business Applications," Information and Management. 27 (1994): 303-313. Lippmann, Richard P. "An Introduction to Computing with Neural Nets," IEEE ASSP Magazine. 4(3) (Apr 1987): 4-22. Markham, Ina S., and Cliff T. Ragsdale. "Combining Neural Networks and Statistical Predictions to Solve the Classification Problem in Discriminant Analysis," Decision Sciences. 26(2) (1995): 229-241. Mazance, Josef A . "Positioning Analysis with Self-Organizing Maps," Cornell Hotel and Restaurant Administration Quarterly. 36(6) (Dec 1995): 80-97. Mazlack, Lawrence J. "Identifying Potential to Acquire Programming Skill ," Communications of the A C M . 23(1) (Jan 1980): 14-17. Mohammad, Yousuf H. J., and Mohammad A. H. Almahmeed. "An Evaluation of Traditional Admission Standards in Predicting Kuwait University Students' Academic Performance," Higher Education. 17(2) (1988): 203-217. Nisbet, Janice, Virgil E. Ruble, and K . Terry Schurr. "Predictors of Academic Success with High Risk College Students," Journal of College Student Personnel. 23(3) (May 1982): 227-235. Oman, Paul W. "Identifying Student Characteristics Influencing Success in Introductory Computer Science Courses," AEDS Journal. 19(2) (Winter/Spring 1986): 226-233. Orwig, Richard E., Hsinchun Chen, and Jay F. Nunamaker, Jr. " A Graphical, Self-Organizing Approach to Classifying Electronic Meeting Output," Journal of the American Society for Information Science. 48(2) (Feb 1997): 157-170. Pascarella, Ernest T., Paul B. Duby, Vernon A. Miller, and Sue P. Rasher. "Preenrollment Variables and Academic performance as Predictors of Freshman Year Persistence, Early Withdrawal, and Stopout Behavior in an Urban, Nonresidential University," Research in Higher Education. 15(4) (1981): 329-349. Patuwo, Eddy, Michael Y . Hu, and Ming S. Hung. "Two-Group Classification Using Neural Networks," Decision Sciences. 24(4) (1993): 825-845. 128 Rao, Valium B., and Hayagriva V . Rao. C++ Neural Networks and Fuzzy Logic. New York, N Y : MIS:Press, 1995. Ritter, H. , and T. Kohonen. "Self-Organizing Semantic Maps," Biological Cybernetics. 61 (1989): 241-254. 
Ritter, Helge, Thomas Martinetz, and Klaus Schulten. Neural Computation and Self-Organizing Maps. Don Mills, ON: Addison-Wesley, 1992. Rogers, Joey. Object-Oriented Neural Networks in C++. Chestnut Hi l l , M A : Academic Press, Inc., 1997. Rumelhart, David E., Bernard Widrow, and Michael A . Lehr. "The Basic Ideas in Neural Networks," Communications of the A C M . 3(3) (Mar 1994): 87-92. Salchenberger, Linda M . , E. Mine Cinar, and Nicholas A . Lash. "Neural Networks: A New Tool for Predicting Thrift Failures," Decision Sciences. 23(4) (1992): 899-916. Sexton, Randall S., Robert E. Dorsey, and John D. Johnson. "Toward Global Optimization of Neural Networks: A Comparison of the Genetic Algorithm and Backpropagation," Decision Support Systems. 22 (1998): 171-185. Shanker, M . , Michael Y . Hu, and Ming S. Hung. "Effect of Data Standardization on Neural Network training," Omega, International Journal of Management Science. 24(4) (1996): 385-397. Shaughnessy, Michael F., and Robert Evans. "Word/World Knowledge: Prediction of College GPA," Psychological Reports. 59 (1986): 1147-1150. Stanbury, W. T., Dan Gardiner, S. W. Hamilton, Erica Mills, Craig Pinder, Bernhard Schwab, Dan Simunic, James Kwong, and James Nevison. Interim Report of the Faculty of Commerce Undergraduate Program Review Committee. Unpublished Manuscript, Vancouver, BC: Faculty of Commerce and Business Administration, University of British Columbia, 1998. StataCorp. Stata Reference Manual: Release 5 Volume 2. College Station, T X : Stata Press, 1997. Subramanian, Venkat, Ming S. Hung, and Michael Y . Hu. "An Experimental Evaluation of Neural Networks for Classification," Computers and Operations Research. 20(7) (1993): 769-782. Tarn, Kar Yan, and Melody Y. Kiang. "Managerial Applications of Neural Networks: The Case of Bank Failure Predictions," Management Science. 38(7) (Jul 1992): 926-947. 129 Tracey, Terence J., William E. Sedlacek, and Russell D. Miars. "Applying Ridge Regression to Admissions Data by Race and Sex," College and University. 58(3) (Spring 1983): 313-317. Touron, Javier. "The Determination of Factors Related to Academic Achievement in the University : Implications for the Selection and Counselling of Students," Higher Education. 12 (1983): 399-410. Venugopal, V. , and W. Baets. "Neural Networks and Statistical Techniques in Marketing Research: A Conceptual Comparison," Marketing Intelligence and Planning. 12(7) (1994): 30-38. Wang, DeLiang. "Pattern Recognition: Neural Networks in Perspective," IEEE Expert. 8 (August 1993): 52-60. . Wang, Shouhong. "The Unpredictability of Standard Back-Propagation Neural Networks in Classification Applications," Management Science. 41(3) (1995): 555-559. . Wang, Shouhong. "An Insight into the Standard Back-Propagation Neural Network Model for Regression Analysis," Omega, International Journal of Management Science. 26(1) (1998): 133-140. Wasserman, Philip D. Neural Computing: Theory and Practice. New York, N Y : Van Nostrand Reinhold, 1989. Wilson, Rick L., and Bi l l C. Hardgrave. "Predicting Graduate Student Success in an M B A Program: Regression versus Classification," Educational and Psychological Measurement. 55(2) (Apr 1995): 186-195. Wong, Bo K. , Thomas A . Bodnovich, and Yakup Selvi. "Neural Network Applications in Business: A Review and Analysis of the Literature (1988-95)," Decision Support Systems. 19(1997): 301-320. Yale, Karia. "Preparing the Right Data Diet for Training Neural Networks," IEEE Spectrum. 34(3) (Mar 1997): 64-66. 
Yoon, Youngohc, George Swales, Jr., and Thomas M . Margavio. " A Comparison of Discriminant Analysis versus Artificial Neural Networks," Journal of the Operational Research Society. 44(1) (1993): 51-60. Young, Abimbola S. "Pre-enrollment Factors and Academic Performance of First-Year Science Students as a Nigerian University: A Multivariate Analysis," Higher Education. 18 (1989): 321-339. 130 Young, John W. "Differential Prediction of College Grades by Gender and by Ethnicity: A Replication Study," Educational and Psychological Management. 54(4) (Winter 1994): 1022-1029. Zahedi, Fatemeh. Intelligent Systems for Business: Expert Systems with Neural Networks. Belmont, C A : Wadsworth, 1993. Zhang, Gioqinang, and Michael Y . Hu. "Neural Network Forecasting of the British Pound/US Dollar Exchange Rate," Omega, International Journal of Management Science. 26(4) (1998): 495-506. , Shaun, "Neural Networks," (1995), http://www.soton.ac.uk/~sni/neuralnet.html. , . The University of British Columbia: 1998/99 Calendar. Vancouver, BC: University of British Columbia, 1998. 131 Appendix I Tables of Results in Detail This appendix illustrates the research results in detail. There are basically three sections within this appendix. The first section consists of tables elaborating the mean and standard deviation values of each input variable by each categorized group. The second section is composed of tables reporting correct classification rates produced by all configurations of both neural network paradigms. The final section provides all detailed figures concerning the A N O V A tests of the differences between the classification and prediction performance of the backpropagation models and that of the L V Q models. The following tables, table 1.1 to table 1.6, show the means and standard deviations of input variables, separated by each categorized group. There are a total of 6 different tables for this section. 
Variable Group 1 (80 -100 %) N = 40 Group 2 (68 - 79 %) N = 74 Group 3 (50 - 67 %) N = 23 Mean SD Mean SD Mean SD Age 18.25 0.5884 18.32 0.5993 18.21 0.5184 First-Year GPA 81.45 4.5739 75.93 4.4363 73.78 3.8489 ECON 100 81.65 6.8931 76.58 7.1539 71.65 5.4657 ENGL 112 73.52 6.7444 70.18 6.9650 70.39 6.4437 Selective ENGL 73.33 6.2936 71.03 6.2329 71.17 7.4994 MATH 140 87.83 7.0924 82.04 8.4860 80.30 8.0534 MATH 141 92.23 6.3468 83.81 6.2329 83.78 7.4994 Gender 0.425 0.5006 0.378 0.4883 0.304 0.4705 Table 1.1: Means and standard deviations of input variables within each categorized group, for the Accounting with Math 140 & 141 option Variable Group 1 (80 -100 %) N = 24 Group 2 (68 - 79 %) N = 30 Group 3 (50 - 67 %) N = 6 Mean SD Mean SD Mean SD Age 18.41 0.6539 18.30 0.4661 18.83 0.4083 First-Year GPA 81.58 4.5675 76.67 4.8162 72.33 3.5024 ECON 100 84.08 7.0890 77.87 6.9368 72.17 4.0208 ENGL 112 72.38 6.4391 70.70 6.6132 69.17 6.0470 Selective ENGL 72.33 6.5120 69.50 6.4260 67.00 2.6833 MATH 100 86.67 8.8252 81.50 8.5167 74.83 9.9683 MATH 101 83.38 10.4082 78.87 10.8810 69.83 8.9536 Gender 0.333 0.4815 0.600 0.4983 0.500 0.5477 Table 1.2: Means and standard deviations of input variables within each categorized group, for the Accounting with Math 100 & 101 option 132 Variable Group 1 (80 -100 %) N = 42 Group 2 (68 - 79 %) N = 49 Group 3 (50 - 67 %) N = 2 Mean SD Mean SD Mean SD Age 18.45 0.7392 18.69 1.0449 18.00 0.0000 First-Year GPA 79.86 4.4314 76.20 5.9895 82.00 1.4142 ECON 100 81.81 7.1984 75.37 7.7800 78.50 3.5355 ENGL 112 72.55 7.5327 71.98 9.1503 73.00 9.8995 Selective ENGL 73.86 7.4263 71.20 7.3427 75.00 1.4142 MATH 140 84.88 10.0855 81.55 11.6709 90.50 2.8284 MATH 141 88.05 9.3677 85.00 9.5241 90.50 3.5355 Gender 0.595 0.4968 0.551 0.5025 0.000 0.0000 Table 1.3: Means and standard deviations of input variables within each categorized group, for the Finance with Math 140 & 141 option Variable Group 1 (80 -100 %) N = 26 Group 2 (68 - 79 %) N = 24 Group 3 (50 - 67 %) N = 2 Mean SD Mean SD Mean SD Age 18.50 0.9055 18.50 0.7802 18.00 0.0000 First-Year GPA 80.35 5.1064 76.54 3.3360 72.50 0.7071 ECON 100 82.73 8.1366 77.83 5.9247 75.00 0.0000 ENGL 112 71.42 6.3509 70.63 6.1560 66.50 4.9498 Selective ENGL 68.62 7.0941 67.13 7.3147 72.00 4.2426 MATH 100 85.08 7.3752 81.50 9.5871 72.50 10.6066 MATH 101 83.54 9.9488 81.92 8.1876 78.50 3.5355 Gender 0.346 0.4852 0.45.8 0.5090 1.000 0.0000 Table 1.4: Means and standard deviations of input variables within each categorized group, for the Finance with Math 100 & 101 option Variable Group 1 (80 -100 %) N = 6 Group 2 (68 - 79 %) N = 70 Group 3 (50 - 67 %) N = 3 Mean SD Mean SD Mean SD Age 18.33 0.5164 18.47 0.7750 19.00 1.0000 First-Year GPA 78.67 3.5024 74.89 3.9619 73.33 0.5774 ECON 100 78.17 5.9805 71.97 6.5739 64.33 7.7675 ENGL 112 72.67 6.1210 71.46 6.4194 69.33 2.0817 Selective ENGL 77.17 10.2258 72.26 7.0929 67.33 1.1547 MATH 140 71.67 13.2313 78.91 9.5762 83.00 6.2450 MATH 141 82.67 7.1181 81.11 10.3189 81.00 9.6437 Gender 0.333 0.5164 0.343 0.4781 0.333 0.5774 Table 1.5: Means and standard deviations of input variables within each categorized group, for the Marketing with Math 140 & 141 option 133 V a r i a b l e G r o u p 1 (80 - 1 0 0 % ) N = 2 G r o u p 2 (68 - 79 % ) N = 13 G r o u p 3 (50 - 67 % ) N = 0 M e a n S D M e a n S D M e a n S D Age 18.80 1.1314 18.32 0.5872 N/A N/A First-Year GPA 76.05 2.7577 74.05 5.2705 N/A N/A ECON 100 71.50 3.5355 76.54 8.0789 N/A N/A ENGL 112 78.00 0.0000 68.85 3.4844 N/A N/A Selective ENGL 77.00 1.1412 
68.69 6.3559 N/A N/A MATH 100 76.00 8.4853 77.85 8.5327 N/A N/A MATH 101 73.50 12.0208 72.15 8.5230 N/A N/A Gender 0.000 0.0000 0.615 0.5064 N/A N/A Table 1.6: Means and standard deviations of input variables within each categorized group, for the Marketing with Math 100 & 101 option The following tables (table 1.7 to table 1.30) report the correct classification rates of all possible neural network configurations. In each cell, the top figure represents the number, of correct classified cases and the bottom figure is the calculated percentage of correct classification. The tables report the figures of both aggregate performance and performance in each group. There are a total of 24 different tables for this section. H i d d e n N o d e s L e a r n i n g R a t e 0.1 0.5 0.9 G l G 2 G 3 T o t a l G l G 2 G 3 T o t a l G l G 2 G 3 T o t a l 4 24.40 61% 63.80 86% 11.20 49% 99.40 73% 24.40 61% 63.80 86% 11.20 49% 99.40 73% 24.60 62% 67.00 91% 12.20 53% 103.80 76% 6 23.40 59% 61.40 83% 10.80 47% 95.60 70% 26.20 66% 69.40 94% 10.40 45% 106.00 77% 31.60 79% 72.00 97% 13.80 60% 117.40 86% 8 24.60 62% 67.80 92% 13.00 57% 105.40 77% 31.80 80% 68.80 93% 15.00 65% 115.60 84% 28.40 71% 72.40 98% 12.00 52% 112.80 82% 10 35.80 90% 69.60 94% 12.40 54% 117.80 86% 36.00 90% 56.20 76% 13.00 57% 105.20 77% 31.20 78% 67.80 92% 12.40 54% 111.40 81% 12 28.80 72% 68.80 93% 13.40 58% 111.00 81% 27.00 68% 71.00 96% 11.60 50% 109.60 80% 26.60 67% 69.40 94% 12.40 54% 108.40 79% 14 21.80 55% 70.40 95% 9.60 42% 101.80 74% 21.60 54% 73.00 99% 10.20 44% 104.80 76% 23.20 58% 63.60 86% 7.80 34% 94.60 69% 16 29.00 73% 71.40 96% 11.00 48% 111.40 81% 18.40 46% 64.60 87% 4.40 19% 87.40 64% 29.60 74% 72.00 97% 14.80 64% 116.40 85% Table 1.7: Classification and prediction performance of the backpropagation models on the training data set of the Accounting with Math 140 & 141 option 134 H i d d e n N o d e s L e a r n i n g R a t e 0.1 0 .5 0 .9 G l G 2 G 3 T o t a l G l G 2 G 3 T o t a l G l G 2 G 3 T o t a l 4 1.40 35% 7.60 69% 0.00 0% 9.00 47% 1.40 35% .7.60. 
69% 0:00 0% 9.00 47% 1.40 35% 7.20 65% 1.00 25% 9.60 51% 6 1.40 35% 7.60 69% 0.80 20% 9.80 52% 1.60 40% 7.60 69% 0.60 15% 9.80 52% 1.40 35% 8.00 73% 0.00 0% 9.40 49% 8 1.00 25% 9.00 82% 0.60 15% 10.60 56% 1.40 35% 8.00 73% 0.80 20% 10.20 54% 1.00 25% 8.00 73% 0.60 15% 9.60 51% 10 2:00 50% 7.40 67% 0.60 15% 10.00 53% 1.80 45% 7.40 67% 0.60 15% 9.80 52% 2.00 50% 7.20 65% 0.20 5% 9.40 49% 12 1.00 25% 7.20 65% 0.80 20% 9.00 47% 1.00 25% 9.00 82% 1.00 25% 11.00 58% 1.40 35% 8.60 78% 0.40 10% 10.40 55% 14 1.00 25% 8.80 80% 0.40 10% 10.20 54% 1.20 30% 9.40 85% 0.60 15% 11.20 59% 1.20 30% 7.60 69% 0.20 5% 9.00 47% 16 1.20 30% 8.80 80% 0.60 15% 10.60 56% 1.20 30% 8.60 78% 0.00 0% 9.80 52% 0.80 20% 9.00 82% 0.40 10% 10.20 54% Table 1.8: Classification and prediction performance of the backpropagation models on the validation data set of the Accounting with Math 140 & 141 option H i d d e n N o d e s L e a r n i n g R a t e 0 .1 0 .5 0.9 G l G 2 G 3 T o t a l G l G 2 G 3 T o t a l G l G 2 G 3 T o t a l 4 17.60 44% 36.40 49% 8.00 35% 62.00 45% 13.40 34% 36.20 49% 6.80 30% 56.40 41% 11.40 29% 36.40 49% 5.80 25% 53.60 39% 6 15.20 38% 41.40 56% 7.60 33% 64.20 47% 10.00 25% 34.00 46% 9.80 43% 53.80 39% 11.40 29% 38.40 52% 7.40 32% 57.20 42% 8 14.60 37% 37.00 50% 6.80 30% 58.40 43% 6.20 16% 35.60 48% 5.80 25% 47.60 35% 14.80 37% 30.40 41% 4.60 20% 49.80 36% 10 8.60 22% 51.60 70% 3.80 17% 64.00 47% 10.00 25% 40.40 55% 5.00 22% 55.40 40% 4.80 12% 48.20 65% 4.80 21% 57.80 42% 12 15.20 38% 37.60 51% 5.40 23% 58.20 42% 9.80 25% 37.00 50% 4.00 17% 50.80 37% 10.40 26% 34.80 47% 10.00 43% 55.20 40% 14 13.00 33% 40.20 54% 2.00 9% 55.20 40% 10.40 26% 40.20 54% 6.00 26% 56.60 41% 14.40 36% 33.20 45% 5.00 22% 52.60 38% 16 11.60 29% 42.00 57% 2.20 10% 55.80 41% 8.20 21% 35.80 48% 4.20 18% 48.20 35% 11.80 30% 36.40 49% 6.20 . 
27% 54.40 40% Table 1.9: Classification and prediction performance of the learning vector quantization models on the training data set of the Accounting with Math 140 & 141 option 135 H i d d e n N o d e s L e a r n i n g R a t e 0.1 0 .5 0.9 G 1 G2 G3 T o t a l G l G2 G3 T o t a l G 1 G2 G3 T o t a l 4 1.40 35% 5.80 53% 0.40 10% 7.60 40% 1.20 30% 5.40 49% 1.20 30% 7.80 41% 1.60 40% 5.20 47% 0.60 15% 7.40 39% 6 0.80 20% 4.80 44% 1.60 40% 7.20 38% 1.20 30% 5.60 51% 1.20 30% 8.00 42% 1.20 30% 4.60 42% 1.20 30% 7.00 37% 8 1.20 30% 5.80 53% 1.60 40% 8.60 45% 0.80 20% 6.40 58% 1.40 35% 8.60 45% 1.00 25% 4.80 44% 0.60 15% 6.40 34% 10 1.00 25% 7.80 71% 0.60 15% 9.40 49% 0.80 20% 7.00 64% 1.00 25% 8.80 46% 0.60 15% 6.80 62% 0.80 20% 8.20 43% 12 0.80 20% 4.40 40% 0.60 15% 5.80 31% 0.20 5% 6.80 62% 1.40 35% 8.40 44% 0.60 15% 4.80 44% 0.80 20% 6.20 33% 14 1.00 25% 5.40 49% 0.60 15% 7.00 37% 1.20 30% 6.00 55% 0.80 20% 8.00 42% 1.40 35% 6.40 58% 1.20 30% 9.00 47% 16 0.80 20% 7.60 69% 0.00 0% 8.40 44% 0.80 20% 6.20 56% 1.00 25% 8.00 42% 1.20 30% 5.80 53% 1.00 25% 8.00 42% Table 1.1 models ( 0: Classification and prediction performance of the learning vector quantization m the validation data set of the Accounting with Math 140 & 141 option H i d d e n N o d e s L e a r n i n g R a t e 0.1 0.5 0.9 G 1 G2 G3 T o t a l G l G2 G3 T o t a l G l G2 G3 T o t a l 4 15.60 65% 28.80 96% 1.40 23% 45.80 76% 15.60 65% 28.80 96% 1.40 23% 45.80 76% 15.60 65% 28.80 96% 1.40 23% 45.80 76% 6 16.60 69% 28.40 95% 2.40 40% 47.40 79% 16.80 70% 19.20 64% 0.80 13% 36.80 61% 18.60 78% 27.80 93% 4.60 77% 51.00 85% 8 22.00 92% 28.00 93% 3.60 60% 53.60 89% 17.40 73% 26.80 89% 2.60 43% 46.80 78% 17.00 71% 27.00 90% 3.00 50% 47.00 78% 10 15.20 63% 27.60 92% 2.40 40% 45.20 75% 17.80 74% 28.60 95% 4.40 73% 50.80 85% 21.60 90% 28.60 95% 4.60 77% 54.80 91% 12 21.20 88% 27.40 91% 3.20 53% 51.80 86% 21.20 88% 27.40 91% 3.20 53% 51.80 86% 19.20 80% 27.40 91% 3.80 63% 50.40 84% 14 23.20 97% 29.60 99% 4.00 67% 56.80 95% 23.00 96% 29.20 97% 3.80 63% 56.00 93% 20.00 83% 29.20 97% 4.60 77% 53.80 90% 16 22.80 95% 28.00 93% 3.20 53% 54.00 90% 21.40 89% 22.80 76% 2.80 47% 47.00 78% 20.40 85% 25.00 83% 4.20 70% 49.60 83% Table 1.11: Classification and prediction performance of the backpropagation models on the training data set of the Accounting with Math 100 & 101 option 136 H i d d e n N o d e s L e a r n i n g R a t e 0 .1 (1 .5 0 .9 G 1 G2 G3 T o t a l G 1 G2 G3 T o t a l G 1 G2 G3 T o t a l 4 2.40 60% 1.80 90% N/A N/A 4.20 70% 2.40 60% 1.80 90% N/A N/A 4.20 70% 2.40 60% 1.80 90% N/A N/A 4.20 70% 6 2.20 55% 1.20 60% N/A N/A 3.40 57% 2.60 65% 0.60 30% N/A N/A 3.20 53% 2.60 65% 1.00 50% N/A N/A 3.60 60% 8 2.40 60% 1.40 70% N/A N/A 3.80 63% 2.40 60% 1.40 70% N/A N/A 3.80 63% 2.00 50% 1.40 70% N/A N/A 3.40 57% 10 1.80 45% 1.60 80% N/A N/A 3.40 57% 2.40 60% 1.40 70% N/A N/A 3.80 63% 2.80 70% 0.40 20% N/A N/A 3.20 53% 12 3.20 80% 0.80 40% N/A N/A 4.00 67% 3.20 80% 0.80 40% N/A N/A 4.00 67% 2.40 60% 1.60 80% N/A N/A 4.00 67% 14 2.80 70% 1.40 70% N/A N/A 4.20 70% 2.60 65% 1.40 70% N/A N/A 4.00 67% 2.00 50% 1.60 80% N/A N/A 3.60 60% 16 2.80 70% 0.80 40% N/A N/A 3.60 60% 3.00 75% 1.80 90% N/A N/A 4.80 80% 2.20 55% 0.80 40% N/A N/A 3.00 50% Table 1.12: Classification and prediction performance of the backpropagation models on the validation data set of the Accounting with Math 100 & 101 option H i d d e n N o d e s L e a r n i n g R a t e 0.1 0.5 0.9 G l G2 G3 T o t a l G l G2 G3 T o t a l G l G2 G3 T o t a l 4 5.60 23% 16.60 55% 1.60 27% 23.80 40% 8.20 
34% 16.20 54% 1.20 20% 25.60 43% 5.80 24% 13.60 45% 2.80 47% 22.20 37% 6 10.00 42% 14.40 48% 1.60 27% 26.00 43% 7.40 31% 14.00 47% 1.00 17% 22.40 37% 7.80 33% 18.20 61% 1.40 23% 27.40 46% 8 11.00 46% 16.20 54% 1.20 20% 28.40 47% 9.80 41% 16.80 56% 1.20 20% 27.80 46% 10.80 45% 17.20 57% 0.80 13% 28.80 48% 10 10.00 42% 18.00 60% 0.40 7% 28.40 47% 10.40 43% 13.60 45% 0.60 10% 24.60 41% 9.80 41% 11.40 38% 0.00 0% 21.20 35% 12 11.80 49% 16.20 54% 1.00 17% 29.00 48% 5.00 21% 17.60 59% 0.20 3% 22.80 38% 9.80 41% 11.80 39% 0.80 13% 22.40 37% 14 15.00 63% 16.80 56% 0.40 7% 32.20 54% 8.00 33% 17.00 57% 0.40 7% 25.40 42% 8.80 37% 12.00 40% 1.20 20% 22.00 37% 16 12.20 51% 18.80 63% 0.00 0% 31.00 52% 9.60 40% 16.40 55% 0.60 10% 26.60 44% 8.00 33% 16.60 55% 1.00 17% 25.60 43% Table 1.13: Classification and prediction performance of the learning vector quantization models on the training data set of the Accounting with Math 100 & 101 option 137 H i d d e n N o d e s L e a r n i n g R a t e 0 .1 1) .5 0 .9 G 1 G 2 G 3 T o t a l G 1 G 2 G 3 T o t a l G 1 G 2 G 3 T o t a l 4 0.60 15% 0.40 20% N/A N/A 1.00 17% 2.00 50% 1.60 80% N/A N/A 3.60 60% 0.80 20% 0.20 10% N/A N/A 1.00 17% 6 1.80 45% 0.80 40% N/A N/A 2.60 43% 1.00 25% 0.60 30% N/A N/A 1.60 27% 2.20 55% 1.00 50% N/A N/A 3.20 53% 8 1.40 35% 0.80 40% N/A N/A 2.20 37% 1.00 25% 0.40 20% N/A N/A 1.40 23% 2.00 50% 1.40 70% N/A N/A 3.40 57% 10 1.80 45% 0.40 20% N/A N/A 2.20 37% 1.40 35% 1.40 70% N/A N/A 2.80 47% 1.40 35% 0.40 20% N/A N/A 1.80 30% 12 2.20 55% 0.00 0% N/A N/A 2.20 37% 1.00 25% 0.00 0% N/A N/A 1.00 17% 1.40 35% 1.00 50% N/A N/A 2.40 40% 14 2.20 55% 0.40 20% N/A N/A 2.60 43% 1.60 40% 0.80 40% N/A N/A 2.40 40% 1.60 40% 1.60 80% N/A N/A 3.20 53% 16 1.80 45% 0.00 0% N/A N/A 1.80 30% 1.60 40% 0.80 40% N/A •N/A 2.40 40% 1.60 40% 0.40 20% N/A N/A 2.00 33% Table 1.14: Classification and prediction performance of the learning vector quantization models on the validation data set of the Accounting with Math 100 & 101 option H i d d e n N o d e s L e a r n i n g R a t e (1 .1 0 .5 C .9 G l G 2 G 3 T o t a l G l G 2 G 3 T o t a l G l G 2 G 3 T o t a l 4 34.60 82% 41.20 84% 0.20 10% 76.00 82% 34.60 82% 41.20 84% 0.20 10% 76.00 82% 34.60 82% 41.20 84% 0.20 10% 76.00 82% 6 30.00 71% 38.60 79% 0.20 10% 68.80 74% 19.60 47% 42.40 87% 0.20 10% 62.20 67% 30.00 71% 38.60 79% 0.20 10% 68.80 74% 8 27.40 65% 44.60 91% 0.20 10% 72.20 78% 28.00 67% 41.40 84% 0.20 10% 69.60 75% 27.40 65% 44.60 91% 0.20 10% 72.20 78% 10 34.00 81% 36.80 75% 0.00 0% 70.80 76% 37.80 90% 45.60 93% 0.00 0% 83.40 90% 34.00 81% 36.80 75% 0.00 0% 70.80 76% 12 32.60 78% 47.20 96% 0.20 10% 80.00 86% 32.60 78% 47.20 96% 0.20 10% 80.00 86% 36.80 88% 40.00 82% 0.40 20% 77.20 83% 14 39.00 93% 46.40 95% 0.40 20% 85.80 92% 36.40 87% 42.60 87% 0.40 20% 79.40 85% 36.00 86% 44.20 90% 0.60 30% 80.80 87% 16 37.40 89% 45.80 93%. 
0.00 0% 83.20 89% 29.40 70% 35.00 71% 0.00 0% 64.40 69% 33.20 79% 40.00 82% 0.00 0% 73.20 79% Table 1.15: Classification and prediction performance of the backpropagation models on the training data set of the Finance with Math 140 & 141 option 138 H i d d e n N o d e s L e a r n i n g R a t e 0 .1 G .5 C .9 G l G2 G3 T o t a l G l G2 G3 T o t a l G 1 G2 G3 T o t a l 4 4.40 88% 4.00 50% 0.00 0% 8.40 53% 4.40 88% 4.00 50% 0.00 0% 8.40 53% 4.40 88% 4.00 50% 0.00 0% 8.40 53% 6 2.00 40% 4.00 50% 0.00 0% 6.00 38% 1.20 24% 5.00 63% 0.20 7% 6.40 40% 2.00 40% 4.00 50% 0.00 0% 6.00 38% 8 2.00 40% 4.00 50% 0.20 7% 6.20 39% 3.00 60% 5.20 65% 0.00 0% 8.20 51% 2.00 40% 4.00 50% 0.20 7% 6.20 39% 10 3.00 60% 4.00 50% 0.00 0% 7.00 44% 3.20 64% 5.60 70% 0.00 0% 8.80 55% 3.00 60% 4.00 50% 0.00 0% 7.00 44% 12 3.20 64% 6.00 75% 0.20 7% 9.40 59% 3.20 64% 6.00 75% 0.20 7% 9.40 59% 3.60 72% 3.40 43% 0.00 0% 7.00 44% 14 3.40 68% 5.80 73% 0.00 0% 9.20 58% 2.80 56% 6.00 75% 0.20 7% 9.00 56% 4.00 80% 4.60 58% 0.00 0% 8.60 54% 16 3.60 72% 5.20 65% 0.00 0% 8.80 55% 2.40 48% 5.40 68% 0.00 0% 7.80 49% 3.60 72% 4.80 60% 0.00 0% 8.40 53% Table 1.16: Classification and prediction performance the validation data set of the Finance with Math 140 & of the backpropagation models on 141 option H i d d e n N o d e s L e a r n i n g R a t e 0.1 0 .5 0.9 G 1 G2 G3 T o t a l G 1 G2 G3 T o t a l G 1 G2 G3 T o t a l 4 18.60 44% 23.80 49% 0.20 10% 42.60 46% 6.00 14% 28.20 58% 0.40 20% 34.60 37% 13.20 31% 26.60 54% 0.40 20% 40.20 43% 6 16.60 40% 22.60 46% 0.00 0% 39.20 42% 9.20 22% 32.60 67% 0.20 10% 42.00 45% 9.20 22% 24.60 50% 0.60 30% 34.40 37% 8 15.00 36% 23.00 47% 0.00 0% 38.00 41% 17.80 42% 23.60 48% 0.00 0% 41.40 45% 19.80 47% 20.20 41% 0.00 0% 40.00 43% 10 24.20 58% 20.00 41% 0.00 0% 44.20 48% 13.40 32% 25.00 51% 0.40 20% 38.80 42% 15.80 38% 27.40 56% 0.00 0% 43.20 46% 12 26.00 62% 23.80 49% 0.00 0% 49.80 54% 17.20 41% 21.80 44% 0.00 0% 39.00 42% 18.60 44% 22.40 46% 0.40 20% 41.40 45% 14 26.80 64% 24.80 51% 0.00 0% 51.60 55% 15.40 37% 27.80 57% 0.00 0% 43.20 46% 17.80 42% 24.00 49% 0.20 10% 42.00 45% 16 30.00 71% 23.60 48% 0.00 0% 53.60 58% 14.00 33% 27.00 55% 0.20 10% 41.20 44% 18.60 44% 27.60 56% 0.20 10% 46.40 50% Table 1.17: Classification and prediction performance of the learning vector quantization models on the training data set of the Finance with Math 140 & 141 option 139 H i d d e n N o d e s L e a r n i n g R a t e 0.1 0 .5 0.9 G l G2 G3 T o t a l G l G2 G3 T o t a l G l G2 G3 T o t a l 4 2.40 48% 4.20 53% 1.00 33% 7.60 48% 0.80 16% 3.60 45% 1.40 47% 5.80 36% 1.20 24% 4.60 58% 0.20 7% 6.00 38% 6 2.00 40% 3.40 43% 0.60 20% 6.00 38% 1.00 20% 5.60 70% 0.20 7% 6.80 43% 1.00 20% 3.40 43% 0.60 20% 5.00 31% 8 2.40 48% 3.60 45% 0.60 20% 6.60 41% 1.80 36% 5.60 70% 0.20 7% 7.60 48% 1.80 36% 2.00 25% 0.40 13% 4.20 26% 10 2.80 56% 4.80 60% 0.60 20% 8.20 51% 1.80 36% 4.60 58% 0.20 7% 6.60 41% 1.80 36% 4.20 53% 0.00 0% 6.00 38% 12 3.00 60% 4.60 58% 0.40 13% 8.00 50% 2.00 40% 4.00 50% 0.60 20% 6.60 41% 2.80 56% 3.20 40% 0.40 13% 6.40 40% 14 2.80 56% 5.40 68% 0.20 7% 8.40 53% 1.20 24% 5.20 65% 0.00 0% 6.40 40% 2.00 40% 3.80 48% 0.40 13% 6.20 39% 16 3.00 60% 5.40 68% 0.20 7% 8.60 54% 1.80 36% 4.00 50% 0.00 0% 5.80 36% 2.80 56% 5.00 63% 0.20 7% 8.00 50% Table IJ models ( 8: Classification and prediction performance of the learning vector quantization )n the validation data set of the Finance with Math 140 & 141 option H i d d e n N o d e s L e a r n i n g R a t e 0.1 0 .5 0.9 G l G2 G3 T o t a l G 1 G2 G3 T o t a l G 1 G2 G3 T o t a 
l 4 23.60 91% 18.00 75% 0.80 40% 42.40 82% 23.60 91% 18.00 75% 0.80 40% 42.40 82% 23.60 91% 18.00 75% 0.80 40% 42.40 82% 6 24.00 92% 12.80 53% 0.80 40% 37.60 72% 18.40 71% 17.20 72% 0.40 20% 36.00 69% 24.00 92% 12.80 53% 0.80 40% 37.60 72% 8 19.40 75% 21.20 88% 0.40 20% 41.00 79% 18.60 72% 21.20 88% 1.00 50% 41.60 80% 19.40 75% 21.20 88% 0.40 20% 41.00 79% 10 25.00 96% 19.20 80% 1.00 50% 45.20 87% 23.80 92% 16.20 68% 0.60 30% 40.60 78% 25.00 96% 19.20 80% 1.00 50% 45.20 87% 12 17.00 65% 22.80 95% 0.40 20% 40.20 77% 17.00 65% 22.80 95% 0.40 20% 40.20 77% 19.60 75% 23.20 97% 1.00 50% 43.80 84% 14 25.60 98% 23.80 99% 1.20 60% 50.60 97% 24.20 93% 23.20 97% 1.60 80% 49.00 94% 20.60 79% 23.80 99% 1.60 80% 46.00 88% 16 24.80 95% 23.60 98% 0.60 30% 49.00 94% 24.80 95% 15.80 66% 1.00 50% 41.60 80% 24.60 95% 23.20 97% 1.20 60% 49.00 94% Table 1.19: Classification and prediction performance of the backpropagation models on the training data set of the Finance with Math 100 & 101 option 140 H i d d e n N o d e s L e a r n i n g R a t e 0 .1 0 .5 0.9 G 1 G 2 G 3 T o t a l G l G 2 G 3 T o t a l G l G 2 G 3 T o t a l 4 1.60 80% 0.20 10% N/A N/A 1.80 45% 1.60 80% 0.20 10% N/A N/A 1.80 45% 1.60 80% 0.20 10% N/A N/A 1.80 45% 6 1.40 70% 0.60 30% N/A N/A 2.00 50% 1.20 60% 1.00 50% N/A N/A 2.20 55% 1.40 70% 0.60 30% N/A N/A 2.00 50% 8 1.60 80% 0.80 40% N/A N/A 2.40 60% 1.00 50% 0.60 30% N/A N/A 1.60 40% 1.60 80% 0.80 40% N/A N/A 2.40 60% 10 0.80 40% 0.40 20% N/A N/A 1.20 30% 1.20 60% 0.60 30% N/A N/A 1.80 45% 0.80 40% 0.40 20% N/A N/A 1.20 30% 12 1.40 70% 1.20 60% N/A N/A 2.60 65% 1.40 70% 1.20 60% N/A N/A 2.60 65% 1.20 60% 0.80 40% N/A N/A 2.00 50% 14 1.40 70% 0.20 10% N/A N/A 1.60 40% 1.40 70% 0.80 40% N/A N/A 2.20 55% 1.00 50% 0.60 30% N/A N/A 1.60 40% 16 1.20 60% 1.00 50% N/A N/A 2.20 55% 1.60 80% 0.80 40% N/A N/A 2.40 60% 1.00 50% 0.80 40% N/A N/A 1.80 45% Table 1.20: Classification and prediction performance of the backpropagation models on the validation data set of the Finance with Math 100 & 101 option H i d d e n N o d e s L e a r n i n g R a t e 0 .1 0 .5 0.9 G l G 2 G 3 T o t a l G l G 2 G 3 T o t a l G 1 G 2 G 3 T o t a l 4 14.20 55% 4.80 20% 0.60 30% 19.60 38% 15.80 61% 5.60 23% 0.40 20% 21.80 42% 17.00 65% 3.00 13% 0.60 30% 20.60 40% 6 12.80 49% 5.60 23% 0.40 20% 18.80 36% 13.40 52% 10.40 43% 0.60 30% 24.40 47% 12.20 47% 8.20 34% 0.40 20% 20.80 40% 8 11.00 42% 9.20 38% 0.80 40% 21.00 40% 13.20 51% 8.00 33% 0.20 10% 21.40 41% 9.00 35% 10.20 43% 0.00 0% 19.20 37% 10 13.60 52% 11.00 46% 0.60 30% 25.20 48% 13.60 52% 9.40 39% 0.20 10% 23.20 45% 11.60 45% 9.40 39% 0.00 0% 21.00 40% 12 14.00 54% 9.80 41% 0.00 0% 23.80 46% 12.00 46% 11.60 48% 0.40 20% 24.00 46% 12.40 48% 9.00 38% 0.20 10% 21.60 42% 14 13.40 52% 10.20 43% 0.20 10% 23.80 46% 11.80 45% 12.00 50% 0.40 20% 24.20 47% 14.20 55% 8.00 33% 0.20 10% 22.40 43% 16 15.00 58% 9.00 38% 0.00 0% 24.00 46% 14.60 56% 11.00 46% 0.20 10% 25.80 50% 13.60 52% 10.60 44% 0.60 30% 24.80 48% Table 1.21: Classification and prediction performance of the learning vector quantization models on the training data set of the Finance with Math 100 & 101 option 141 H i d d e n N o d e s L e a r n i n g R a t e 0.1 0.5 0.9 G l G2 G3 T o t a l G l G2 G3 T o t a l G l G2 G3 T o t a l 4 1.20 60% 0.60 30% N/A N/A 1.80 45% 1.20 60% 0.20 10% N/A N/A 1.40 35% 1.20 60% 0.60 30% N/A N/A 1.80 45% 6 0.40 20% 0.40 20% N/A N/A 0.80 20% 0.80 40% 1.00 50% N/A N/A 1.80 45% 0.40 20% 0.80 40% N/A N/A 1.20 30% 8 0.60 30% 1.00 50% N/A N/A 1.60 40% 0.80 40% 0.80 40% N/A N/A 1.60 40% 0.20 10% 
0.20 10% N/A N/A 0.40 10% 10 0.60 30% 1.20 60% N/A N/A 1.80 45% 0.80 40% 1.00 50% N/A N/A 1.80 45% 1.00 50% 0.80 40% N/A N/A 1.80 45% 12 0.60 30% 1.40 70% N/A N/A 2.00 50% 0.80 40% 1.40 70% N/A N/A 2.20 55% 1.00 50% 0.80 40% N/A N/A 1.80 45% 14 1.00 50% 1.80 90% N/A N/A 2.80 70% 0.40 20% 1.00 50% N/A N/A 1.40 35% 1.40 70% 0.60 30% N/A N/A 2.00 50% 16 0.40 20% 1.20 60% N/A N/A 1.60 40% 0.60 30% 1.20 60% N/A N/A 1.80 45% 1.40 70% 0.40 20% N/A N/A 1.80 45% Table 1.22: Classification and prediction performance of the learning vector quantization models on the validation data set of the Finance with Math 100 & 101 option H i d d e n N o d e s L e a r n i n g R a t e 0.1 0.5 0.9 G l G2 G3 T o t a l G 1 G2 G3 T o t a l G 1 G2 G3 T o t a l 4 1.20 20% 69.80 100% 0.80 27% 71.80 9 1 % 1.20 20% 69.80 100% 0.80 27% 71.80 9 1 % 1.20 20% 69.80 100% 0.80 27% 71.80 9 1 % 6 1.20 20% 68.40 98% 0.40 13% 70.00 89% 2.20 37% 69.20 99% 0.80 27% 72.20 9 1 % 1.60 27% 64.00 9 1 % 1.00 33% 66.60 84% 8 4.60 77% 70.00 100% 1.40 47% 76.00 96% 3.20 53% 70.00 100% 2.00 67% 75.20 95% 4.00 67% 60.40 86% 0.60 20% 65.00 82% 10 2.00 33% 70.00 100% 0.80 27% 72.80 92% 4.00 67% 70.00 100% 1.60 53% 75.60 96% 4.20 70% 70.00 100% 1.80 60% 76.00 96% 12 2.80 47% 69.60 99% 1.40 47% 73.80 93% 2.60 43% 70.00 100% 1.20 40% 73.80 93% 2.80 47% 69.60 99% 1.40 47% 73.80 93% 14 3.80 63% 70.00 100% 1.20 40% 75.00 95% 3.80 63% 70.00 100% 0.80 27% 74.60 94% 3.80 63% 70.00 100% 0.80 27% 74.60 94% 16 4.60 77% 70.00 100% 2.20 73% 76.80 97% 3.60 60% 70.00 100% 1.80 60% 75.40 95% 4.00 67% 69.80 100% 1.60 53% 75.40 95% Table 1.23: Classification and prediction performance of the backpropagation models on the training data set of the Marketing with Math 140 & 141 option 142 Hidden Nodes Learning Rate C .1 0 .5 (1 .9 G l G2 G3 Total G l G2 G3 Total G l G2 G3 Total 4 0.00 0% 9.80 89% N/A N/A 9.80 82% 0.00 0% 9.80 89% N/A N/A 9.80 82% 0.00 0% 9.80 89% N/A N/A 9.80 82% 6 0.00 0% 10.20 93% N/A N/A 10.20 85% 0.00 0% 9.60 87% N/A N/A 9.60 80% 0.00 0% 9.00 82% N/A N/A 9.00 75% 8 0.00 0% 9.80 89% N/A N/A 9.80 82% 0.00 0% 8.40 76% N/A N/A 8.40 70% 0.00 0% 8.60 78% N/A N/A 8.60 72% 10 0.00 0% 10.80 98% N/A N/A 10.80 90% 0.00 0% 9.40 85% N/A N/A 9.40 78% 0.00 0% 9.80 89% N/A N/A 9.80 82% 12 0.00 0% 8.60 78% N/A N/A 8.60 72% 0.00 0% 9.40 85% N/A N/A 9.40 78% 0.00 0% 8.60 78% N/A N/A 8.60 72% 14 0.00 0% 9.80 89% N/A N/A 9.80 82% 0.00 0% 11.00 100% N/A N/A 11.00 92% 0.00 0% 11.00 100% N/A N/A 11.00 92% 16 0.20 20% 10.40 95% N/A N/A 10.60 88% 0.00 0% 9.80 89% N/A N/A 9.80 82% 0.00 0% 10.00 91% N/A N/A 10.00 83% Table 1.24: Classification and prediction performance of the backpropagation models on the validation data set of the Marketing with Math 140 & 141 option Hidden Nodes Learning Rate 0 .1 0.5 0.9 G 1 G2 G3 Total G l G2 G3 Total G 1 G2 G3 Total 13 0.40 7% 54.60 78% 0.80 27% 55.80 71% 0.40 7% 58.60 84% 0.40 13% 59.40 75% 0.00 0% 59.40 85% 0.20 7% 59.60 75% 14 0.20 3% 54.60 78% 0.20 7% 55.00 70% 0.20 3% 61.20 87% 0.00 0% 61.40 78% 0.20 3% 62.80 90% 0.00 0% 63.00 80% 15 0.40 7% 59.60 85% 0.00 0% 60.00 76% 0.20 3% 59.20 85% 0.00 0% 59.40 75% 0.20 3% 61.20 87% 0.20 7% 61.60 78% 16 0.00 0% 60.20 86% 0.20 7% 60.40 76% 0.40 7% 66.60 95% 0.00 0% 67.00 85% 0.20 3% 61.40 88% 0.40 13% 62.00 78% Table 1.25: Classification and prediction performance of the learning vector quantization models on the training data set of the Marketing with Math 140 & 141 option Hidden Nodes Learning Rate 0 .1 0 .5 0.9 G l G2 G3 Total G l G2 G3 Total G 1 G2 G3 Total 13 0.20 20% 8.80 80% N/A N/A 
9.00 75% 0.00 0% 9.60 87% N/A N/A 9.60 80% 0.00 0% 8.80 80% N/A N/A 8.80 73% 14 0.20 20% 10.40 95% N/A N/A 10.60 88% 0.00 0% 9.80 89% N/A N/A 9.80 82% 0.00 0% 9.40 85% N/A N/A 9.40 78% 15 0.00 0% 9.20 84% N/A N/A 9.20 77% 0.00 0% 9.00 82% N/A N/A 9.00 75% 0.00 0% 9.20 84% N/A N/A 9.20 77% 16 0.00 0% 10.20 93% N/A N/A 10.20 85% 0.00 0% 10.40 95% N/A N/A 10.40 87% 0.00 0% 9.80 89% N/A N/A 9.80 82% Table 1.26: Classification and prediction performance of the learning vector quantization models on the validation data set of the Marketing with Math 140 & 141 option 143 H i d d e n N o d e s L e a r n i n g R a t e 0 .1 0 .5 0 .9 G l G2 G3 T o t a l G l G2 G3 T o t a l G 1 G2 G3 T o t a l 4 1.40 70% 13.00 100% N/A N/A 14.40 96% 1.40 70% 13.00 100% N/A N/A 14.40 96% 2.00 100% 13.00 100% N/A N/A 15.00 100% 6 0.80 40% 13.00 100% N/A N/A 13.80 92% 0.00 0% 12.80 98% N/A N/A 12.80 85% 1.00 50% 13.00 100% N/A N/A 14.00 93% 8 1.20 60% 13.00 100% N/A N/A 14.20 95% 1.40 70% 13.00 100% N/A N/A 14.40 96% 1.20 60% 12.60 97% N/A N/A 13.80 92% 10 1.80 90% 12.80 98% N/A N/A 14.60 97% 1.20 60% 12.60 97% N/A N/A 13.80 92% 1.60 80% 13.00 100% N/A N/A 14.60 97% 12 1.80 90% 12.80 98% N/A N/A 14.60 97% 1.60 80% 13.00 100% N/A N/A 14.60 97% 1.00 50% 13.00 100% N/A N/A 14.00 93% 14 2.00 100% 13.00 100% N/A N/A 15.00 100% 1.20 60% 13.00 100% N/A N/A 14.20 95% 1.20 60% 13.00 100% N/A N/A 14.20 95% 16 2.00 100% 12.80 98% N/A N/A 14.80 99% 1.20 60% 13.00 100% N/A N/A 14.20 95% 1.80 90% 13.00 100% N/A N/A 14.80 99% Table 1.27: Classification and prediction performance of the backpropagation models on the training data set of the Marketing with Math 100 & 101 option H i d d e n N o d e s L e a r n i n g R a t e 0 .1 0 .5 0.9 G l G2 G3 T o t a l G 1 G2 G3 T o t a l G 1 G2 G3 T o t a l 4 N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% 6 N/A N/A 0.80 80% N/A N/A 0.80 80% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% 8 N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% 10 N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% 12 N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 0.80 80% N/A N/A 0.80 80% N/A N/A 1.00 100% N/A N/A 1.00 100% 14 N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% 16 N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% N/A N/A 1.00 100% Table 1.28: Classification and prediction performance of the backpropagation models on the validation data set of the Marketing with Math 100 & 101 option 144 Hidden Nodes Learn ing Rate 0 .1 II .5 0 .9 G l G2 G3 Tota l G 1 G2 G3 Total G 1 G2 G3 Tota l 8 1.00 50% 13.00 100% N/A N/A 14.00 93% 0.60 30% 13.00 100% N/A N/A 13.60 91% 0.40 20% 12.80 98% N/A N/A 13.20 88% 10 1.00 50% 13.00 100% N/A N/A 14.00 93% 0.20 10% 12.40 95% N/A N/A 12.60 84% 0.20 10% 11.80 91% N/A N/A 12.00 80% 12 1.00 50% 13.00 100% N/A N/A 14.00 93% 0.40 20% 13.00 100% N/A N/A 13.40 89% 0.20 10% 12.40 95% N/A N/A 12.60 84% 14 0.80 40% 13.00 100% N/A N/A 13.80 92% 0.40 20% 12.80 98% N/A N/A 13.20 88% 0.20 10% 12.00 92% N/A N/A 12.20 81% 16 1.60 80% 13.00 100% N/A N/A 14.60 97% 0.80 40% 11.40 88% N/A N/A 12.20 81% 0.40 20% 11.60 89% N/A N/A 12.00 80% Table 1.29: Classification and prediction performance of the learning vector quantization models on the training data set of the Marketing with Math 100 & 101 option Hidden 
Nodes     Learning Rate 0.1                             Learning Rate 0.5                             Learning Rate 0.9
          G1        G2          G3        Total         G1        G2          G3        Total         G1        G2          G3        Total
 8        N/A N/A   1.00 100%   N/A N/A   1.00 100%     N/A N/A   1.00 100%   N/A N/A   1.00 100%     N/A N/A   1.00 100%   N/A N/A   1.00 100%
10        N/A N/A   1.00 100%   N/A N/A   1.00 100%     N/A N/A   1.00 100%   N/A N/A   1.00 100%     N/A N/A   0.80  80%   N/A N/A   0.80  80%
12        N/A N/A   1.00 100%   N/A N/A   1.00 100%     N/A N/A   1.00 100%   N/A N/A   1.00 100%     N/A N/A   1.00 100%   N/A N/A   1.00 100%
14        N/A N/A   1.00 100%   N/A N/A   1.00 100%     N/A N/A   1.00 100%   N/A N/A   1.00 100%     N/A N/A   0.60  60%   N/A N/A   0.60  60%
16        N/A N/A   1.00 100%   N/A N/A   1.00 100%     N/A N/A   0.60  60%   N/A N/A   0.60  60%     N/A N/A   0.80  80%   N/A N/A   0.80  80%
Table 1.30: Classification and prediction performance of the learning vector quantization models on the validation data set of the Marketing with Math 100 & 101 option

The following tables present the detailed ANOVA results for the tests of significant differences between the correct classification performance of the backpropagation models and that of the LVQ models. Each table corresponds to a particular specialization track, using either the training or the validation data set. There are 12 tables in total in this section.

Source Term         Sum of Squares   df   Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups      27157.71          1   27157.71       634.33   0.000000*           1.000000
Within Groups        1712.522        40      42.81305
Total (Adjusted)    28870.24         41
(* Significant at alpha = 0.05)
Table 1.31: Analysis of Variance for the training data set of the Accounting with Math 140 & 141 option

Source Term         Sum of Squares   df   Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups      45.67714          1   45.67714        70.15   0.000000*           1.000000
Within Groups       26.04571         40    0.6511428
Total (Adjusted)    71.72285         41
(* Significant at alpha = 0.05)
Table 1.32: Analysis of Variance for the validation data set of the Accounting with Math 140 & 141 option

Source Term         Sum of Squares   df   Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups      5914.347          1   5914.347       373.45   0.000000*           1.000000
Within Groups        633.4781        40     15.83695
Total (Adjusted)    6547.825         41
(* Significant at alpha = 0.05)
Table 1.33: Analysis of Variance for the training data set of the Accounting with Math 100 & 101 option

Source Term         Sum of Squares   df   Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups      25.30381          1   25.30381        64.82   0.000000*           1.000000
Within Groups       15.61524         40    0.3903809
Total (Adjusted)    40.91905         41
(* Significant at alpha = 0.05)
Table 1.34: Analysis of Variance for the validation data set of the Accounting with Math 100 & 101 option

Source Term         Sum of Squares   df   Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups      11139.43          1   11139.43       348.38   0.000000*           1.000000
Within Groups        1278.983        40     31.97457
Total (Adjusted)    12418.41         41
(* Significant at alpha = 0.05)
Table 1.35: Analysis of Variance for the training data set of the Finance with Math 140 & 141 option

Source Term         Sum of Squares   df   Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups      13.48667          1   13.48667         9.87   0.003158*           0.865502
Within Groups       54.65905         40    1.366476
Total (Adjusted)    68.14571         41
(* Significant at alpha = 0.05)
Table 1.36: Analysis of Variance for the validation data set of the Finance with Math 140 & 141 option

Source Term         Sum of Squares   df   Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups      4422.881          1   4422.881       429.81   0.000000*           1.000000
Within Groups        411.6152        40     10.29038
Total (Adjusted)    4834.496         41
(* Significant at alpha = 0.05)
Table 1.37: Analysis of Variance for the training data set of the Finance with Math 100 & 101 option
Source Term         Sum of Squares   df   Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups      0.8571429         1   0.8571429        4.32   0.044013*           0.527778
Within Groups       7.927619         40   0.1981905
Total (Adjusted)    8.784761         41
(* Significant at alpha = 0.05)
Table 1.38: Analysis of Variance for the validation data set of the Finance with Math 100 & 101 option

Source Term         Sum of Squares   df   Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups      1261.87           1   1261.87        133.60   0.000000*           1.000000
Within Groups        292.8062        31      9.445361
Total (Adjusted)    1554.676         32
(* Significant at alpha = 0.05)
Table 1.39: Analysis of Variance for the training data set of the Marketing with Math 140 & 141 option

Source Term         Sum of Squares   df   Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups       0.1125974        1   0.1125974        0.22   0.639576            0.074405
Within Groups       15.60619         31   0.5034255
Total (Adjusted)    15.71879         32
Table 1.40: Analysis of Variance for the validation data set of the Marketing with Math 140 & 141 option

Source Term         Sum of Squares   df   Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups      11.2767           1   11.2767         25.38   0.000015*           0.998313
Within Groups       15.10552         34    0.4442801
Total (Adjusted)    26.38222         35
(* Significant at alpha = 0.05)
Table 1.41: Analysis of Variance for the training data set of the Marketing with Math 100 & 101 option

Source Term         Sum of Squares   df   Mean Square   F-Ratio   Probability Level   Power (Alpha = 0.05)
Between Groups      0.0325079         1   0.0325079        2.94   0.095699**          0.384255
Within Groups       0.376381         34   0.01107003
Total (Adjusted)    0.4088889        35
(** Significant at alpha = 0.10)
Table 1.42: Analysis of Variance for the validation data set of the Marketing with Math 100 & 101 option
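The F-ratios and probability levels in Tables 1.31 through 1.42 come from a standard one-way ANOVA on the correct-classification counts of the two paradigms, with one observation per network configuration (hidden nodes by learning rate). As a minimal illustration of how those columns relate, and not a reproduction of the statistics package used in the study, the following Python sketch computes the same quantities from two hypothetical arrays of per-model counts; the array values below are placeholders, not figures taken from the tables.

import numpy as np
from scipy import stats

# Hypothetical per-model correct-classification counts, one value per network
# configuration.  Substitute the actual totals from a given specialization
# track to reproduce the corresponding ANOVA table.
backprop = np.array([62.0, 64.2, 58.4, 64.0, 58.2, 55.2, 55.8])
lvq = np.array([7.6, 7.2, 8.6, 9.4, 5.8, 7.0, 8.4])

groups = [backprop, lvq]
all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()

# One-way ANOVA decomposition: total variation split into between-group and
# within-group sums of squares.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = len(groups) - 1              # number of paradigms - 1
df_within = len(all_obs) - len(groups)    # total observations - number of paradigms

ms_between = ss_between / df_between
ms_within = ss_within / df_within
f_ratio = ms_between / ms_within
p_value = stats.f.sf(f_ratio, df_between, df_within)  # right-tail probability

print(f"Between Groups  SS={ss_between:.4f}  df={df_between}  MS={ms_between:.4f}")
print(f"Within Groups   SS={ss_within:.4f}  df={df_within}  MS={ms_within:.4f}")
print(f"F-Ratio={f_ratio:.2f}  Probability Level={p_value:.6f}")

# scipy returns the same F-ratio and probability level directly:
f_check, p_check = stats.f_oneway(backprop, lvq)

With 21 configurations per paradigm (seven hidden-node settings by three learning rates), this setup gives the 1 and 40 degrees of freedom reported in Tables 1.31 through 1.38; the Marketing tracks include fewer LVQ configurations (see Tables 1.25, 1.26, 1.29, and 1.30), which accounts for the smaller within-group degrees of freedom of 31 and 34 in Tables 1.39 through 1.42. The Power column would additionally require the noncentral F distribution and is not reproduced in this sketch.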
