An Investigation into Using Artificial Neural Networks for Empirical Design in the Mining Industry

By Logan Miller-Tait
B.A.Sc., University of British Columbia, 1990

A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Applied Science in The Faculty of Graduate Studies, Department of Mining and Mineral Process Engineering

We accept this thesis as conforming to the required standard

The University of British Columbia
September, 1998
© Logan Miller-Tait, 1998

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mining and Mineral Process Engineering
The University of British Columbia
Vancouver, Canada

Abstract

The mining industry relies heavily upon empirical analysis for design and prediction. Neural networks are computer programs that use parallel processing, similar to the human brain, to analyze data for trends and correlations. A neural network analyzes input factors by assigning a weighting, or numerical value, to each input factor or combination of input factors in order to provide an estimate or prediction of an output factor. As more information becomes available, the network continually adjusts the input factor weightings to improve its output predictions.

The objective of this thesis is to determine whether neural networks can assist in analyzing data within the mining industry. Neural networks were used to study rockburst prediction, ELOS (equivalent linear overbreak/slough), and dilution estimates in open stopes.

The neural network rockburst prediction analysis had two aspects. The first was to see whether a neural network could predict rockbursts from physical inputs such as RMR and span; the network was very successful at predicting rockbursts in this way. The second was to see whether neural networks, using seismic data, could predict when a rockburst would occur; here the networks were unsuccessful. Further analysis with the seismic data presented in a different format may improve results.

Neural networks were also used in this study to assist in ELOS prediction. A joint UBC-Canmet research project (Clarke, 1997) recorded ELOS from 75 open stopes while recording 23 relevant input factors for each stope. The neural net analysis was conducted to assist in determining the most relevant factors. Rock quality, particularly RMR, proved to be the most significant factor in ELOS determination. Blasting factors such as blasthole diameter, length, and layout were also very relevant and helped predict outlier points. This analysis shows some potential advantages and shortcomings of using neural networks in the mineral industry.

The final neural network study compared open stope dilution formula estimates with neural network estimates. Using the same database for neural net training as for the formula derivation, the neural net dilution estimates on unseen data were, on average, more accurate than the formula predictions.
The conclusion focuses on how neural networks could be practically applied in the mining industry. I V Table of Contents Section Page Acknowledgments Kill Abstract II Table of Contents IV List of Figures IX List of Tables XI Chapter 1 Introduction, Neural Network Description 1.1 Introduction - Neural Networks 1.1.1 Introduction - Rockburst Input Prediction 1.1.2 Introduction - Neural Network Seismic Analysis 1.1.3 Introduction - Neural Network ELOS Estimation 1.1.4 Introduction - Neural Net/Formula Dilution Prediction Comparison 1.2 History of Artificial Neural Networks 1.3 Current Research 1.4 Basic Single Layer Neural Network Program 1.5 Multilayered Neural Networks 1.6 Braincel Neural Network Program 1.7 Braincel Example on how a Neural Net Works 1.8 Best Net Searches 1.9 Steps to Produce an Optimum Braincel Expert Network 1.10 Summary 1 1 3 4 5 6 8 8 10 13 16 22 24 25 Chapter 2: Neural Networks - Rockburst Prediction at the Arthur W. White Mine 2. J Introduction 2.2 A .W. White Mine Empirical Rockburst Study 2.3 Two Layer Neural Network Analysis 2.4 Single Layer Neural Network Analysis 2.5 Input Factor Weightings 2.6 Summary Chapter 3: Brunswick Mining & Smelting Rockburst Location Analysis 3.1 Introduction 42 3.2 Neural Network Physical Data Inputs 43 3.3 Neural Network Analysis 47 3.4 Physical Characteristic Input Rankings 50 3.5 Summary 52 Chapter 4: BMS Seismic Data Analysis 4.1 Introduction 53 4.2 B M S Microseismic Recording Systems 53 4.3 Neural Network Seismic Data Inputs 56 4.4 1 put and Output Factor Rankings. 61 4.5 Initial Day of Bump Network 62 4.6 Input Factor Graph Analysis 63 4.7 Bump Predicting Analysis 70 4.8 Input Factor Adjustment 70 4.9 Adjusted Day of Bump Network 73 4.10 Previous Day Network 73 4.11 Cumulative Week Before Network 75 26 27 32 35 38 41 vi 4.12 Incremental Neural Net Study 76 4.13 Recommendations 76 4.14 Summary 79 Chapter 5 : Neural Network Wall Slough Dilution Prediction 5.1 Introduction 80 5.2 Background 80 5.3 Linear Neural Net Processing 81 5.4 Log Scale Neural Net Processing 83 5.5 Input Relevance Determination 84 5.6 N , HR and Variable Input Log Scale Processing 85 5.7 Log Processed R M R , HR, and Variable Input Factor 87 5.8 Linear Processed RMR, HR, and Variable Input Factor 90 5.9 Individual Input Analysis 93 5.9.1 Surface , 93 5.9.2 Depth 96 5.9.3 Dip 98 5.9.4 Adjusted Dip 1 100 5.9.5 Width 102 5.9.6 Strike Length 104 5.9.7 Stope Height 106 5.9.8 Height/Length Ratio 108 5.9.9 Surface Character 110 5.9.10 Undercutting Severity 112 5.9.11 Blasthole Severity 114 5.9.12 Average Blasthole Length 116 5.9.13 Hole/String Diameter Ratio 118 5.9.14 Spacing/Burden Ratio 120 5.9.15 Offset Distance - 122 5.9.16 Powder Factor 124 5.9.17 Blast Layout Adjacent to Contact 126 5.9.18 Stope Support 129 5.9.19 Time Between Initial Blast and CMS Survey 131 5.9.20 Number of Stope Blasts required to Complete Stope 133 5.9.21 Mining in Vicinity 135 5.10.1 Blasthole Diameter, Blasthole Length, Strike Length, RMR, and HR 137 5.10.2 RMR, Blasthole Length, Blasthole Layout, HR 139 5.10.3 R M R , Adjusted Dip, Blasthole Diameter, Blasthole Length and HR 141 5.10.4 R M R , Surface, Blasthole Diameter, Blasthole Length, Blasthole Layout, Time Before Survey, HR 143 5.10.5 Log Neural Net: RMR, Surface, Strike Length, Stope Height, Offset Distance, Time Before Survey, HR 145 5.10.6 Log Neural Net: N , Surface, HZL Ratio, Offset Distance, Powder Factor, Time Before Survey, HR 147 '5.11 Input Analysis Summary 149 5.12 Design Lines Versus Neural Network Results 149 5.13 
Conclusion 154 Chapter 6: Neural Network/Formula Dilution Prediction Comparison 6.1 Introduction 156 6.2 Neural Network Analysis 156 6.3 Neural Network - Formula Comparison on Unseen Data 159 6.4 Conclusion 163 Chapter 7: Conclusion 7.1 Introduction 164 7.2 Rockburst Input Prediction 164 7.3 Rockburst Prediction Through Seismic Analysis 164 7.4 Neural Net ELOS Prediction 170 7.5 Final Remarks 174 1* List of Figures Figure Page 1-1 Multilayer Network 11 1-2 Flow Sheet on How to set up a Neural Network in Braincel 14 1-3 Simplified Schematic of a Neural Network 17 1-4 Example of Neural Network Node Operation 19 1-5 Parabolic Example of Back Propogation Error Estimation 20 1-6 Single and Multi-node Error-Weight Curves 22 1- 7 Flow Chart to Develop a Best Net 23 2- 1 A.W. White Mine Location 26 2- 2 Span Definition Plan View 29 3- 1 Brunswick Mining and Smelting Location 42 3- 2 Rockburst Location Prediction Input Factor Weightings 51 4- 1 The Evolution of the M P 250 System at Brunswick Mining 54 4-2 Energy Index Concept 58 4-3 Seismic Source Movement Patterns 59 4-4 Energy Index vs Bump 65 4-5 Apparent Volume vs Bump 66 4-6 Slope Va vs Bump 67 4-7 Activity Rate vs Bump 68 4-8 Diffusion vs Bump 69 4- 9 Nested Neural Network Layout , 78 5- 2 Relative Input Weightings for the Log N Series 89 5-3 Relative Input Weightings for the Linear R M R Series 92 5-4 Surface Summary Charts , 95 5-5 Depth Summary Charts 97 5-6 Dip Summary Charts 99 5-7 Adjusted Dip Summary Charts 101 5-8 Width Summary Charts 103 5-9 Strike Length Summary Charts 105 X 5-10 Stope Height Summary Charts 107 5-11 Height/Length Ratio Summary Charts 109 5-12 Surface Character Summary Charts 111 5-13 Undercut Severity Summary Charts 113 .5-14 Blasthole Diameter Summary Charts 115 5-15 Blasthole Length Summary Charts 117 5-16 Hole/String Diameter Summary Charts 119 5-17 Spacing/Burden Ratio Summary Charts 121 5-18 Offset Distance Summary Charts 123 5-19 Powder Factor Summary Charts 125 5-20 Blasthole Layout Summary Charts 128 5-21 Stope Support Summary Charts 130 5-22 Time Between Initial Blast and Stope Survey Summary Charts 132 5-23 Number of Stope Blasts Summary Charts 134 5-24 Mining in Vicinity Summary Charts 136 5-25 Blasthole Diameter, Blasthole Length, Strike Length, RMR, and HR 138 • 5-26 RMR, Blasthole Diameter, Blasthole Length, Blasthole Layout, & H R 140 5-27 RMR, Adjusted Dip, Blasthole Diameter, Blasthole Length & H R 142 5-28 RMR, Surface, Blasthole Diameter, Blasthole Length, Blasthole Layout, Time Before Survey, & HR 144 5-29 Log Neural Net: RMR, Strike Length, Stope Height, Offset Distance, Time Before Survey, & HR 146 5-30 Log Neural Net: N , Surface, H/L Ratio, Offset Distance, Powder Factor, Time Before Survey, & HR 148 5-31 RMR, H R Manual Design Curves/Neural Network ELOS Predictions 152 5- 32 N , HR Manual Design Curves/Neural Network ELOS Predictions 153 6- 1 Average Neural Net/Formula Error Over Actual Average Dilution 162 7- 1 Flowchart for Using Rockburst Input Neural Network 167 7-2 Flowchart for Using Neural Network Seismic Monitoring 169 7-3 Conceptual Neural Network Flowsheet for ELOS Design 173 X I List of Tables Table Page 1.1 Example Table of Gold Grade Predictions 17 2.1 Finalized Stress Criteria 30 2.2 Two Layer Network Training Data Predictions 33 2.3 Rockburst Network Predictions 34 2.4 Single Layer Network Training Data Predictions 36 2.5 Single Layer Network Unseen Data Predictions 3 7 2.6 Input Factor Weightings 39 2.7 Finalized Stress Criteria 40 3.1 Unseen Data Predictions 48 3.2 Neural 
Network Training Data Predictions 49 3.3 Neural Network Unseen Data Predictions 49 4.1 Graph Input and Output Values 63 4.2 Input and Output Factor Adjustments 72 4.3 Following Day Bump Input Factor Rankings 74 4.4 Cumulative Week Input Factor Ratings 75 5.1 Log N , HR, & Input Summary Sheet 86 5.2 Log R M R , HR, & Input Summary Sheet 88 5.3 Linear RMR, HR, & Input Summary Sheet 91 6.1 Dilution Database Neural Net Summary 158 6.2 Rib Stope Dilution Test Data Error Comparison 160 6.3 Echelon Stope Dilution Test Data Error Comparison 161 6.4 Isolated Stope Dilution Test Data Error Comparison 161 XII Acknowledgments I would like to express thanks and gratitude to Rimas Pakalnis for his continuous patience, technical advice, data and encouragement throughout my graduate studies. I would also like to thank Lyndon Clarke, Don Petersen, and Peter Mah for providing data and technical support for this analysis. Support from the entire Department of Mining and Mineral Process Engineering has been appreciated greatly and thanks are extended to the entire department. Appreciation and gratitude are also extended to Sandy Sveinsen for her dedicated support in Introduction to Mining, Mine Design, and Mine Services. Lastly, I would like to thank my sister Elizabeth and friend Cathy for assistance in the preparation of this thesis. 1 Chapter 1 Introduction, Neural Network Description 1.1 Introduction - Neural Networks Recently, successful research has been devoted to having computers learn through positive reinforcement utilizing neural network programs. Computers work in a serial manner -letting them conduct specific processes in an extremely fast, efficient and concise manner. Humans, however, think in a parallel manner, conducting several thought processes at once to come to a decision. This is a much slower, less accurate method, but is versatile in that several factors are analyzed to make a decision. While still at a primitive stage, neural networks can be trained to analyze data in a manner similar to the parallel thinking of a human, in which there is not one correct answer but an expert approximation. This study will introduce the basic concepts on how a neural net operates, how to set up and "train, a neural network, and show both successful and unsuccessful examples of how neural nets may be used in the mining industry. 1.1.1 Introduction - Rockburst Input Prediction The first example of a potential situation where neural networks could be useful in the mining industry is the prediction of rockbursts through physical inputs. To quote directly from the Ontario Ministry of Labour "...we do not have the ability to predict when and where rockbursts will occur, and the experts in the field agree that we are not close to make such predictions" ( Caron, 1995 ). Between 1984 and 1993 eight underground 2 miners were killed in Ontario due to rockbursts. This accounted for approximately 10% of underground fatalities during this period. If neural networks were to have success in predicting where rockbursts would occur additional ground support, remote equipment, and/or design modifications could reduce or possibly eliminate fatalities due to rockbursting. As safety is the primary responsibility of mining engineers, the potential for neural networks to assist in predicting rockburst inputs should be investigated. In 1995, a joint project was completed by Goldcorp Inc. and Canmet called "Development of Empirical Design Techniques in Burst Prone Ground at A. W. White t Mine" ( Mah, 1995 ). 
Part of the study was to collect input information on rockburst, caving, ground wedge, and roof fall failures at the A. W. White Mine between 1992 and 1995. This resulted in a failure database consisting of 88 ground failures with corresponding inputs for each failure. The six inputs collected for each failure were R M R ( Bieniawski, 1982 ), Q ( Barton, Lien, Lunde, 1974 ), span, S R F ( Mah, 1995 ), R M R adjustment, and depth. These input factors were set up and run in a neural network with 73 examples being used for training and 15 examples being used to test the network. The neural network results were compared, with the actual results to see how well the neural net predicted the failure. An analysis into which input factors had the greatest influence upon rockbursts was also conducted. Another neural network rockburst input investigation was conducted on data collected from the Brunswick Mining and Smelting ( B M S ) # 12 Mine. The B M S # 12 Mine experiences a severe rockburst hazard on a daily basis. This hazard has been increasing 3 due l.o increasing depth, stress, and large open stope spans. An in-house empirical study was conducted by B M S to try and determine which input factors are relevant in determining where rockbursts occur. A database consisting of 54 examples of rockburst and stable headings was collected to correlate inputs and rockbursts. The 10 input factors collected are; rock type, orebody factors, fault or dyke, event mechanism, stress, ground problems, mining effects, other, and index. A neural network was run on this database to see how well the neural network could predict rockbursts using this database. Forty examples were used to train the neural net and 14 examples were used to test the neural net. An investigation into the relevance of each input with respect to rockbursts was also conducted. 1.1.2 Introduction - Neural Network Seismic Analysis BMS, at the #12 Mine, have installed microseismic system to automatically record seismic events and locations. This was done to help provide a warning system to evacuate workers as seismic activity increases. However, even with microseismic monitoring, the time when rockbursts occur, as of yet, cannot be predicted. BMS, for an in-house study, collected seismic information from six seismically active headings of which were compiled into one database. The inputs collected from each heading consisted of number of daily events, energy index, apparent volume, slope of apparent volume, activity rate, and diffusion. This data was run on a neural network to see if the time that a rockburst occurred could be predicted. A limited investigation was also conducted into which inputs had the greatest relevance in predicting when a rockburst would occur. 1.1.3 Introduction - Neural Network ELOS Estimation 4 ELOS represents equivalent linear overbreak/slough from the footwall and hanging wall of open stopes. This figure will allow dilution to be calculated for stopes of varying widths. Dilution increases mining and milling costs, increases lost opportunity costs, and reduces production through the handling of large block sizes. As a result, mining companies must be able to estimate ELOS for different stope widths, excavation dimensions, rock quality and other input factors. Through a joint U B C Canmet research project ELOS data was collected from 75 different open stope walls with 23 corresponding inputs for each wall (Clark, 1998). 
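The text above notes that ELOS allows dilution to be calculated for stopes of varying widths. A common conversion in the open stoping literature, shown below as a hedged Python sketch rather than a formula taken from this thesis, divides the total wall slough by the ore width; the function name and example values are illustrative assumptions only.

def dilution_percent(elos_hw_m, elos_fw_m, stope_width_m):
    # Assumed relation: percent dilution = (hanging wall ELOS + footwall ELOS)
    # divided by the stope (ore) width, expressed as a percentage.
    if stope_width_m <= 0:
        raise ValueError("stope width must be positive")
    return 100.0 * (elos_hw_m + elos_fw_m) / stope_width_m

# Example: 0.5 m of slough on each wall of a 3 m wide stope gives roughly 33 % dilution.
print(round(dilution_percent(0.5, 0.5, 3.0), 1))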
The ELOS database was collected from six different Canadian mines, namely: Detour Lake Mine, Trout Lake Mine, Ruttan Mine, H-W Mine, Contact Lake Mine, and Lupin Mine. The majority of samples were collected from the Lupin Mine, a steeply dipping narrow vein gold operation located in the Northwest Territories; the database therefore consists primarily of narrow vein stopes. A greater number of wide stopes with varying rock quality, dip and excavation dimensions will eventually need to be collected to obtain a representative database for analysis. As conventional statistics had no success in finding correlation between individual inputs and the corresponding ELOS output, this neural net study was conducted to assist in determining which input factors had the greatest influence upon ELOS. ELOS estimation charts were also developed comparing neural net predictions with manual design lines for rock quality and HR versus ELOS (Clark, 1998).

1.1.4 Introduction - Neural Net/Formula Dilution Prediction Comparison

In an effort to compare neural net results with conventional formula estimates, neural net predictions were compared with three formulas developed from a database collected at the Ruttan Mine. The formulas correspond to three stope configurations: isolated stopes, echelon stopes, and rib stopes (Pakalnis, 1986).

Isolated Stopes (61 obs): Dil(%) = 5.9 - 0.08(RMR) - 0.010(ER) + 0.98(HR)
Echelon Stopes (44 obs): Dil(%) = 8.8 - 0.12(RMR) - 0.18(ER) + 0.8(HR)
Rib Stopes (28 obs): Dil(%) = 16.1 - 0.22(RMR) - 0.11(ER) + 0.9(HR)

where:
Dil(%) - stope dilution (%), i.e. 10% gives Dil(%) = 10
RMR - CSIR Rock Mass Rating (%), i.e. 60% gives RMR = 60
ER - exposure rate, as volume removed (cubic metres) per month per metre of stope width
HR - hydraulic radius (m) of the exposed stope wall

Neural nets were developed from the same databases from which these formulas were derived. The neural net predictions on unseen data (not in the original databases) were compared with the formula estimates. This was done to provide insight into the effectiveness of neural net predictions compared with statistically developed formula estimates.

1.2 History of Artificial Neural Networks

The initial concept of artificial neural networks (ANN) was conceived in 1943 by a neurobiologist who wrote "A Logical Calculus of the Ideas Immanent in Nervous Activity" (McCullough, 1943). This article eventually led to speculation that artificial intelligence might be developed through the use of computers. ANN software development began in the 1950s with an introductory optical pattern recognition program, developed by Frank Rosenblatt in 1957, called the Perceptron Model (Minsky, Papert, 1969). This program was designed to have an output respond if the inputs exceeded a threshold value; it could also train itself to improve its response. Two other major software developments were the Adaptive Linear Neuron (ADALINE) and Multi-Adaline (MADALINE) programs by B. Widrow at Stanford University (Widrow, 1960). These are also pattern recognition programs, and were used in industry to reduce phone line static and echo. Together these programs provided a framework from which modern ANN programs were developed. Despite suggestions, in an article named Perceptrons (Minsky, Papert, 1969), that ANNs were not worth further research, J. Anderson of Brown University developed a model called the Linear Associator (Anderson, 1972).
This model is based on correlation between input and output in which an increase in correlation will result in an increase in weighting for the particular input. During the 1970's, at Helsinki Technical University, Finland, a system that could analyze data using adaptive rules and unsupervised competitive learning was developed 7 (Kohonen, 1970). Adaptive rules refer to input weightings continually changing based on previous weightings, pre-synaptic values, and post-synaptic values. Synaptic values refer to input weightings used for each run and calculation for each input. It acts similar to electrical firing of the synapse connections in human brains. Unsupervised competitive learning refers to automated positioning of inputs with inputs continually adjusting their weightings where an increase in one weighting will result in a decrease in influence of the other weightings. This allows, with the addition of data, the inputs to "compete" for influence. The Kohonen model has proved to be particularly successful in neural nets involving spatial relationships such as the robotic industry. A major breakthrough occurred during the 1980's with the Hopfield Model being developed at Caltech University, U.S.A., (Hopfield, 1982). The Hopfield model allows the inputs and their weightings to interact in combination by utilizing multiple hidden layers and nodes The hidden nodes are weighted through the combined interaction of input nodes and a bias node. A bias node essentially moves the starting origin of the neural network in a positive or negative direction according to the accuracy of the predictions. For example, if the output is 100 and the inputs are predicting 90, the bias node will shift the starting origin positive towards 10. The input weightings are also back calculated from the output back to the input to improve predicting ability. 8 1.3 Current Research During the 1990's there has been great interest and work dedicated to applying A N N ' s to engineering problems in a wide variety of fields. Most of the current neural networks are based upon the Hopfield model, however, many problems related to spatial relationship use the Kohonen model. Two papers which describe systems based on the Kohonen model are; "Controlling an arbitrary plant using a modified Kohonen Map" (Grassi, Grunewald, 1992) and "Learning Multiple Arm Postures of a Real Arm Robot" (Campos, Schulten, 1992): Numerous examples of applications based on the Hopfield model are described in the 1992 A S M E Press publication " Intelligent Engineering Systems Through Artificial Neural Networks" (Dagli, Burke, Shin, 1992). 1.4 Basic Single Layer Neural Network Program The simplest example of a Hopfield model neural net would be a neural net having a single layer without additional hidden nodes (with the exception of a bias node). Thresholds are maximum and minimum values for both inputs and outputs. Weights and weightings refer to the numerical value applied to the influence an input has on the output. A basic single layer learning paradigm can be summarized as follows: 9 • set the weights and thresholds randomly • present an input • calculate the actual output by taking the thresholded value of the weighted sums of the input • alter the weights to reinforce correct decisions and discourage incorrect decisions - i.e. reduce the error • present the next input etc. This paradigm can be programmed with the following basic algorithm:1 1. 
Initialise weights and threshold
Define w_i(t), (0 ≤ i ≤ n), to be the weight from input i at time t, and θ to be the threshold value in the output node. Set w_0 to be -θ, the bias, and x_0 to be always 1. Set each w_i(0) to a small random value, thus initialising all the weights and the threshold.

2. Present input and desired output
Present input x_0, x_1, x_2, ..., x_n and the desired output d(t).

3. Calculate actual output
y(t) = f_h( Σ w_i(t) x_i(t) )
where f_h is the hard-limiting (threshold) function.

4. Adapt weights
if the output is correct: w_i(t+1) = w_i(t)
if the output is 0 but should be 1 (class A): w_i(t+1) = w_i(t) + x_i(t)
if the output is 1 but should be 0 (class B): w_i(t+1) = w_i(t) - x_i(t)

1 Beale, R., Jackson, T., 1990, Neural Computing: An Introduction, pg. 48.

While there are several modifications to this basic algorithm, neural computing is based upon it. The comparison used for adapting the weights is binary, zero or one, and it tells the network in which direction to move the weightings. If the output is zero when the answer should be one, the predicted answer is less than the actual value and the weightings are increased; if the output is one when it should be zero, the predicted answer is greater than the actual value and the weightings are decreased. This algorithm exhibits genuine learning behaviour for a single-layer network and therefore works well for simple, linear models. To solve more complex problems, a multilayer process of analyzing data had to be developed.

1.5 Multilayered Neural Networks

Multilayer data processing allows a computer to find a pattern for evaluating the data for a particular input and then back-propagate the error weights to the previous layer. This adjusts the weightings between the layers, reduces the error function, and is how the network learns. Figure 1-1 shows a multilayer network.

Figure 1.1: Multilayer Network (a network without a hidden layer compared with a network with a hidden layer)

The multilayer network improves predicting ability by allowing combinations of inputs to have an influence on the output. A hidden node represents a combination of input, bias, and other hidden nodes from previous layers, and has a weighted influence on the output and on the hidden nodes of following layers. Hidden layers are levels of hidden nodes, each a combination of nodes from previous layers, that influence the output and the nodes of any following hidden layer. Greater numbers of hidden nodes and hidden layers increase the learning ability of the neural net. However, the increased learning ability may result in overtraining, where the neural net begins to find irrelevant correlations that are not useful for future predictions. Overtraining will actually decrease the predicting ability of a neural net on unseen data while improving the predicting ability on the training data.

The basic multilayer learning algorithm² is listed below.

1. Initialise weights and thresholds
Set all weights and thresholds to small random values.

2. Present input and desired output
Present input Xp = x_0, x_1, x_2, ..., x_(n-1) and target output Tp = t_0, t_1, ..., t_(m-1), where n is the number of input nodes and m is the number of output nodes. Set w_0 to be -θ, the bias, and x_0 to be always 1. For pattern association, Xp and Tp represent the patterns to be associated. For classification, Tp is set to zero except for one element, set to 1, that corresponds to the class that Xp is in.

3. Calculate actual output
Each layer calculates y_pj = f( Σ w_ij x_i ) and passes that as input to the next layer.
The final layer outputs the value o_pj.

4. Adapt weights
Start from the output layer and work backwards:
w_ij(t+1) = w_ij(t) + η δ_pj o_pi
where w_ij(t) represents the weight from node i to node j at time t, η is a gain term, o_pi is the output of node i, and δ_pj is an error term for pattern p on node j.
For output units: δ_pj = k o_pj (1 - o_pj)(t_pj - o_pj)
For hidden units: δ_pj = k o_pj (1 - o_pj) Σ_k δ_pk w_jk
where the sum is over the k nodes in the layer above node j.

2 Beale, R., Jackson, T., 1990, Neural Computing: An Introduction, pg. 73.

Braincel uses a multilayer algorithm, backperc, which is based on the above algorithm.

1.6 Braincel Neural Network Program

Braincel® (Promised Land Technologies, 1990) is the neural net software program used for this thesis. This software, based on the Hopfield model, is designed to create an expert system in situations where many data inputs make it difficult or impossible to create formulas that accurately analyze the data. Furthermore, Braincel works well when formulas are continually being modified as the information database grows. Some examples of situations where Braincel would be effective in the mining industry are dilution estimates, productivity estimates, and pillar failure criteria.

For Braincel to work effectively there must be enough data available to analyze patterns and relations between numbers, as it cannot distinguish why these relations occur. Moreover, increased data will diminish the effect of an outlier pattern that is an exception to the norm. Braincel, using trial and error, finds relationships between inputs and outputs and evaluates their relative importance on a scale from zero to one (or negative one) to predict an answer. Figure 1-2 presents a flow sheet summarizing the procedure for setting up a neural network.

Figure 1.2: Flow Sheet on How to Set Up a Neural Network in Braincel
• Choose a problem.
• Collect historical data on the problem.
• Load the data into an Excel worksheet.
• Divide the data into inputs and outputs. Define three data sets: training, test, and predicting ranges.
• Open a New Expert.
• Train the Expert with the Training Range.
• Test the Expert on the Test Range.
• Test the Expert on the Predict Test Range.
• Use the Expert on new data by defining a range and using the Ask Expert command.

Once a neural network problem has been set up, the Braincel program will continue calculating until it reaches a specified error, time, or number of cycles, or completes an automated best net search. The error is specified as:

Error = Average(Historical Output - Expert Calculated Output) / Standard Deviation of Historical Output

One of the major problems with the Braincel algorithm is that if the error target is set too low, the network will memorize the data, which inhibits its ability to analyze new data. Therefore, the error should not be set so low that every record is individually memorized. A problem with specifying a time limit is that Braincel runs different setups at different rates; for example, a network consisting of three layers will take longer to complete a cycle than a network with two layers. The most consistent way to compare networks is to limit each network to a certain number of cycles. To do this, Braincel must be set up in professional user mode, as the autoexpert mode does not allow this, and other functions, to be specified. The exact mathematics of the Braincel neural network package is not available. Therefore, an example is illustrated below to try to simplify how a multilayer neural net works.
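Before the worked gold-grade example in the next section, the short Python sketch below ties sections 1.4 to 1.6 together: a single-layer network with a bias term is nudged toward the data by small, error-proportional weight adjustments and scored with the normalized error defined above. This is only an illustration under stated assumptions; it is not Braincel's proprietary back-percolation algorithm, and the use of absolute differences in the error measure is an assumption, since the exact formula is unpublished.

import random
import statistics

def normalized_error(actual, predicted):
    # Error measure from section 1.6: the average difference between the
    # historical outputs and the network outputs, scaled by the standard
    # deviation of the historical outputs.  Absolute differences are assumed
    # here; Braincel's exact formula is not published.
    diffs = [abs(a - p) for a, p in zip(actual, predicted)]
    return statistics.mean(diffs) / statistics.pstdev(actual)

def train_single_layer(inputs, targets, cycles=1000, rate=0.01, seed=1):
    # Minimal single-layer network in the spirit of section 1.4: one weight per
    # input plus a bias, each adjusted after every example in proportion to the
    # prediction error (not Braincel's back-percolation algorithm).
    rng = random.Random(seed)
    n = len(inputs[0])
    weights = [rng.uniform(-0.1, 0.1) for _ in range(n)]
    bias = rng.uniform(-0.1, 0.1)
    for _ in range(cycles):
        for x, t in zip(inputs, targets):
            y = bias + sum(w * xi for w, xi in zip(weights, x))
            err = t - y                      # positive: prediction too low
            weights = [w + rate * err * xi for w, xi in zip(weights, x)]
            bias += rate * err
    predictions = [bias + sum(w * xi for w, xi in zip(weights, x)) for x in inputs]
    return weights, bias, normalized_error(targets, predictions)

# Toy data where the output is roughly 2*x1 - x2; the error shrinks with training.
data = [[1, 2], [2, 1], [3, 3], [4, 1]]
target = [0, 3, 3, 7]
print(train_single_layer(data, target)[2])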
16 1.7 Braincel Example on how a Neural Net Works A simplified example of a neural net analysis would be the gold grade associated with other mineralization. Suppose the grade of gold in a particular gold vein has been correlated with the presence of galena, sphalerite, and pyrrhotite. An increase in gold grade has been associated with an increase in the amounts of galena and sphalerite. Conversely, an increasing presence of pyrrhotite has been associated with decreasing grades of gold. As many development headings send muck as ore or waste based upon visual estimation of the grade, a decision to develop a neural net based upon the visual estimation of the amounts of galena, sphalerite, and pyrrhotite is made by a mining company. The mine geologist visually estimates the amounts of galena, sphalerite, and pyrrhotite and records these estimates on a scale from zero to five. Zero represents no mineralization of that particular mineral is present. Numbers one to five represent increasing amounts of each mineral. Corresponding gold grades from each heading would then be assayed with the values correlated with the visual input values. The geologist would then build a table in Excel similar to Table 1-1. 17 Table 1.1 Example Table of Gold Grade Prediction V I S U A L E S T ' V I S U A L E S T V I S U A L E S T ' A S S A Y E D H E A D I N G G A L E N A S P H A L E R I T E P Y R R H O T I T G O L D g / t o n n 1 1 2 4 3 2 4 3 2 9 3 3 2 1 8 9 9 5 4 0 1 5 1 0 0 1 2 5 2 After approximately 100 headings are recorded, the geologist should begin training a neural network to predict the gold grade based upon these three inputs. To help illustrate how a neural net operates, Figure 1-3 provides a simplified schematic. Figure 1-3 Simplified Schematic of a Neural Network Galena, X Sphalerite, Y Neural Network Analysis Gold Grade, Q Pyrrhotite, Z 18 The neural network will put the influence of each input into nodes which then combine to influence the final answer. Nodes resemble the operation of a cell in the human brain. A neuron cell in the brain will accept varying amounts of electrical charges from synapses of other cells and will in turn provide an electrical synapse to other cells. The force of this synaptic response will vary according to the electrical responses fed into the cell. A greater input would result in an increased output. Nodes operate in a similar manner within a neural network except that numerical values are used instead of electrical responses. Numerical weightings are adjusted to responses and training similar to how the electrical synapses within the brain are adjusted as one learns by example. For example, by letting the value of galena, sphalerite, and pyrrhotite equal X , Y , and Z respectively a node value, V, may be calculated from an X , Y , and Z formula, eg. V = 7 X + 3 Y - 5Z . Another node value, W, may be calculated from the same inputs according the formula, W = 5 X + 4 Y - 4 Z . Nodes V and W would then send a response to node Q which would result in the final answer. The differences in the inputs from nodes V and W rely upon the integers multiplying the original inputs X , Y , and Z. These integers refer to weightings within a neural network. These weightings are constantly adjusted as the neural net trains and continually readjusts the weightings as training continues. The neural network node arrangement can be adjusted according to the number of inputs and hidden layers. The greater number of nodes increases the number of combination inputs from the original data. 
Figure 1-4 helps illustrate this principle. 19 Figure 1-4 Example of Neural Network Node Operation Answer, Q Figure 1-4 represents a very simplified neural network layout. Typical neural network layouts would typically have multiple layers and nodes similar to Figure 1-1. The learning principle, however, is the same in each case where the neural net learns through previous data, reduces the error, and zeroes in on the response. In this gold grade example, the error between the predicted grade and actual grade would be back propogated, or fed back, to the nodes X , Y , Z, V , W and the weightings would be adjusted to partially accommodate this error. To determine how much the error has to be adjusted the back propogation concept will be explained. Suppose you are asked to determine the maximum depth of water in a tailings reclaim pond. An initial logical assumption would be that the pond bottom would have the shape of a parabola with the maximum depth in the middle of the pond. If the shape were 20 a true parabola and two points were known, the slope of the parabola could be determined and all points within the parabola could be calculated. Back propogation uses the same concept by assuming the answer can be estimated according to the shape of a parabola. The pond bottom, however, is unlikely to have a perfect parabola shape. By measuring the depth of water at various points within the pond, one could eventually zero in on the pond bottom. Few measurements would be needed where the slope was steeply dipping but more measurements would be required where the slope flattens near the pond bottom. The back propogation uses the same concept. By assuming a parabolic error curve, the neural net will measure the error and if the error is large (steep slope of the parabola) the weightings will have a large adjustment and if the error is small (flat slope of the parabola) the weighting adjustment will be small. Figure 1- 5 illustrates this parabola concept. Figure 1 - 5 Parabolic Example of Back Propogation Error Estimation 21 The error of each input is calculated as a separate parabola so that adjustments can be made to account for the ever present discontinuities. This allows estimates to be adjusted to reflect varying error slopes and surfaces. Needless to say, the more measurements taken in the tailings reclaim pond would improve the chances of getting the exact depth. The more data that a neural net has will also improve predicting results by zeroing in on the answer with the error weightings being adjusted minimally between data points. The Braincel software package uses a training method, similar to back propogation, called back percolation developed and licensed by Jurik Research and Consulting (Jurik, 1991). The actual mathematical formulas are not available. To illustrate how back percolation works consider Figure 1.3, page 17. Suppose, in heading 1, the neural network predicted a value of five g/tonne but the actual value was three g/tonne. In back propogation the error of two would be sent back to the nodes X, Y , Z, V, and W and the error weightings would be adjusted accordingly. In back percolation, the error of two would just be sent back to nodes V and W and their errors would be adjusted. The error adjustment from nodes V and W would then each be sent back to nodes X, Y , and Z where these weightings would be adjusted to reflect the changes in error coming from nodes V and W. 
In essence, nodes V and W act as the answer to nodes X , Y , and Z and nodes V and W are adjusted to the actual answer. Despite the added mathematics of back percolation, performance and training time is greatly improved over conventional back propogation. Figure 1-6 compares the error-weight curves between a one node network and a complex multi-node network. 22 Figure 1.6 Single and Multi-node Error-Weight Curves P a r a b o l i c e r r o r - w e i g h t c u r v e C o m p l e x error-weight c u r v e w h e n net h a s o n l y o n e ca l l . tor ce l l s In a b i g n e t w o r k . (Promised Land Technologies, 1990) 1.8 Best Net Searches As neural networks can be over trained reducing their predicting ability, it is often difficult to know how long and detailed a neural net should be trained for a particular set of data. Best net searches train on a training range and then test the predicting ability on a second specified range with results. They will continue to train and test until an optimum predictive model is found. One shortcoming of this method is that it is tested on a specified range so, in effect, the specified testing range will almost act like a training range. The following flow chart, Figure 1- 7, summarizes how to set up a Best Net Search in Braincel. 23 Figure 1.7 Flow Chart to Develop a Best Net Define two data sets as named ranges I Training range & Unseen Data range • Select Options Setup in the Braincel menu Select Automated Best Net Search • Select Begin Search for Best Net Enter the filename, ranges of inputs and outputs Click training data in the Train Expert box, unseen data in the Unseen data box 1 Neural Net is now searching for Best Net When new unseen data is encountered, the predictive ability may not be as accurate as shown in the test range because the neural net was selected to the high correlation with 24 that particular test data. Therefore, a certain amount of skepticism should be used that the Best Net search did not produce the optimum neural net. 1.9 Steps to Produce an Optimum Braincel Expert Network Different training parameters produce varying degrees of expert predictability. The following steps should lead to an accurate expert predictor. 1. Select a database that consists of several inputs and one output relying upon each of these factors at varying weights. 2. Ensure that all parameters affecting the output are listed as input parameters. If an input parameter has little or no effect upon the output parameter, it will show up with a weighting analysis and can be manually set to zero. 3. Run a best net search with the data to find a range where an optimum range of nodes and layers can be estimated. 4. Using an approximate set of nodes and layers that the best net search comprised, use the professional expert mode to begin testing various searches using different training errors, cycles, layers, and nodes. Continue searching by increasing these parameters until the searches begin to "memorize" the data, reducing its' predicting ability on new data. 5. Analyze the node weightings to remove any factors that have a negligible effect on the output. 6. As new data becomes available, continue to use this data to train the expert to improve its' predicting ability. 25 1.10 Summary The concept of artificial neural networks, computerized thought processes, began in the early 1940's. 
Since then, with the development of the Hopfield and Kohonen models, neural networks are being used in many facets in industry as predictive tools by learning through previous data. Neural networks are essentially complex pattern recognizance programs that process data in a parallel manner similar to the human brain. Hidden nodes and layers allow combinations of input factors to act as input factors increasing the ability to recognize correlation within the data. This ability allows a neural net to recognize correlation with multiple inputs that a person may not recognize. One drawback of neural networks is that neural networks can over train by finding irrelevant correlation between the inputs and output. Neural nets predict answers by adjusting weightings between input data and hidden nodes to approximate an answer. The weightings, in the Braincel software package, are adjusted according to a parabolic curve where large adjustments will occur when the error is large and the adjustments will become smaller as the error decreases. Whether it be neural net, human, or conventional statistics the most important factor is to have a large enough database organized in a manner that relevant correlation can be determined. 26 Chapter 2: Neural Networks - Rockburst Prediction at the Arthur W. White Mine 2.1 Introduction 1 he Arthur W. White Mine, currently owned by Goldcorp, experienced several rockbursts since commencing production in 1948. This is an underground gold mine developed to 1700 metres below surface. It is located in Ontario's Red Lake mining district near Balmertown. See figure 2.1 \ Figure 2.1 A.W. White Mine Location 27 The mining method was initially shrinkage stoping but eventually converted to cut and fill mining with longhole sill recovery. The ore is located in a combination of quartz and quartz carbonate veins associated with simple and complex replacement sulfide ore zones. The veins dip between 45 and 90 degrees with a sulfide zone dipping as low as 20 degrees. 2.2 A.W. White Mine Empirical Rockburst Study Between 1989 to 1995, a joint research project by Goldcorp Inc. and Canmet called "Development of Empirical Design Techniques in Burst Prone Ground at A. W. White Mine" flvMi, 1995) was completed. For the purposes of the Goldcorp/Canmet research project, data was collected on 88 uncontrolled groundfalls and rockbursts during 1990-93. This data was collected to determine which factors influence rockbursts. As neural networks can analyze several factors at once a network consisting of six input factors and one output factor was run. The six input factors are; rock mass rating (Bienawski-1982), Q factor (Barton, Lien, Lunde -1974), span, stress reduction factor (SRF), R M R adjustment, and depth (feet). The output factor, stability, can be one of four failures - PUN-RF (potentially unstable roof fall), PUN-GW (potentially unstable ground wedge), BUR (rockburst), and C A V (cave). A brief description of the input and output factors are listed below. Input factors: 2.2.1 Rock mass rating ( R M R ) - The R M R system, initially developed by Bieniawski in 1973, (Bieniawski, 1984), bases rock mass quality on five parameters. These parameters are: 28 • uniaxial compressive strength of the rock • rock quality designation (RQD) • spacing of discontinuities • condition of discontinuity • ground water conditions. These factors are given a numerical value and totalled together to get an RMR value. 
This value will be a number between 0 and 100 with zero being very poor rock and 100 being extremely good rock. The ground water conditions were assumed to be dry conditions. 2.2.2 Q F a c t o r - The Q factor refers to the rock quality tunnelling index. Developed in 1974, by Barton, Lien and Lunde, from the Norwegian Geotechnical Institute, the Q factor is based on six factors, which are: • RQD - rock quality designation • Jn - joint set number • Jr - joint roughness number • Ja - joint alteration number • Jw - joint water reduction factor • SRF - stress reduction factor. The actual Q formula is Q = RQD/Jn x Jr/Ja x Jw/SRF. 29 The Jw/SRF factor was assumed to be 1.0 for this study because dry conditions are assumed stress is factored through modelling and strain measurements. The Q factor ranges on a logarithmic scale ranging from 0.001 to 1,000 where 0.001 is extremely poor rock and 1,000 is virtually perfect rock. 2.2.3 Span - the meaning of span refers to the width of an underground opening in plan view. Span can be determined through the largest diameter of a circle within an underground excavation. (Lang et al.,1991). Figure 2.2 provides an example of span definition. Figure 2.2 - Span Definition Plan View (Pakalnis et. al, 1993) 30 2.2.4 Stress Reduction Factor (SRF') - refers to the adjusting of RMR values relative to stress ratios and previous history of ground conditions. It does not refer to SRF used in the calculation of Q. Stress criteria is based upon the ratio of induced stress over unconfined compressive strength (UCS) of the rock.. Refer to Table 2.1 (Mah, 1995). Table 2.1 - Finalized Stress Criteria al/UCS Burst Rockmass Response Q' SRF RMR Potential Adjustment Adjustment >0.5 HIGH substantial bursting 10 -20% 0.4-0.5 MEDIUM ground work, some bursting 5 -15% 0.3-0.4 LOW major ground work 3 -10% <0.3 NTT. minor ground work 1 — 2.2.5 Output Factors Burst - refers to a stope in which a rockburst has occurred. A rockburst is an instantaneous rock failure in or about an excavated area characterized/accompanied by a shock or tremor in the surrounding rock (Hedley, 1992). PUN-RF - refers to potentially unstable ground with respect to a roof fall. A stope is considered potentially unstable if any of the following conditions occur (Mah, 1995): 31 • the opening may exhibit strong discontinuities having orientations that form potential wedges in the back • extra ground support may have been installed to prevent a potential fall of ground • instrumentation installed in the stope has recorded continuing movement of the stope back • there may be an increased frequency of ground working or scaling. PUN-GW - refers to a stope considered potentially unstable due to the likelihood of a ground wedge failure. This is a subset of PUN-RF collected separately to identify areas where jointing may result in wedge failures. Cave - refers to when uncontrolled ground failures result in caving. 2.2.6 Historical Problem Areas The inputs and outputs consist of data collected from "historical problem areas" at the A W . White Mine. A problem area (Mah, 1995) is an excavation where: • an unusual occurrence has been observed or • the potential for an unusual occurrence exists based on a historical occurrence and/or rock mechanics investigations. 32 Information from stable excavations was not available for comparison. Therefore the neural net could only distinguish between the type of stope failure or potential failure. 
2.3 Two Layer Neural Network Analysis The above inputs and outputs were run on a neural network to see if a neural network could predict output results from the input data and also to see which inputs had the greatest effect on output prediction. The six input factors RMR, Q, span, SRF adjust (Q), R M R adjust and depth are listed in the first six columns of Table 2.2. The output factor, either BUR, PUN-RF, PUN-GW or C A V , represent the actual condition in the stability column. The Braincel column is the neural net prediction based on the six input factors. A two layer network consisting of 13 nodes was run for 10105 cycles reaching a 1.69 percent error. Seventy three observations were used to train the network. The remaining 15 observations were used to test the network's predicting ability. Table 2.2 shows the results of testing the network on the data with which it trained. Table 2.2, not surprisingly, shows that the neural network correctly predicted all outputs from the input data. The reason that this is not surprising is that the network used these 73 observations for prediction training. 34 As can be seen from table 2.2, the network correctly predicted every failure that occurred. Table 2.3, below, shows results from the same neural network based on data that it has never been exposed to. These 15 observations were used to see how well a neural net can predict conditions from the input data. Table 2.3: Rockburst Network Predictions 1 1 1 S O N M I N E S L T D . A R T H U R W . W H I T E M I N E % R M R Q S P A N S R F R M R D e p t h S t a b i l i t y B r a i n c e l a d j u s t e d a d j u s t e d a d j u s t a d j u s t ( « ) 4 8 1 . 5 1 1 1 0 2 0 % 3 7 5 4 B U R B U R 5 4 3 . 0 4 2 5 3 1 0 % 3 9 3 8 P U N - G W C A V 5 4 3 . 0 4 1 4 5 0 3 1 1 5 P U N - G W P U N - R F 5 4 3 . 0 4 2 5 5 0 3 1 1 5 P U N - R F P U N - R F 5 4 3 . 0 4 3 7 5 0 3 1 5 9 P U N - G W P U N - R F 5 6 3 . 8 2 5 3 1 0 % 2 2 8 6 P U N - R F U N D E C I D 5 4 3 . 0 4 1 4 5 0 1 9 2 6 P U N - G W P U N - G W 4 8 1 . 5 11 1 0 2 0 % 3 7 5 4 B U R B U R 5 0 1 . 9 5 1 7 1 0 2 0 % 3 5 5 7 B U R B U R 5 0 1 . 9 5 2 5 1 0 2 0 % 3 5 6 1 B U R B U R 4 4 1 1 4 1 0 2 0 % 3 7 1 0 B U R B U R 4 8 1 . 5 6 8 1 0 2 0 % 1 9 4 2 B U R B U R 4 4 1 1 6 1 0 2 0 % 3 8 5 0 B U R B U R 4 0 0 . 6 5 1 0 2 0 % 1 9 6 8 B U R B U R Table 2.3 illustrates the predicting capability of a neural network to predict stope conditions based on input factors. The neural net had never trained on the above data. The network appears to have trouble distinguishing between PUN-GW and PUN-RF but predicted burst • • 35 conditions on every occasion. The fact that burst conditions were predicted on each occasion was promising with respect to the possibility that neural networks may be a useful tool for predicting rockbursts. The fact that the Q and R M R SRF adjustments were higher for burst conditions than other failure modes may have immediately caused the neural net to recognize burst conditions. 2.4 Single Layer Neural Network Analysis A single layer neural network, using the same inputs, outputs, training and predicting data, was run to compare with the two layer neural network analysis. The single layer neural network was also run for 10000 cycles but could only attain a 7.11 percent error compared to the 1.69 percent error of the two layer network. Moreover, several errors (arrowed) occurred when the network predicted data with which it trained. Table 2.4 lists the single layer neural network training data predictions. 
Table 2.4: Single Layer Network Training Data Predictions 37 The single layer network has only one hidden node, the bias node, and one input and one output layer. There are no input combinations with a single layer network. The fact that the single layer network had a much higher training error with outright errors with training data predictions show that the two layer network, using hidden nodes, was able to more accurately find data correlation within the training data. However, as can be seen in Table 2.5, there is little to distinguish between the unseen data network predictions. Table 2.5 Single Layer Network Unseen Data Predictions I I SON MIN ES LTD. ARTHUR W. WHITE MINE % RMR Q SPAN SRF RMR Depth Stability Braincel adjusted adjusted adjust adjust (ft) 48 1.5 11 10 20% 3754 BUR BUR 54 3.04 25 3 10% 3938 PUN-GW PROBABL 54 3.04 14 5 0 3115 PUN-GW PUN-GW 54 3.04 25 5 0 3115 PUN-RF PUN-GW 54 3.04 37 5 0 3159 PUN-GW PUN-GW 56 3.8 25 3 10% 2286 PUN-RF PROBABL 54 3.04 14 5 0 1926 PUN-GW PUN-GW 48 1.5 11 10 20% 3754 BUR BUR 50 1.95 17 10 20% 3557 BUR BUR 50 1.95 25 10 20% 3561 BUR BUR 44 1 14 10 20% 3710 BUR BUR 48 1.56 8 10 20% 1942 BUR BUR 44 1 16 10 20% 3850 BUR BUR 40 0.6 5 10 20% 1968 BUR BUR j 38 This single layer network also has trouble distinguishing between PUN-RF and PUN-GW. However, it is promising to see that burst conditions were predicted on every occasion. This supports the premise that a neural network can be overtrained reducing the predicting ability. The amount of training and number of hidden nodes and layers will vary according to the data available. 2.5 Input Factor Weightings Another objective from this neural net study was to see which factors the neural net placed the greatest emphasis for its output predictions. Input Factor weightings were printed out to try and determine which input factors are important. Table 2.6 lists the input and bias factor weightings. 39' Table 2.6 Input Factor Weightings INPUT FACTOR WEIGHTINGS 1.58 INPUT 40 According to Table 2.6, SRF' (Q) has the greatest influence upon ground conditions. Referring to the Finalized Stress Criteria Table, SRF' refers to the adjusting of the rock quality factors, Q and RMR, relative to stress ratios and previous history of ground conditions. Table 2.7 - Finalized Stress Criteria o-l/UCS Burst Rockmass Response Q' SRF RMR' Potential Adjustment Adjustment >0.5 HIGH substantial bursting 10 -20% 0.4-0.5 MEDIUM ground work, some bursting 5 -15% 0.3-0.4 LOW major ground work 3 -10% <0.3 NIL minor ground work 1 — (Man, 1995) Q, the bias node, and R M R adjustment also appear to have significant influence. The rock quality factors, particularly Q, would have lower values after previous rockbursting activity. These lower values may have helped the neural net recognize Q as an important factor. RMR, span, and depth appear to have a minor effect. The adjustment factors are high as they are based on previous stope history. It is interesting to note such a marked difference of influence between Q (0.86) and R M R (0.13). Both R M R and Q measure rock quality but Q is rated on a log scale and R M R a linear scale. It is of the author's opinion that the neural net found the greater relative difference in Q values easier to distinguish patterns than with the R M R 41 database values. It is also interesting to note how depth appeared to have such a minor influence. If a database of stable openings was included more confidence could be placed on the effect of factors such as depth or span. 
2.6 Summary It appears that, from this database, SRF has the most significant effect on predicting rockbursts. The bias node, Q, and adjusted R M R are also significant while RMR, span, and depth appear to have a lesser effect. It is not surprising that SRF has the most significance as it is a factor given to rock according to it's previous history of burst proneness. A larger database with stable openings included is necessary to gain confidence in neural network predictions and the influence of input factors. These examples show that neural networks can provide an effective tool for predicting rockbursts. Further work varying error, the number of nodes, layers and cycles could improve the network using this database. However, a larger database with more input factors could make a neural network a more effective burst predicting tool. Additional inputs for each failure may include: induced stress (map3d), hydraulic radius, presence of raises, microseismic data, faults or dikes, ground support; type of heading, and if active mining is in the vicinity. 42-Chapter 3: Brunswick Mining & Smelting Rockburst Input Analysis 3.1 Introduction Brunswick Mining and Smelting [BMS] operates a large scale underground lead zinc mine, located in New Brunswick, Canada. This mine, called the # 12 Mine, has been in production since 1964. Since 1964 the mine has expanded from 4100 tonnes per day to a maximum capacity of 10500 tonnes per day making it the largest single underground mine producer of zinc in the world. The #12 Mine currently operates at approximately 7000 tonnes per day. The #12 Mine, as shown in figure 3.1, is located approximately 32 kilometres southwest of Bathurst, New Brunswick. Figure 3.1 Brunswick Mining and Smelting Location 43 The mining method at the B M S #12 mine currently consists of open stoping and sill pillar retrieval. The open stoping mining blocks are approximately 140 metres high and 900 metres long. Actual stopes are generally 30 metres high and 15-30 metres long. Primary stopes are backfilled with cemented backfill and secondary stopes are filled with a combination of raw and cemented backfill. A mining method of uphole retreat under caving backfill is used to mine the sill pillars. Increasing depth, increasing stress in decreasing pillar sizes and large open stopes has led to a marked increase in seismic activity and rockbursting in the mine. B M S , a highly mechanized open stoping operation, encounters a severe rockburst hazard on a daily basis. Empirical studies and seismic monitoring are currently used to try and predict where and when rockbursts occur. These are continuing in-house investigations into what triggers rockbursts so that eventually, for the safety of the workers, locations and times of rockbursts can be more accurately predicted. An investigation using neural networks was done to try and improve rockburst prediction. 3.2 Neural Network Physical Data Inputs An initial study was done to determine if a neural network could determine the location of a rockburst using 10 input factors. These factors are: 44 3.2.1 Rock Type There are three groups of rock within the #12 Mine. These are sulphide zones, metasediment waste rock, and intruding dyke rock. The massive sulphide zones consist of three different sub zones. The first sulphide sub zone consists of massive pyrite, pyrrhotite, chalcopyrite, magnetite, and minor amounts of sphalerite and galena. Certain areas of the massive pyrite zone are rich enough in copper to be mined. 
Another massive sulphide sub-zone, the lead-zinc ore zone, consists of massive pyrite, significant amounts of sphalerite and galena, and varying amounts of chalcopyrite and pyrrhotite. The third sulphide sub-zone is a fine-grained massive pyrite zone with minor sphalerite, galena and chalcopyrite. The sulphide zones are characterized by massive, dense rock with a specific gravity of approximately 4.27 and a UCS of approximately 210 MPa. The high strength coupled with a high degree of stiffness makes this rock type very susceptible to rockbursting. Failures in this zone are often the result of stress and dynamic loading through blasting and seismicity.

The metasediments are located in both the footwall and hanging wall. This rock is relatively weak and ductile (specific gravity approximately 2.9, UCS ranging from 30 to 60 MPa). These rocks present squeezing and swelling ground problems, but the softness of the rock prevents a large buildup in stress and so prevents rockburst problems. Another zone, called the quartz-eye schist, is located primarily in the footwall, and few ground control problems are found in this zone.

The other rock zone is called the dyke zone. There are thick folded dykes and narrow broken-up dykes. Severe seismic and ground control problems occur when mining in large dykes, while fewer problems are associated with the smaller dykes. These rock zones are categorized according to burst proneness.

3.2.2 Orebody Factors

Orebody factors refer to physical characteristics of the orebody. These include factors such as span, dip, and stope dimensions. These factors are analyzed and combined to provide an orebody factor which is given a ranking for relative burst proneness.

3.2.3 Fault or Dyke

Faults and dykes are commonly associated with zonal increases in stress and pressure. Moreover, seismic and ground control problems are also associated with dykes and faults. Therefore, the fault or dyke factor is a number that relates the presence of faults or dykes to the likelihood of rockbursting and seismic activity.

3.2.4 Event Mechanism

Rockbursts are often triggered through blasting, removal of ore or waste, or seismic activity. The event mechanism is maintained as a rockburst factor to categorize how much influence blasting or seismic events have on predicting rockbursts or major seismic bumps.

3.2.5 Stress

Increased rockburst and seismic activity often occurs when stress exceeds 0.5 times the UCS of the sulphide rocks and dykes. This often occurs in pillars and stope backs where the induced maximum principal stress levels are elevated. Therefore, stress is categorized as a numerical rockburst risk factor ranked according to stress levels.

3.2.6 Ground Problems

Ground problems such as shear zones or wedges may be the failure point of seismic activity or may actually provide a weakness to initiate failure. For example, a shear zone may be a discontinuity that locks up potential energy which will later be released as a rockburst or seismic activity. The ground problem factor is used to categorize danger levels resulting from localized ground problems.

3.2.7 Mining Effects

Removal of ore or waste causes a stress redistribution throughout the mine. Moreover, the actual blasting damages wall rock and, in itself, causes seismic activity. Localized active mining will therefore often increase the risk of rockbursting and seismic activity, and mining effects are categorized to numerically provide a risk assessment of their effect on rockbursting.
3.2.8 Other

Any other factors that may have an effect upon rockbursting or seismic activity and are not included in the other factors are categorized in this factor. An example would be popping sounds within a stope.

3.2.9 Index

The index factor simply refers to the sum of the previous factors. This factor is kept to try and provide a single factor relating to the total risk assessment of rockbursting and seismic activity.

3.3 Neural Network Analysis

The input factors listed above were given a numerical ranking based on their hazard rating. A zero represents no hazard, while increasing increments of five represent an increasing hazard. The output, burst, is given a ranking of 10 if a burst occurred and a value of zero if no burst occurred. An initial two layer network, having a 0.09 percent training error after 545 cycles, was run to test the predicting capability. This network predicted perfectly on the training data. However, as shown in Table 3.1, the network did not have favourable success in predicting unseen data. The location is listed in the left column, the ten inputs are in the next columns, the output (burst 10, no burst 0) is listed next, and the last column is the neural network prediction. The neural network had not trained on this data.

Table 3.1 Unseen Data Predictions (columns: location; the ten hazard-ranked inputs: rock type, orebody factors, fault or dyke, past events, event mechanism, stress, ground problems, mining effects, other, index; recorded burst; neural network prediction)

After discussions with the BMS Head of Rock Mechanics, Don Peterson, it was decided to remove the index factor. Its high numerical ranking was thought to skew the focus away from the individual factors and place too much emphasis on the combined total. Tables 3.2 and 3.3 show that this network had perfect results on both the training and unseen data. This suggests that neural networks may prove to assist in predicting rockbursts.

Table 3.2 Neural Network Training Data Predictions (stope locations on the 725, 850, and 1000 levels with the nine remaining hazard-ranked inputs, recorded burst output, and neural network prediction)
Table 3.3 Neural Network Unseen Data Predictions (level, location, the nine hazard-ranked inputs, recorded burst output, and neural network prediction)

3.4 Physical Characteristic Input Rankings

There simply is not enough data available to precisely measure the influence of each input factor. However, in an attempt to see which inputs the neural network placed the greatest emphasis on, a rank inputs chart was run on this neural network. Figure 3.2 shows the relative ranking of each input factor. The ranking of factors from greatest to least influence on the network is:

Past events - input 4
Stress - input 6
Ground problems - input 7
Orebody factors - input 2
Other - input 9
Event mechanism - input 5
Fault or dyke - input 3
Rock type - input 1
Mining effects - input 8

Figure 3.2 Rockburst Prediction Input Factor Weightings

Not surprisingly, past events and stress, known strong indicators of rockbursting, had a major influence upon the neural net.
The rock type had a minor influence due to the fact that the headings provided were all located within rockburst prone rock. If headings were included from stable footwall rock, the rock type factor would probably have greater influence upon the neural net. It is interesting to note that mining effects had a minor influence upon rockbursting. The orebody factors input had an average influence. As this factor can be controlled through engineering, it may be wise to break it down into separate factors such as span, stope length, etc. This would provide further information on what excavation factors could be altered to reduce the risk of rockbursting.

3.5 Summary

The initial success of this neural net in predicting rockbursts based upon physical characteristic inputs shows promising results that neural nets may provide assistance in rockburst prediction. However, much more data is required before confidence can be placed upon a neural net prediction. Future data collection should break down opening factors and include physical dimensions such as span, hydraulic radius, stope height and stope length. This would allow insight into which factors, controllable through design, have the greatest influence upon rockbursts.

Chapter 4: BMS Seismic Data Analysis

4.1 Introduction

Rockbursting and ground control problems related to mine seismicity began to become a serious problem at the BMS #12 Mine in the late 1980's. The seismic events increased due to the implementation of open stope mining rather than mechanized cut and fill, increased depth, and the mining of sill pillars. In 1994, more than 30,000 seismic events were recorded in the mine and over 150 of those events were large enough to be felt on surface. A seismic event is defined as the ground motion or vibration caused by the release of stored energy within a rock mass and/or physical ground movement.

4.2 BMS Microseismic Recording Systems

Beginning in 1986, Brunswick Mining began installing microseismic systems to automatically record the source location and a rough measure of the size of seismic activity. The initial system, called the MP 250, was expanded from a 32 channel system in 1986 to a 64 channel system by 1992. This system was a mine wide system which could locate a major seismic source to within approximately 10 metres. Figure 4.1 shows the development of the MP 250 system at the BMS #12 Mine.

Figure 4.1 The Evolution of the MP 250 System at Brunswick Mining (panels for 1986, 1987, 1990, and 1992 showing the full waveform system growing from 16-channel arrays on the 575 and 725 levels to 64 channels covering the 575, 725, 850, and 1000 levels)

To have the MP 250 system work effectively on a mine wide scale, it is necessary to filter out minor and unimportant levels of seismic activity to prevent an overload of data input into the system. Therefore, through visual filtering and minimum trigger levels, the system collects between 100 and 200 of the possible 800 daily seismic events which could be recorded.
In 1994, BMS installed a more state-of-the-art digital seismic system called the Integrated Seismic System (ISS), developed by a South African company, Anglo American (Hudyma, 1995). The ISS system does not provide seismic information on a mine wide scale but provides more detailed localized information from various separate seismic monitoring stations. Each separate station consists of a triaxial sensor and a sealed computer located within 15 metres of the sensor. Each station digitally records and transmits seismic information and automatically calculates the source as well as the type and orientation of movement. BMS has 18 ISS stations situated in seismically active regions within the mine sending digital information to a central UNIX system on surface. The localized ISS system has advantages over the MP 250 system in determining the mechanism and true magnitudes of seismic events, and it can more accurately investigate the source parameters of seismic events. These advantages help improve the understanding of seismic events at the mine. This localized information is extremely helpful in mine planning decisions such as localized rock support, excavation sizing, and manpower evacuation decisions.

The ISS and MP 250 systems provide immediate information on seismic locations and magnitudes, allowing constant monitoring so that mine operators can make immediate short term decisions regarding mine safety and personnel evacuation. The MP 250 system operates on a mine wide scale while the ISS system provides more detailed seismic information in localized areas. Together, these systems make up a state of the art seismic monitoring system but, at this time, they cannot predict when large seismic events will occur. They currently monitor seismic activity and allow mine operators to anticipate when seismic activity reaches a level where mine safety becomes paramount and evacuation procedures can be initiated.

4.3 Neural Network Seismic Data Inputs

BMS sent seismic information from three burst prone headings, which was compiled into one database for a neural network to analyze. The information consisted of six input factors: number of daily events, energy index, apparent volume, slope of apparent volume, activity rate, and diffusion. As these factors are complex, a simplified explanation of what the factors represent is given below, while a more detailed explanation and derivation can be found in "Seismic Monitoring in Mines" (Mendecki, 1994).

4.3.1 Number of Events

The number of events is simply the number of seismic events recorded in a specified time period. In this case the time frame is one day.

4.3.2 Energy Index

The energy index, EI, relates stress and the rate of stress change for a specific area. The EI works by comparing the energy emitted by a seismic event to that of other seismic events of similar moments. The EI must be compared within specific areas, as the energy emitted for a given moment will vary depending upon geological conditions. For example, a soft rock such as talc will yield slowly under stress, producing a large seismic moment while radiating little energy, whereas a hard, brittle rock such as granite would yield in a high stress event, emitting greater energy for a smaller moment. An increase in EI indicates an increase in stress. The EI is the ratio of the emitted seismic energy of the jth event to the average emitted seismic energy of events with similar seismic moments in a particular area:

EI_j = E_j / E_avg(M_j)

where EI_j is the energy index for event j, E_j is the emitted seismic energy for event j, E_avg(M_j) is the average emitted energy of events with seismic moments similar to M_j, and M_j is the seismic moment of event j.

Figure 4.2 helps to illustrate what the EI represents.
Figure 4.2 Energy Index Concept: log E versus log Moment, comparing an event's measured energy with the average energy for its moment (reference: Mendecki, 1994)

4.3.3 Apparent Volume

The apparent volume of a seismic event refers to the volume of rock with coseismic inelastic strain. Apparent volume scales cumulative inelastic strain for an area over time. Figure 4.3 portrays movement patterns at the seismic source.

Figure 4.3 Seismic Source Movement Patterns. Examples of displacement patterns at the seismic source: a) simple shear and two tensiles; b) shear over a nonplanar fault, dilation and two tensiles; c) multiple shears; d) two fault related shears and an implosion associated with stope closure; e) multiple shears and stope related implosions; f) fault related shear, implosion associated with stope closure and shear along a newly created discontinuity. (reference: Mendecki, 1994)

Apparent volume can be estimated from the equation:

V_A = M / σ_A    (Mendecki, 1993)

where V_A is the apparent volume, M is the seismic moment, and σ_A is the apparent stress. Apparent volume provides information on rock mass stress transfer and rates of coseismic deformation.

4.3.4 Slope V_A

Apparent volume is measured on a cumulative basis. Therefore, the rate of increase (or slope V_A) of apparent volume can be determined. Prior to major seismic events it has been seen that cumulative energy and moment remained constant while cumulative apparent volume increased dramatically before the event (Mendecki, 1993). A sharp increase in the slope of the cumulative apparent volume curve indicates an increase in the size of deformation in an area.

4.3.5 Activity Rate

The activity rate, λ, describes the level of seismic activity. The formula for activity rate is:

λ = N / T

where λ is the activity rate, N is the number of seismic events greater than a specified minimum magnitude, and T is the time span of the study.

4.3.6 Diffusion

Diffusion refers to the loss or dissipation of seismic energy. Seismic diffusion, having units of m²/s, is a function of space and time. Diffusivity will increase with the seismic moment squared but will decrease as seismic viscosity increases. The velocity of diffusion, in m/s, is useful for monitoring microfracturing and microseismic activity in space and time.

4.4 Input and Output Factor Rankings

Each input factor, besides the number of events, was assigned a value for the relative change occurring on a daily basis. The actual values and actual changes were not provided, but daily changes, read off graphs, were given a value for relative change. These values are:

0 = no change
1 = small increase
2 = small decrease
3 = big increase
4 = big decrease

The number of events factor, factor 1, was the actual number of seismic events recorded that day.

4.4.1 Output Factors

Seismic output was recorded in one of four classes:

0 - seismic activity of < log E 4.0 in magnitude
8 - seismic activity of > log E 4.0 and < log E 4.5 in magnitude
9 - seismic activity of > log E 4.5 and < log E 5.0 in magnitude
10 - seismic activity of > log E 5.0 in magnitude

For the purpose of neural network pattern recognition there are shortcomings with the form of both the input and output data. These shortcomings will be discussed in the remainder of the section.
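The source parameters defined in section 4.3 can be computed directly from an event catalogue. The sketch below is illustrative only: the energy index fit follows the log E versus log Moment comparison of Figure 4.2, the apparent volume uses the relation as reconstructed above (some formulations include a factor of two in the denominator), and all function names and example values are assumptions rather than anything taken from the BMS data.

```python
import numpy as np

def energy_index(energies, moments):
    # Fit log10(E) = c + d*log10(M) over the catalogue, then compare each
    # event's radiated energy with the average energy for its moment.
    log_e, log_m = np.log10(energies), np.log10(moments)
    d, c = np.polyfit(log_m, log_e, 1)
    return np.asarray(energies) / 10.0 ** (c + d * log_m)

def apparent_volume(moment, apparent_stress):
    # V_A = M / sigma_A, as given in section 4.3.3.
    return moment / apparent_stress

def activity_rate(n_events, time_span_days):
    # lambda = N / T for events above the chosen minimum magnitude.
    return n_events / time_span_days

# Example with made-up catalogue values (not BMS data):
E = [1.0e4, 3.0e5, 8.0e3, 2.0e6]      # radiated energy, J
M = [1.0e10, 5.0e11, 2.0e10, 3.0e12]  # seismic moment, N*m
print(energy_index(E, M))
print(apparent_volume(5.0e11, 1.0e5))
print(activity_rate(len(E), 1.0))
```

Working from values computed this way, rather than from hand-assigned change codes, is essentially what section 4.13 recommends for future data collection.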
4.5 Initial Day of Bump Network

An initial network was run on the daily (24 hour) bump factors with very disappointing results. This network achieved an amazing 0.0949% error after just 545 cycles on the training data. While the training data network results were virtually perfect, disappointingly, the network predicted none of the unseen data major bumps. The reason the network could achieve such accuracy after relatively few cycles is that there are so few seismic bumps recorded. This allows an extremely high accuracy, as the network only has to predict relatively few bumps. Moreover, the input data is somewhat rigid, with the changes being limited to a digit classified from 0 to 4. For example, a large increase and an extremely large increase will both be recorded as three. This helps explain the high training accuracy with relatively few cycles and the extremely poor unseen data predictions. The Initial Day of Bump input factor results are listed in Appendix A.

4.6 Input Factor Graph Analysis

The poor unseen data prediction results led to a decision to graph the trends to see if any correlation could be noticed manually. The following graphs plot bumps (8 to 10) and the relative change of the input factors (0 to 4). Specifically, the graph outputs and inputs are listed in Table 4.1.

Table 4.1 Graph Input and Output Values

OUTPUTS
• 0 for no major bump
• 8 for a bump ranging from E4.0 to E4.5
• 9 for a bump ranging from E4.5 to E5.0
• 10 for a bump greater than E5.0

INPUTS
• 0 for no change
• 1 for a small increase
• 2 for a small decrease
• 3 for a large increase
• 4 for a large decrease

These graphs, shown in Figures 4.4 to 4.8, were closely scrutinized to try and discover any trends. Disappointingly, very few trends were exhibited. In general, bumps were associated with large increases and decreases in input factors. However, one interesting observation was that on the day before a bump there was often no seismic activity at all. This corresponds with a reported "silence" that has been noticed prior to rockbursts. Figures 4.4 to 4.8 compare the output, bump, with the corresponding input ranking for each input factor. The output is zero, eight, nine, or ten; zero represents no major bump while eight, nine and ten represent major bumps of increasing orders of magnitude. The input rankings are the same as listed in Table 4.1.

Figure 4.4 Energy Index vs Bump
Figure 4.5 Apparent Volume vs Bump
Figure 4.6 Slope V_A vs Bump

4.7 Bump Predicting Analysis

While it is interesting to notice trends after or when a bump or rockburst has occurred, to be of practical use a neural network must be able to predict bumps before they happen. Therefore, neural networks were run on the days and hours prior to the event to try and correlate trends leading up to the event. The input factors were adjusted as described below. The hourly networks had no success at all in predicting bumps, as there were too few bumps per hour recorded. The following networks will be discussed:

• Day of Bump
• Day Before Bump
• Sum of Week Before Bump

4.8 Input Factor Adjustment

Neural networks work most effectively when the relative weightings of input factors are set in either ascending or descending order of importance.
For example, as in the initial data format, zero should have the least importance and four should have the most importance, with one, two and three having increasing levels of importance relative to the solution. It could also be reversed so that importance decreases, as long as there is a continual trend in the values. Moreover, a neural network will not recognize a number as representing a symbol or meaning such as "small increase". Therefore, numbers representing inputs should have a numerical weighting representing their relative importance and influence. In an attempt to put the input factors into increasing relative importance, each factor was studied in reports and graphed to try and put them in an ascending order of relative importance. For example, if no activity (zero) was thought to be more important than a small increase (one), it was given a higher numerical ranking than a small increase (e.g. three). The changes in input are listed below. Notice that the relative importance varies between the different networks.

Table 4.2 Input and Output Factor Adjustments

Number of Events: if more than 10 events were recorded, the event factor is capped at 10; otherwise the actual number of events is used.

All other input factors (energy index, apparent volume, slope Va, activity rate, and diffusion) were re-ranked for each network as follows:

Day of Bump network: 0 - no change; 1 and 2 - small increase and decrease; 4 and 5 - big increase and decrease (the relative order of increases and decreases varies by factor).

Day Before Bump network: 3 - no change; 1 and 2 - small increase and decrease; 4 and 5 - big increase and decrease.

Sum of Week Before Bump network: 0 - no change; 1 - small increase; -1 - small decrease; 3 - big increase; -3 - big decrease. These daily values are added over the seven days before the bump and divided by seven.

4.9 Adjusted Day of Bump Network

Another network, using the adjusted factors, was run on the day of the bump. Unfortunately, once again, the results on the unseen data were poor, with just one of the four bumps being predicted. Moreover, even the results on the training data were not exceptional, with twenty-six of the thirty-five bumps being predicted. These poor results are somewhat surprising as the neural network, BRPRED2.NET, had just a 3.54% error. The reason for this discrepancy is that there were so few major bumps and the minor bumps were all classified as zero. An input rankings chart was produced to see which factors the neural network placed the most emphasis on. Slope Va had the highest ranking, followed in order by diffusion, activity rate, apparent volume, energy index, and number of events.

4.10 Previous Day Network

After scrutinizing the graphs, two noticeable trends were seen: major bumps often appear to be preceded by either a day of total inactivity (no change) or a day of extreme activity in which a major bump occurred.
The total seismic inactivity correlates with reports from operators that a silence in ground noise is often noticed prior to a rockburst (Mah, 1995). To set up the input factor rankings to include the importance of no change, the zero ranking for no change was raised to a ranking of three. Another factor (factor seven, following day bump) was added in an attempt to convey the importance of previous bumps to the network. This factor was given the following ratings:

Table 4.3 Following Day Bump Input Factor Rankings
• four when a bump occurred on the previous day
• three when a bump occurred two days before
• two when a bump occurred three days before
• one when a bump occurred four days before
• zero when a bump did not occur within the previous four days

The previous day neural network, BRPRED4.NET, had a relatively high training error of 13.21 percent after 10,000 cycles. This relatively high training error is due to the network reading 2.99 for each day where there was total inactivity on the previous day. While most days of total inactivity did not result in a bump on the next day, the neural net recognized the correlation between total inactivity and a bump the following day. While the network did not outright predict any of the unseen data bumps, it did provide an elevated risk assessment of 2.99 for three of the four bumps. The neural network placed, from greatest to least, emphasis on the following factors: energy index, previous bump time allowance, activity rate, slope Va, apparent volume, diffusion, and number of events.

4.11 Cumulative Week Before Network

In an attempt to capture trends in the cumulative buildup of input factors prior to a bump, input factors were averaged for one week prior according to the following format:

Table 4.4 Cumulative Week Input Factor Ratings
• one for a small increase
• negative one for a small decrease
• three for a big increase
• negative three for a big decrease

These input factors, as well as the number of events, were added over seven days and then divided by seven to get a week's cumulative trend which the network could recognize. No trends were recognized manually. While the neural network achieved a 7.73 percent training error after 10,000 cycles, it was unable to predict any of the four bumps in the unseen data, suggesting the network could not recognize any discernible trends. The network placed, from greatest to least, emphasis on the following factors: slope Va, apparent volume, diffusion, activity rate, energy index, and number of events.

4.12 Incremental Value Neural Net Study

At the recommendation of Dr. John Meech (Professor, University of B.C., specializing in expert systems and mineral processing), the inputs were ranked incrementally according to relative change. Specifically, a large decrease was ranked 0, a small decrease 1/4, no change 1/2, a small increase 3/4, and a large increase 1. Two neural nets, day of bump and day before bump, were trained over 10,000 cycles. The results, however, were also poor, with no bumps being predicted in either neural net's test data.

4.13 Recommendations

Even though the neural networks were unable to predict bumps with any consistency, a number of changes to the format of both the input and output data should improve results. The input data, instead of using numbers to represent relative change, should consist of actual numbers with actual change over time.
This would allow the network to obtain a numerical figure for the factor levels and their relative change rather than using a specific number to represent a generalized change in input factor level. Furthermore, the output data, seismic bumps, should have their actual value recorded for each specified time period. For example, if a bump network was being run on a daily basis, the highest bump recorded during the day should be recorded for the location being measured. This would allow the network to predict a seismic bump level for all cases, not just the very high and dangerous ones, and should improve the predicting ability of the network by having it predict all measurements rather than just the dangerous ones. Currently, having few major bumps makes the network, almost by default, predict that no major bump will occur. Having a variable output each day forces the network to estimate a prediction that will not fall into the no major bump default. These changes should increase the network's training error but the predicting ability should be greatly improved.

Neural networks can be nested together to improve prediction results. For example, one neural network may be used to simultaneously analyze the results of other networks. It is suggested that a combination of neural networks be nested together similar to the following:

• Weeks prior to a seismic bump
• Days prior to a seismic bump
• Day before a bump
• Hours prior to bump
• Hour before bump
• Physical characteristics

The actual days, weeks, and hours are still to be determined. However, it is planned to have a separate neural network for each timeframe. Another neural network would then analyze the results obtained from these individual networks. Figure 4.9 illustrates this objective.

Figure 4.9 Nested Neural Network Layout

Neural network input factor rankings, obtained from Brunswick Mining and Smelting, should be adjusted so that a neural network can most efficiently analyze the input factors. This will be done through reports on seismic characteristics, intuition, and any noticeable trends within the data. As of yet, the only noticeable trends have been that often no seismic activity occurs on the day before a bump, or that major seismic activity occurs before bumps. This correlates well with reports that suggest a marked decrease in seismic activity immediately before a bump (Mah, 1995). As can be expected, large changes in the input factors are associated with bumps, but no consistent trend has been noticed in the buildup of input factors prior to a bump. Much more data must be obtained for further analysis. Moreover, it may be advantageous to use other factors such as blasting for additional information.

4.14 Summary

The neural networks were unable to consistently predict when a rockburst would occur. This shows that while neural networks have advantages in recognizing correlation by using input combinations, they still require adequate data, presented in a manner the neural network can optimally recognize, to correctly identify patterns. Simply put, neural networks are not miracle workers; they still require a database of information in which correlation can be recognized. Continued seismic data collection, compiled in a manner that enhances a neural network's recognition ability, should improve results in predicting when a particular rockburst will occur.

Chapter 5: Neural Network Wall Slough Dilution Prediction

5.1 Introduction

Wall slough in open stopes results in significant dilution and economic losses in the mining industry.
A UBC-Canmet joint research project (Clark, 1997) collected equivalent length (m) of overbreak/slough (ELOS) data from 75 different open stopes. Twenty-three input factors were recorded for each stope ELOS measurement. This neural network analysis was conducted to assist in determining which factors have the greatest influence on ELOS.

5.2 Background

Through increased mining and milling costs, lost opportunity costs, and large block size costs, dilution has a major economic impact on mining operations. Dilution consists of planned waste being mined within stoping blocks and unplanned waste sloughing from the stope walls or overbreak being blasted outside the ore boundaries. Mining companies need to be able to estimate how much dilution will occur in stopes depending on the excavation, rock quality and other input factors. The ELOS approach (Clark, 1998) is a new method of estimating the quantity of dilution that will occur in open stopes. The most commonly used methods for dilution estimation are currently the Dilution Approach (Pakalnis, 1986, 1993) and the Stability Graph Method (Matthews et al., 1981).

One of the drawbacks of these methods is that the design curves employ qualitative terms such as potentially stable or unstable. Another drawback is that the dilution factor does not distinguish between varying stope widths. For example, one metre of slough in a five metre wide stope would give 20% dilution, whereas one metre of slough in a 10 metre wide stope would give a dilution of 10%. The ELOS method (Clark, 1998) of measuring dilution addresses these drawbacks by measuring the wall slough in metres off the stope walls. This provides a number from which dilution can be calculated for that particular stope width, and it provides a numerical estimate of dilution rather than a qualitative one.

The ELOS database was developed through 47 different stope surveys from six different mines: Trout Lake Mine, Manitoba (four stope surveys); Ruttan Mine, Manitoba (four stope surveys); H-W Mine, B.C. (six stope surveys); Contact Lake Mine, Saskatchewan (two stope surveys); Lupin Mine, Northwest Territories (21 stope surveys); and Detour Lake Mine, Ontario (10 stope surveys).

5.3 Linear Neural Net Processing

Seventy-five studies is simply not enough examples for a neural network to distinguish the relative importance of 23 input factors. Therefore, to be consistent with current dilution analysis curves, hydraulic radius with either N or RMR would be used as a constant along with each other input factor. N or RMR would be used as a constant for rock quality and hydraulic radius would be used as an opening shape factor. Each neural network run, along with a brief description of each input factor, is listed below.

Current dilution estimation graphs use either N or RMR and hydraulic radius. Hydraulic radius (area/perimeter) is a shape factor while N and RMR are rock quality factors. A linear scale neural network using N, RMR, and hydraulic radius (HR) was run primarily to see which factor (N or RMR) would be most suitable for the rock quality constant input. The database was divided into 60 stopes for the training data and 15 stopes for the test data. After running several best net searches, resulting in best nets ranging between 6 and 8.65, the neural networks were all trained to a seven percent error. The test neural net results, with the exception of stopes two and three, were fairly good.
From the rank inputs chart, the neural network relied much more upon the RMR factor (0.45) than the Q factor (0.27). Therefore, the RMR factor was initially used as the rock quality constant, with HR as the shape factor constant, when analyzing the relative importance of the remaining input factors. A two layer neural network was trained to seven percent error for each remaining input with RMR and HR as constants. The following charts were run for each neural net:

• Rank inputs chart
• Input test data
• Test data chart
• Input training data
• Training data chart
• Regression analysis summary sheet

Several neural networks using N rather than RMR were run with HR and a variable input. However, none were able to reach a seven percent error and the results could not be accurately compared with the RMR results. Therefore, the results using N on a linear scale were not used. The reason the N data could not reach a seven percent training error on a linear scale is that the N factor is based on the Q rock quality factor (N = Q x A x B x C), where A, B and C are adjustment factors relating to joint orientation, dip, and stress ratios, and Q is recorded on a log scale. For example, the relationship between RMR and Q is roughly RMR = 9 lnQ + 44. Therefore, RMR appears more suited to linear scale processing, as it is recorded on a linear scale between zero and 100.

5.4 Log Scale Neural Net Processing

The Braincel software package allows for either linear or log scale processing of input factors. Neural nets run using N as the rock quality constant could not reach the seven percent training error with linear scale processing. However, when using log scale processing, the neural nets with N as the rock quality factor were able to reach the seven percent training error. Furthermore, neural nets using RMR as the rock quality constant showed improved test data correlation when run on the log scale. Therefore, three sets of neural nets were run to analyze the effect of each input. These sets are:

• Each input factor with N and HR constants using log scale processing
• Each input factor with RMR and HR constants using log scale processing
• Each input factor with RMR and HR constants using linear processing

5.5 Input Relevance Determination

The input weighting and the R² correlation on the unseen data were the principal characteristics used when estimating the relevance of each input on ELOS. The input weighting shows how much emphasis the neural network placed on each input factor after analyzing the training data. Theoretically, greater relevance of an input factor should increase its weighting. The R² factor is called the coefficient of determination. This factor compares the neural network ELOS prediction with the actual measured ELOS value and results in a value from zero to one. A higher value represents greater correlation between the ELOS values. For example, a value of one would represent perfect correlation whereas a value of zero represents no correlation. In the regression analysis, the squared difference between the ELOS prediction and the actual ELOS is calculated for each point, and the sum of these squared differences is called the residual sum of squares. The total sum of squares is the sum of the squared differences between the actual ELOS values and the average of the ELOS values. The smaller the residual sum of squares is relative to the total sum of squares, the larger the R² value.
As the R² value is essentially based upon squared differences, outlier points may have a large effect on its value. For example, many of the neural nets run were virtually perfect on the test data with the exception of stope two (an outlier point), resulting in a relatively low R² value. Therefore, one should examine the actual ELOS points and predictions when determining the effect of an input rather than relying solely on the R² value. The greatest relevance was placed on the linear RMR, HR, and variable input data.

5.6 N, HR, and Variable Input Log Scale Processing

Excluding the neural net with RMR, the five neural nets showing the highest R² correlation are highlighted on the summary sheet that follows (Table 5.1). These factors are: surface, height/length ratio, offset distance, powder factor, and time before survey. In all these neural nets, however, the input factor had a relatively low input weighting, with N having the greatest weighting. This implies that the input factors did not greatly influence the neural net results. The input factors (besides RMR) which had the greatest weightings were:

Input                Weighting  R²
dip                  0.38       0.15
blasthole diameter   0.38       0.43
blasthole length     0.38       0.14
blasthole layout     0.38       0.24
# of blasts          0.41       0.14

With the exception of blasthole diameter, the neural nets with high input weightings did poorly on the test data correlation. This apparent contradiction of the R² value going down as the input weighting goes up suggests that the rock quality factor, N, has by far the greatest influence on ELOS. However, as these neural nets were processed on a log scale, the input factor data do not appear suitable for log processing while the N data was suited for log processing. Therefore, the results were inconclusive in determining which input factors had the greatest influence on ELOS.

Table 5.1: Log Processed N, HR, & Variable Input Neural Net Summary Sheet

Log NNet  Inputs                         N     HR    Input  R² Correlation
1         N, HR, RMR                     0.19  0.32  0.49   0.75
2         N, HR, Surface                 0.55  0.22  0.22   0.53
3         N, HR, Depth                   0.45  0.35  0.20   0.49
4         N, HR, Dip                     0.34  0.28  0.38   0.15
5         N, HR, Width                   0.35  0.33  0.32   0.27
6         N, HR, Strike Length           0.40  0.40  0.20   0.23
7         N, HR, Stope Height            0.47  0.30  0.23   0.24
8         N, HR, Height/Length Ratio     0.54  0.26  0.20   0.48
9         N, HR, Surface Character       0.39  0.25  0.36   0.08
10        N, HR, Undercutting Severity   0.43  0.33  0.25   0.35
11        N, HR, Blasthole Diameter      0.41  0.22  0.38   0.43
12        N, HR, Blasthole Length        0.38  0.24  0.38   0.14
13        N, HR, Hole/String Diameter    0.34  0.33  0.33   0.15
14        N, HR, Spacing/Burden Ratio    0.74  0.22  0.04   0.42
15        N, HR, Offset Distance         0.44  0.29  0.27   0.43
16        N, HR, Powder Factor           0.59  0.31  0.10   0.48
17        N, HR, Blasthole Layout        0.28  0.35  0.38   0.24
18        N, HR, Stope Support           0.35  0.34  0.31   0.21
19        N, HR, Time Before Survey      0.56  0.18  0.25   0.59
20        N, HR, Number of Blasts        0.36  0.22  0.41   0.14
21        N, HR, Mining in Vicinity      0.49  0.29  0.22   0.26

* The five neural nets with the highest R squared have been highlighted.
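The R² values reported in these summary sheets follow the residual and total sum of squares definition given in section 5.5. A minimal sketch of that calculation is shown below; the function and the example values are illustrative only and are not taken from the stope database.

```python
import numpy as np

def r_squared(elos_actual, elos_predicted):
    # Coefficient of determination for ELOS predictions.
    actual = np.asarray(elos_actual, dtype=float)
    predicted = np.asarray(elos_predicted, dtype=float)
    ss_res = np.sum((actual - predicted) ** 2)       # residual sum of squares
    ss_tot = np.sum((actual - actual.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

# A single badly missed outlier (such as stope two) dominates ss_res and pulls
# R squared down even when the remaining predictions are within 0.1 m.
print(r_squared([0.2, 0.5, 1.0, 2.0, 4.0], [0.3, 0.6, 1.1, 1.9, 1.5]))
```

This is why, as noted above, the individual ELOS points should be examined alongside the R² value when judging the effect of an input.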
5.7 Log Processed RMR, HR, and Variable Input Factor

All the log RMR processed neural nets had very high R² correlation, ranging from 0.58 to 0.78. The five highest R² correlation neural nets have been highlighted on the summary sheet that follows (Table 5.2). The input factors in these five neural nets are: surface, strike length, stope height, offset distance, and time before survey. The input weightings, as in all of these log RMR processed neural nets, were very low, with the greatest weighting placed on the RMR factor. When the input weightings and R² test correlations were examined, no factors were conclusively determined to have a predominant influence upon ELOS prediction. As with the N log processing, while the RMR data was suitable for log processing, the input data factors do not appear to be suitable for log processing.

Table 5.2: Log Processed RMR, HR, & Input Neural Net Summary Sheet (input weightings and test R²)

Log NNet  Inputs                           RMR %     HR %      Input %   R² Test Correlation
1         RMR, HR, Surface                 0.522472  0.320225  0.157303  0.77
2         RMR, HR, Depth                   0.505319  0.239362  0.255319  0.75
3         RMR, HR, Dip                     0.481865  0.261658  0.256477  0.66
4         RMR, HR, Width                   0.481865  0.264249  0.253886  0.69
5         RMR, HR, Strike Length           0.493298  0.270777  0.235925  0.77
6         RMR, HR, Stope Height            0.429348  0.353261  0.217391  0.78
7         RMR, HR, Height/Length Ratio     0.480226  0.305085  0.214689  0.75
8         RMR, HR, Surface Character       0.492063  0.269841  0.238095  0.58
9         RMR, HR, Undercut Severity       0.463054  0.310345  0.226601  0.68
10        RMR, HR, Blasthole Diameter      0.463054  0.246305  0.290640  0.74
11        RMR, HR, Blasthole Length        0.534759  0.278075  0.187166  0.62
12        RMR, HR, Hole String Diameter    0.410891  0.301980  0.287129  0.76
13        RMR, HR, Spacing/Burden Ratio    0.458015  0.302799  0.239186  0.74
14        RMR, HR, Offset Distance         0.450262  0.324607  0.225131  0.77
15        RMR, HR, Powder Factor           0.467033  0.329670  0.203297  0.73
16        RMR, HR, Blasthole Layout        0.402715  0.294118  0.303167  0.65
17        RMR, HR, Stope Support           0.474227  0.298969  0.226804  0.71
18        RMR, HR, Time Before Survey      0.491049  0.207161  0.301790  0.77
19        RMR, HR, Number of Stope Blasts  0.463918  0.273196  0.262887  0.71
20        RMR, HR, Mining in Vicinity      0.513228  0.301587  0.185185  0.75
21        RMR, HR, N                       0.486911  0.319372  0.193717  0.75

The neural nets with the five highest test correlations have been highlighted.

Figure 5.2: Relative Input Weightings for the Log RMR Series

5.8 Linear Processed RMR, HR, and Variable Input Factor

The linear processed neural net results were very promising. While the overall R² results were lower than for the log processed neural nets, the input factors had greater weightings and, in general, an increase in weighting correlated with an increase in the R² value. These reasons are why the greatest emphasis was put on the linear processing results when determining which input factors had the greatest influence upon ELOS. The neural nets with the highest R² values have been highlighted on the summary sheet immediately following. These factors are: blasthole diameter, blasthole layout, blasthole length, time before survey, and surface. The five factors with the greatest input weightings are blasthole diameter, blasthole length, surface character, strike length, and blasthole layout. These results indicate that blasting characteristics have a major influence upon ELOS. Each input factor will be discussed.
Table 5.3: Linear Processed RMR, HR, & Input Neural Net Summary Sheet (input weightings and test R²)

Linear NNet  Inputs                           RMR %     HR %      Input %   R² Test Correlation
1            RMR, HR, Surface                 0.505882  0.270588  0.223529  0.54
2            RMR, HR, Depth                   0.479218  0.273839  0.246944  0.31
3            RMR, HR, Dip                     0.454082  0.326531  0.219388  0.44
4            RMR, HR, Width                   0.489011  0.296703  0.214286  0.41
5            RMR, HR, Strike Length           0.432292  0.218750  0.348958  0.22
6            RMR, HR, Stope Height            0.412371  0.314433  0.273196  0.26
7            RMR, HR, Height/Length Ratio     0.496933  0.300613  0.202454  0.23
8            RMR, HR, Surface Character       0.450262  0.193717  0.356021  0.42
9            RMR, HR, Undercut Severity       0.454106  0.299517  0.246377  0.14
10           RMR, HR, Blasthole Diameter      0.417526  0.175258  0.407216  0.83
11           RMR, HR, Blasthole Length        0.366197  0.262911  0.370892  0.61
12           RMR, HR, Hole String Diameter    0.435897  0.225641  0.338462  0.28
13           RMR, HR, Spacing/Burden Ratio    0.427861  0.283582  0.288557  0.21
14           RMR, HR, Offset Distance         0.508380  0.251397  0.240223  0.22
15           RMR, HR, Powder Factor           0.448649  0.329730  0.221622  0.46
16           RMR, HR, Blasthole Layout        0.404624  0.254335  0.341040  0.73
17           RMR, HR, Stope Support           0.433604  0.268293  0.298103  0.22
18           RMR, HR, Time Before Survey      0.460094  0.225352  0.314554  0.50
19           RMR, HR, Number of Stope Blasts  0.474227  0.309278  0.216495  0.23
20           RMR, HR, Mining in Vicinity      0.431472  0.309645  0.258883  0.34
21           RMR, HR, N                       0.454545  0.287879  0.257576  0.30

The neural nets with the five highest test correlations have been highlighted.

5.9 Individual Input Analysis

To see which factors have the greatest influence upon ELOS, each factor was studied based upon the linear RMR neural net results, the log RMR neural net results, the log N neural net results, and a regression analysis of the test results. The greatest emphasis was placed upon the linear neural net results. For comparison, the rock quality factor and HR statistics without another input are listed below.

Summary Statistics
Neural Net          Rock Quality %  HR %  R²
RMR, HR (Linear)    0.57            0.43  0.13
RMR, HR (Log)       0.59            0.41  0.76
N, HR (Log)         0.80            0.20  0.52

5.9.1 Surface

The surface term refers to either the footwall or hanging wall of the surveyed stope. Number one refers to the hanging wall and number two refers to the footwall. The surface factor, in all three runs, had a high R² correlation. However, its input weighting was relatively low. On the linear scale the surface factor had an R² correlation of 0.54 and an input weighting of 0.22, and the factor was one of the top five R² correlation neural nets in each of the different types of neural nets. Therefore, it appears that the footwall or hanging wall does have an effect on ELOS but, as the weightings were relatively low, the amount of significance appears to be less than for some other factors. The effect of the surface factor will be largely affected by the dip of the deposit.

Summary Statistics
Neural Net                 Rock Quality %  HR %  Input %  R²
RMR, HR, Surface (Linear)  0.51            0.27  0.22     0.54
RMR, HR, Surface (Log)     0.52            0.32  0.16     0.77
N, HR, Surface (Log)       0.55            0.22  0.22     0.53

Associated with each input are six charts, two for each neural net series. The first two consist of a linear series input weighting chart and a linear series test data ELOS versus neural net chart. The next, or middle, two consist of the log RMR series input weightings chart and a log RMR series test data ELOS versus neural net chart. The last, or bottom, two charts consist of the log N series input weightings chart and a log N series test data ELOS versus neural net chart.
Figure 5.4 Surface Summary Charts (rank inputs and ELOS test prediction charts for the linear RMR, log RMR, and log N series)

5.9.2 Depth

The depth factor refers to the depth, in metres, of the stope being studied. This factor had an average effect on ELOS. Even though it did not reach the top five correlations, it had relatively high R² values. The weightings, however, were relatively low. Therefore, while depth appears to have an effect on ELOS, it does not seem to be a predominant factor. This result, somewhat surprising as pressure increases with depth, suggests that the magnitude of the tensile pressure surrounding stope walls may not have a great influence upon wall slough.

Summary Statistics
Neural Net               Rock Quality %  HR %  Input %  R²
RMR, HR, Depth (Linear)  48              27    25       0.31
RMR, HR, Depth (Log)     51              24    26       0.75
N, HR, Depth (Log)       45              35    20       0.49

Figure 5.5 Depth Summary Charts (rank inputs and ELOS test prediction charts for the linear RMR, log RMR, and log N series)

5.9.3 Dip

Dip refers to the vertical angle of repose of the stope orebody. According to the neural net results, dip is not an overly significant factor for ELOS. It was an average factor both in weighting and in R² correlation. The neural net, as both hanging wall and footwall ELOS results were recorded, would have trouble interpreting results, as each wall should have different amounts of ELOS. The combination with the surface factor on the multi-layered networks should have helped remedy this situation, but the results may not be as significant as the actual effect on ELOS. Therefore, while the neural net does not recognize dip as having a large effect on ELOS, one must recognize the limitations of neural nets and realize that dip may actually have a larger effect on ELOS.

Summary Statistics
Neural Net             Rock Quality %  HR %  Input %  R²
RMR, HR, Dip (Linear)  45              32    22       0.44
RMR, HR, Dip (Log)     48              26    26       0.66
N, HR, Dip (Log)       34              28    38       0.15

Figure 5.6 Dip Summary Charts (rank inputs and ELOS test prediction charts for the linear RMR, log RMR, and log N series)

5.9.4 Adjusted Dip

To see if the neural net would place greater emphasis upon dip, it was recorded in a manner more discernible for the neural net. Dip was recorded in the following format:

• The footwall was recorded with the same dip as the deposit
• The hanging wall was recorded as 180 minus the dip of the deposit

This format results in a larger number being recorded for the hanging wall, with the same number being recorded for the footwall. For example, a deposit having a dip of 60 degrees would have the dip recorded as 60 for the footwall and 120 for the hanging wall (see the sketch below).
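As a small illustration of this encoding, the sketch below reproduces the adjusted-dip rule just described; the function name and wall labels are hypothetical and are not part of the original database format.

```python
def adjusted_dip(deposit_dip_deg, wall):
    # Footwall keeps the deposit dip; the hanging wall is recorded as
    # 180 minus the dip, so the two walls receive distinct, ordered values.
    return float(deposit_dip_deg) if wall == "footwall" else 180.0 - deposit_dip_deg

print(adjusted_dip(60, "footwall"), adjusted_dip(60, "hanging wall"))  # 60.0 120.0
```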
This change in dip recording significantly improved the neural net correlation and dip weightings. The most notable improvement was the R² increase from 0.44 to 0.75 on the linear neural net. This example shows the importance of presenting the data in a manner in which the neural net can recognize differences. It appears that dip may have a relatively significant influence upon ELOS, and future stope surveys should record the dip separately for each wall in a manner similar to this adjusted dip format. As adjusted dip is an adjustment of the original database, further analysis was not conducted on this input factor.

Summary Statistics
Neural Net                      Rock Quality %  HR %  Input %  R²
RMR, HR, Adjusted Dip (Linear)  44              23    32       0.75
RMR, HR, Adjusted Dip (Log)     48              24    28       0.75
N, HR, Adjusted Dip (Log)       32              33    35       0.64

Figure 5.7 Adjusted Dip Summary Charts (rank inputs and ELOS test prediction charts for the linear RMR, log RMR, and log N series)

5.9.5 Width

Width refers to the ore thickness of the stope being mined. The width factor had average R² correlation and average input weightings. Therefore, the neural net did not place special emphasis on the width factor. This is very significant with respect to dilution. A narrow stope should have approximately the same ELOS as a wide stope but, due to the narrower orebody, will have a significantly higher dilution. This is one major advantage of determining ELOS factors rather than straight dilution. The database, consisting primarily of narrow vein stopes (53 of 75 stopes < 5 metres), would be improved if a greater number of wider stopes were included.

Summary Statistics
Neural Net               Rock Quality %  HR %  Input %  R²
RMR, HR, Width (Linear)  49              30    21       0.41
RMR, HR, Width (Log)     48              26    25       0.69
N, HR, Width (Log)       35              33    23       0.27

Figure 5.8 Width Summary Charts (rank inputs and ELOS test prediction charts for the linear RMR, log RMR, and log N series)

5.9.6 Strike Length

Strike length refers to the length of the stope parallel to the strike. Strike length, with linear processing, had a strong influence of 0.35 on the output. Furthermore, with the RMR log processing, strike length had a very high R² correlation of 0.77. Throughout the neural net runs HR had a minor effect on ELOS. This minor influence of HR, compared to the other shape factors, strike length and stope height, is probably due to the HR factor having a smaller relative change than the other shape factors. For example, a 20 metre long by 50 metre high stope would have an HR of 7.1, but if the strike length is doubled to 40 metres the HR only increases to 11.1. Besides HR, stope height also had a minor effect on ELOS.
Therefore, it appears that the neural net recognizes strike length as the most important stope shape factor for determining ELOS. While the neural net may place extra emphasis upon strike length, this is a factor which can be controlled by design and it should be considered a very relevant factor.

Summary Statistics

Neural Net                             Rock Quality %   HR %   Input %    R²
RMR, HR, Strike Length (Linear)              40          40       24     0.22
RMR, HR, Strike Length (Log)                 43          27       20     0.77
N, HR, Strike Length (Log)                   49          22       35     0.23

Figure 5.9 Strike Length Summary Charts (rank inputs and ELOS vs. neural net test data prediction charts for the linear RMR, log RMR, and log N series)

5.9.7 Stope Height

Stope height refers to the vertical extent of the stope. The stope height factor, in all of the runs, had a minor influence on ELOS prediction. It did, however, have a high R² correlation of 0.78 on the log processed RMR neural net. Overall, it appears that stope height has a minor influence on ELOS. This may allow more flexibility when designing the vertical extent of stopes. Similar to strike length, stope height is more sensitive to changes in dimension than hydraulic radius. The fact that stope height appears to have a lesser effect upon ELOS than strike length raises the question of why; more data is necessary to confirm that this is the case and to explain the difference.

Summary Statistics

Neural Net                      Rock Quality %   HR %   Input %    R²
RMR, HR, Height (Linear)              41          31       27     0.26
RMR, HR, Height (Log)                 43          35       22     0.78
N, HR, Height (Log)                   47          30       23     0.24

Figure 5.10 Stope Height Summary Charts (rank inputs and ELOS vs. neural net test data prediction charts for the linear RMR, log RMR, and log N series)

5.9.8 Height/Length Ratio

The height/length ratio is a shape factor defined as stope height divided by stope length. The height/length ratio factor had a minor influence on ELOS in all three runs: the input weightings were low in all three neural nets and the R² values were low for the linear and N log processed neural nets. Similar to stope height and strike length, the height/length ratio is more sensitive to changes in stope dimension than hydraulic radius. The values of the height/length ratio factor ranged from 0.4 to 7.4. The effect of this shape factor may have been overshadowed by HR, and it does not appear to have a great influence on ELOS.
Summary Statistics

Neural Net                      Rock Quality %   HR %   Input %    R²
RMR, HR, H/L Ratio (Linear)           50          30       20     0.23
RMR, HR, H/L Ratio (Log)              48          31       21     0.75
N, HR, H/L Ratio (Log)                54          26       20     0.48

Figure 5.11 Height/Length Ratio Summary Charts (rank inputs and ELOS vs. neural net test data prediction charts for the linear RMR, log RMR, and log N series)

5.9.9 Surface Character

The surface character term refers to the shape of the stope wall. A one refers to a smooth, straight stope wall, a three refers to a disjointed, wavy, and rolling stope wall, and a two refers to a stope wall in between one and three. The linear processed neural net had a high R² value of 0.42 as well as a relatively large weighting of 0.36 on surface character. The log processed neural nets placed little emphasis on surface character and there was little correlation on the test results. One problem is that these numbers are symbols for stope wall condition, but the neural net will read them as a scalar value; the wall condition should therefore be represented in a form that reflects its actual scale (one possible encoding is sketched after this section's charts). Moreover, of the 60 training stopes, only seven stopes were classified as 2, two stopes were classified as 3, and 51 stopes were classified as 1. There simply were not enough stope walls in poor condition for the neural net to assess the relevance of surface character. It is interesting to note that stopes three and four, unlike in many of the other neural nets, had accurate predictions when the surface character factor was used; these stopes had a wall condition of 2. The surface character factor appears to have an important effect on ELOS. However, more data from stopes with poor walls, with the wall condition recorded in a quantitative, scalar manner, will be necessary to fully assess the effect of surface character on ELOS.

Summary Statistics

Neural Net                                Rock Quality %   HR %   Input %    R²
RMR, HR, Surface Character (Linear)             45          19       36     0.42
RMR, HR, Surface Character (Log)                49          27       24     0.58
N, HR, Surface Character (Log)                  39          25       36     0.08

Figure 5.12 Surface Character Summary Charts (rank inputs and ELOS vs. neural net test data prediction charts for the linear RMR, log RMR, and log N series)
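One way to avoid treating the wall-condition categories as an arbitrary scalar is to encode them explicitly, for example as a one-hot vector or as a measured roughness value. A minimal sketch of the one-hot option (an assumption for illustration; the thesis fed the raw 1 to 3 codes to the network):

```python
def one_hot_surface(category):
    """Encode wall condition 1 (smooth), 2 (intermediate), or 3
    (disjointed/wavy) as three separate 0/1 inputs so the network does
    not assume the categories are evenly spaced on a numeric scale."""
    if category not in (1, 2, 3):
        raise ValueError("surface character must be 1, 2 or 3")
    return [1 if category == c else 0 for c in (1, 2, 3)]

print(one_hot_surface(2))  # [0, 1, 0]
```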
5.9.10 Undercutting Severity

Undercutting severity refers to the amount of waste mining during stope development that undercuts the stope hanging wall or footwall. The undercutting factor was given a ranking from one to five, with one representing little or no undercut and five representing severe undercut. This data is presented in a scalar fashion which is recognizable to a neural net. In all three neural nets undercut severity had a relatively minor influence on the output, and the R² values were also comparatively low. This is important given increased mechanization, as it suggests that an increase in stope undercutting and development should not have a dramatic effect on ELOS or stope dilution. One shortcoming of the recording of this factor is that it does not distinguish where the stope undercut occurs. For example, if the undercut were to occur beneath the brow, dilution would be expected to be higher than for an undercut occurring in the footwall. Moreover, if the rock quality is high, greater undercutting can be achieved without incurring a large increase in dilution, whereas if the rock quality is poor, a greater amount of undercutting would be expected to result in increased dilution. Further data is necessary to investigate this input factor as well as its combined influence with other input factors.

Summary Statistics

Neural Net                         Rock Quality %   HR %   Input %    R²
RMR, HR, U/C Severity (Linear)           45          30       25     0.14
RMR, HR, U/C Severity (Log)              46          31       23     0.68
N, HR, U/C Severity (Log)                43          33       25     0.35

Figure 5.13 Undercut Severity Summary Charts (rank inputs and ELOS vs. neural net test data prediction charts for the linear RMR, log RMR, and log N series)

5.9.11 Blasthole Diameter

Blasthole diameter refers to the diameter of the holes drilled for blasting. Blasthole diameter appears to be the most significant factor upon ELOS besides rock quality. It had a high R² correlation and a high weighting in all three neural net runs. In particular, for the linear processed neural net, blasthole diameter had an R² correlation of 0.83 with a weighting of 0.41; this turned out to be the highest R² ranking of any of the neural nets. Only stope nine had significant deviation between the actual ELOS and the neural net ELOS. Particularly interesting was the accuracy of the ELOS predictions for stopes four and five. The other neural nets had difficulty achieving accurate ELOS predictions for these stopes but, as they had large 100 mm diameter blastholes, the neural net was able to accurately predict the ELOS in these stopes using the blasthole diameter factor. Larger diameter blastholes are often drilled deeper than smaller diameter holes, which may result in wall damage due to drillhole deviation. Moreover, larger diameter holes will cause greater wall damage through increased explosive force in the blasts. The accurate test results, along with the strong influence on the output, suggest that blasthole diameter is a very important factor in estimating wall slough and dilution and should be considered when selecting drills for blasthole production.
Summary Statistics

Neural Net                                Rock Quality %   HR %   Input %    R²
RMR, HR, Blasthole Diameter (Linear)            42          18       41     0.83
RMR, HR, Blasthole Diameter (Log)               46          25       29     0.74
N, HR, Blasthole Diameter (Log)                 41          22       38     0.43

Figure 5.14 Blasthole Diameter Summary Charts (rank inputs and ELOS vs. neural net test data prediction charts for the linear RMR, log RMR, and log N series)

5.9.12 Average Blasthole Length

Average blasthole length refers to the length of the holes drilled for blasting. Blasthole length also appears to have a significant effect upon ELOS. For the linear processed neural net, blasthole length had a very high input weighting of 0.37 with a very high R² correlation of 0.61. Similar to blasthole diameter, with the exception of stope nine, all test results were extremely accurate, including stopes three, four, and five. Stopes four and five had long blastholes of 25 metres, which contributed to the ELOS. The accurate test results, coupled with the strong influence on the output, indicate that blasthole length is a significant factor when determining ELOS estimates. This makes sense, as an increase in drillhole length should result in increased drillhole deviation and blast damage, which would increase ELOS. Once again, the effect of blasthole length on ELOS should be considered when selecting drilling equipment.

Summary Statistics

Neural Net                              Rock Quality %   HR %   Input %    R²
RMR, HR, Blasthole Length (Linear)            37          26       37     0.61
RMR, HR, Blasthole Length (Log)               53          28       19     0.62
N, HR, Blasthole Length (Log)                 38          24       38     0.14

Figure 5.15 Blasthole Length Summary Charts (rank inputs and ELOS vs. neural net test data prediction charts for the linear RMR, log RMR, and log N series)

5.9.13 Hole/String Diameter Ratio

Hole/string diameter ratio refers to the diameter of the drill bit divided by the diameter of the drill steel or drill rod. Theoretically, a greater ratio would lead to greater drillhole deviation, increasing (or in some cases decreasing) wall slough. The test results were not as good as for the other drill factors, blasthole length and blasthole diameter, with significant deviation in stopes five, six, and nine. This may be because the values recorded in the database range only from 1.3 to 2.0, resulting in a small spread of input values. While the linear processed neural net placed a high input weighting of 0.34 on this factor, the R² correlation was a low 0.28. Overall, the hole/string diameter ratio factor had a minor influence on ELOS.
Summary Statistics

Neural Net                                  Rock Quality %   HR %   Input %    R²
RMR, HR, Hole/String Diameter (Linear)            44          23       34     0.28
RMR, HR, Hole/String Diameter (Log)               41          30       29     0.76
N, HR, Hole/String Diameter (Log)                 34          33       33     0.15

Figure 5.16 Hole/String Diameter Ratio Summary Charts (rank inputs and ELOS vs. neural net test data prediction charts for the linear RMR, log RMR, and log N series)

5.9.14 Spacing/Burden Ratio

Spacing/burden ratio refers to the distance between drillholes in a row divided by the distance between drillhole rows. The spacing/burden ratio factor appears to have little effect on ELOS. In all three neural nets the R² value was relatively low and little weighting was placed on the spacing/burden ratio factor. This low weighting may partly be attributed to the spacing/burden ratio values ranging only from 0.8 to 1.5. Theoretically, a greater burden should increase wall damage through increased confinement on the blast.

Summary Statistics

Neural Net                            Rock Quality %   HR %   Input %    R²
RMR, HR, Spacing/Burden (Linear)            43          28       29     0.21
RMR, HR, Spacing/Burden (Log)               46          30       24     0.74
N, HR, Spacing/Burden (Log)                 74          22        4     0.42

Figure 5.17 Spacing/Burden Ratio Summary Charts (rank inputs and ELOS vs. neural net test data prediction charts for the linear RMR, log RMR, and log N series)

5.9.15 Offset Distance

Offset distance refers to the distance from the drillhole bottoms to the stope wall rock. Many mines use an offset distance to reduce blast damage to the wall waste rock, on the assumption that much of the ore that was not blasted will slough into the stope. It is essentially a balance between the revenue lost through reduced ore recovery and the reduced cost of less waste dilution. As ELOS surveys measure the difference between the planned blast excavation and the actual excavation, offset distance should capture changes in the amount of slough of ore left behind compared with waste wall rock. This difference would largely depend upon differences in rock quality between the ore and waste rock, as well as the parallel jointing that is often found in the waste rock. While offset distance had an R² of 0.77 on the log neural nets, little weighting was placed on it. Moreover, for the linear processed neural net, a low R² correlation of 0.21 was achieved with a low input weighting of 0.24. Therefore, it appears that offset distance has a minor effect on ELOS. This is not surprising, as this factor only measures differences between ore and waste slough.
Summary Statistics

Neural Net                            Rock Quality %   HR %   Input %    R²
RMR, HR, Offset Distance (Linear)           51          25       24     0.22
RMR, HR, Offset Distance (Log)              45          32       23     0.77
N, HR, Offset Distance (Log)                44          29       27     0.49

Figure 5.18 Offset Distance Summary Charts (rank inputs and ELOS vs. neural net test data prediction charts for the linear RMR, log RMR, and log N series)

5.9.16 Powder Factor

Powder factor refers to the mass of explosives per tonne of ore blasted (kg/tonne). Typically, a higher powder factor is required for narrow stopes with hard, tough ore. The neural net test results were fairly accurate, with relatively high R² values and significant deviation only in stopes four and five. Little emphasis, however, was placed on powder factor, indicating only a moderate influence on ELOS. This may be because narrow vein stopes are usually hosted in competent waste rock, which resists blast damage, counteracting the blast damage that would occur in less competent rock.

Summary Statistics

Neural Net                          Rock Quality %   HR %   Input %    R²
RMR, HR, Powder Factor (Linear)           45          33       22     0.46
RMR, HR, Powder Factor (Log)              47          33       20     0.73
N, HR, Powder Factor (Log)                59          31       10     0.48

Figure 5.19 Powder Factor Summary Charts (rank inputs and ELOS vs. neural net test data prediction charts for the linear RMR, log RMR, and log N series)

5.9.17 Blast Layout Adjacent to Contact

Blast layout adjacent to contact refers to the drillhole direction relative to the wall rock or dip of the orebody. As shown below, a one refers to drilling parallel to the hanging wall or footwall, a two refers to drilling semi-parallel to the hanging wall or footwall, and a three refers to fan drilling, often perpendicular to the wall rock.

Parallel Drilling (1)   Semi-parallel Drilling (2)   Ring Drilling (3)

Despite the majority of the stopes having parallel drilling, all three neural nets placed elevated emphasis upon the blasthole layout factor, with a very high R² test correlation of 0.73 on the linear processed neural net. It is interesting that stope nine, an outlier point with a high ELOS of six metres, was fan drilled; this was distinguished successfully with this input factor. This is an important result, as the direction of drilling is a design controlled variable. The three factors blasthole diameter, blasthole length, and blasthole layout, through high input weightings and high test R² correlations, are important factors in ELOS determination and help explain the test data outlier points.
Summary Statistics

Neural Net                            Rock Quality %   HR %   Input %    R²
RMR, HR, Blasthole Layout (Linear)          40          25       34     0.73
RMR, HR, Blasthole Layout (Log)             40          29       30     0.65
N, HR, Blasthole Layout (Log)               28          35       38     0.24

Figure 5.20 Blasthole Layout Summary Charts (rank inputs and ELOS vs. neural net test data prediction charts for the linear RMR, log RMR, and log N series)

5.9.18 Stope Support

Stope support refers to whether or not artificial support (point anchor bolting or cable bolting) was used in the stope walls. A one represents stope support whereas a two represents no support. The stope support factor had moderate input weightings coupled with relatively low R² values. While stope support initially appears to have a minor effect on ELOS, this test may not accurately represent the importance of stope support, as only 12 of the 75 stopes in the study had artificial stope support.

Summary Statistics

Neural Net                          Rock Quality %   HR %   Input %    R²
RMR, HR, Stope Support (Linear)           43          27       30     0.22
RMR, HR, Stope Support (Log)              47          30       23     0.71
N, HR, Stope Support (Log)                35          34       31     0.21

Figure 5.21 Stope Support Summary Charts (rank inputs and ELOS vs. neural net test data prediction charts for the linear RMR, log RMR, and log N series)

5.9.19 Time Between Initial Blast and CMS Survey

The time between initial blast and CMS survey factor refers to the number of days between the initial stope blast and the CMS survey that picks up the final open stope contours. The ELOS neural net test results all had accurate R² values and the time factor had high input weightings. The time factor, clearly an important and relevant factor, can be controlled to a certain extent by rapid and immediate removal of broken ore. This factor may also provide insight into the rate of wall slough.

Summary Statistics

Neural Net                               Rock Quality %   HR %   Input %    R²
RMR, HR, Time Before Survey (Linear)           46          23       31     0.77
RMR, HR, Time Before Survey (Log)              49          21       25     0.56
N, HR, Time Before Survey (Log)                56          18       30     0.59

Figure 5.22 Time Between Initial Blast and Stope Survey Summary Charts (rank inputs and ELOS vs. neural net test data prediction charts for the linear RMR, log RMR, and log N series)

5.9.20 Number of Stope Blasts Required to Complete Stope

The number of stope blasts refers to the number of blasts required to create the surveyed stope.
The linear processed neural net resulted in a low input weighting of 0.22 and a low R² correlation of 0.23. For the N log neural net, this input factor had a high input weighting of 0.41 but a very low R² correlation of 0.14. This factor appears to have a minor influence upon ELOS. This is not particularly surprising, as larger stopes usually have larger blasts, so there is often little difference in the number of stope blasts between a large and a small stope.

Summary Statistics

Neural Net                          Rock Quality %   HR %   Input %    R²
RMR, HR, # Stope Blasts (Linear)          47          31       22     0.23
RMR, HR, # Stope Blasts (Log)             46          27       26     0.71
N, HR, # Stope Blasts (Log)               36          22       41     0.14

Figure 5.23 Number of Stope Blasts Summary Charts (rank inputs and ELOS vs. neural net test data prediction charts for the linear RMR, log RMR, and log N series)

5.9.21 Mining in Vicinity

Mining in vicinity refers to whether or not active mining was taking place near the surveyed stope while it was being mined. A one refers to no active mining in the immediate area while a two refers to active mining in the immediate area. The neural net test results indicated average input weightings and R² correlations. The mining in vicinity factor appears to have a moderate effect upon ELOS. This effect is probably the result of stress redistribution following local blasting.

Summary Statistics

Neural Net                              Rock Quality %   HR %   Input %    R²
RMR, HR, Mining in Vicinity (Linear)          43          31       26     0.34
RMR, HR, Mining in Vicinity (Log)             51          30       19     0.75
N, HR, Mining in Vicinity (Log)               49          29       22     0.26

Figure 5.24 Mining in Vicinity Summary Charts (rank inputs and ELOS vs. neural net test data prediction charts for the linear RMR, log RMR, and log N series)

5.10.1 Blasthole Diameter, Blasthole Length, Strike Length, RMR, and HR

Several neural networks were run with what were perceived to be the most influential input factors. With this database the maximum number of inputs recommended was five. Therefore, a neural network was run with what appear to be the five most significant factors for determining ELOS predictions: blasthole diameter, blasthole length, strike length, RMR, and HR. This network was trained on a linear scale to a seven percent error using the same training and test stopes as the other networks. The test results were fairly good, giving an R² value of 0.71. Differences in input weightings appear to be reduced as the number of inputs is increased. According to Figure 5.25, RMR had the greatest influence, followed by HR, blasthole diameter, blasthole length, and strike length. The positive neural net test results suggest that this combination of inputs may result in accurate ELOS predictions.
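The networks in this study were built with a spreadsheet-based backpropagation tool (Braincel). Purely as an illustration of the same idea in a present-day setting, a small feedforward network on these five inputs could be trained roughly as follows; the data here are synthetic stand-ins, not the stope database used in the thesis:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical training array: one row per stope, columns are blasthole
# diameter (mm), blasthole length (m), strike length (m), RMR (%), and
# hydraulic radius (m); y stands in for the surveyed ELOS (m).
rng = np.random.default_rng(0)
X = rng.uniform([50, 5, 10, 30, 2], [115, 30, 60, 85, 12], size=(60, 5))
y = 4.0 - 0.05 * X[:, 3] + 0.2 * X[:, 4]   # synthetic target for the sketch

scaler = StandardScaler().fit(X)
net = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
net.fit(scaler.transform(X), y)

# Predict ELOS for an unseen stope (hypothetical values).
unseen = np.array([[65, 15, 20, 60, 6.5]])
print(net.predict(scaler.transform(unseen)))
```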
Figure 5.25 Blasthole Diameter, Blasthole Length, Strike Length, RMR, and HR Summary Charts (input weightings chart and ELOS vs. neural net test data chart)

5.10.2 RMR, Blasthole Diameter, Blasthole Length, Blasthole Layout, HR

The three blasting factors, blasthole diameter, blasthole length, and blasthole layout, had the three highest R² correlations of the linear processed neural nets. These factors also seemed to explain the outlier stopes. When run together, the effect of the outlier points was reduced and the R² was 0.62. This neural net, once again, placed the greatest emphasis upon RMR, followed by blasthole length, HR, blasthole diameter, and blasthole layout. It was not quite as accurate on the other, more conventional stopes, and appears to have overestimated the amount of ELOS for stopes 8, 9, 10, and 11. This particular neural net would be very useful for design purposes. For example, different drill diameters, lengths, and layouts could be run and the differences in ELOS compared to differences in productivity, capital costs, and operating costs. This could allow a company to optimize stope design prior to stope production; a sketch of such a comparison follows Figure 5.26. According to this report's interpretation, these five factors would provide the most accurate ELOS predictions.

Figure 5.26 RMR, Blasthole Diameter, Blasthole Length, Blasthole Layout, and HR Summary Charts (input weightings chart and ELOS vs. neural net test data chart)
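One way such a design comparison might be organized is sketched below. The prediction function is a placeholder so the sketch runs on its own; in practice it would be the trained five-input network of this section (RMR, blasthole diameter, blasthole length, blasthole layout, and HR), and the candidate values are hypothetical:

```python
from itertools import product

def compare_drill_designs(predict_elos, rmr, hr,
                          diameters=(51, 65, 100),
                          lengths=(10, 15, 25),
                          layouts=(1, 2, 3)):
    """Rank candidate drill designs for a planned stope by predicted ELOS.

    predict_elos is any callable mapping (rmr, diameter_mm, length_m,
    layout_code, hr) to an ELOS estimate in metres.
    Layout codes: 1 = parallel, 2 = semi-parallel, 3 = ring/fan drilling.
    """
    ranked = []
    for dia, length, layout in product(diameters, lengths, layouts):
        ranked.append((predict_elos(rmr, dia, length, layout, hr),
                       dia, length, layout))
    return sorted(ranked)   # lowest predicted ELOS first

def placeholder_predictor(rmr, dia, length, layout, hr):
    # Stand-in with a plausible shape, used only so the example runs.
    return max(0.0, 1.5 + 0.005 * dia + 0.03 * length + 0.3 * layout
               + 0.05 * hr - 0.03 * rmr)

for elos, dia, length, layout in compare_drill_designs(
        placeholder_predictor, rmr=60, hr=6.5)[:3]:
    print(f"ELOS ~ {elos:.2f} m  dia={dia} mm  length={length} m  layout={layout}")
```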
5.10.3 RMR, Adjusted Dip, Blasthole Diameter, Blasthole Length, and HR

The RMR, adjusted dip, blasthole diameter, blasthole length, and HR neural net was run to try to include a factor relating to the dip of the deposit. The biggest problem with using the adjusted dip factor is that the database largely consists of steeply dipping deposits; little information is available in the database on stopes with a dip of less than 70 degrees. However, adjusted dip appeared to be a significant input once dip was adjusted, and the dip of a wall would intuitively be an important factor relating to ELOS. The results from this particular neural net should be used with caution until a larger database with a greater range of dip values can be collected. The test results from this neural net were positive, with an R² value of 0.72. The order of relative input weightings is RMR, blasthole length, HR, blasthole diameter, and adjusted dip.

Figure 5.27 RMR, Adjusted Dip, Blasthole Diameter, Blasthole Length, and HR Summary Charts (input weightings chart and ELOS vs. neural net test data chart)

5.10.4 RMR, Surface, Blasthole Diameter, Blasthole Length, Blasthole Layout, Time Before Survey, HR

A neural net with the five highest R² factors from the linear processed neural nets was also run. These factors, including RMR and HR, are: RMR, surface, blasthole diameter, blasthole length, blasthole layout, time before survey, and HR. The neural net recommends a minimum of 128 test stopes; therefore, there is not enough data for the neural net to accurately handle seven input factors. This can be seen in the similarity of all the factors' input weightings. The input rankings, from greatest to least, are RMR, HR, time before survey, surface, blasthole length, blasthole layout, and blasthole diameter. This neural net had an R² rating of 0.60 and also overestimated ELOS in stopes 8, 9, 10, and 11. The inclusion of the surface factor is not recommended until more data on stope walls in poor condition is obtained and analyzed. The time before survey factor would be useful in comparing different mining and tramming rates against ELOS differences. More data is recommended before using a neural net with seven input factors.

Figure 5.28 RMR, Surface, Blasthole Diameter, Blasthole Length, Blasthole Layout, Time Before Survey and HR Summary Charts (input weightings chart and ELOS vs. neural net test data chart)

5.10.5 Log Neural Net: RMR, Surface, Strike Length, Stope Height, Offset Distance, Time Before Survey, HR

A neural net with the five highest R² factors (plus RMR and HR) from the log processed RMR neural nets was also run. These factors are: RMR, surface, strike length, stope height, offset distance, time before survey, and HR. RMR was the predominant input factor, with little distinguishing the remaining factors. Once again, there is not enough data for the neural net to accurately analyze seven input factors. Furthermore, as can be seen in nearly all the log processed neural nets, the rock quality factor has, by far, the greatest influence upon ELOS. The other inputs do not have the same influence when processed on a log scale, and it appears the data is better suited to linear scale processing. It is therefore recommended to use linear scale processing when investigating the influence of inputs other than rock quality. The neural net, having an R² of 0.77, did not show a significant improvement upon the other log neural nets of the RMR series.

Figure 5.29 Log Neural Net: RMR, Surface, Strike Length, Stope Height, Offset Distance, Time Before Survey and HR Summary Charts (input weightings chart and ELOS vs. neural net test data chart)

5.10.6 Log Neural Net: N, Surface, H/L Ratio, Offset Distance, Powder Factor, Time Before Survey, HR

A neural net with the five highest R² factors (plus N and HR) from the log processed N neural net series was also run. These factors are: N, surface, H/L ratio, offset distance, powder factor, time before survey, and HR. N was also the predominant factor, and the other factors all had similar weightings. These inputs, which had a minor influence in their respective log N neural net runs, also had a minor effect in this seven input neural net. Similar to the log RMR neural nets, the rock quality factor appears to be the only input suited to log processing.
A trend in the log N neural nets is that inputs with less influence tended to result in higher R² values. Therefore, of the three neural net series, the log N series was considered to provide the least amount of information regarding the influence of inputs. There was a slight improvement in the R² value, to 0.62, over the three input log N neural nets.

Figure 5.30 Log Neural Net: N, Surface, H/L Ratio, Offset Distance, Powder Factor, Time Before Survey and HR Summary Charts (input weightings chart and ELOS vs. neural net test data chart)

5.11 Input Analysis Summary

Each neural network run showed that neural networks are able to successfully predict ELOS in open stopes with relatively few input factors. Rock quality, particularly RMR, appears to be the most relevant factor pertaining to ELOS. All the neural nets were extremely accurate when the ELOS was less than two metres - a typical ELOS value. The ELOS outlier points are best explained through blasting practices; in particular, blasthole diameter, blasthole length, and blasthole layout account for discrepancies between ELOS and neural net predictions. Furthermore, while conventional statistics were unsuccessful in establishing input data relevancy, neural networks, by using different factors in combination with other factors, were able to successfully determine which factors were significant and which combinations of factors were most successful in ELOS prediction. This helps illustrate one of the major advantages of neural networks over conventional statistics - the ability to use multiple input factors and their combined interactive influence on the final result.

5.12 Design Lines Versus Neural Network Results

From the ELOS database, design curves for ELOS were graphically developed using either N or RMR and hydraulic radius versus ELOS (Clark, 1998). In an attempt to compare these design curves to neural network predictions, 112 dilution measurements from Matthew's dilution database were used. As these were dilution, not ELOS, measurements, there were no actual ELOS results to compare with; therefore, the curves were compared to design curves developed from neural network predictions. An ELOS neural network was run on the 83 measurement ELOS database (Clark, 1998). The input factors were RMR and HR and the output was ELOS for the training database. The neural network was trained until a seven percent error was reached and was then run on the 112 dilution measurements from Matthew's database. There were no measured ELOS values from this database, so the results were compared with the design lines (Clarke 1998). The same process was done on a log scale using N or RMR and HR versus ELOS. These graphs, Figures 5.31 and 5.32, show the similarities between the design guidelines and the neural network results. This suggests that, when using two inputs, little advantage can be gained using neural networks over conventional graphing techniques. To read these charts, line up the hydraulic radius with the rock quality factor (N or RMR) and read the number at the corresponding point. That number represents the overbreak/slough in metres that can be expected from that particular stope wall. Both the hanging wall and footwall slough must be added to get the total stope wall slough in metres. Dilution can then be calculated by dividing the total stope wall slough by the average width of the designed stope.
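A minimal sketch of that calculation (the function name is illustrative):

```python
def dilution_from_elos(hw_elos_m, fw_elos_m, avg_stope_width_m):
    """Percent dilution from the hanging wall and footwall ELOS values (m)
    and the average width of the designed stope (m)."""
    return 100.0 * (hw_elos_m + fw_elos_m) / avg_stope_width_m

# e.g. 1 m of slough from each wall of a 6 m wide stope gives roughly 33%.
print(round(dilution_from_elos(1.0, 1.0, 6.0)))  # 33
```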
The neural net has predicted ELOS at every point of the chart. The manual design ELOS lines (Clark 1998) are restricted to points that fall within the range of the ELOS database inputs. As the neural net predictions outside of this range are not based upon previous data, they should not be used, or should be used with caution, as they are neural net speculations as to what may happen. Moreover, the neural net ELOS predictions are overly precise; ELOS predictions cannot be expected to be within 10 centimetres of the actual ELOS. Therefore, for practical purposes, it is recommended to use the manual design lines in these figures for ELOS expectations. Neural nets would be useful for estimating ELOS when considering additional factors beyond rock quality and HR.

Figure 5.31 RMR, HR Manual Design Curves / Neural Network ELOS Predictions (all data showing measured ELOS values plotted against hydraulic radius (m); solid lines are neural net design lines, dotted lines are Lyndon Clarke's manual best fit design lines; reference: Clarke, 1998)

Figure 5.32 N, HR Manual Design Curves / Neural Network ELOS Predictions (modified stability graph showing measured ELOS values plotted against hydraulic radius (m); solid lines are neural net design lines, dotted lines are Lyndon Clarke's manual best fit design lines; reference: Clarke, 1998)

5.13 Conclusion

The biggest advantage of neural nets is the ability to analyze multiple inputs as well as their combined influence on a response. This will often produce a more accurate response than conventional graphing or statistical analysis. Neural networks clearly demonstrated, through combined influences, which inputs had the greatest influence upon ELOS. Rock quality, particularly RMR, appears to be the most relevant factor pertaining to ELOS. ELOS outlier points are best explained through blasting practices; in particular, blasthole diameter, blasthole length, and blasthole layout account for discrepancies between ELOS and neural net predictions.

Neural net predictions were similar to the manual design lines in the rock quality versus HR ELOS prediction charts (Clark, 1998). This suggests that little advantage can be gained with the use of neural nets when only two input factors are analyzed. These charts also illustrate one of the disadvantages of using neural nets over conventional methods: the neural nets made ELOS predictions for all points within the charts, whereas the manual design lines were limited to the range covered by the database. Many of the neural net predictions outside of the database range may be very inaccurate and misleading.

Much of the ELOS database comes from the Lupin Mine in the Northwest Territories. This deposit is a narrow vein, steeply dipping gold deposit situated within competent rock of the Canadian Shield. This may skew neural net predictions made using this database toward ELOS estimates resembling those of a narrow vein deposit in competent rock. Prior to using neural net predictions for stopes that are not similar to the ones in the ELOS database, data must be collected from a greater variety of stopes to create a more universal database.
Chapter 6 Neural Network / Formula Dilution Prediction Comparison

6.1 Introduction

Open stope dilution estimates currently use formulas empirically derived from a database collected at the Ruttan Mine. The development of these formulas is described in the thesis "Empirical Stope Design at the Ruttan Mine, Sherritt Gordon Mines Ltd." (Pakalnis, 1986). The formulas were derived from 133 observations of 43 stopes at the Ruttan Mine, which was, in 1986, a 6000 tonne per day operation with a multi-lensed orebody dipping at 70 degrees. To compare neural net predictions with these conventional formula predictions, a neural net analysis was conducted upon the same database from which the dilution formulas were derived using conventional statistics. The neural net predictions and the formula results were then compared against actual open stope dilution on data that was not in the original database. This comparison provides insight into how well neural net predictions fare versus conventional, statistically derived formulas.

6.2 Neural Network Analysis

To set up the neural network, dilution measurements, based on four inputs, were trained and tested on the neural net. The database consists of dilution measurements from three stope configurations (echelon, rib, and isolated) as well as the combined stope dilution measurements. This database was used to develop the following open stope dilution formulas (Pakalnis, 1991):

Isolated Stopes (61 obs):  Dil(%) = 5.9 - 0.08(RMR) - 0.010(ER) + 0.98(HR)
Echelon Stopes (44 obs):   Dil(%) = 8.8 - 0.12(RMR) - 0.18(ER) + 0.8(HR)
Rib Stopes (28 obs):       Dil(%) = 16.1 - 0.22(RMR) - 0.11(ER) + 0.9(HR)

where:
DIL(%) - stope dilution (%), i.e. for 10% dilution, DIL(%) = 10
RMR    - CSIR Rock Mass Rating (%), i.e. for 60%, RMR = 60
ER     - exposure rate, as volume removed (cubic metres)/month/stope width (m)
HR     - hydraulic radius (m) of the exposed stope wall
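The three formulas translate directly into code; a sketch (the function name is illustrative, and the input values in the usage line are hypothetical):

```python
# Pakalnis (1991) open stope dilution formulas, as quoted above.
# rmr in percent, er = exposure rate (m3/month/m of stope width),
# hr = hydraulic radius (m); result is dilution in percent.
COEFFICIENTS = {
    "isolated": (5.9, 0.08, 0.010, 0.98),
    "echelon":  (8.8, 0.12, 0.18, 0.8),
    "rib":      (16.1, 0.22, 0.11, 0.9),
}

def formula_dilution(config, rmr, er, hr):
    a, b, c, d = COEFFICIENTS[config]
    return a - b * rmr - c * er + d * hr

# Hypothetical rib stope: RMR 60, exposure rate 20 m3/month/m, HR 8 m.
print(round(formula_dilution("rib", 60, 20, 8), 1))  # 7.9 % dilution
```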
The stope dilution measurements were compared with four inputs: Rock Mass Rating (Bieniawski, CSIR 1984), hydraulic radius (m), exposure rate (cubic metres/month/stope width), and width (m). The neural net results are summarized in Table 6.1.

Table 6.1 Dilution Database Neural Net Summary

Network Name           % Error   Obs   St. Dev.   R Squared   Layers   Cycles
Rib 1 Pred.              0.74      6    7.18219     0.00561       1       938
Rib 1 Test               0.74     22    0.25513     0.99842       1       938
Rib 1 Pred. 2            0.77      6    7.20243     0.11000       2       524
Rib 1 Test 2             0.77     22    0.24280     0.99857       2       524
Rib 5 Pred.              4.38      6    4.97219     0.52342       1        44
Rib 5 Test               4.38     22    1.34731     0.84474       1        44
Formula                   n/a     28    5.69547     0.19924     n/a       n/a
Ech 1 Pred.              0.75     16    3.47756     0.48539       1      1839
Ech 1 Test               0.75     28    0.18844     0.99798       1      1839
Ech 1 Pred. 2            0.91     16    4.49021     0.14204       2      1507
Ech 1 Test 2             0.91     28    0.24505     0.99659       2      1507
Ech 5 Pred.              3.85     16    3.99050     0.27398       1        80
Ech 5 Test               3.85     28    0.79754     0.96388       1        80
Formula                   n/a     44    2.58970     0.64250     n/a       n/a
Iso 3 Pred.              2.80     11    2.60095     0.02938       2     73238
Iso 3 Test               2.80     50    0.77605     1.98104       2     73238
Iso 4 Pred.              3.79     11    2.60445     0.02676       1     99999
Iso 4 Test               3.79     50    1.04082     0.96590       1     99999
Iso 5 Pred.              4.98     11    2.52904     0.08230       1      7354
Iso 5 Test               4.98     50    1.50200     0.92898       1      7354
Formula                   n/a     61    3.38153     0.57567     n/a       n/a
All 10 Pred.             9.65     18    2.33904     0.75582       1     25803
All 10 Test              9.65    115    3.00566     0.69753       1     25803
All 1000 Pred.          15.71     17    2.71424     0.67214       1      1000
All 1000 Test           15.71    116    2.82692     0.73194       1      1000
All 5000 Pred.          14.86     17    2.49914     0.72205       1      5000
All 5000 Test           14.86    116    3.12917     0.67155       1      5000
All 5000 Pred2           8.50     17    2.47886     0.72654       2      5000
All 5000 Test2           8.50    116    2.48073     0.79357       2      5000
All 5000 Pred3           7.33     17    2.69284     0.67729       3      5000
All 5000 Test3           7.33    116    2.65885     0.76286       3      5000
All 10000 Pred           9.38     17    2.38334     0.74721       1     10000
All 10000 Test           9.38    116    2.99422     0.69927       1     10000
All 10000 Pred2          6.63     17    2.97696     0.60561       2     10000
All 10000 Test2          6.63    116    2.08377     0.85435       2     10000
*Best Net Searches
Best Net Pred.          12.66     17    1.68221     0.87406       1         5
Best Net Test           12.66    116    3.33379     0.62719       1         5
Best Net Pred2          13.33     17    1.74054     0.86518       2         8
Best Net Test2          13.33    100    3.51220     0.58442       2         8
Best Net 2 Unseen Data Prediction  13.33  16  3.20141  0.69788    2         8

The results in Table 6.1 show that, despite excellent correlation when testing the neural network on the individual stope configurations (isolated, rib, echelon), these experts had little accuracy when used to predict new data. Moreover, increasing the training time, cycles, and layers, and decreasing the training error, provided little, if any, improvement in the ability of the expert to predict new data. However, despite high standard deviations and small R² values, the predicted values are not altogether out of line. For example, the Braincel expert predictions were within a realistic range of values despite having little correlation with the actual values.

6.3 Neural Network - Formula Comparison on Unseen Data

Neural nets were also used to calculate dilution predictions on unseen data for comparison with the formulas. The neural networks were not optimized. Using one hidden layer of two nodes, just one neural net was trained on the entire original database for each stope configuration. The neural net trained on the rib stope database was trained to an eight percent error; the neural nets trained on the echelon and isolated stope databases were each trained to a 10 percent training error. These neural nets were then used to predict dilution on new, unseen data and compared with the respective formula dilution estimates. The differences between the actual dilution and the neural net and formula predictions were compared for each stope, along with the combined averages of the stopes for each stope configuration. Tables 6.2 to 6.4 summarize the neural net / formula prediction comparisons, and Figure 6.1 charts the average percent error of the neural net and formula predictions.

Table 6.2 Rib Stope Dilution Test Data Error Comparison

Actual Dilution %   Neural Net %   Formula %   |Dil. - NNet|   |Dil. - Formula|
        1               5.1            8.9           4.1              7.9
        2               6.1            9.8           4.1              7.8
        3              10.6           12.5           7.6              9.5
        8              13.6           15.2           5.6              7.2
       12              14.5           17.0           2.5              5.0
        3               2.0            3.1           1.0              0.1
        3               1.7            4.9           1.3              1.9
        4               5.0            8.5           1.0              4.5
        5               9.1           11.2           4.1              6.2
        5              11.4           13.0           6.4              8.0
        6              12.6           14.8           6.6              8.8
        3               3.3            7.0           0.3              4.0
        9              16.1           16.6           7.1              7.6
       14              14.8           18.4           0.8              4.4
        1               4.4            3.2           3.4              2.2
        3               3.4            5.0           0.4              2.0
        4               3.1            7.7           0.9              3.7
        6               4.4            9.5           1.6              3.5
        8               5.5           10.4           2.5              2.4
AVG =   5.3             7.7           10.4           3.2              5.1
Table 6.3 Echelon Stope Dilution Test Data Error Comparison

Actual Dilution %   Neural Net %   Formula %   |Dil. - NNet|   |Dil. - Formula|
        3               3.0            4.5           0.0              1.5
        3               4.0            6.1           1.0              3.1
        3               6.3            8.5           3.3              5.5
        5               7.7           10.1           2.7              5.1
        7               9.7           11.7           2.7              4.7
        5               5.5            7.9           0.5              2.9
        5               7.3            9.5           2.3              4.5
AVG =   4.4             6.2            8.3           1.8              3.9

Table 6.4 Isolated Stope Dilution Test Data Error Comparison

Actual Dilution %   Neural Net %   Formula %   |Dil. - NNet|   |Dil. - Formula|
        4               3.2            3.4           0.8              0.6
        5               4.1            4.4           0.9              0.6
AVG =   4.5             3.6            3.9           0.9              0.6

Figure 6.1 Average Neural Net / Formula Error Over Actual Average Dilution (average percent error of the neural net prediction (series 1) and the formula prediction (series 2) for the isolated, echelon, and rib stope configurations)

For the unseen rib stope data, the neural net had an average error of 3.2 percent and the formula had an average error of 5.1 percent. For the unseen echelon stope data, the neural net had an average error of 1.8 percent while the formula had an average error of 3.9 percent. For the two unseen isolated stopes, the neural net had an average error of 0.9 percent while the formula had an average error of 0.6 percent. As the rib and echelon unseen databases were significantly larger than the isolated stope unseen database, the neural network showed a clear improvement over the formula estimates. The relatively large average percent prediction errors of both the neural net and formula predictions shown in Figure 6.1 are misleading, as the actual dilution was not surveyed but was back-calculated from the tonnage actually pulled from the stope compared with the calculated stope tonnage. Due to variable bucket fill factors, density, different operators, and other factors, the actual dilution could easily be out by several percent; the predicted dilution would therefore be expected to differ from the reported actual dilution by a certain percentage. The improved performance of the neural net predictions in this example over the statistically derived formulas suggests that neural nets can provide better predictions than conventional formula predictions.
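The comparison behind Figure 6.1 is simply a mean absolute error for each predictor. A sketch using the rib stope test data from Table 6.2:

```python
# (actual dilution %, neural net prediction %, formula prediction %)
# for the unseen rib stopes in Table 6.2.
rib = [(1, 5.1, 8.9), (2, 6.1, 9.8), (3, 10.6, 12.5), (8, 13.6, 15.2),
       (12, 14.5, 17.0), (3, 2.0, 3.1), (3, 1.7, 4.9), (4, 5.0, 8.5),
       (5, 9.1, 11.2), (5, 11.4, 13.0), (6, 12.6, 14.8), (3, 3.3, 7.0),
       (9, 16.1, 16.6), (14, 14.8, 18.4), (1, 4.4, 3.2), (3, 3.4, 5.0),
       (4, 3.1, 7.7), (6, 4.4, 9.5), (8, 5.5, 10.4)]

def mean_abs_error(rows, col):
    """Mean absolute difference between the actual dilution (column 0)
    and the prediction in the given column (1 = neural net, 2 = formula)."""
    return sum(abs(r[0] - r[col]) for r in rows) / len(rows)

print(round(mean_abs_error(rib, 1), 1))  # 3.2 (neural net)
print(round(mean_abs_error(rib, 2), 1))  # 5.1 (formula)
```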
6.4 Conclusion

The improved accuracy of the neural net dilution predictions over the statistically derived dilution formulas suggests that neural nets can produce more accurate estimates than conventional empirical methods. The need for adequate amounts of input data was also demonstrated: the training error improved on the smaller stope dilution databases, but the accuracy of predictions on unseen data decreased for those smaller databases. Besides improved accuracy, the added neural net advantages of multiple inputs and the ability to continually retrain the networks should improve empirical estimates in the mining industry.

Chapter 7: Conclusion

7.1 Introduction

To be of practical use, mine operators must be able to use neural networks in a quick, easy manner on the mine site. Neural network software is relatively inexpensive and easy to use. The difficulty for mine operators would be obtaining a suitable database from which an effective neural network could be developed. A goal of major mining companies and educational institutions should be to build large databases of empirical information collected in a consistent manner from mining operations throughout the world. These databases could then be distributed to mining companies worldwide, providing a common universal database from which all mining companies could derive empirical information for neural networks. The advantage for mining companies is that more data would improve neural network results. Once these databases have been developed, the following examples show how a mine operator could use neural networks as another tool for empirical design in the mining industry. The examples focus on rockbursting and open stope dilution, but many other areas of empirical design could also be developed.

7.2 Rockburst Input Prediction

The rockburst input prediction examples at the A. W. White Mine and the BMS #12 Mine have already displayed the ability of neural nets to have initial success in predicting rockburst locations. These databases, however, are much too small for the networks to be used with any confidence on a universal basis. Once a database consisting of thousands of examples of both stable areas and areas where rockbursts have occurred becomes available, a mine operator could use this information to reduce the risk of rockbursting. The big advantage of predicting rockburst inputs is mine safety. Not only would this help save lives and reduce injuries, it would improve economics by reducing compensation payments and equipment losses.

One example, with respect to rockburst input prediction, where neural networks would have a significant positive effect is at the design stage of a mine. Development headings such as shafts, declines, access drifts, and raises could be input into a neural net to see if the potential for a rockbursting problem exists. If the neural net recognizes a potential rockburst problem, changes to the design could be made prior to development: the location, shape, size, and support requirements could all easily be adjusted to eliminate or reduce the rockburst potential. The design, with respect to rockburst inputs, must also be analyzed through conventional and intuitive methods; the shortcomings of neural networks make checking neural net responses through conventional means a necessity. Neural networks should be used as another tool for design, not the final decision maker.
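A sketch of how such a design-stage screen might be organized (the heading records, field names, and threshold are hypothetical, and the predictor is passed in as a callable standing in for a trained network; this is not a tool developed in the thesis):

```python
def screen_headings(headings, burst_probability, threshold=0.5):
    """Flag planned development headings whose predicted rockburst
    probability exceeds a chosen threshold so they can be redesigned or
    given extra support before excavation.

    burst_probability is any callable mapping a heading record (dict)
    to a probability in [0, 1], e.g. a trained neural network wrapper.
    """
    flagged = []
    for h in headings:
        p = burst_probability(h)
        if p >= threshold:
            flagged.append((h["name"], round(p, 2)))
    return flagged

# Hypothetical planned headings described by a few of the inputs
# discussed in this thesis (RMR, span, depth); placeholder predictor.
planned = [
    {"name": "ramp 1050 level", "rmr": 75, "span_m": 5.0, "depth_m": 1050},
    {"name": "access 1200 level", "rmr": 55, "span_m": 6.5, "depth_m": 1200},
]
placeholder = lambda h: min(1.0, h["depth_m"] / 2000 + (80 - h["rmr"]) / 100)
print(screen_headings(planned, placeholder))
```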
Production headings from operating mines, with respect to rockbursts, can also use neural networks to improve safety. Data from headings that have rockbursts and stable headings can be collected and added to a universal database as well as a separate mine neural network. Production heading design can then continually be readjusted to reduce the risk of rockbursting or remote equipment/extra ground support can be utilized. Figure 7.1 is a flow chart summarizing the design steps involved in using neural nets for rockburst input prediction. 167 Figure 7.1 Flowchart for Using Rockburst Input Neural Network Design Opening Use Neural Net to Analyze Rockburst Potential unstable design not finalized unstable finalized design stable Alter Design to Remedy Rockburst Potential Reduce Rockburst hazards through Increased Ground Support, Remote Equipment and/or Monitoring Excavate, Monitor, Record Data From Stable and Unstable Headings Add Data to Site and Universal Databases Retrain Neural Networks for Future Design Purposes 168 7.3 R o c k b u r s t P red i c t i on T h r o u g h Seismic Ana l ys i s Even though this study was unsuccessful producing an accurate neural network that could predict when a rockburst or major seismic bump would occur, a successful neural network may be developed with more data collected with corresponding minor bumps being collected along with the major bumps. The potential safety advantages a successful seismic neural network would provide should provide incentive for mining companies and educational institutions to fully investigate the potential for developing a successful seismic neural network. If a successful seismic neural network could be developed mining companies could use the network for monitoring potential rockburst areas. To be effective, a neural network must read and analyze seismic data automatically and instantaneously. Furthermore, the neural network should automatically trigger an alarm to immediately evacuate personnel who may be in danger. Manual studies into seismic analysis should continue to try and understand the triggers of rockbursting so that checks could be made upon the neural net predictions. Neural net predictions should be used as another tool - not the sole decision maker. Figure 7.2 represents a conceptual flow chart of how a neural net could be linked to seismic sensors to trigger warning alarms. 169 Figure 7.2 Flowchart for Using Neural Network Seismic Monitoring Determine Areas where there is Potential for Rockbursts Seismically Monitor all Working Areas Where Rockburst Potential Exists Have Neural Network Online Constantly Monitoring Seismic Activity Have Neural Network Automatically Trigger Alarm When Danger Levels are Exceeded Neural Net Automatically Record Data from Stable and Unstable Headings by Collecting Both Minor and Major Seismic Bump Acivity Continue to Update Database and Retrain Neural Network 170 7.4 M ines i t e N e u r a l Ne t E L O S Pred ic t i on The ability to accurately predict ELOS would provide great financial advantages for mining companies. To illustrate the potential financial gains the following example of an open stope is listed below. Stope Width = 6 metres Strike Length = 40 metres Stope Height = 40 metres Rock Mass Rating = 58 From the R M R vs HR Elos estimation graph (Clarke, 1998), a one metre ELOS would be expected. This represents a dilution of (1 m (fw) + 1 m (hw))/6 m (ore width) = 33% dilution. 
7.4 Minesite Neural Net ELOS Prediction

The ability to accurately predict ELOS would provide great financial advantages for mining companies. To illustrate the potential financial gains, consider the following example of an open stope:

Stope Width = 6 metres
Strike Length = 40 metres
Stope Height = 40 metres
Rock Mass Rating = 58

From the RMR versus HR ELOS estimation graph (Clark, 1998), one metre of ELOS would be expected from each wall. This represents a dilution of (1 m (fw) + 1 m (hw)) / 6 m (ore width) = 33% dilution. Using a specific gravity of 2.8, this dilution amounts to 40 m x 40 m x (1 m + 1 m) x 2.8 tonnes/m3 = 8,960 tonnes of waste. Using $10/tonne as an approximation for haulage and milling costs, this waste represents $89,600 in extra operating costs. This figure does not include reblasting, haulage delays due to large pieces of waste, or lost opportunity costs.

If neural networks were used to optimize the stope design by testing different drillhole lengths, diameters, layouts, etc., a conservative estimate for the reduction in ELOS would be ten percent, representing an approximate saving of $8,960 in direct operating costs. This conservative estimate may initially appear insignificant. However, with one metre of ELOS from each wall, this stope would provide 35,840 tonnes of mill feed, of which 26,880 tonnes would be ore. In a 1200 tonne per day operation, this stope would be just one month's feed, so a mere ten percent reduction in ELOS would save the operation $107,520 per year. The savings over the life of the mine may be the difference between a project proceeding or being shelved. Furthermore, neural net software is inexpensive, quick, and easy to use, so the cost of the analysis is minimal. The only reason not to use neural networks for ELOS prediction would be the absence of an adequate database; however, even the ELOS database used in this thesis would, in many situations, provide a starting database that could be further developed to an adequate size for neural net ELOS predictions.

A mine operator should use neural networks to optimize equipment selection prior to development. The shape and size of open stopes can be adjusted as mining continues, but it is more difficult to replace equipment. The most critical factors, with respect to ELOS, to be tested prior to development are blasthole diameter, blasthole length, undercut severity, and time before survey. These factors should be run over varying stope dimensions as well as the expected variations in rock quality. The blasthole factors would assist in drill selection, the undercut severity would assist in haulage equipment selection, and the time before survey factor would assist in matching equipment productivity with increasing wall slough over time. It is recommended to initially use linear processing with RMR, HR, and one other factor to gain insight into the effect of each input factor, and to use multiple input factors when selecting a final design. The number of factors that can be effectively run simultaneously depends upon the size of the database; with the ELOS database (Clark, 1998), no more than five inputs are recommended to be run simultaneously.

When mining commences, many factors can be adjusted to optimize the design. Factors such as the stope shape, support, drillhole pattern, and powder factor can all easily be adjusted as further ELOS information becomes available. Monitoring and surveys should be carried out as stopes are finished, the information added to the universal and minesite databases, and the design of upcoming stopes continually adjusted. Intuition and conventional design methods must be used in combination with the neural network approach.
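Since the arithmetic above is simple, it can be scripted so that each candidate stope design coming out of a neural network run is ranked in dollar terms automatically. The sketch below reproduces the example stope; the $10/tonne haulage-and-milling cost, the specific gravity of 2.8, and the ten percent ELOS reduction are the same figures used above, while the function name and interface are purely illustrative assumptions.

```python
def stope_dilution_cost(strike_length_m, height_m, ore_width_m,
                        elos_hw_m, elos_fw_m, sg=2.8, cost_per_tonne=10.0):
    """Return (dilution fraction, waste tonnes, waste handling cost) for one open stope."""
    dilution = (elos_hw_m + elos_fw_m) / ore_width_m
    waste_tonnes = strike_length_m * height_m * (elos_hw_m + elos_fw_m) * sg
    return dilution, waste_tonnes, waste_tonnes * cost_per_tonne

# Example stope from the text: 6 m wide, 40 m strike length, 40 m high, 1 m of ELOS per wall.
base = stope_dilution_cost(40, 40, 6, 1.0, 1.0)
improved = stope_dilution_cost(40, 40, 6, 0.9, 0.9)   # assumed 10% ELOS reduction from an optimized design

print(f"dilution {base[0]:.0%}, waste {base[1]:,.0f} t, extra cost ${base[2]:,.0f}")
print(f"saving per stope from a 10% ELOS reduction: ${base[2] - improved[2]:,.0f}")
```

Running this reproduces the figures quoted above (33% dilution, 8,960 tonnes of waste, $89,600 in extra cost, and an $8,960 saving per stope), and the same function could be applied to each trial design the neural network is asked to evaluate.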
Figure 7.3 represents a conceptual flowsheet of how a neural network could be used for optimizing ELOS in design.

Figure 7.3 Conceptual Neural Network Flowsheet for ELOS Design
[Flowchart: conventional stope design; neural network analysis to assess the influence of inputs on ELOS; economic assessment; optimize the design; check the design via conventional means; excavate and collect data; add the data to the mine and universal databases; retrain the ELOS neural network.]

These flow sheets provide a conceptual basis on which neural networks could be used for rockburst prediction and dilution estimates. Neural networks could have many more applications in the mining industry; some examples include pillar sizing, rock failure mechanisms, ore grade estimation, and economic feasibility studies.

7.5 Final Remarks

Neural networks demonstrated success in determining rockburst inputs at both the A. W. White Mine and the BMS #12 Mine. From the A. W. White Mine database, the neural net determined that the factors having the greatest influence upon rockbursts are SRF', Q, and adjusted RMR; span and depth appeared to have a lesser effect. From the BMS #12 Mine database, past events and stress had the greatest influence upon rockbursting. The databases from these mines are too small to draw any conclusive results. Future work in rockburst input prediction, from both educational institutions and mining companies, is necessary to develop the large universal database that would be essential for rockburst input prediction. It would be necessary to agree on common inputs to be collected for the database; additional inputs should include induced stress, hydraulic radius, the presence of raises, faults, or dykes, and stope dimensions.

Neural nets were unable to predict when rockbursts would occur from the BMS #12 Mine seismic database. The only noticeable trend prior to a major bump or rockburst is that a bump often follows either a period of no seismic activity or a period of intense seismic activity. Changes to the manner in which both the input and output data are recorded would be expected to improve results. The input data, rather than numbers representing relative change, should consist of actual input values together with their actual change over time. The output data (seismic bumps) should be recorded as actual values, with minor bumps recorded as well as major ones. These changes would give the neural network more data from which to estimate seismic bumps, improving its predicting ability.

As there is currently no successful method for predicting when rockbursts will occur, mining companies should begin collecting seismic data so that a universal seismic database, from which a successful neural network could be developed, can be built. Perhaps a university could compile seismic data from mines around the world. It is recommended to collect the data in a standard format and to collect seismic data over various time frames. For a prediction as difficult as when a rockburst will occur, a nested neural network would be necessary: a nested neural network combines the predictions of many other neural networks, each measuring different inputs and time frames. If a successful seismic neural network is developed, it is recommended to have the network automatically monitor mine headings and set off an alarm if dangerous seismic levels are predicted.
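The nested arrangement just described can be expressed as a simple composition: sub-networks score the seismic record over their own time frames, and a top-level network combines their outputs. The sketch below is conceptual only; the stand-in networks, the summarise feature list, and the event record format are all assumptions, since no working seismic networks were produced in this study.

```python
def summarise(events, window_hours):
    """Reduce raw seismic events within the last `window_hours` to a small feature list (assumed format)."""
    recent = [e for e in events if e["hours_ago"] <= window_hours]
    count = len(recent)
    energy = sum(e["energy"] for e in recent)
    quiet = min((e["hours_ago"] for e in recent), default=window_hours)  # time since the most recent event
    return [count, energy, quiet]

def nested_prediction(events, sub_networks, top_network):
    """Combine time-frame-specific sub-network outputs into a single rockburst estimate.

    sub_networks: list of (window_hours, network) pairs, each network trained on data
                  summarised over its own time frame.
    top_network:  a network trained on the sub-network outputs themselves.
    All networks here are hypothetical placeholders.
    """
    sub_outputs = [net(summarise(events, hours)) for hours, net in sub_networks]
    return top_network(sub_outputs)

# Illustrative use with stand-in networks; real ones would be trained as discussed in Section 7.3.
subs = [
    (1, lambda f: min(1.0, f[0] / 50.0)),    # short-term network keyed to the recent event rate
    (24, lambda f: min(1.0, f[2] / 24.0)),   # daily network keyed to the length of the quiet period
]
top = lambda outputs: sum(outputs) / len(outputs)

events = [{"hours_ago": 0.5, "energy": 2.0e4}, {"hours_ago": 6.0, "energy": 8.0e3}]
print(f"Nested rockburst estimate: {nested_prediction(events, subs, top):.2f}")
```

The benefit of the nesting is that each sub-network can be retrained on its own time frame as new data arrive, while the top-level network learns how much weight to give each time frame when issuing a warning.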
Neural nets, where conventional statistics were unsuccessful, had success in determining which ELOS factors were relevant. By using multiple factors in different combinations, neural nets demonstrated one of their big advantages over conventional statistics: the ability to use multiple inputs in combination when analyzing empirical data. Rock quality, particularly RMR, had the greatest influence upon ELOS. The blasting factors (blasthole diameter, length, and layout) all had a significant influence upon ELOS, while shape factors such as HR and stope height had less influence than expected. Neural network predictions correlated well with the manual design lines (Clark, 1998) in RMR versus HR ELOS estimation graphs. The advantage of neural nets is that factors other than rock quality and HR can also be used to optimize ELOS estimates.

The success neural networks had in predicting rockburst locations and ELOS estimates clearly demonstrates that neural networks can be developed into a successful empirical design tool for mining engineers. The biggest challenge in making neural nets effective is obtaining empirical databases that are large enough and available on a worldwide basis.

The main shortcomings of neural networks are:
• Neural networks act as a "black box", so one is never exactly sure how the response is calculated,
• Neural networks can overtrain and find irrelevant correlations, reducing their predicting ability on new data,
• Data must be compiled in a consistent format that the neural network can recognize,
• Similar to conventional statistics, neural nets require an adequately sized database before confidence can be placed in the results,
• A neural net will make predictions outside the scope of the database on which it was trained, and such extrapolated predictions should not be relied upon.

The major advantages neural nets have over conventional empirical approaches are:
• Neural networks can easily use multiple inputs to analyze data,
• By using multiple hidden layers and nodes, neural networks investigate the combined influence of inputs,
• Neural networks can easily be retrained as new data become available, making them a more dynamic and flexible empirical estimation approach,
• Neural network software is inexpensive and easy to use,
• In this study, neural networks demonstrated more accurate empirical estimates than conventional methods.

Besides dilution estimates and rockburst prediction, neural nets could also be used for many other empirical estimates in the mining industry. Some examples include pillar strength estimation, investigation into rock failure modes such as roof falls and ground wedges, grade estimation, and economic analyses. Eventually, when worldwide empirical databases are collected, neural networks should become an important tool for mining engineers to empirically analyze designs, improve safety, and improve mine economics.

References

1. Anderson, J. A., 1972, A Simple Neural Network Generating an Interactive Memory; Mathematical Biosciences, Vol. 14, 4 pp.
2. April 1995, Mine Operation at the #12 Mine; Internal Brunswick Mining Division document.
3. Baranowski, S., et al., 1992, An Evaluation of Kohonen's Second Learning Vector Quantization Neural Networks; University of Missouri, Missouri, 6 pp.
4. Barton, N., Lien, R., Lunde, J., 1974, Classification of Rock Masses for the Design of Tunnel Support; Rock Mechanics, Vol. 6, No. 4, 7 pp.
5. Bieniawski, Z. T., ed., 1976, Rock Classifications in Rock Engineering; Proc. Symp. on Exploration for Rock Engineering, A. A. Balkema, Rotterdam.
6. Bieniawski, Z. T., 1989, Engineering Rock Mass Classifications; John Wiley & Sons, New York.
7. Brehaut, C., 1986, Annual Report of the Canada-Ontario-Industry Rockburst Project; Canmet Special Publication.
8. Brehaut, C., 1987, Annual Report of the Canada-Ontario-Industry Rockburst Project; Canmet Special Publication 87-7E.
9. Brehaut, C., 1988, Annual Report of the Canada-Ontario-Industry Rockburst Project; Canmet Special Publication 88-22E.
10. Callatay, A., 1992, Natural and Artificial Intelligence: Misconceptions about Brains and Neural Networks; North-Holland.
11. Caron, M., 1995, Ontario Ministry of Labour Rockburst Recommendations; Ontario Ministry of Labour.
12. Clark, L., 1998, Minimizing Dilution in Open Stope Mining with a Focus on Stope Design and Narrow Vein Longhole Blasting; University of British Columbia, 316 pp.
13. Godin, R., 1987, Structural Model of Brunswick #12 Mine as an Aid to Mining and Exploration.
14. Hedley, D. G. F., 1992, Rockburst Handbook for Ontario Hardrock Mines; Canmet Special Report SP92-1E, Ottawa.
15. Hudyma, M., 1994, Seismicity at Brunswick Mining; Proceedings of the Quebec Mining Association Ground Control Colloque, Val d'Or, Quebec, Canada.
16. Hedley, D., 1985, Rockbursts in Ontario Mines During 1985; Canmet Special Publication 87-2E.
17. Hedley, D., 1992, Rockburst Handbook for Ontario Hardrock Mines; Canmet Special Publication 92-1E.
18. Hopfield, J., 1982, Neural Networks and Physical Systems with Emergent Collective Computational Abilities; Proceedings of the National Academy of Sciences, USA, Volume 79, 5 pp.
19. Karayiannis, N., 1992, The Fast Back Propagation Algorithm; University of Houston, Texas, 6 pp.
20. Keppel, G., 1989, Data Analysis for Research Designs; University of California at Berkeley, pp. 83-95.
21. Knoll, P., ed., 1992, Induced Seismicity; A. A. Balkema, Rotterdam, Netherlands.
22. Kristof, B., 1995, Sill Pillar Mining at the Brunswick Mining #12 Orebody; Proceedings of the AGM of the CIM, Nova Scotia, Canada.
23. Lang, B., Pakalnis, R., Vongpaisal, S., 1991, Span Design in Wide Cut and Fill Stopes at Detour Lake Mine; 93rd AGM-CIMM, paper #142, Vancouver.
24. Luff, B., 1995, A History of Mining in the Bathurst Area, Northern New Brunswick, Canada; CIM Bulletin, Volume 88, Number 994, pp. 63-68.
25. Mah, P., 1995, Ground Control Monthly Report; Internal Document, Goldcorp.
26. Mah, P., 1995, Development of Empirical Design Techniques in Burst Prone Ground at A. W. White Mine; Canmet Project No. 1-9180.
27. May 1996, Ground Control Technical Review Committee; Internal Brunswick Mining Division document.
28. Meech, J., Kumar, S., A Hypermanual on Expert Systems v. 3.0; Canmet, Energy Mines and Resources Canada.
29. Mendecki, A., 1994, Quantitative Seismology and Rock Mass Stability (Draft); ISS International, South Africa.
30. Minsky, M., Papert, S., 1969, Perceptrons; MIT Press, Cambridge, MA, USA.
31. Mitta, D., 1992, Human Learning and Neural Network Learning; Texas A&M University, Texas, 6 pp.
32. Muthusami, A., et al., 1992, An Adaptive Back Propagation Learning Algorithm for Multilayer Neural Networks; Texas A&M University, Texas, 6 pp.
33. Pakalnis, R., Vongpaisal, S., 1993, Mine Design: an Empirical Approach; Canmet, Energy Mines and Resources Canada.
34. Pakalnis, R., 1986, Empirical Stope Design at the Ruttan Mine, Sherritt Gordon Mines Ltd.; University of British Columbia, Canada, 276 pp.
35. Promised Land Technologies, Inc., 1993, Braincel Version 2.0; New Haven, Connecticut, 144 pp.
36. Widrow, B., Hoff, M. E., 1960, Adaptive Switching Circuits; IRE WESCON Convention Record, Part 4, 9 pp.
