@prefix vivo: <http://vivoweb.org/ontology/core#> .
@prefix edm: <http://www.europeana.eu/schemas/edm/> .
@prefix ns0: .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
vivo:departmentOrSchool "Applied Science, Faculty of"@en, "Electrical and Computer Engineering, Department of"@en ;
edm:dataProvider "DSpace"@en ;
ns0:degreeCampus "UBCV"@en ;
dcterms:creator "Yang, Ping"@en ;
dcterms:issued "2009-06-11T16:03:36Z"@en, "2009"@en ;
vivo:relatedDegree "Doctor of Philosophy - PhD"@en ;
ns0:degreeGrantor "University of British Columbia"@en ;
dcterms:description """Advances in monitoring technology have resulted in the collection of a vast amount of data that exceeds the simultaneous surveillance capabilities of expert clinicians in the clinical environment. To facilitate the clinical decision-making process, this thesis solves two fundamental problems in physiological monitoring: signal estimation and trend-pattern recognition.
The general approach is to transform changes in different trend features into nonzero level shifts by calculating the model-based forecast residuals, and then to apply a statistical test or a Bayesian approach to the residuals to detect changes. The EWMA-Cusum method describes a signal as the exponentially weighted moving average (EWMA) of historical data. This method is simple, robust, and applicable to most variables. The method based on the Dynamic Linear Model (referred to as the Adaptive-DLM method) describes a signal using the linear growth model combined with an EWMA model. An adaptive Kalman filter is used to estimate the second-order characteristics and to adjust the change-detection process online. The Adaptive-DLM method is designed for monitoring variables measured at a high sampling rate. To address the intraoperative variability in variables measured at a low sampling rate, a generalized hidden Markov model is used to classify trend changes into different patterns and to describe the transition between these patterns as a first-order Markov-chain process. Trend patterns are recognized online with a quantitative evaluation of their occurrence probability. In addition to the univariate methods, a test statistic based on Factor Analysis is also proposed to investigate the inter-variable relationship and to reveal subtle clinical events. A novel hybrid median filter is also proposed to fuse heart-rate measurements from the ECG monitor, pulse oximeter, and arterial BP monitor to obtain accurate estimates of HR in the presence of artifacts.
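The multi-channel heart-rate fusion idea in the paragraph above can be sketched with a minimal median-vote example. The helper `fuse_hr` is hypothetical and for illustration only; it is not the hybrid median filter proposed in the thesis, which also combines temporal windows with the cross-channel window.

```python
import statistics

# Illustrative median vote across three heart-rate channels (toy
# stand-in for the thesis's hybrid median filter, which also uses
# temporal windows). A gross artifact on any single channel is
# rejected as long as the other two channels roughly agree.
def fuse_hr(ecg_hr, spo2_hr, abp_hr):
    return statistics.median([ecg_hr, spo2_hr, abp_hr])

# An ECG dropout (0 bpm) is ignored while the pulse-oximeter and
# arterial-line channels agree:
print(fuse_hr(0, 72, 74))  # -> 72
```

The median rejects a single outlying channel regardless of its magnitude, which is why median-type filters are attractive for impulsive artifacts where a mean (or Kalman) estimate would be pulled toward the artifact.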
These methods have been tested using simulated and clinical data. The EWMA-Cusum and Adaptive-DLM methods have been implemented in a software system, iAssist, and evaluated by clinicians in the operating room. The results demonstrate that the proposed methods can effectively detect trend changes and assist clinicians in tracking the physiological state of a patient during surgery."""@en ;
edm:aggregatedCHO "https://circle.library.ubc.ca/rest/handle/2429/8932?expand=metadata"@en ;
dcterms:extent "2005198 bytes"@en ;
dc:format "application/pdf"@en ;
skos:note """Adaptive Trend Change Detection and Pattern Recognition in Physiological Monitoring

by Ping Yang

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

June 2009

© Ping Yang 2009

Abstract

Advances in monitoring technology have resulted in the collection of a vast amount of data that exceeds the simultaneous surveillance capabilities of expert clinicians in the clinical environment. To facilitate the clinical decision-making process, this thesis solves two fundamental problems in physiological monitoring: signal estimation and trend-pattern recognition. The general approach is to transform changes in different trend features into nonzero shifts by calculating the model-based forecast residuals and then to apply a statistical test or a Bayesian approach to the residuals to detect changes. The EWMA-Cusum method describes a signal as the exponentially weighted moving average (EWMA) of historical data. This method is simple, robust, and applicable to most variables. The method based on the Dynamic Linear Model (referred to as the Adaptive-DLM method) describes a signal using the linear growth model combined with an EWMA model. An adaptive Kalman filter is used to estimate the second-order characteristics and to adjust the change-detection process online. The Adaptive-DLM method is designed for monitoring variables measured at a high sampling rate. To address the intraoperative variability in variables measured at a low sampling rate, a generalized hidden Markov model is used to classify trend changes into different patterns and to describe the transition between these patterns as a first-order Markov-chain process. Trend patterns are recognized online with a quantitative evaluation of the occurrence probability.
In addition to the univariate methods, a test statistic based on Factor Analysis is also proposed to investigate the inter-variable relationship and to reveal subtle clinical events. A novel hybrid median filter is also proposed to fuse heart-rate measurements from the ECG monitor, pulse oximeter, and arterial BP monitor to obtain accurate estimates of HR in the presence of artifacts. These methods have been tested using simulated and clinical data. The EWMA-Cusum and Adaptive-DLM methods have been implemented in a software system, iAssist, and evaluated by clinicians in the operating room. The results demonstrate that the proposed methods can effectively detect trend changes and assist clinicians in tracking the physiological state of a patient during surgery.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acronyms
Acknowledgements
1 Introduction
  1.1 Motivation
    1.1.1 Cognitive challenges in physiological monitoring
    1.1.2 Standard alarm system
    1.1.3 Decision making in physiological monitoring
  1.2 Problem statement
    1.2.1 Aims of study
    1.2.2 Scope of application
    1.2.3 Challenges
  1.3 Outline
2 Literature review
  2.1 Overview of clinical decision-support systems
  2.2 Early stage trend-change detection
  2.3 Diagnostic monitoring
  2.4 Temporal abstraction
  2.5 Artifact removal and signal estimation
  2.6 Conclusion
3 Change detection techniques and study procedure
  3.1 Level-shift change detection
    3.1.1 Uncertainty in level shifts
    3.1.2 Hypothesis testing
    3.1.3 Sequential probability ratio test
    3.1.4 Bayesian approaches
  3.2 Study procedure
    3.2.1 Data acquisition
    3.2.2 Data annotation
    3.2.3 Model selection
    3.2.4 Performance evaluation
4 Two-level change detection based on the EWMA model
  4.1 Method
    4.1.1 EWMA-based forecasting model
    4.1.2 Cusum-based multilevel change detection
    4.1.3 Abruptness of trend changes
    4.1.4 Alert management
  4.2 Results
    4.2.1 Example case
    4.2.2 Performance evaluation
  4.3 Discussion
5 Adaptive trend change detection based on the linear growth dynamic model
  5.1 Method
    5.1.1 Linear growth dynamic linear model
    5.1.2 Adaptive Kalman filter
    5.1.3 EWMA of the DLM predictions
    5.1.4 Adaptive change detection
    5.1.5 Trend patterns
    5.1.6 Trigg's tracking signal
  5.2 Test on heart rate trend signals
    5.2.1 Q and R estimation
    5.2.2 Parameter tuning
    5.2.3 Comparison with Trigg's tracking signal
    5.2.4 Sensitivity analysis
  5.3 Results on EtCO2, MVexp, and Ppeak
  5.4 Discussion
6 Adaptive change detection based on the GHMM
  6.1 Generalized hidden Markov model
  6.2 Fixed-point switching Kalman smoother
    6.2.1 State-space intra-segmental model
    6.2.2 Review of state estimation with the GHMM
    6.2.3 Fixed-point Kalman smoother
    6.2.4 Generalized Pseudo Bayesian algorithm
    6.2.5 Overall procedure
    6.2.6 Computational and space complexity
  6.3 Adaptive Cusum test based on the GHMM
    6.3.1 Prerequisites for the intra-segmental model
    6.3.2 Offline solution: extended Viterbi algorithm
    6.3.3 Online solution: beam search with Cusum pruning
      6.3.3.1 Beam search scheme
      6.3.3.2 Adaptive Cusum test
  6.4 Application to NIBPmean monitoring
    6.4.1 Parameter estimation for the Markov chain regime
    6.4.2 Parameter estimation for the intra-segmental models
  6.5 Results
    6.5.1 Results of the switching Kalman smoother
      6.5.1.1 Results on simulated data
      6.5.1.2 Results on clinical data
    6.5.2 Results of the adaptive Cusum test
    6.5.3 Summary of the results
  6.6 Discussion
7 Multivariate change detection based on factor analysis
  7.1 Introduction
  7.2 Factor analysis
    7.2.1 Maximum likelihood estimation of a factor model
    7.2.2 Change detection based on the FA
  7.3 Simulated test
    7.3.1 Generation of simulated scenarios
    7.3.2 Case study
    7.3.3 Performance of the Cusum test on F and E
  7.4 Discussion
8 Artifact detection and data reconciliation
  8.1 Methods
    8.1.1 Univariate temporal filtering
      8.1.1.1 Kalman filter as MMSE estimator
      8.1.1.2 Median filter as optimal L-1 filter
    8.1.2 Data reconciliation with structural redundancy
    8.1.3 Dynamic data reconciliation
      8.1.3.1 Kalman filter for data reconciliation
      8.1.3.2 A novel hybrid median filter
  8.2 Evaluation of performance
    8.2.1 Simulation test
    8.2.2 Case studies with clinical data
  8.3 Discussion
9 Software implementation and clinical testing
  9.1 Software implementation: iAssist
  9.2 Clinical study
  9.3 Discussion
10 Conclusion and future work
  10.1 Summary: work accomplished
  10.2 Future work: the road ahead
Bibliography
Appendices
  I Covariance estimation in the adaptive Kalman filter
  II List of physiological variables

List of Tables

4.1 Detection results of the EWMA-Cusum 2-level alert method for EtCO2, MVexp, Ppeak, and NIBPmean
5.1 Change detection results of the Adaptive-DLM method and the TTS approach on three groups of HR signals
5.2 Distribution of the Q, R estimates generated by the Adaptive-DLM method for HR, EtCO2, MVexp, and Ppeak
5.3 Area under the entire or part of the ROC curve of the Adaptive-DLM method for EtCO2, MVexp, and Ppeak
6.1 Segmental states in physiological trend signals
6.2 Number of the possible point states for st−1 given sk in the GHMM-based switching Kalman smoother
6.3 Performance of change detection of the standard Cusum test, adaptive Cusum test, and GHMM-based switching Kalman smoother on simulated data
6.4 Performance of change detection of the standard Cusum test, adaptive Cusum test, and GHMM-based switching Kalman smoother on clinical data
7.1 Detection of hemorrhage and varying depth of anesthesia using the Cusum test on the statistics F and E based on a factor model
8.1 Comparison of the signal estimation accuracy of the hybrid median filter and the Kalman filter for the simulated cases in Phase I: cases without artifacts
8.2 Comparison of the signal estimation accuracy of the hybrid median filter and the Kalman filter for the simulated cases in Phase II: cases with artifacts
9.1 Clinical evaluation of the real-time performance of iAssist

List of Figures

1.1 Intraoperative information flow processed by an anesthesiologist
1.2 Cognitive model for physiological monitoring
3.1 Probability distributions of the forecast residuals for two different patterns
3.2 V-mask on a Cusum plot
3.3 Interface of an event annotation software tool, eViewer
4.1 Change-point detection and alert management process of the EWMA-Cusum method illustrated on an EtCO2 signal
5.1 Schematic of the Adaptive-DLM trend-change detection method
5.2 Adaptive Kalman filtering process
5.3 Q and R estimates generated by the Adaptive-DLM method for a simulated HR trend signal
5.4 Process of the Adaptive-DLM method illustrated on an HR trend signal
5.5 Detection results of the Adaptive-DLM method and the TTS approach on two example signals
5.6 ROC curves of the Adaptive-DLM method for HR
5.7 Time delay of the Adaptive-DLM method for trend-change detection in HR
5.8 ROC curves of the Adaptive-DLM method for EtCO2, MVexp, and Ppeak
6.1 Two example NIBPmean trend signals: similar patterns have different degrees of significance
6.2 Graphical model for the generalized hidden Markov model
6.3 Overall procedure of the fixed-point switching Kalman smoother
6.4 Linked lists for compact storage of the conditional estimates in the GHMM-based switching Kalman smoother
6.5 Design of the upper arm of the test mask for the GHMM-based adaptive Cusum test
6.6 Estimation accuracy of the switching Kalman smoother with different smoothing window sizes
6.7 Signal-to-noise ratio of the change probability estimated using the switching Kalman smoother with different smoothing window sizes
6.8 Estimation error and the computational complexity of the switching Kalman smoother with different pre-thresholds hW
6.9 ROC curves of the GHMM-based switching Kalman smoother for simulated data and clinical data
6.10 Online pattern recognition process of the switching Kalman smoother demonstrated on an NIBPmean trend signal
6.11 Change detection results of the GHMM-based adaptive Cusum test and the standard Cusum test on an NIBPmean trend signal
6.12 ROC curves of the standard Cusum test and the adaptive Cusum test on simulated data
6.13 ROC curves of the standard Cusum test and the adaptive Cusum test on clinical data
7.1 Intraoperative events cause changes in a factor model
7.2 Variations of the magnitudes of common factors (F) and variations of the covariance structure (E) during bleeding
7.3 Variations of the magnitudes of common factors (F) and variations of the covariance structure (E) during light anesthesia
8.1 Standard, structural, and hybrid median filters
8.2 Measurement redundancy in the HR measurements
8.3 Kalman filter used for estimating heart rate from multiple measurement channels
8.4 Effect of the hybrid median filter on two channels of measurements
8.5 Estimation accuracy of the hybrid median filter compared with that of the Kalman filter
8.6 Performance of the hybrid median filter in clinical case (1)
8.7 Performance of the hybrid median filter in clinical case (2)
9.1 Real-time performance evaluation with iAssist

Acronyms

ANN Artificial Neural Network
AR Autoregressive
ARDS Adult Respiratory Distress Syndrome
AUC Area Under a ROC Curve
AUPC Area Under Part of the Curve
BN Bayesian Network
BP Blood Pressure
CPV Cumulative Percent Variance
DLM Dynamic Linear Model
ECG Electrocardiogram
EEG Electroencephalography
EM Expectation-Maximization
EWMA Exponentially Weighted Moving Average
FA Factor Analysis
FDA Food and Drug Administration
GHMM Generalized Hidden Markov Model
GLR Generalized Likelihood Ratio
GPB Generalized Pseudo Bayesian
HMM Hidden Markov Model
HR Heart Rate
ICU Intensive Care Unit
MAP Maximum a Posteriori
MAP Mean Arterial Pressure
ML Maximum Likelihood
MMSE Minimum Mean Square Error
MSE Mean Square Error
MV Moving Average
OR Operating Room
PCA Principal Component Analysis
RMS Root Mean Square
RMSE Root Mean Square Error
ROC Receiver Operating Characteristic
RSD Relative Standard Deviation
SN Signal-to-Noise
SPRT Sequential Probability Ratio Test
SQA Software Quality Audit
STD Standard Deviation
TA Temporal Abstraction
TTS Trigg's Tracking Signal
WLS Weighted Least Squares

Acknowledgements

I would like to thank my supervisors, Professor Guy A. Dumont and Professor J. Mark Ansermino. Working with them has been the most rewarding experience in my life so far. They have provided so much support and guidance, and at the same time enough freedom for me to explore my interests, throughout the course of my PhD study. This thesis could not have been completed without their help. I have learned from them not only how to conduct research, but also how to cooperate with different people and how to lead a team. They also set an example of dedication and patience that will be with me for the rest of my life.

I would also like to thank Chris Brouse and William Magruder. Chris Brouse worked closely with me on the physiological monitoring project. He developed a software tool to apply some of the methods proposed in this thesis and conducted most of the clinical study. William Magruder provided me with enormous help in editing and revising my publications and the whole thesis. I deeply appreciate their help. I also want to extend thanks to Joanne Lim, Simon Ford, Natasha McCartney, and all the other people who have helped over the past five years.

It has been a great pleasure working with Mande Leung, Prasad Shrawane, Ginna Ng, Pedram Ataee, Ali Shahidi Zandi, Behnam Molavi, and all my other fellow students in the laboratory of Electrical and Computer Engineering in Medicine. They have kindly provided me with coffee and kept me in good company while I wrote the thesis. Special thanks for that!

Finally, I want to thank my family for their unconditional love and support all these years. They make everything I have accomplished meaningful.

Chapter 1 Introduction

Advances in technology have resulted in exponential growth in the amount of physiological data collected in the clinical environment.
Despite the technological advances, improvement in patient safety and surgical outcomes is limited by the imperfect simultaneous monitoring capabilities of humans. An effective physiological monitoring system is required to reduce the cognitive overload and facilitate the decision-making process in critical care environments. This thesis aims to estimate physiological signals from noisy measurements and to detect clinically relevant trend changes. These two problems are fundamental to the task of physiological monitoring and can be solved by methodologies based on statistical forecasting models. The results of trend-pattern recognition can assist clinicians in tracking variations in the physiological state of a patient and provide semantic temporal abstractions for knowledge-based diagnosis.

1.1 Motivation

1.1.1 Cognitive challenges in physiological monitoring

The clinical environment has experienced significant changes in recent decades due to the development of technology. In the 1960s, most clinical environments were equipped with only very basic monitoring devices, often including a blood pressure (BP) monitor, an electrocardiogram (ECG), and a flow monitor. Clinicians had to manually record measurements from these stand-alone devices [134]. Over the next two decades, sophisticated monitoring technologies such as pulse oximetry, spectrometric gas analysis, and intravascular BP measurement were introduced to the clinical environment. Basic medical devices have also been enhanced through the use of high-frequency measurement and electronic recording.

The architecture of physiological monitoring systems was also redesigned with the development of modular communication interfaces to various devices (see Figure 1.1). A central console is used to integrate all the measurements from different monitoring devices, resulting in a flexible and extendable monitoring system.
The first commercial integrated physiological monitor was introduced in the early 1990s [151]. Today, central physiological monitors are standard equipment in the Operating Room (OR) and Intensive Care Unit (ICU).

The purpose of physiological monitoring is to identify and correct undesirable situations and to optimize the patient's care. Clinicians, as the operators of this process, use information from different sources to gradually reduce the range of diagnostic possibilities and make a final decision. For example, in the OR, to keep the patient in a stable state, the anesthesiologist undertakes multiple tasks during surgery (see Figure 1.1). The anesthesiologist needs to monitor physiological variables, observe clinical signs such as eye blinks and chest movement, and exchange information with other team members. The information from different sources is integrated with expert knowledge to produce a real-time evaluation of the patient's state.

Physiological variables are the most important indicators of the patient's physiological state and require constant vigilance and context-sensitive analysis. A typical central physiological monitor provides integrated collection and display of the physiological data but lacks integral data analysis [150]. Data analysis is still mostly performed by clinicians. According to Miller [87], when human memory is used for conscious deliberations about a topic of concern (such as a patient's current or past states), only seven chunks of information can be remembered at any given time. It is impossible for even the most experienced clinicians to maintain constant surveillance of all the physiological variables in the current clinical environment. This cognitive overload causes fatigue in clinicians and limits the potential of new technology to enhance patient safety [29, 91, 138].
1.1.2 Standard alarm system

Most physiological monitors have a built-in alert system that monitors each variable individually. Typically, if the value of a variable strays outside a preset range, an alarm is triggered. Unfortunately, false and unnecessary alarms are so frequent (over 90% in one survey [74]) that the alarms represent a disturbance rather than cognitive assistance in real clinical settings [23, 77, 86, 141]. In many cases, clinicians simply disable the audible alarms at the beginning of a case. Real-time analysis of physiological variables has long been recognized as the bottleneck of the information flow in the critical care environment [28]. Clinicians have demonstrated growing awareness of the benefits of automatic physiological monitoring [38, 152].

Figure 1.1: Information flow processed by an anesthesiologist during surgery: A large amount of data is received from different sources at a high frequency. The anesthesiologist needs to integrate all the information to make a real-time decision about the patient's state and simultaneously apply interventions to maintain the patient in a desirable state. A decision-support system is needed to reduce the cognitive load on the anesthesiologist in the task of patient monitoring.

1.1.3 Decision making in physiological monitoring

The aim of physiological monitoring is to provide a context-sensitive evaluation of the patient's physiological state. Clinicians analyze the variations of a patient's physiological variables over time, and the relations between these variables, to derive a decision. This cognitive process can be divided, according to the level of interpretation, into five steps: noise reduction, trend-change detection, temporal reasoning, scenario summarization, and diagnosis (see Figure 1.2). From the bottom up, each step combines new information with the results of the previous step and generates an abstraction more relevant to the final diagnosis. The multidimensional, interrelated, time-varying numerical data are transformed into semantic descriptions of a patient's pathophysiological states that can be further used in treatment planning. Below are the subtasks handled in each step:

• Signal estimation: This step removes artifacts in the raw data and estimates the true values of each variable.

• Trend-change detection: A numerical signal level does not offer a direct indication of patient status. Instead, changes in trend patterns reflect a patient's physiological dynamics and are of greater interest in patient monitoring.
This step removes random variations in a physiological trend signal and recognizes trend patterns resulting from a changed physiological state. A trend segment can be treated as a basic unit of information in the perception and analysis of trend signals.

• Symptom extraction: A symptom, as the basic manifestation of clinical events, often consists of multiple trend segments over time and/or from different variables. Trend patterns detected in the previous step are synchronized and aggregated in this step according to domain knowledge to construct a meaningful trend complex.

• Scenario summarization: This step prioritizes clinical symptoms, given the patient demographic information, premedication, and treatment sequence, and removes irrelevant information in the current context. For example, a decrease in Heart Rate (HR) and BP after an increase in the Propofol infusion rate can be treated as a normal response, since Propofol is known to depress the cardiovascular system.

• Diagnosis: Clinicians use their expert knowledge to analyze scenarios and derive a final evaluation of a patient's physiological state and of the condition of the life-supporting equipment.

Hierarchical structures have also been adopted by other researchers to describe the cognitive process in physiological monitoring [76, 90, 127, 144].

Figure 1.2: The cognitive model of physiological monitoring: expert knowledge is used to process numerical data and generate interpretations of different levels of clinical relevance.

A decision-support system does not need to accomplish all the information-processing tasks involved in the entire decision-making process. Effective support at any level of interpretation can reduce the cognitive requirement of the corresponding subtask and facilitate clinicians' overall reasoning process. Human factors studies have revealed that transferring the numerical measurements to clinicians' awareness is the most time-consuming process [28]. Therefore, automatic signal estimation and trend-change detection may significantly reduce clinicians' cognitive load [43].

Different knowledge representation techniques and reasoning paradigms should be used to solve the subtasks at different interpretation levels [143]. Signal estimation and trend segmentation are data-intensive procedures, i.e., decisions at both levels of interpretation are based mostly on sequential data. The knowledge used in these two steps is the difference in dynamic characteristics between the relevant trend changes and random variations in a physiological variable. This type of knowledge is independent of contextual information and can easily be translated into mathematical models. The subtasks at the higher levels of interpretation are knowledge-intensive. The outcomes of these inference steps depend heavily on the availability and accuracy of the expert knowledge. The knowledge formulation and the inference paradigms should all be designed with consideration of the target clinical events.

1.2 Problem statement

This project aims to solve the problems of signal estimation and trend-change detection in the clinical environment.
As pointed out in Section 1.1.3, the purpose of these two subtasks is to summarize the dynamics of a physiological variable and provide a temporal context for the higher-level knowledge-based reasoning. The results of trend-change detection also provide early alerts for clinicians before adverse events occur.

1.2.1 Aims of study

A trend is defined as a sustained, unidirectional change in a variable's mean level [4, 85]. When a patient is experiencing a systematic physiological change, the first sign is very often that a physiological variable has changed its trend direction. Trend direction is therefore the most important feature of a trend pattern, classifying signal segments as increasing, decreasing, or steady. In addition, the trend duration and incremental rate are also used to characterize trend segments more specifically. In multivariate studies, a trend pattern refers to the relation between the changes in different physiological variables.

The aim of this thesis is to detect clinically relevant trend changes. A clinically relevant change often has a large amplitude or a long duration, or both. A clinically relevant change indicates that a patient is experiencing a systematic physiological variation that requires awareness from a clinician. Some clinically relevant changes may not require an action, but if contextual information is required to rule out the action, the changes are also considered clinically relevant. For example, a treatment-induced change is viewed as clinically relevant because knowledge about the treatment effect is required to suppress an action.

This project aims to solve the trend-change-detection problem using methodologies based on statistical forecasting models. The knowledge about a signal's temporal characteristics is formulated as stochastic dynamical models, with the uncertainty due to measurement noise and imperfect knowledge represented by random variables.
An appropriate forecasting model is crucial for the performance of this type of method. The basic assumption in this thesis is that, with an appropriate forecasting model, clinically relevant trend changes can be transformed into statistically significant non-zero shifts in the forecast residuals, whereas clinically irrelevant variations cause only insignificant fluctuations around zero in the residuals. A desirable change-detection method should not only detect a change and recognize the change pattern, but also provide a quantitative evaluation of the certainty of detection.

1.2.2 Scope of application

The OR and ICU are two clinical settings in severe need of decision support. Many events in both environments are life-threatening and require immediate attention. However, clinicians need to attend to other tasks and therefore are not able to maintain continuous vigilance over all the variables [164].

The methods in this thesis are designed for intraoperative monitoring and validated using data collected during surgery. Physiological trend signals from the ICU have similar dynamic characteristics to the corresponding signals from the OR. Since the solutions for noise reduction and trend-change detection are independent of contextual information, the methods proposed in this thesis could also be applied in the ICU setting.

More than 20 physiological trend signals have been included in this study (see Appendix II for a complete list of the variables). These signals together indicate the conditions of a patient's cardiovascular, pulmonary, and metabolic systems, and the integrity of the ventilation circuit. A trend signal records the time-varying levels of a physiological variable. Although waveform variables such as ECG or Electroencephalography (EEG) are significant physiological indicators, they are not included in this study.
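The basic assumption stated above, that a suitable forecasting model turns clinically relevant changes into sustained non-zero residual shifts, can be illustrated with a minimal sketch. This is not the thesis's actual method; the EWMA smoothing constant and the simulated steady and ramp segments are illustrative choices.

```python
import random

def ewma_residuals(series, alpha=0.3):
    """One-step-ahead EWMA forecasts and their residuals."""
    forecast = series[0]                 # initialize with first observation
    residuals = []
    for x in series[1:]:
        residuals.append(x - forecast)   # forecast error for this step
        forecast = alpha * x + (1 - alpha) * forecast  # EWMA update
    return residuals

random.seed(0)
# Steady segment: noise around 70 bpm -> residuals fluctuate around zero.
steady = [70 + random.gauss(0, 0.5) for _ in range(200)]
# Ramp segment: sustained upward trend -> residuals shift to a nonzero mean
# (approximately slope/alpha in steady state).
ramp = [70 + 0.2 * t + random.gauss(0, 0.5) for t in range(200)]

r_steady = ewma_residuals(steady)
r_ramp = ewma_residuals(ramp)
print(sum(r_steady) / len(r_steady), sum(r_ramp) / len(r_ramp))
```

A statistical test on the residual stream then reduces trend-change detection to level-shift detection, which is the strategy developed in Chapters 4 and 5.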
1.2.3 Challenges

Artifacts
Although modern physiological monitoring devices have incorporated basic denoising functions, the existence of artifacts remains a significant challenge for intraoperative physiological monitoring. A recent survey [74] revealed that one third of the false alarms given by current physiological monitors during postoperative monitoring of cardiac surgical patients were triggered by artifacts. Artifacts are often caused by interference with the measurement system or by system errors, presenting in trend signals as numerically distant deviations from the surrounding observations. In this thesis, artifacts are grouped into two types according to their duration. Transient artifacts are very brief (usually less than 10 seconds), with energy concentrated in a frequency band higher than that of the relevant trend changes. Short-peak artifacts are typically sustained for a relatively long period of time (sometimes more than a few minutes). The frequency spectrum of short-peak artifacts overlaps considerably with that of the physiological trend changes; therefore, most band-pass filters cannot remove this type of artifact. Removal of short-peak artifacts is a critical problem in physiological monitoring.

Background noise is another source of measurement uncertainty. Background noises are small-amplitude random variations caused by physiological fluctuations and environmental noise under normal conditions. Background noise can be assumed to follow a Gaussian distribution, and most statistical methods work well in its presence. In this thesis, noise refers to the overall uncertainty due to background noise and artifacts.

Intraoperative and inter-patient variability
Regulated by homoeostatic mechanisms, a patient's physiological system compensates for intraoperative interventions and can achieve homeostasis with variables at new levels.
The pattern of how a physiological variable responds to a clinical event is determined partly by the nature of the event and partly by the patient's condition. As both factors are hardly predictable before surgery, the dynamic characteristics of trend signals, including the mean level and second-order properties, change significantly over time and across patients. The inability to adapt to intraoperative and inter-patient variability is a major cause of the high false-alarm rate of most monitoring systems [95]. A real-time change-detection method should track the variations in both the signal level and the higher-order statistics online and adjust its configuration accordingly, in order to maintain a consistent performance. However, in the presence of artifacts, fast tracking and robust detection are two competing objectives [157].

Multidimensionality
Most clinical events result in trend changes in multiple variables. The relationship between the variations in different variables is highly sensitive to the contextual information. Comparing the real-time inter-variable relationship with the normal response in the same context can reveal events that cause only subtle changes in each individual signal.

Measurement redundancy
Measurement redundancy exists when a variable is measured from more than one independent source or, more generally, when a set of independent measurements is related by a verified equation that remains valid during physiological variations. On one hand, measurement redundancy represents a problem for multivariate analysis. Redundancy provides no information about the physiological state of a patient, but dominates the inter-variable covariance structure and therefore may bury relevant features. Measurement redundancy should be removed before multivariate analysis. On the other hand, redundant measurements can be used to improve the accuracy of signal estimation.
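How redundant measurements can improve signal estimation is illustrated by the following sketch: an element-wise median across three HR sources out-votes an artifact in any single channel. This is a simplified stand-in for the hybrid median filter developed in Chapter 8, and the channel values are invented for illustration.

```python
import statistics

def fuse_hr(ecg_hr, spo2_hr, abp_hr):
    """Element-wise median across three redundant HR sources.

    A transient artifact in any single source is out-voted by the
    other two, so the fused estimate stays near the true rate.
    """
    return [statistics.median(t) for t in zip(ecg_hr, spo2_hr, abp_hr)]

# ECG channel corrupted by an electrocautery-like spike at t = 2.
ecg  = [72, 71, 180, 73, 72]
spo2 = [71, 72,  72, 72, 71]
abp  = [72, 72,  73, 73, 72]
print(fuse_hr(ecg, spo2, abp))  # -> [72, 72, 73, 73, 72]
```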
1.3 Outline

The aim of this thesis is to recognize clinically relevant changes in physiological trend signals online, in the presence of noise and intraoperative variability. The thesis starts with the extension of a traditional trend-detection method, and then shifts the focus to online adaptive trend monitoring.

This thesis is organized into 10 chapters. This chapter has introduced the problems of signal estimation and trend-change detection, and highlighted the challenges that need to be addressed in the following chapters.

Chapter 2 reviews the studies that have been done on clinical decision support from the 1960s up to 2008. Each project is reviewed in terms of the level of interpretation, methodology, and target clinical application. In spite of many advances, trend-change detection remains an unsolved problem.

Chapter 3 introduces the approaches for level-shift detection, which form the basis for model-based trend-change detection. The study procedure and performance-evaluation process used in this thesis are also described.

The methods proposed in Chapters 4, 5, 6, and 7, based on different forecasting models, are designed to detect changes in different trend features. In Chapter 4 and Chapter 5, trend patterns are classified in terms of the direction of change. In Chapter 4, a trend signal is modeled as the Exponentially Weighted Moving Average (EWMA) of historical data. The Cumulative Sum (Cusum) test is used to evaluate forecast residuals and determine whether a trend change has occurred. Chapter 5 describes an adaptive trend-change detection method for monitoring the variables measured at a high sampling rate. In this method, the signal is described using the linear growth state-space model combined with an EWMA model. An adaptive Kalman filter is used to estimate the second-order characteristics and adjust the change-detection process online.
Chapter 6 addresses the intraoperative variability in the variables measured at a low sampling rate by modeling the transitions between different trend patterns using a generalized hidden Markov model. Based on this model, a switching Kalman filter and an adaptive Cusum test are proposed for online trend-pattern recognition.

Chapter 7 investigates the potential of Factor Analysis for representing the relationship between trend changes in different variables. Based on a factor model, two test statistics are proposed to capture intraoperative events including light anesthesia and intermediate hemorrhage.

In Chapter 8 the problem of artifact removal and signal estimation is addressed in the framework of data reconciliation. Measurement redundancy is utilized to obtain more robust estimates of the physiological variables than would be possible from a single sensor source. A novel hybrid median filter is proposed as an implementation of the dynamic data-reconciliation process for the HR measurements.

Some of the proposed methods have been implemented in a software system called iAssist. Chapter 9 provides a brief description of the architecture of this system and its real-time performance in the clinical test.

Chapter 10 summarizes the contributions of this thesis and points out the issues that require further study.

Appendix I describes the derivation of the adaptive Kalman filtering process used in Chapter 5. The physiological variables studied in this project are listed in Appendix II.

Chapter 2
Literature review

Automatic measurement and recording came into clinical use in the late 1960s. Ever since then, researchers from both biomedical and clinical fields have worked continuously toward real-time interpretation of physiological data in order to assist decision making in physiological monitoring. This chapter reviews the studies of clinical decision support from the 1960s up to the current time (2008).
The author searched the online databases PubMed, IEEE Xplore, Science Direct, and the ACM portal using combinations of two groups of keywords: group A included "artifact (artefact) removal", "trend change", "temporal abstraction", and "decision-making support"; group B included "patient monitoring", "physiological monitoring", "critical care", "anesthesia (anaesthesia)", and "vital sign". The resulting publications were then checked in CiteSeer, an open-access citation search engine, to find related publications. This chapter starts with a brief overview of the development of physiological monitoring, followed by a detailed review in chronological order.

2.1 Overview of clinical decision-support systems

Physiological monitoring has been an active topic of research since the 1960s. Over the years, as the technology and the understanding of the cognitive process involved in physiological monitoring have developed, various techniques have been adopted in physiological monitoring to provide decision support at different levels of interpretation.

Most studies in the 1960s and 1970s focused on automatic trend-change detection. Many researchers applied the statistical tests used for Statistical Process Control (SPC) to physiological monitoring. Endresen [42] and Avent [8] reviewed the popular models and statistical tests used during this time period.

Since the late 1970s, the successful application of Artificial Intelligence (AI) to industrial problems has driven some researchers to investigate the potential of AI for decision support in the clinical environment. Many intelligent monitoring systems were developed during this period, and most of them attempted to derive a diagnosis directly from signal measurements. However, the performance of these systems was unsatisfactory in practice.
The unsuccessful application of first-generation intelligent monitoring systems raised awareness of the important role of "time" in physiological monitoring [143]. Temporal abstraction emerged as a research topic, aimed at summarizing temporal variations in physiological signals. Techniques used in AI, statistical tests based on forecasting models, and curve fitting were used for temporal abstraction. Uckun [144] reviewed the physiological monitoring systems developed from the early 1980s to the 1990s with an emphasis on the target application. Other surveys include [14, 45, 75, 129], with [45] dedicated to trend-change detection.

The studies in each stage are reviewed in the following, in terms of methodology, clinical application, and target level of interpretation. It will be demonstrated that trend-change detection is an essential part of a clinical decision-support system and that, despite all the efforts dedicated to it, this topic remains a challenge in physiological monitoring.

2.2 Early-stage trend-change detection

Physiological monitoring studies started in the early 1960s when data loggers were introduced to medical applications [134]. Many methods developed during this period were based on statistical forecasting models and focused on trend monitoring. A signal model and a statistical test are the two components of a model-based change-detection method. An appropriate model should translate the trend variations of interest into nonzero deviations in the filtering or forecast residuals. Testing the residuals can then reveal whether a change has occurred. Many important models and test charts used in SPC were adopted in physiological monitoring.

Methods based on the EWMA model
The most widely used model was the EWMA model, equivalent to the Box-Jenkins Autoregressive Integrated Moving Average (ARIMA)(0,1,1) model.
As the name indicates, an EWMA model generates predictions based on the accumulation of historical data, and the influence of historical data on the predictions attenuates exponentially over time (see Chapter 4 for details). An appropriately configured EWMA model can adapt to a new plateau very quickly while translating increasing and decreasing ramps into sustained non-zero residuals. This property makes the EWMA model very suitable for trend-change detection.

Trigg's Tracking Signal (TTS) [139] is one of the most widely used statistics with the EWMA model for change detection. The TTS is defined as the ratio of the exponentially smoothed residual to the exponentially smoothed absolute residual. One of the desirable properties of the TTS is that its value ranges from -1 to 1, and each value corresponds to a statistical significance. However, the TTS, if configured to be robust against impulsive artifacts, often generates false alarms for sustained changes of negligible amplitude [42]. Despite this weakness, the TTS, as a milestone in statistical physiological monitoring, is still widely used by researchers to obtain a comparative evaluation of their work [72, 85, 161].

Hope [62] addressed the limitations of the TTS in 1973 by multiplying the value of the TTS by a weighting function, referring to the new index as the Patient Condition Factor (PCF). The original weighting function used in [62] ranges from -1 to 1 and increases linearly with the magnitude of the forecast residuals. The PCF method reduced false-detection rates compared with the TTS; however, the relationship between the index magnitude and the level of statistical significance in the TTS was also lost. Empirical testing is needed to determine the alarm limits for PCF-based testing [8].

Conventional SPC charts were also adopted in patient monitoring. Among these charts, the Cusum test [100] gained great popularity.
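The TTS and the Cusum test can both be sketched on a common stream of forecast residuals. The smoothing constant for the TTS and the Cusum reference value k and decision limit h below are illustrative choices, not values from the cited works.

```python
def trigg_tracking_signal(residuals, alpha=0.1):
    """TTS: exponentially smoothed residual over smoothed absolute residual.

    The ratio is always in [-1, 1]; values near +/-1 indicate a
    sustained bias in the residuals, i.e. a trend change.
    """
    e_s, ae_s, tts = 0.0, 1e-9, []       # tiny ae_s avoids division by zero
    for e in residuals:
        e_s = alpha * e + (1 - alpha) * e_s
        ae_s = alpha * abs(e) + (1 - alpha) * ae_s
        tts.append(e_s / ae_s)
    return tts

def two_sided_cusum(residuals, k=0.5, h=5.0):
    """Two one-sided Cusum on residuals; returns index of first alarm or None."""
    up = down = 0.0
    for i, e in enumerate(residuals):
        up = max(0.0, up + e - k)        # accumulates positive shifts
        down = max(0.0, down - e - k)    # accumulates negative shifts
        if up > h or down > h:
            return i
    return None

# A level shift of +1 starting at sample 50 (noise-free for illustration).
res = [0.0] * 50 + [1.0] * 50
print(two_sided_cusum(res))              # -> 60, shortly after the shift
print(trigg_tracking_signal(res)[-1])    # close to 1 once the shift persists
```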
Stoodley and Mirnia [131] modeled the HR signal using an EWMA model plus a piecewise-constant incremental factor. The two one-sided Cusum test [10] was applied to the EWMA forecast residuals to detect trend changes. The detected trend changes were classified using heuristics into transients, ramps, and steps. This method demonstrated considerable potential for trend monitoring. The Cusum test was also used in [56, 120, 128].

Methods based on other integral models
In addition to the standard EWMA model, other integral models were also used to describe physiological signals. The nonseasonal Holt-Winters model [61] extends the EWMA model by assuming that both the signal mean level and the slope of changes are time-varying and follow EWMA processes. This model is equivalent to a Box-Jenkins ARIMA(0,2,2) model. It was used in [4] to monitor intracranial pressure. Compared with the EWMA model, the nonseasonal Holt-Winters model tends to over-fit most medical series and is only suitable for short-term forecasting [8, 42, 131].

The Exponentially Mapped Past (EMP) is another statistic based on the integration of signal history. The EMP of an input signal is generated by passing the signal through a low-pass resistor-capacitor filter or by an equivalent digital process [99]. When used for change detection, EMP filters with different time constants are usually connected in parallel or in series. For a ramp change, the differences between the outputs of these filters converge to an affine function of the slope. The Student's t-test is usually performed on the output differences to detect a trend change. The cascaded EMP system is preferred in practice because, if the time constants are chosen appropriately, the outputs of a cascaded system are less affected by the unwanted oscillations caused by the unmatched phase shifts of different integrators.
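The EMP construction can be sketched with a discrete first-order low-pass filter as the digital equivalent of the RC circuit. For a ramp of slope m, each filter output lags the input by m(1 - alpha)/alpha in steady state, so the difference between two outputs with different smoothing constants settles to a value proportional to the slope, as stated above. The smoothing constants and the slope are illustrative.

```python
def emp_filter(signal, alpha):
    """Discrete exponentially-mapped-past (first-order low-pass) filter."""
    y, out = signal[0], []
    for x in signal:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

slope = 0.1
ramp = [slope * t for t in range(500)]
fast = emp_filter(ramp, 0.2)    # short time constant
slow = emp_filter(ramp, 0.05)   # long time constant

# Each output lags the ramp by slope * (1 - alpha) / alpha, so the
# difference of the two outputs is proportional to the slope:
diff = fast[-1] - slow[-1]
expected = slope * ((1 - 0.05) / 0.05 - (1 - 0.2) / 0.2)
print(diff, expected)  # both 1.5 for this slope
```

A t-test on this output difference, as described above, then decides whether a ramp is present.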
A cascaded digital EMP system was implemented in a bedside monitor called the Patient Alarm Warning (PAW) system for detecting bradycardia, tachycardia, and sustained hypotension or hypertension [59, 135]. Although the PAW system has great potential for trend-change detection, it tends to miss changes of a large amplitude but a short duration [42]. Furthermore, PAW is a very inflexible system: it is often necessary to change the time constants of the integrators for a particular variable or patient.

Methods based on the ARMA model
The Box-Jenkins ARMA model was used in [42] to describe HR and Mean Arterial Pressure (MAP) trend series. In an ARMA model, the present level of the variable is expressed as a linear combination of past values and of the present and past values of a random series, i.e., a combination of the autoregressive (AR) and Moving Average (MA) components. A time series described by an ARMA model is assumed to be stationary. Unfortunately, this assumption is not valid for most physiological variables, as a patient's status varies over time in most clinical situations. If an ARMA model is used to describe a physiological trend signal, not only the model parameters but also the model order have to be adjusted in real time [42]. Estimating an ARMA model requires a substantial number of observations (more than 50 according to [26]) and intensive computation. The ARMA model was therefore not recommended for modeling physiological trend signals [42].

Multi-state Kalman filter and Bayesian forecasting
In contrast to the approaches based on hypothesis testing, the multi-state Kalman filter takes a Bayesian approach and provides a flexible framework for modeling and analyzing time series subject to abrupt changes. Initially introduced by Harrison and Stevens in 1976 for forecasting in economics [57], the multi-state Kalman filter was soon adopted by many researchers for physiological monitoring in different clinical scenarios [8, 32, 42, 51, 121, 124]. For example, in 1983 Smith and West [124] used the method for monitoring renal creatinine levels in patients who had recently received renal transplants.

In the multi-state Kalman filter approach, a signal is treated as piecewise linear and is modeled by the linear growth model (see Chapter 5 for details). The linear growth model is a state-space model in which the final observation, signal mean, and slope of change are all subject to Gaussian disturbances. Depending on which factor is disturbed, trend changes exhibit transient, ramp, or step patterns. The prior probability of occurrence for each pattern is assumed to be constant over time and independent of the signal history. Under this assumption, the multi-state Kalman filter is used in [124] to solve two problems: (1) estimating the signal in the presence of disturbances and (2) investigating the filtering residuals and estimating the posterior probability of occurrence for each pattern. Smith and West [124] also proposed the model-estimation process.

The multi-state Kalman filter estimates the probability of occurrence for each of the possible trend patterns online, thus providing a more informative summary of a signal's temporal behavior than the statistical testing methods. This functionality is very desirable for a diagnostic inference application. However, the method has a few drawbacks: first, the linear growth model used in the method, similar to the ARIMA(0,2,2) model [4], tends to be oversensitive to artifacts; furthermore, the assumption of a time-invariant prior occurrence probability for trend patterns is not compatible with clinicians' cognitive processes.

Most approaches proposed during this period were designed to detect variations in a signal's mean level, with very limited effort devoted to estimating the second-order statistics.
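The linear growth model underlying this family of methods can be sketched in state-space form with a single standard Kalman filter; the multi-state version runs one such filter per disturbance pattern and weighs them by posterior probability. The noise variances below are illustrative, and this sketch is the single-model case only.

```python
import math

def kalman_linear_growth(ys, q_level=0.01, q_slope=0.001, r=1.0):
    """Kalman filter for the linear growth model: state = [level, slope],
    level_t = level_{t-1} + slope_{t-1} + w1, slope_t = slope_{t-1} + w2,
    observation y_t = level_t + v."""
    x = [ys[0], 0.0]                      # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    residuals = []
    for y in ys[1:]:
        # predict: level advances by the slope, slope persists
        x = [x[0] + x[1], x[1]]
        P = [[P[0][0] + P[0][1] + P[1][0] + P[1][1] + q_level,
              P[0][1] + P[1][1]],
             [P[1][0] + P[1][1],
              P[1][1] + q_slope]]
        # innovation and its variance
        e = y - x[0]
        s = P[0][0] + r
        residuals.append(e / math.sqrt(s))  # standardized forecast residual
        # update with Kalman gain K = [P00, P10] / s
        k0, k1 = P[0][0] / s, P[1][0] / s
        x = [x[0] + k0 * e, x[1] + k1 * e]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, residuals

# On a noise-free ramp, the slope estimate converges to the true slope.
ys = [0.5 * t for t in range(100)]
x, res = kalman_linear_growth(ys)
print(round(x[1], 2))  # -> 0.5
```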
Most of the integral models reviewed in this section can be categorized as generalized exponential-smoothing models [33], which are also the models of choice in this thesis.

2.3 Diagnostic monitoring

In the late 1970s, the development of AI and its successful application to industrial processes stimulated the development of intelligent diagnostic systems. MYCIN [118] (Stanford University), a rule-based expert system, was introduced in 1976 and achieved great success in the diagnosis of bacterial infections. The same group soon extended the application of expert systems to patient monitoring and introduced a ventilation-management consulting system called VM [44], which is commonly thought to be the first diagnostic monitoring system. Ever since the introduction of VM, much effort has been directed toward intelligent diagnostic monitoring.

AI technologies mimic how clinicians reason with their expertise in the presence of ambiguity and incomplete information. The knowledge used for diagnosis often has a very loose structure and therefore demands a more sophisticated framework of representation. A variety of AI techniques, such as rule-based systems, Bayesian belief networks, decision trees, and artificial neural networks, have all been used in diagnostic monitoring.

Knowledge representation
Rule-based expert systems are widely employed for diagnostic reasoning, as most knowledge and inference processes involved in clinical practice are already in symbolic form. A complete expert system consists of three major components: a knowledge base, which stores expert knowledge in the form of "if... then..." rules; a user interface for knowledge acquisition and real-time data input; and an inference engine that guides the reasoning process using appropriate search mechanisms.
The belief behind the expert system is that the inference processes adopted by different experts are the same, while the contents of the knowledge they possess differ. Based on this belief, the inference engine can be standardized for a broad application domain. There are many commercial and open-access inference engines available for medical diagnosis.

Many rule-based expert systems proposed during this period provided ventilatory support [44, 65, 79, 122]. VM [44] was designed to assist mechanical-ventilation management. The Computerized Patient Advice System (COMPAS) [122] (Stanford University) utilized the HELP information system developed by the same group to assist with respiratory therapy for patients with Adult Respiratory Distress Syndrome (ARDS). Ventilatory support remained an active application problem in the early 1990s [65, 79]. Anesthesia Expert Assist (AES) was a rule-based system that offered intelligent alarms and therapy recommendations to anesthesiologists for monitoring hemodynamics during aortocoronary bypass surgery [109, 114]. Although intuitive to use, rule-based expert systems have a very loose structure and therefore cannot be trained using validated cases.

The Bayesian Network (BN) was also employed to represent clinical knowledge. A BN is a probabilistic graphical model in which nodes are linked by directed arcs to represent the inference logic. Nodes represent observed or inferred facts, i.e., symptoms or diagnoses. Arcs, qualified by a table of conditional probabilities, encode the dependence between the parent and child nodes. ALARM [15] (Stanford University) is a BN-based system for monitoring mechanical ventilation. In ALARM, 37 nodes linked by 42 arcs represent the prior knowledge. It was claimed that 71% of clinical problems were correctly recognized in the operating room during a test [15].
An Artificial Neural Network (ANN) solves the diagnostic problem from the perspective of classification. An artificial neural network is a group of interconnected simple processing units ("neurons"). The weight of each connection can be adjusted so that the network as a whole describes a complex relationship between its inputs and outputs. When used for diagnosis in physiological monitoring, the inputs are the temporal features of multiple physiological variables and contextual information, while the outputs are the target clinical events. The first ANN-based anesthesia monitoring system known to the author was [97] (University of Utah). In that system, 25 temporal features extracted from the combination of EtCO2, airway pressure, and flow monitored at the mouthpiece were used as inputs to a three-layered ANN to identify alarm conditions. As parametric models, the BN and ANN models can both be fitted to annotated data.

Challenges
The first challenge for diagnostic monitoring is that a complete collection of knowledge is difficult to acquire. The knowledge collection in a successful diagnostic system should be able to link observations to all the events of interest. Specifying all the possible scenarios can be an intractable task. For this reason, knowledge acquisition is often the most tedious step in constructing an intelligent monitoring system.

The second challenge is the availability of real-time contextual information. For example, in intraoperative monitoring, the patient demographic information, sequence of treatments, and environmental disturbances are all essential for making a diagnostic decision. However, it is almost impossible to obtain some of this information without manual input.

The representation of uncertainty is also an important issue for methods that operate at any level of interpretation.
Fuzzy sets, possibility theory, or subjective probability theory can be used to describe information uncertainty in physiological monitoring [96]. For the tasks of signal estimation and trend-change detection, information uncertainty is mostly caused by measurement noise and random physiological variations, and is often represented by probabilistic components. The uncertainty in diagnostic inference is commonly attributed to the inability to perceive and assess all the contributing factors for a particular assertion, or to imprecision of expression. Probability theory is often used with the ANN and BN models. Fuzzy logic was adopted in the AES system [109] and used for anesthesia monitoring. The certainty factor used in VM [44] can be seen as a type of subjective probability. Although these theories express information uncertainty in different fashions, the operations defined in them are quite similar.

In spite of significant efforts devoted to the development of diagnostic monitoring systems, most of the reviewed systems remained in the prototype stage due to their unsatisfactory performance in the clinical environment. In the late 1980s and early 1990s, some researchers reflected on the work that had been done on intelligent physiological monitoring systems. They recognized the hierarchical nature of the task of physiological monitoring, and also pointed out that the lack of effective temporal feature extraction was one of the major reasons for the unsatisfactory performance [107].

Researchers in human factors investigated clinicians' cognitive processes in physiological monitoring in an effort to identify the bottleneck in this information-processing task. They found that the primary effort for decision makers was not at the moment of decision making, but rather in the process of transferring the numerical data to the clinician's cognizance [28].
It has been suggested that summarizing the temporal behavior of physiological variables may improve clinicians' situation awareness [43, 73].

2.4 Temporal abstraction

In the late 1980s, researchers recognized the critical role of "time" in physiological monitoring. Many studies have been conducted on sequential data in order to summarize a signal's temporal characteristics for diagnostic monitoring. This process is referred to as Temporal Abstraction (TA) [70, 101]. There is no commonly accepted definition of TA in the literature. Many TA studies from the early 1990s through to 2007 [129] summarize signals at different levels of interpretation.

The problem of TA, from the perspective of pattern recognition, is often solved in two steps: feature extraction and classification. According to how the two steps are performed, the techniques used for TA can be categorized as AI-based, template-fitting, and model-based statistical testing. When AI techniques are adopted, measurement data are often pre-segmented into a series of fixed-length slices. The basic features of these slices, such as the mean and standard deviation, are calculated and used as input for TA. The pre-segmentation step, although it reduces the temporal uncertainty, does not locate the starting and end points of trend patterns. The challenge for AI-based TA is how to construct an effective classifier (model-based or rule-based) to identify the temporal patterns in these signal slices. In the template-fitting methods, pre-defined temporal patterns are represented by a mathematical function (or a set of functions for a multivariate problem) over time. Measurements are compared with the templates to find the best-fitting pattern. For model-based methods, constructing an appropriate forecasting model is critical: if a model transforms the trend features of interest into nonzero residuals, testing the residuals can reveal trend changes.
Temporal reasoning

The application of AI to time-series analysis is often referred to as temporal reasoning. Most AI-based techniques can incorporate signal measurements, as well as contextual information and pathophysiological knowledge, to provide an integrated data interpretation. When used for trend-change detection, temporal reasoning is performed on the basic features of pre-sliced signal segments. Data granularity, or the length of the signal segments, is one of the most important factors influencing the performance of temporal reasoning. In most studies, the segment length is empirically set for each variable, and the features of interest are then calculated for every slice. For example, in [22], a "characteristic span" is learnt from population data, and then the slope and variability of the measurements in each span are calculated. According to the magnitudes of the slope and variability, signal slices can be roughly described as unstable, stable, increasing, or decreasing before more advanced temporal reasoning is performed. Rule-based systems are widely used for representing the temporal relationships between pre-sliced physiological signals. Trend patterns in a single variable can be recognized by investigating the relationships between successive signal slices [24, 47, 113]. When used for recognizing complex signal patterns formed by multiple physiological variables, the knowledge representation and inference operations become more complicated. To facilitate the acquisition, maintenance, reuse, and sharing of expert knowledge, some researchers have proposed general frameworks for describing the primitives (events, observations, and contexts), relations, and operations that occur in the task of temporal reasoning in a standardized manner. A conceptualized framework can be applied to a variety of applications; for each application, the pre-defined functions in the framework must be implemented using specific methods.
Yuval Shahar (Knowledge Systems Laboratory, Stanford University) proposed a knowledge-based temporal abstraction framework [115]. In this system, knowledge was classified into four types: structural, classification, temporal-semantic, and temporal-dynamic knowledge. The operations are divided into five TA mechanisms: context formation, contemporaneous abstraction, temporal inference, temporal interpolation, and temporal pattern matching. RESUME [116] is a realization of this ontological system for patient monitoring. RESUME demonstrated the advantage of a modular, task-specific architecture for building a knowledge-based system. Another ontology system was proposed by Uckun in [142]. A framework referred to as the temporal constraint network has also been used to represent the causal relations between time-stamped events. In a temporal constraint network, a node represents a time-stamped event and a directed arc represents a temporal constraint between the linked events. A temporal constraint network is often used to describe scenarios constituted by a set of trend abstracts. During monitoring, signal measurements are compared with the pre-defined temporal constraint networks to identify the scenarios of interest. This approach can be viewed as a template-fitting method that operates at a higher level of interpretation. Temporal constraint networks were adopted in DYNASCENE [27], TOPAZ [106], and other systems [37, 117]. Artificial neural networks have also been used for TA [140, 146]. In TrendFinder (MIT Computer Science and Artificial Intelligence Laboratory), a signal is sliced with three different time resolutions [140]. The features of these slices, including the mean, median, maximum, minimum, range, standard deviation, linear regression slope, and absolute value of the linear regression slope, together describe the signal's short-term, mid-term, and long-term behavior.
TrendFinder has been tested on a wide variety of physiological variables. The ANN, linear logistic regression, and decision tree were compared in [140] in terms of trend-pattern recognition performance. It was pointed out that the structure of the classifier had only minimal influence on the performance, and that the completeness of knowledge was of greater importance.

Template fitting

A template is a collection of samples that together demonstrate a meaning more relevant to the final diagnosis. Trend templates are often formulated as regression functions, constrained by the template length, the model structure and parameters, and the sampling interval. To use a template-fitting method, we need to establish a finite set of temporal patterns from medical knowledge and from the modeling of signal variations. During monitoring, signal measurements are compared with all predefined templates. A template is deemed to match the data if the measure of goodness-of-fit meets certain conditions. TrenDx [52, 54] (MIT Computer Science and Artificial Intelligence Laboratory) is a well-known trend-pattern recognition system based on template fitting. In TrenDx, variations in trend signals are classified into seven templates, each described by a regression model: a constant model represents the steady state, two linear models distinguish the increasing and decreasing ramps, and four quadratic models represent the trend patterns that consist of a sharp increase or decrease followed by a plateau [52]. The duration of a template can be adjusted for a particular case in real time by repeatedly comparing signal measurements with realizations of the trend templates over different intervals. TrenDx was initially proposed as a univariate offline method and used for detecting anomalies in children's size-weight growth curves [54].
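The matching step can be illustrated with a toy sketch in the spirit of TrenDx's template families: a constant template for the steady state, a linear template for ramps, and a quadratic template for ramp-then-plateau shapes. The BIC-style penalty used to compare fits is my own simplification, not the scoring actually used by TrenDx.

```python
import numpy as np

def best_fitting_template(segment):
    """Pick the polynomial template that best explains a signal segment.

    Fits a constant, a linear, and a quadratic regression model, and
    scores each by its residual sum of squares plus a BIC-style
    complexity penalty, so that extra parameters must earn their keep.
    The template names are illustrative labels, not TrenDx terminology.
    """
    seg = np.asarray(segment, dtype=float)
    n = len(seg)
    x = np.arange(n)
    names = {0: "constant", 1: "ramp", 2: "ramp-plateau"}
    best_name, best_score = None, np.inf
    for degree in (0, 1, 2):
        coeffs = np.polyfit(x, seg, degree)
        sse = float(np.sum((seg - np.polyval(coeffs, x)) ** 2))
        # small floor avoids log(0) on a perfect fit
        score = n * np.log(sse / n + 1e-12) + (degree + 1) * np.log(n)
        if score < best_score:
            best_name, best_score = names[degree], score
    return best_name
```

A steady segment is matched by the constant template, a linear drift by the ramp template, and a curved run by the quadratic template, mirroring the division of labor among TrenDx's template families.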
TrenDx has been extended to recognize complex patterns (scenarios) in real time by fitting the measurements of multiple variables to a predefined template complex [53]. Online curve-fitting has also been used for pre-segmentation. In [25], a trend signal is viewed as a series of connected straight lines, and the Cusum test is used to determine whether a trend change has occurred and a new curve-fitting process should be started. This method has been tested on both simulated data and clinical data from an ICU setting. Fuzzy course, a concept based on fuzzy set theory, was proposed by Steimann [130] to represent the allowable deviation in a temporal template. Fuzzy course was implemented in a system called DIAMON [130] to detect ARDS, and was also used in [78] for physiological monitoring in anesthesia.

Trend detection based on statistical forecasting models

The multi-state Kalman filter remains one of the most popular statistical forecasting methods for trend-change detection. In the original multi-state Kalman filter, trend patterns are reflected as short-lived disturbances in the signal model, and therefore the change-detection performance deteriorates significantly in the presence of artifacts. To address this problem, some researchers [153] have modified the signal model used in the original multi-state Kalman filter to generate sustained nonzero residuals for the trend patterns of interest. Based on the modified model, a statistical test and a Bayesian approach have been proposed to provide robust change detection [153]. The Bayesian method was later used in [32] for HR monitoring. The parameter estimation process for the modified multi-state Kalman filter has also been described in [153]. Principal Component Analysis (PCA) has been used in [48] to summarize the relationship between multiple variables. Visual display of the trajectory of the principal components may facilitate the perception of variations in the inter-variable relationship, and may therefore result in earlier detection of clinical events. A modified PCA process is used in [5] for noise reduction in patient monitoring. This modified PCA appears to be more effective in extracting the inter-variable relationships than the standard PCA. A physiological monitoring system should integrate the methodologies proposed at different levels of interpretation. For example, the segmentation results of [25] have been further processed by a rule-based temporal reasoning system [24] to derive an abstract more relevant to the final diagnosis. YAQ [142] (Vanderbilt University) is a hybrid qualitative/quantitative ontology for physiological monitoring. In YAQ, the quantitative models summarize trend signals and provide inputs for a rule-based diagnostic module; the rule-based module, in turn, adjusts the trend analysis process according to the contextual information. This architecture was implemented in a patient monitoring system called SIMON [145] and used for ventilatory management of the newborn.

2.5 Artifact removal and signal estimation

Most of the artifact removal methods proposed to date [132] are based on the difference in the dynamic characteristics (power-spectrum distributions in the frequency domain) of clinically relevant trend changes and artifacts. As pointed out in Section 1.2.3, a band-pass filter is not effective at suppressing sustained artifacts. The presence of artifacts remains a significant problem for physiological monitoring. Measurement redundancy can be used to obtain more robust signal estimates than would be possible from a single sensor. Feldman et al. [40] have proposed to estimate HR by fusing the HR measurements from the ECG monitor, pulse oximeter, and invasive BP monitor.
This method assumes two possible noise distributions for each sensor: one for background noise and the other for artifacts. There are in total 16 hypotheses about the location of artifacts in the sensor matrix (the number of hypotheses is 2^(M+1) = 16 for M = 3 sensors). The measurement with the lowest likelihood of artifactual interference is selected and used in the Kalman filter to derive the fused estimate. The challenge for this method is the acquisition of the noise distributions; additionally, the hypothesis selection involves significant computational effort. These limitations may prevent this method from being implemented in practice. Biomedical manufacturers have shown great interest in sensor fusion [41, 123]. A recent method [123] (GE Healthcare, Chalfont St Giles, UK) fuses the HR measurements using a rule-based approach. The method detects artifacts in the HR from the ECG monitor by testing the measurements against preset limits, and suggests using the HR measurements from other sources only if the HR from the ECG monitor exceeds the limits. The thresholds in this method are very difficult to set: if they are too small, relevant abrupt changes can be distorted; if they are too large, the method may fail to detect many corrupted measurements.

2.6 Conclusion

The survey above has revealed the weaknesses and strengths of different techniques when they are used for extracting information at different levels of interpretation. These studies, if integrated appropriately, could provide essential support for decision making in the clinical environment. Trend-change detection remains an unsolved problem in physiological monitoring. AI techniques, curve templates, and statistical forecasting models can be used to represent the dynamic behaviors of physiological trend signals.
Among these, statistical models that generate predictions based on historical data have demonstrated sufficient expressive power for the purpose of trend-change detection, while maintaining simplicity and flexibility. The methods proposed in this thesis are based on this type of statistical forecasting model. The statistical trend-change detection methods proposed in the 1970s and 1980s have provided a solid basis for this thesis. However, the measurement and recording technologies available in the clinical environment have improved significantly over the years. For example, HR in an early study [131] was sampled every minute, whereas a common sampling interval in the current clinical environment is 1 or 5 seconds. The signals measured in the current clinical environment have different statistical properties from those measured in earlier clinical environments, posing new challenges for trend-change detection. More advanced signal processing methods need to be employed to address these challenges. The increased computational power of bedside monitors has also made the application of advanced signal processing possible in the clinical environment. The aim of this thesis is to recognize clinically relevant trend changes in physiological variables online, in the presence of noise and intraoperative variability. The thesis starts with the extension of a traditional trend-detection method based on the EWMA model, and then shifts its focus to online adaptive trend monitoring.

Chapter 3
Change detection techniques and study procedure

Model-based trend-change detection is generally realized in three steps: (1) modeling: describing a signal using a stochastic dynamic model; (2) detrending: calculating the difference between signal measurements and predictions or estimates; and (3) change detection: analyzing the residuals after detrending to distinguish trend changes from random variations.
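A minimal end-to-end illustration of these three steps might look like the sketch below. This is my own toy example, not the EWMA-Cusum method developed later in this thesis; the forgetting factor, the minimum shift d, and the rise distance h are arbitrary illustrative values.

```python
def detect_level_increase(y, lam=0.3, d=2.0, h=5.0):
    """Three-step change detection on a univariate trend signal.

    (1) modeling: an EWMA of past measurements serves as the one-step
        forecast; (2) detrending: the residual is the measurement minus
        the forecast; (3) change detection: a one-sided Cusum of the
        residuals is compared with a rise distance h, where d is the
        smallest level shift of interest.
    Returns the first sampling instant at which an increase is
    detected, or None if the signal stays steady.
    """
    forecast = y[0]          # initialize the model with the first sample
    cusum = 0.0
    for t in range(1, len(y)):
        e = y[t] - forecast                       # (2) detrending
        cusum = max(cusum + e - d / 2.0, 0.0)     # (3) one-sided Cusum
        if cusum > h:
            return t
        forecast = lam * y[t] + (1 - lam) * forecast  # (1) EWMA update
    return None
```

On a signal that steps upward, the residuals become sustained and positive and the Cusum quickly exceeds h; on a steady signal the residuals stay near zero and no change is reported.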
An appropriate signal model, together with the detrending step, should transform the target types of trend changes (which may include a switch of trend direction or a change in the incremental rate, as pointed out in Chapter 1) into sustained level shifts (or rotations, for multivariate analysis) in the forecast residuals. It is also desirable that the magnitude of the forecast residuals can be categorized to identify the trend patterns. Model selection is one of the major issues for model-based change detection and pattern recognition. In the following chapters, different models are adopted according to the signal characteristics and the target trend changes. This chapter is focused on detection techniques that analyze forecast residuals to determine whether a level shift has occurred. The chapter consists of three sections. The first section is focused on level-shift detection, a fundamental topic for the following chapters. The second section describes the study procedures adopted throughout the project. The last section describes the criteria used for performance evaluation. The first section addresses univariate level-shift detection; consequently, all the variables therein are scalars.

3.1 Level-shift change detection

3.1.1 Uncertainty in level shifts

The forecast residual e_t is defined as the difference between the observation y_t and the model-based forecast \hat{y}_{t|t-1}. Ideally, a level shift caused by a change in the property of interest should have a clear onset and a constant magnitude over the duration of the change. In practice, however, relevant level shifts are often blurred by measurement noise and random physiological variations, as in Equation (3.1):

e_t^{(i)} = y_t - \hat{y}_t^{(i)} = d_t^{(i)} + u_t^{(i)}    (3.1)

where d_t^{(i)} is the mean of the forecast residuals and carries the information about the different trend patterns, and u_t^{(i)} represents the overall uncertainty in the residuals.
It is assumed that all the target trend patterns constitute a finite set Z = {z^{(1)}, ..., z^{(N)}}. The superscript (i) used in Equation (3.1) indicates that the trend pattern z^{(i)} is in effect at time t. For most signals in this thesis, the random variable u_t^{(i)} can be assumed to be independent over time and to follow a Gaussian distribution with zero mean, although the variance of u may vary over time. Due to the presence of uncertainty, the residuals resulting from different patterns may overlap with one another. For example, in Figure 3.1, the distribution for a positive level shift and the distribution for residuals with zero mean overlap in a range between their means. Measurements with magnitudes falling within this range may have been generated by either distribution. The problem of trend-pattern recognition is to determine, in the presence of uncertainty, which pattern is in effect by analyzing the residual e_t. Detection of positive level shifts is used as an example in this chapter. The following sections introduce the application of hypothesis testing, the sequential likelihood ratio test, and Bayesian estimation to trend-change detection. I will also describe how the detection performance is adjusted in each of these frameworks to compensate for intraoperative variability.

3.1.2 Hypothesis testing

A statistical hypothesis test makes the decision between a null hypothesis H0 and its alternative hypothesis H1. A test statistic must summarize the information in the samples that is relevant to the hypothesis. Given a desired significance level α, the critical region [h_α, ∞) (or, for a two-sided test, the region beyond [-h_{α/2}, +h_{α/2}]) can be calculated. One rejects H0 if the sample being tested falls in the critical region. For a hypothesis test, two sources of error may occur:

• Type I error: also known as an α error, or a false positive, this is the error of accepting H1 while H0 is true.
• Type II error: also known as a β error, a false negative, or a missed detection, this is the error of failing to reject H0 while H1 is true.

Figure 3.1: Probability distributions of the forecast residuals of two different patterns (H0: stable; H1: level increase). The two distributions concentrate around different mean values but overlap over [0, d+].

Detection errors are unavoidable due to the presence of information uncertainty. For example, the simplest way of detecting the positive level shift in Figure 3.1 is to compare the residual value e_t directly with a threshold h_α. This scheme produces many false positive detections because of the overlap between the distributions for H0 and H1.

3.1.3 Sequential probability ratio test

The Sequential Probability Ratio Test (SPRT), or likelihood-ratio test, was developed as a hypothesis test for sequential data [148]. In the SPRT, whether the sample being tested supports the null hypothesis H0 or the alternative hypothesis H1 is often evaluated by the log-likelihood ratio:

L_t = log [ Pr(e_t | H1) / Pr(e_t | H0) ].    (3.2)

In the SPRT, the log-likelihood ratio is often accumulated over time. The accumulated log-likelihood ratio has the effect of emphasizing sustained changes. To detect an increase, the null and alternative hypotheses in the accumulated SPRT are:

H0: the data from t0 up to now are equal to zero;
H1: a level shift of amplitude d+ has occurred.    (3.3)

If the signal distributions under H0 and H1 are both Gaussian with the same variance δ², and the samples are independent and identically distributed over time, the SPRT yields an important level-shift detection technique called the Cumulative Sum (Cusum) test.

Cusum test and V-mask

The statistic used in the Cusum test is the cumulative deviation from the target value.
In trend-change detection, the Cusum accumulates the residuals e from t0, the last time the Cusum was restarted, to the current instant t, as in Equation (3.4):

C_t = \sum_{k=t0}^{t} e_k.    (3.4)

A visual procedure proposed by Barnard in 1959 [10], known as V-mask testing (see Figure 3.2), can be used with the Cusum statistic to perform change detection. A V-mask moves with its origin on the Cusum plot from the start of the sequence to the current sampling instant. At every sampling instant, the previous Cusum values are tested against the two arms of the V-mask. When the lower arm is crossed from within, a level increase is detected; when the upper arm is crossed, a level decrease is detected. The point at which the Cusum plot crosses the arm of the V-mask (t* in Figure 3.2) is identified as the start of the detected level shift.
Figure 3.2: The Cusum crosses a V-mask's lower arm to detect an increase.

The shape of the V-mask is determined by the critical region of the corresponding SPRT (Equation (3.3)); therefore, the performance of the Cusum test is entirely determined by the shape of the V-mask:

log [ β / (1 - α/2) ] < log [ Pr(e_{t0} ... e_t | H1) / Pr(e_{t0} ... e_t | H0) ] < log [ (1 - β) / (α/2) ]    (3.5)

where α/2 is the one-sided type-I error rate (α is the overall type-I error rate for the two-sided test, which detects changes in both directions), and β is the type-II error rate. With this critical region, the SPRT is carried out as follows:

1. If the likelihood ratio is greater than the upper threshold, the alternative hypothesis H1 is accepted, i.e., an increase is detected;
2. If the ratio is less than the lower threshold, H0 is accepted, i.e., a steady state is detected;
3. Otherwise, the evidence is not strong enough to support either H0 or H1, and the test keeps running.

To realize this SPRT, the slope of the lower arm in Figure 3.2 is set to d+/2, half the magnitude of the smallest increase of interest, and the rise distance h_α^+, equivalent to the upper critical threshold in Equation (3.5), is calculated as below [128]:

h_α^+ = (δ²/d+) log( (1 - β) / (α/2) ) ≈ -(δ²/d+) log(α/2)    (3.6)

where δ²/d+ is the normalizing factor. The approximation holds because the values of α and β are often very small. Theoretically, the lower threshold in Equation (3.5) is not reflected in the V-mask, as the rise distance associated with this threshold would be negative; in practice, however, if a lower arm with a very small h_0^+ is not crossed by any Cusum values for a considerable period of time, the signal can be declared steady.

Two one-sided Cusums

The Cusum plot can cross V-masks of different rise distances. For each instant t there exists a maximum h_max^+(t) that guarantees that every lower arm with h+ ≤ h_max^+(t) will detect an increase.
Since a rise distance is monotonically related to a false positive rate α, h_max(t) can be used as a measure of certainty for change detection. For example, the maximum h+ for the lower arm in Figure 3.2 reflects the certainty level for detecting an increase. The certainty levels h_max^± are equivalent to Barnard's one-sided Cusums C± [10], and can be calculated iteratively as

C_t^+ = max(C_{t-1}^+ + e_t - d^+/2, 0)
C_t^- = min(C_{t-1}^- + e_t - d^-/2, 0)    (3.7)

The two one-sided Cusums are compared with thresholds to detect level shifts.

Adaptivity

To obtain constant α- and β-rates over time, the shape of the V-mask should be adapted to the time-varying signal characteristics. The slope d should be updated if the magnitude of the smallest change of interest varies over time. The rise distances h should be adjusted according to the time-varying variance δ_t². Furthermore, if the sequential prior probabilities for H1 (Pr_{t0}(H1)) and for H0 (Pr_{t0}(H0)) are not always equal to one another over time, the likelihood ratio in Equation (3.5) should be modified as below:

L_t^{(H1/H0)} = log [ Pr(e_{t0} ... e_t | H1) Pr_{t0}(H1) / ( Pr(e_{t0} ... e_t | H0) Pr_{t0}(H0) ) ].    (3.8)

The rise distance h of the V-mask should be adjusted accordingly. The Cusum test is an optimal detection method for stationary and independent signals subject to abrupt changes in the mean, since it minimizes the delay between the occurrence of a shift and its detection for any fixed false-alarm rate [12]. Due to this advantage, the Cusum test is widely used in a broad variety of applications.

3.1.4 Bayesian approaches

The probability density of the residual e_t, conditional on state z^{(i)} being in effect at time t, can be calculated as below:

Pr(e_t | z_t^{(i)}) = N^{(i)}(e_t; 0, δ_t²).    (3.9)

With a statistical model that forecasts the future based on the signal history, Pr(e_t | z_t^{(i)}) represents the evidence provided by the new measurement y_t for z_t^{(i)}.
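A single update of this Bayesian evidence combination can be sketched as follows. This is an illustrative two-pattern example; the Gaussian means, the shared variance, and the function name are my own assumptions, not the estimator developed in this thesis.

```python
import math

def posterior_over_patterns(e_t, priors, means, var):
    """One Bayesian update over a finite set of trend patterns.

    Each pattern z(i) predicts a Gaussian residual with its own mean
    d(i) (0 for the steady state) and a shared variance; the posterior
    is proportional to likelihood times prior.  All numeric inputs
    here are illustrative.
    """
    def gauss(x, mu):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    weighted = [gauss(e_t, mu) * p for mu, p in zip(means, priors)]
    total = sum(weighted)
    return [w / total for w in weighted]  # normalized posterior
```

For a residual e_t = 3 with equal priors over a steady pattern (mean 0) and a level-shift pattern (mean 4), the posterior favors the level-shift pattern.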
The probability distribution N^{(i)}(0, δ_t²) should be updated online to track the time-varying signal characteristics. A Bayesian estimator integrates prior knowledge with the information carried by the observations. A fixed-point smoother estimates Pr(z_{t-k}^{(i)} | y_{t0} ... y_t) for a fixed lag k.

Three thresholds h_0^+ < h_1^+ < h_2^+ are used for detecting increases, and h_0^- > h_1^- > h_2^- for detecting decreases, to detect trend changes at two different levels of certainty. A level-1 minor increase (decrease) is detected when C+ (C-) crosses h_1^+ (h_1^-) from h_0^+ (h_0^-). A level-2 major increase (decrease) is detected when C+ (C-) crosses h_2^+ (h_2^-) from h_1^+ (h_1^-). If the Cusum statistics in both directions fall within [h_0^-, h_0^+], the signal has reached a new plateau. The Cusum test was originally designed for applications where a detected change would demand prompt action to reset the system to a normal level. In the case of intraoperative patient monitoring, it is neither necessary nor realistic to interrupt surgery and return the patient's physiological state to the original level. However, allowing the detected trend to continue introduces two problems for change detection. First, the influence of a detected change remains in the Cusum statistics and reduces the detection sensitivity in the future. Second, the oscillations around a detected trend can trigger repetitive detections. In practice, clinicians usually ignore measurements recorded a long time ago (empirically set to T sampling intervals in this chapter) when evaluating the current trend dynamics. It is also assumed in this chapter that the measurements before the starting point of the most recently detected trend change have little influence on the current detection process. In order to remove the influence of historical measurements, a length-constrained V-mask is used in the Cusum test.
For example, the length of the upper arm of the V-mask for detecting decreases should be no longer than min(T, t - t* + 1), where t* is the starting point of the most recent increase (see Figure 3.2 for how to locate t*). This is equivalent to resetting the Cusum C- at max(t*, t - T) to zero and recalculating its values between max(t*, t - T) and t. To reduce redundant detections, the continuous extension of a detected trend is ignored. Repetitive detections due to oscillations around a threshold can be suppressed by comparing the starting locations of two successive changes in the same direction. If the starting locations of both changes fall within a short window of a predefined length (T/10 sampling intervals, for example), the second change is viewed as an extension of the first one and is therefore ignored.

4.1.3 Abruptness of trend changes

The time delay of detection, i.e., the distance between t* and t in Figure 3.2, characterizes the abruptness of the detected trend. If this distance is sufficiently short (less than T/10 sampling intervals in this chapter), the trend is recognized as an abrupt change.

4.1.4 Alert management

A detected trend change does not necessarily demand immediate attention. Detected trend changes can be combined with other information to determine whether an alert should be triggered. The heuristics below are used to suppress uninformative alerts:

A: For most abrupt changes, a level-1 and a level-2 change are often generated immediately one after the other. To avoid repetitive alerts in such a situation, the level-1 abrupt change is delayed for a short period (T/10 sampling intervals). If a level-2 abrupt change in the same direction is detected within this period, the level-2 alert is displayed and the level-1 change is discarded.

B: To reduce false alerts for short-peak artifacts, the alert for an abrupt change is held for a short period (T/10 sampling intervals).
If an abrupt change in the opposite direction is detected within this period and the signal level returns to within 2δ of the initial values, where δ is the standard deviation of the measurement noise, then the two changes are recognized as a short-peak artifact and the alerts for both changes are suppressed.

C: For some variables, trend variations are relevant only when the signal level is within a particular range (the critical range). If a change is detected outside the critical range, the alert is delayed until the signal enters this range. The alert is ignored if the signal does not enter this range before changing its direction. The critical range is predetermined in this chapter, but could also be adapted to the real-time data.

4.2 Results

The performance of the proposed EWMA-Cusum method was tested using the trend signals of end-tidal carbon dioxide (EtCO2), peak airway pressure (Ppeak), expiratory minute volume (MVexp), and noninvasive mean blood pressure (NIBPmean) from 40 cases. Rules A and B were applied to all the variables, and Rule C was used for Ppeak in ventilated patients. The critical range for Ppeak was set to above 15 cmH2O for increases and below 10 cmH2O for decreases. The forgetting length T was set to 3 sampling intervals (9 minutes at a sampling interval of 3 minutes) for NIBPmean and to 48 sampling intervals (4 minutes at a sampling interval of 5 seconds) for EtCO2, MVexp, and Ppeak. A fifth-order median filter (see Section 8.1.1.2 for details) was used to remove transient artifacts in EtCO2, MVexp, and Ppeak, and a third-order filter was used on NIBPmean. The 40 cases were divided into a training group and a test group, each containing 20 cases and spanning approximately 25 hours of anesthesia in total. Each trend signal was first filtered using a two-point moving-average filter.
The noise variance δ² for each physiological variable was set to the mean-square deviation of the filtering residuals, averaged over all the annotated steady segments. The forgetting parameter λ was set empirically by investigating the forecast residuals for the steady segments: with the chosen forgetting parameter, the mean absolute residual over a short period after the onset of a steady segment, averaged over all the annotated steady segments, should be less than 2δ. The short averaging period was set to 2 minutes for EtCO2, Ppeak, and MVexp (24 data points at a sampling interval of 5 seconds) and 9 minutes for NIBPmean (3 data points at a sampling interval of 3 minutes). The thresholds ±h1 and ±h2 control the change-detection performance. The magnitude of the level-2 certainty threshold h2 was set so that the resulting false detections amounted to less than 5% of the annotated significant changes in the training group. h1 was set to h2/2, and h0 to h2/5. Following parameter tuning, the EWMA-Cusum method was tested on the remaining 20 cases. Two anesthesiologists participated in the performance evaluation. The two-level alerts generated by the method were displayed on the unfiltered data and visually inspected by the anesthesiologists. They first evaluated the detection results independently, including the alert location, direction, and level of certainty. When the anesthesiologists' opinions about a particular alert differed, the discrepancies were discussed, and the agreed-upon results were used for performance evaluation. The anesthesiologists were also asked to annotate the changes of level-1 or level-2 significance that were missed by the method.

4.2.1 Example case

An EtCO2 trend signal from a pediatric patient is used here to demonstrate how the method works and to show the results of each step (Figure 4.1). The changes were displayed at the detection locations.
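The δ and threshold tuning just described can be condensed into a short sketch. The two-point moving-average filtering over annotated steady segments and the h2/2 and h2/5 ratios follow the text, while the function name and interface are illustrative; h2 itself must still be tuned on training data so that false detections stay below 5% of the annotated significant changes.

```python
import numpy as np

def calibrate_thresholds(steady_segments, h2):
    """Estimate the noise level and derive the three Cusum thresholds.

    The noise standard deviation delta is the root of the mean-square
    deviation of two-point moving-average filtering residuals over the
    annotated steady segments; h1 and h0 are fixed fractions of the
    level-2 threshold h2, as described in the text.
    """
    sq = []
    for seg in steady_segments:
        seg = np.asarray(seg, dtype=float)
        smoothed = (seg[1:] + seg[:-1]) / 2.0     # two-point moving average
        sq.append(np.mean((seg[1:] - smoothed) ** 2))
    delta = float(np.sqrt(np.mean(sq)))
    return delta, {"h2": h2, "h1": h2 / 2.0, "h0": h2 / 5.0}
```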
The Cusum test as a robust change detector successfully avoided de- tecting the short-peak artifacts during intubation, but falsely detected the short-peak artefact around t=80. An increased test threshold could avoid this false detection, but would results a missed level-2 decrease at t=293. Using Rule B (see Plot 4 and Plot 5), the short-peak artifacts at t=80 was recognized and suppressed without missing the decrease at t=293. The level-1 alert generated at t=40 for an increase (see Plot 2) was very close to the level-2 alert generated at t=43 for the same increase, due to the abruptness of this change. A similar situation was also observed around t=92. Rule A is used to suppress the unnecessary level-1 alerts in these situations (see Plot 4 and Plot 5). For the long gradual decrease after t=200, the level-1 alert is a very useful early warning. 4.2.2 Performance evaluation In the 20 validation cases, 55 level-1 and 33 level-2 alerts were generated for EtCO2, 65 level-1 and 38 level-2 alerts for MVexp, 26 level-1 and 21 level-2 alerts for Ppeak, and 58 level-1 and 48 level-2 alerts for NIBPmean. The alerts produced by the EWMA-Cusum method were sorted into four types according to the correctness of the location, direction, and level of certainty: (1) alerts with the change direction and the certainty level both correctly detected, (2) alerts whose change direction was correctly de- tected but whose certainty level did not agree with the annotation, (3) alerts 38 4.2. Results A
with a falsely detected change direction, which were counted as false alerts, and (4) missed alerts. Missed alerts refer to the level-2 change points that were annotated by the anesthesiologists but missed by the method. The missed level-1 annotations were not treated as missed alerts, because missing an early notification was considered to be insignificant in this study. The alerts in categories (1) and (2) were both considered to be true positive detections. The whole segment between two successive level-1 change points annotated by the anesthesiologists was taken as a non-change segment (negative instance) and used for evaluating the false positive rate.

Figure 4.1: The change-point detection and alert management process illustrated on an EtCO2 trend signal. Annotated events include intubation, a decrease in anesthetic concentration, the return to spontaneous respiration, and the end of surgery. Plot 1: The EtCO2 measurements were contaminated by transient and short-peak artifacts due to intubation and patient movement. Plot 2: Transient artifacts were removed by median filtering; the signal was then detrended by subtracting the predictions from the filtered signal. Plot 3: The Cusums of the prediction residuals were tested against different V-masks to determine the maximum rise distance that corresponds to the certainty in Plot 4 for each direction. Plot 4: Three thresholds were used in each direction to detect change points of two different levels of statistical significance and the occurrence of new plateaus. The algorithm detected level-2 increases at t=43 and 94, level-2 decreases at t=71 and 293, level-1 decreases at t=67 and 251, and a plateau at t=192. The increase and decrease in the large ellipse constitute a short-peak artifact according to Rule B. The level-1 change points in the small circles were redundant for the abrupt changes according to Rule A. Change points were also marked on Plot 2. Plot 5: The final two-level alerts were displayed only for the relevant trend changes.
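The evaluation protocol described above (alert types (1)-(4), plus negative instances) reduces to a few count ratios; a small sketch, with illustrative function and key names, of how the rates later reported in Table 4.1 are formed:

```python
def alert_performance(counts):
    """Compute performance rates from raw alert counts.

    counts holds n_typeA_L1, n_typeA_L2 (type-(1) alerts per level),
    n_TP, n_FP, n_FN (detections), and n_P_L1, n_P_L2, n_neg
    (annotated changes per level and negative instances).
    """
    n_typeA = counts["n_typeA_L1"] + counts["n_typeA_L2"]
    return {
        "R_TP_L1": counts["n_typeA_L1"] / counts["n_P_L1"],
        "R_TP_L2": counts["n_typeA_L2"] / counts["n_P_L2"],
        "typeA_share": n_typeA / counts["n_TP"],
        "R_TP_overall": counts["n_TP"] / (counts["n_P_L1"] + counts["n_P_L2"]),
        "R_FP": counts["n_FP"] / counts["n_neg"],
        "R_FN": counts["n_FN"] / counts["n_P_L2"],
    }

# usage: the EtCO2 row of Table 4.1
etco2 = alert_performance({"n_typeA_L1": 48, "n_typeA_L2": 29, "n_TP": 83,
                           "n_FP": 5, "n_FN": 3,
                           "n_P_L1": 56, "n_P_L2": 36, "n_neg": 56})
```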
The performance of the proposed method was evaluated by comparing the detection results in the 20 test cases with the expert annotations (see Table 4.1). The detection results and the annotations are listed in Tables 4.1-(a) and (b) respectively. In Table 4.1-(c), the detection results are compared with the expert annotations in the same category. The true positive rates RTPL1 and RTPL2 evaluate the correctness of the results in both the change direction and the certainty level, indicating the detection accuracy in a strict sense. RTP(L1+L2) evaluates the correctness of the results in the trend direction.

The directions of over 89% of the annotated changes were correctly identified (see RTP(L1+L2) in Table 4.1-(c)), with the highest rate being 95.4% for NIBPmean. The certainty levels of over 92% of the true positive detections in EtCO2 and NIBPmean agreed with the anesthesiologists' evaluations (see NTypeA/NTP in Table 4.1-(c)). Both the false positive rate and the rate of missed changes were minimal.

4.3 Discussion

The EWMA model is widely used for forecasting physiological trend signals, as it intuitively describes the temporal responses of most physiological variables to a clinical event. The simplicity of the model structure provides great robustness for trend-change detection. The EWMA-Cusum method can be used for monitoring most physiological trend signals routinely recorded in the clinical environment.

An effective clinical monitoring method should utilize clinicians' expertise together with statistical signal processing techniques to make decisions. In this study, heuristics about a signal's temporal shape are used after a statistical test to differentiate short-peak artifacts from clinically relevant trend changes. The detection results on clinical data demonstrate that appropriately configured heuristics can significantly improve the performance of this method.
Table 4.1: Detection results of the Cusum-based two-level alert method in 20 clinical cases

(a) Detection results

            NTypeAL1  NTypeAL2  NTypeA  NTP  NFP  NFN
  EtCO2     48        29        77      83   5    3
  MVexp     57        32        89      102  1    4
  Ppeak     20        16        36      45   2    1
  NIBPmean  52        44        96      104  2    3

NTypeAL1 (NTypeAL2): number of level-1 (level-2) alerts with both the direction and the certainty level correctly detected; NTypeA: NTypeAL1 + NTypeAL2; NTP: number of true positive alerts with the direction correctly detected; NFP: number of alerts of both certainty levels with the direction falsely detected; NFN: number of missed level-2 change points.

(b) Expert annotations

            NPL1  NPL2  NP(L1+L2)  ÑN  ÑP
  EtCO2     56    36    92         56  36
  MVexp     72    42    114        72  42
  Ppeak     27    22    49         27  22
  NIBPmean  60    49    109        60  49

NPL1 (NPL2): number of level-1 (level-2) annotated changes; NP(L1+L2): NPL1 + NPL2; ÑN: number of negative instances during which no alert should be given; ÑP: number of significant trend changes.

(c) Performance evaluation

            RTPL1  RTPL2  NTypeA/NTP  RTP(L1+L2)  RFP   RFN
  EtCO2     85.7%  80.5%  92.8%       90.2%       8.9%  8.4%
  MVexp     79.1%  76.2%  87.3%       89.4%       2.3%  9.5%
  Ppeak     74.0%  72.7%  80.0%       91.8%       7.4%  4.5%
  NIBPmean  86.6%  89.8%  92.3%       95.4%       3.3%  6.1%

RTPL1: NTypeAL1/NPL1, true positive rate of level-1 change detection; RTPL2: NTypeAL2/NPL2, true positive rate of level-2 change detection; RTP(L1+L2): NTP/NP(L1+L2), the overall true positive rate; RFP: NFP/ÑN, false positive rate; RFN: NFN/ÑP, false negative rate.

However, the application of heuristics introduces extra time delays; therefore the heuristics should be used only when necessary. The EWMA-Cusum method is similar to an early change-detection method proposed by Stoodley and Mirnia [131], in which the Cusum test is also used with a forecasting model for trend-change detection.
In Stoodley's method, however, the signal is modeled as the EWMA of the historical data plus a piecewise-constant incremental rate estimated from the previously detected trend. The advantage of adding an adaptive incremental rate to the standard EWMA model is that the predictions follow the detected trend changes automatically, so repetitive detections are avoided. However, the adaptation process also makes future performance depend heavily on the accuracy of the current detection: a missed or falsely detected trend could deteriorate the performance for a long time.

The test results demonstrate that this method has great potential for clinical use. However, it should be noted that the raters had full knowledge of the detection results during data annotation. Assessment of the results may thus have been susceptible to rater bias.

The use of a two-level alert system alleviates the typical tradeoff between specificity and sensitivity in single-level alert methods. By introducing thresholds of a lower certainty, the two-level alert method can detect changes that would be missed by a single-level alert method. Although the two-level method may also detect minor changes that would be considered insignificant by clinicians, these extra minor detections, if displayed appropriately, may have a much smaller negative effect on the user compliance of a monitoring system than false alarms. The display of multi-level alerts is an important topic that requires further study.

In clinical practice, trend changes with similar incremental rates and durations may have different degrees of clinical relevance. A signal's local variability and the shape of the preceding trend segment have a great influence on the clinical significance of the current measurement. However, the EWMA-Cusum method proposed in this chapter assumes constant second-order signal characteristics for all patients during surgery.
The false detections due to this unrealistic assumption can be avoided by adapting the change-detection process to a signal's intraoperative variability online. Chapter 5 and Chapter 6 focus on adaptive trend monitoring.

Chapter 5

Adaptive trend change detection based on the linear growth dynamic model

Chapter 4 has demonstrated the potential of the EWMA model and the Cusum test for detecting changes in the trend direction. However, the performance of the EWMA-Cusum method proposed in Chapter 4 is limited by its unrealistic assumption of a constant second-order signal property. This chapter addresses the challenge of intraoperative and inter-patient variability, with the belief that the trend-change detection performance can be improved by adapting the detection process to a signal's second-order statistical characteristics.

The signal model used in this chapter is implemented in two steps. First, the linear growth Dynamic Linear Model (DLM) describes both the signal level and the incremental rate as dynamic processes. The time-varying characteristics of a physiological trend signal are reflected in the linear growth DLM as unknown parameters, which can be estimated online using an adaptive Kalman filter. Second, an EWMA filter is used to smooth the predictions generated by the DLM. Since the signal level and the incremental rate are both modeled as dynamic processes, the DLM can be used for tracking changes in both the signal level and the trend speed. In this chapter, a statistical test based on the Cusum of the forecast residuals is used with this two-phase model to detect changes in the trend direction.

There are two prerequisites for the application of this method. First, as in the EWMA-Cusum method (Chapter 4), the signals under investigation should not be heavily contaminated with artifacts, or the artifacts must have been effectively reduced.
In addition, the signal's second-order property, if it varies at all, must vary at a much slower rate than the signal's sampling rate. Most of the physiological variables measured every 1 or 5 seconds in the current clinical environment satisfy this requirement. The method was tested on the clinical data of HR, EtCO2, MVexp, and RR. As an example, the performance on HR is compared with that of the Trigg's Tracking Signal to demonstrate the advantage of this adaptive monitoring method. The work in this chapter has been published in IEEE Transactions on Biomedical Engineering [161] and in the Proceedings of the 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society [163].

5.1 Method

5.1.1 Linear growth dynamic linear model

Some physiological trend signals can be treated as piecewise linear and described by the linear growth DLM proposed by Harrison and Stevens [57]:

  xt = F xt−1 + ζt,   ζt ~ N(0, Qt)
  yt = H xt + ηt,     ηt ~ N(0, Rt)                                   (5.1)

where xt = [ςt; bt], F = [1 1; 0 1], ζt = [νt; wt], and H = [1 0]. The real signal level ςt and its incremental rate bt are subject to the system disturbance ζt, and the signal measurement yt is subject to the measurement noise ηt. The two system-disturbance elements νt and wt in ζt and the measurement noise ηt are independent of one another; they are all independent over time and follow Gaussian distributions. The statistics of ηt and ζt vary greatly between patients and during surgery. Therefore ηt and ζt, in addition to the system state xt, are considered unknown in this chapter. Solving this dual-estimation problem online represents a challenge.

The linear growth DLM tends to be over-sensitive when used for trend-change detection [63]. That is to say, the predictions follow signal variations too rapidly, resulting in only a short-lived non-zero shift in the forecast residuals when a trend change does occur.
As pointed out in Chapter 3, short-lived residuals are difficult to detect in the presence of artifacts.

The proposed adaptive trend monitoring method is realized in four steps (see Figure 5.1). In the first step, the signal level, the incremental rate, and the noise covariances Q and R are estimated using an adaptive Kalman filter. Then, to prolong the effect of trend changes in the forecast residuals, an EWMA filter is used to smooth the predictions generated by the linear growth DLM. The Q and R estimates are used to adjust the thresholds in the change-detection test. This test divides the signal into a series of straight lines. Finally, the segments of the same trend direction are combined and the change points are located. This method is referred to as the Adaptive-DLM method in this thesis.

Figure 5.1: Schematic of the Adaptive-DLM trend-change detection method

5.1.2 Adaptive Kalman filter

The noise covariances Q and R and the system state x are estimated using an adaptive Kalman filter. The adaptive Kalman filter consists of a filter, a fixed-point smoother, a process that updates Vk−1,k|t (see Equation (5.5) for the definition), and a recursive Q-R estimator (see Figure 5.2). The Kalman filter is a recursive filter that estimates the system states of a state-space model from a series of noisy measurements. According to the location of the estimate k relative to the current instant t, the estimator is classified as a smoother for k<t, a filter for k=t, and a predictor for k>t. In this chapter and the chapters that follow, the subscript k|t indicates that a variable at time k is estimated conditionally on the observations y1...t; the subscript t|t is simplified as t. The operators filter and smoother defined below represent the standard Kalman filtering and fixed-point smoothing processes respectively. These operators will also be used in Chapter 6 and Chapter 8.
Filter. This operator represents the standard Kalman filtering process [92]:

  (x̂t, Vt, Kt, x̂t|t−1, Vt|t−1, et, Vt,t−1|t) = filter(x̂t−1, Vt−1, Θt, yt)

where Θt = {Ft, Ht, Qt, Rt, µt} represents the model parameters in effect at time t, and µt is the mean of the system disturbance ζt. In the DLM in Equation (5.1), Ft=F, Ht=H, and µt=0; the only time-varying parameters are Qt and Rt.

Figure 5.2: Adaptive Kalman filtering process. EM: Expectation-Maximization algorithm

The filtering operation is carried out in two steps. With the initial estimates x̂0 and V0, the one-step-ahead prediction x̂t|t−1 and its covariance Vt|t−1 are first calculated:

  x̂t|t−1 = Ft x̂t−1 + µt
  Vt|t−1 = Ft Vt−1 Ft^T + Qt                                          (5.2)

where Vt|t−1 = E{(xt − x̂t|t−1)(xt − x̂t|t−1)^T}. Then the innovation et, the innovation covariance St, the Kalman gain Kt, the estimate x̂t, the estimate covariance Vt, and Vt,t−1|t are updated:

  et = yt − Ht x̂t|t−1
  St = Ht Vt|t−1 Ht^T + Rt
  Kt = Vt|t−1 Ht^T St^{-1}
  x̂t = x̂t|t−1 + Kt et
  Vt = Vt|t−1 − Kt St Kt^T
  Vt,t−1|t = (I − Kt Ht) Ft Vt−1                                      (5.3)

where Vt = E{(xt − x̂t)(xt − x̂t)^T} and Vt,t−1|t = E{(xt − x̂t)(xt−1 − x̂t−1|t)^T}.

Fixed-point smoother. The standard fixed-point Kalman smoothing process [126] is represented as:

  (x̂k|t, Vk|t, Vt,k|t−1) = smoother(x̂k|t−1, Vk|t−1, Vt−1,k|t−2, Kt−1, et, Vt|t−1, Θt)

For every point k ∈ (t−T+1 ... t) in the smoothing window (T is the window length), the fixed-point smoother updates the system estimate x̂k|t and its covariances Vk|t and Vt,k|t−1. The smoothing process is carried out as:

  Vt,k|t−1 = Vt−1,k|t−2 (Ft − Ft Kt−1 H)^T
  Kk|t = Vt,k|t−1 H^T (Rt + H Vt|t−1 H^T)^{-1}
  x̂k|t = x̂k|t−1 + Kk|t et
  Vk|t = Vk|t−1 − Vt,k|t−1 H^T Kk|t^T                                 (5.4)

where Vk|t = E{(xk − x̂k|t)(xk − x̂k|t)^T} and Vt,k|t−1 = E{(xt − x̂t|t−1)(xk − x̂k|t−1)^T}.
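The filter operator of Equations (5.2) and (5.3) can be sketched for the linear growth DLM as follows. This is a minimal illustration rather than the thesis implementation; the simulated Q, R, noise levels, and initial state are arbitrary example values:

```python
import numpy as np

# Linear growth DLM of Equation (5.1): state x = [level; incremental rate]
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])

def kalman_filter_step(x_hat, V, Q, R, y):
    """One iteration of the filter operator (Equations (5.2)-(5.3))."""
    # one-step-ahead prediction (Equation (5.2))
    x_pred = F @ x_hat
    V_pred = F @ V @ F.T + Q
    # innovation, gain, and update (Equation (5.3))
    e = y - H @ x_pred                     # innovation e_t
    S = H @ V_pred @ H.T + R               # innovation covariance S_t
    K = V_pred @ H.T @ np.linalg.inv(S)    # Kalman gain K_t
    x_new = x_pred + K @ e
    V_new = V_pred - K @ S @ K.T
    return x_new, V_new, e

# usage: track a simulated trend with a level shift at t = 100
rng = np.random.default_rng(1)
Q = np.array([[1.0, 0.0], [0.0, 0.01]])
R = np.array([[4.0]])
x_hat, V = np.array([[80.0], [0.0]]), np.eye(2)
level = 80.0
for t in range(200):
    level += 0.1 + (5.0 if t == 100 else 0.0)   # gradual trend plus a step
    y = np.array([[level + rng.normal(0.0, 2.0)]])
    x_hat, V, e = kalman_filter_step(x_hat, V, Q, R, y)
```

After the loop, the level estimate `x_hat[0, 0]` tracks the true level (105 here) to within a few units despite the measurement noise and the mid-run step.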
Vk−1,k|t updating. In addition to the standard fixed-point Kalman smoothing, Vk−1,k|t, defined as Vk−1,k|t = E{(xk − x̂k|t)(xk−1 − x̂k−1|t)^T}, is also estimated recursively:

  Vk−1,k|t = Vk−1,k|t−1 − Va(t, k) H^T Ka(k−1, t)^T                   (5.5)

where

  Va(t, k) = Va(t−1, k) (F − F Kt−1 H)^T
  Ka(t, k) = Va(t, k) H^T (Rt + H Vt|t−1 H^T)^{-1}                    (5.6)

with the initial condition Va(t, t) = Vt−1,t|t (see Appendix I for the proof). As shown in Figure 5.2, the operators defined above are used to estimate the system states for the previous T points. These estimates are used below to estimate Q and R online.

Q and R estimation. In the DLM, the measurements y0...t and the system states x0...t carry information about the unknown model parameters Q and R. The Q and R estimates should maximize the likelihood of x0...t and y0...t. However, since x0...t are unknown, their estimates x̂t−T+1...t|t generated by the Kalman smoother are used instead, leading to the so-called Expectation-Maximization (EM) algorithm.

The EM algorithm is a two-step recursive method. When used for online estimation, the E-step calculates the expectation of the log-likelihood of (Qt, Rt), conditional on the measurements y0...t and the previous estimates (Q̂t−1, R̂t−1), with respect to the hidden system states x0...t; the M-step maximizes this expectation with respect to Q and R to derive new estimates for the next iteration at time t+1. The process is summarized as:

  E: f(Q, R) = E{ log Pr(x0...t, y0...t | Q, R) | y0...t, (Q̂t−1, R̂t−1) }
  M: (Q̂t, R̂t) = argmax over (Q, R) of f(Q, R)                        (5.7)

The influence of the historical measurements on the Q and R estimates decays over time. Therefore yt−T+1...t and xt−T+1...t, which fall in the smoothing window of the Kalman smoother, are used for updating Q and R instead of all the previous data.
Q̂t and R̂t are updated recursively online as:

  Q̂t = (1/t) { (t−1) Q̂t−1 + Lt|t + Σ(k=t−T ... t−1) γk|t }
  R̂t = (1/(t+1)) { t R̂t−1 + Vt|t + Σ(k=t−T ... t−1) χk|t }           (5.8)

where L and V represent the influence of the forward filtering, and γ and χ represent the influence of the backward smoothing (see Appendix I for details). With the estimated Q̂ and R̂, the filtering innovations et can be treated as white Gaussian noise with zero mean. The dynamic process of the system-state estimates can be written as [50]:

  x̂t = F x̂t−1 + ωt,   ωt = Kt et ~ N(0, Wt),
  Wt = Kt (R̂t + H Vt|t−1 H^T) Kt^T                                   (5.9)

5.1.3 EWMA of the DLM predictions

Given the Kalman estimate x̂t at time t, the system state for any future instant j (j > t) can be calculated as:

  x̂j|t = F^{j−t} x̂t                                                 (5.10)

The multi-step-ahead predictions above result in multiple prediction values for every future instant. The predictions for time j conditional on observations up to different historical instants, i.e., x̂j|0 ... x̂j|t, are averaged using an EWMA filter with a forgetting parameter λ ∈ (0, 1). The averaged prediction x̃j|t, initialized with x̃j|0 = x̂j|0, is calculated as:

  x̃j|t = λ x̂j|t + (1−λ) x̃j|t−1,   1 ≤ t ≤ j−1                       (5.11)

The averaging process described above can be realized recursively. Given x̃t−1|t−2 at time t−1, the prediction for time t is calculated as:

  x̃t|t−1 = λ x̂t|t−1 + (1−λ) F x̃t−1|t−2                             (5.12)

Similarly Ṽt|t−1, the covariance of the prediction x̃t|t−1, initialized with Ṽ1|0 = W0, can also be calculated recursively:

  Ṽt|t−1 = (1−λ)² F Ṽt−1|t−2 F^T + Wt                               (5.13)

The filtered predictions for the measurements from t−Tf+1 to t are:

  [ỹt−Tf+1|t−Tf ... ỹt|t−1] = H [x̃t−Tf+1|t−Tf ... x̃t|t−1]           (5.14)

The standard deviation of the sum of these predictions is:

  ht = STD( Σ(j=t−Tf+1 ... t) ỹj|j−1 ) = sqrt( H̃ Q̃t H̃^T )          (5.15)

where H̃ = [H ... H] and Q̃t represents the covariance of [x̃t−Tf+1|t−Tf ... x̃t|t−1], with

  Q̃(2m−1:2m, 2n−1:2n) = ((1−λ) F)^{|m−n|} Ṽ(t−|m−n|)|(t−|m−n|−1),   1 ≤ m, n ≤ Tf
Since Q̃t is determined by the Q and R estimates, ht is therefore adaptive.

5.1.4 Adaptive change detection

The Cusum of the differences between the EWMA prediction ỹ = H x̃t|t−1 and the Kalman estimate ŷ = H x̂t is tested against a threshold to decide whether a change has occurred. The cumulation window for the Cusum statistic C(n, t) is set to extend no further back than the previous change point t*i−1, with a maximum length of Tf:

  C(n, t) = Σ(j=t−n+1 ... t) (ŷj − ỹj|j−1),   n ≤ min(t − t*i−1, Tf) − 1     (5.16)

C(n, t) accumulates the prediction residuals from t−n+1 to t. The test thresholds are determined by testing the hypothesis that no change has occurred within the cumulation window [t−Tf+1 ... t]. Hence the test thresholds are scaled to the standard deviation of the sum of the predictions in the cumulation window (see Equation (5.15)), i.e., h = ±σ × ht, where σ is used to adjust the detection sensitivity. A change point is detected when C(n, t) has broken either limit before reaching the end of the cumulation window.

5.1.5 Trend patterns

A trend signal is segmented by the series of detected change points. Each segment is described as steady, increasing, or decreasing by evaluating its average incremental rate. Successive segments of the same direction are combined, and only the initial change point triggers an alert. In summary, this method has six parameters: the initial Q0 and R0 and the smoothing window size T in the adaptive Kalman filter, the cumulation window size Tf, the forgetting parameter λ in the EWMA predictor, and the sensitivity control σ in the Cusum test.

5.1.6 Trigg's tracking signal

Trigg's tracking signal (TTS) [139] has been previously used for change detection in patient monitoring (see Section 2.2).
TTS is the ratio between the EWMA of the forecast residuals and the EWMA of the absolute forecast residuals:

  st = (1−υ) et + υ st−1
  Mt = (1−υ) |et| + υ Mt−1
  Tt = st / Mt                                                        (5.17)

where e denotes the model-forecast residuals and υ ∈ (0, 1) is the forgetting parameter of the TTS. The magnitude of the TTS is related to the significance of a detected change. Change points are identified when the TTS crosses the thresholds ±hTTS from zero (see Figure 5.5). A change point is ignored if it has the same trend direction as the previous change point and the TTS statistics between the previous and the current change point are consistently positive or negative.

5.2 Test on heart rate trend signals

Heart rate is an important physiological variable that demands constant monitoring during surgery. Astute anesthesiologists often rely on subtle changes in the HR trend to evaluate the adequacy of the depth of anesthesia as well as of fluid administration. A seasoned anesthesiologist can perceive changes in HR of three to four beats per minute from the auditory signal [89]. The HR trend is usually sampled every 1 or 5 seconds during surgery and can be described using the linear growth DLM. The HR trend signals from the 40 cases used in Chapter 4 were used here to test the performance of the Adaptive-DLM method. Each signal was pre-filtered using a third-order median filter (see Section 8.1.1.2 for details) before the test.

The clinically relevant decreases and increases in each case were annotated by two expert anesthesiologists independently using eViewer (see Figure 3.3). Significant changes marked by the two raters that fell within a two-minute window and had the same direction were treated as agreed-upon annotations, and the mean location of each such pair was used as the reference location. Following this criterion, the two sets of annotations showed a high ratio of agreement [7]. Besides the agreed-upon events, 3% and 6% additional events were marked by each expert, respectively.
The final inter-rater agreed change points were decided by discussion between the two experts.

The proposed method was tested in four steps: (1) verifying the accuracy of the Q and R estimation algorithm using simulated signals; (2) tuning the initial values Q0 and R0 and the smoothing window size T using the training data; (3) testing the proposed Adaptive-DLM method and the TTS approach on the test data and comparing their performance in detecting both increasing and decreasing changes; (4) analyzing the influence of the cumulation window size Tf and the sensitivity control σ on the performance of the Adaptive-DLM method, and comparing the performance of the Adaptive-DLM method in detecting increases to that of intraoperative detection. The purpose of Step (4) is to compare the performance of the Adaptive-DLM method with the best performance of attending clinicians. Since only two decreases were detected intraoperatively in all the test cases, decreasing instances were not included in Step (4).

The expert annotations were used as the reference to evaluate the detection results. Any change detected less than 2Tf sampling intervals after an expert annotation and showing the same direction was considered correct. The tolerable delay was extended to 3Tf points for the TTS approach, as a longer delay was expected. As in Chapter 4, the segment between two successive annotated change points represents a negative instance.

5.2.1 Q and R estimation

The results of the Q and R estimation are illustrated in Figure 5.3 with a two-phase simulated signal. The signal is composed of a 500-point segment generated using the DLM with Q0=[10 0; 0 0.1], R0=64, followed by a second 500-point segment generated with Q0=[5 0; 0 0.1], R0=16.
Starting with the arbitrary initial values Q0=[1 0; 0 0.01], R0=1, and with T=30, the estimates converged to within the 50% intervals of the first set of true values in fewer than 50 steps, and then oscillated around the true values until t=500. At that point, the true values of Q(1,1) and R were changed dramatically while Q(2,2) remained constant. The Q and R estimates smoothly followed the changes and converged around the new true values after 50 iterations.

The adaptive Kalman filter was tested on 20 simulated signals, each composed of one thousand samples. When initialized with values deviating by more than 100%, it took 43 steps for Q(1,1), 46 steps for Q(2,2), and 102 steps for R (3.6, 3.8, and 8.6 minutes respectively at the sampling rate of 5 seconds) to converge to within the 30% interval around the true values. No deterministic oscillation was observed after the initial steps.

Figure 5.3: The results of Q and R estimation for a simulated trend signal

5.2.2 Parameter tuning

Q and R estimation was applied to the training data with the Kalman smoothing window T=30. The Q and R estimates of the 50 iterations before the end of surgery were averaged to obtain final estimates for each case. Then the final Q and R estimates of all the training cases were averaged, and the resulting values Q0(1,1)=1.18±73.1%, Q0(2,2)=0.092±64.0%, and R0=0.23±100.3% were used as the initial settings in the further tests. The forgetting parameter used in the EWMA filter was set empirically by investigating the forecast residuals for the steady segments.
With the chosen forgetting parameter, the absolute difference between the Kalman estimates and the EWMA predictions, averaged over the first 2 minutes of every steady segment (24 data points at the sampling rate of 5 seconds) and then averaged over all the annotated steady segments, should be less than two times the standard deviation of the measurement noise. The forgetting parameter υ in the TTS approach was set in a similar manner. The threshold hTTS for the TTS was set to minimize ((1 − sensitivity)² + (1 − specificity)²). The TTS results were smoothed using a 20-point moving-average filter.

5.2.3 Comparison with Trigg's tracking signal

Figure 5.4 demonstrates how the Adaptive-DLM method detected change points and determined the trend directions according to the magnitude of the incremental rate. As seen in the top two plots, the signal predictions and the testing limits both vary as the signal trend evolves, especially when a change is detected and the EWMA process resets.

Figure 5.4: An example of how the Adaptive-DLM method detects change points in the heart rate trend signal, with Tf=10, σ=0.7. The signal predictions and the testing limits both vary with the signal trend.

The best performance of the Adaptive-DLM method with Tf=20 was compared with that of the TTS approach. Both approaches were tested repeatedly on the test data as σ or hTTS increased from 0.1 to 1.0. The optimal configurations were σ=0.5 for the Adaptive-DLM method and hTTS=0.7 for the TTS approach.
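The TTS statistic of Equation (5.17) used in this comparison can be sketched as follows (a minimal illustration; the default υ here is an arbitrary example value, tuned as described above in practice):

```python
def trigg_tracking_signal(residuals, upsilon=0.8):
    """Trigg's tracking signal (Equation (5.17)): the ratio of the EWMA
    of the forecast residuals to the EWMA of their absolute values.
    |T_t| approaches 1 when residuals keep one sign, flagging a change."""
    s = 0.0   # EWMA of residuals
    m = 0.0   # EWMA of absolute residuals
    tts = []
    for e in residuals:
        s = (1.0 - upsilon) * e + upsilon * s
        m = (1.0 - upsilon) * abs(e) + upsilon * m
        tts.append(s / m if m > 0.0 else 0.0)
    return tts

# usage: a sustained positive shift drives the TTS toward +1,
# which would cross a threshold such as hTTS = 0.7
shift = trigg_tracking_signal([0.1, -0.1] * 10 + [1.0] * 30)
```

Note that because the statistic is normalized by the residual magnitude, a long-lasting but tiny one-sided drift also drives |T_t| toward 1, which is the false-detection behavior discussed below for Signal B.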
Figure 5.5: Detection results of the proposed Adaptive-DLM method and the TTS approach on two example signals; the markers indicate increases and decreases flagged by our algorithm, by the T index, and by annotation. (a) Signal A: mean level=98.2, Q=[0.95 0; 0 0.13], R=0.37; the Adaptive-DLM method and the TTS both correctly detected the change points without introducing false positive results. (b) Signal B: mean level=138.3, Q=[0.26 0; 0 0.036], R=0.16; the Adaptive-DLM method correctly detected the change points, while the TTS introduced 7 false positive detections.

Figure 5.5 shows the detection results of the proposed Adaptive-DLM method and the TTS approach on two example signals. Under the optimal conditions, the Adaptive-DLM method correctly detected all the change points in both examples with no false positive detections. In Signal A (Figure 5.5-(a)), the TTS approach detected the same number of true positive and false positive changes as the Adaptive-DLM method. In Signal B (Figure 5.5-(b)), the changes after t=100 are irrelevant, since the increase just before t=100 has a dramatically larger amplitude. However, the TTS values for these irrelevant variations are all close to 1, resulting in a series of false detections. This example demonstrates the typical limitation of the TTS approach. As pointed out in Section 2.2, the TTS often produces false detections for changes with a long duration but a negligible amplitude. The delays of the TTS results in both examples are longer than those of the proposed Adaptive-DLM method.

The detection results of the TTS approach with hTTS=0.7 and the proposed Adaptive-DLM method with σ=0.5 are compared in Table 5.1, in which Q denotes the mean of the Q estimates averaged over the whole case and R denotes the mean of the R estimates averaged over the whole case.
The 20 test cases were classified into three groups according to the magnitudes of Q and R relative to the initial Q0 and R0 respectively: (1) many abrupt changes and a high degree of noise; (2) many abrupt changes and a low degree of noise; (3) gradual changes and a low degree of noise. Table 5.1 shows that the performance of the Adaptive-DLM method was consistent over the three groups. For groups 1 and 2, the TTS achieved comparable results, while for group 3 it gave more false positive results. The average delay of the change points correctly detected by both methods was 10 points for the Adaptive-DLM method and 23 points for the TTS approach, i.e., the delay of the TTS approach was 65 seconds longer.

Table 5.1: Change detection results of the Adaptive-DLM method and the TTS approach on three groups of HR trend signals (in percent)

                   Group 1       Group 2       Group 3       Overall
                   TP     FP     TP     FP     TP     FP     TP     FP
  TTS approach     81.3   12.7   95.8   33.3   67.2   84.5   75.5   60.2
  Our algorithm    87.5   12.7   83.3   29.2   87.9   10.3   86.7   15.1

TP: true positive; FP: false positive; Q: average of the Q estimates over the whole case; R: average of the R estimates over the whole case.

5.2.4 Sensitivity analysis

To study the influence of the size of the cumulation window Tf and the sensitivity parameter σ on the performance of detecting increases, three ROC curves, each with a fixed Tf=10, 20, or 30, were constructed by tracing the sensitivity and specificity as σ increased from 0.1 to 1 (see Figure 5.6). The experts annotated 47 increases in the test cases.

The area under each ROC curve (AUC) was calculated to evaluate the detection performance with the corresponding Tf. The AUC is 0.916 for Tf=10, 0.928 for Tf=20, and 0.882 for Tf=30. The attending anesthesiologists correctly identified 36 (77%) increases with 11 (23%) false positive results,
With r1 = ((Q(1,1)/Q0(1,1))² + (Q(2,2)/Q0(2,2))²)/2 and r2 = R/R0, the properties of each group are: (1) r1>1, r2>1: 4 cases, 16 events, 3712 points in total; (2) r1>1, r2<1: 7 cases, 24 events, 6024 points in total; (3) r1<1, r2<1: 9 cases, 58 events, 7960 points in total. No case had r1<1 and r2>1.

Figure 5.6: ROC curves for trend-change detection on HR signals with different Cusum window sizes Tf, as σ increases from 0.1 to 1.0. The optimal operating point, sensitivity=0.86 and specificity=0.89, is obtained with σ=0.5 and Tf=20. AUC: area under the ROC curve.

The intraoperative detections of the attending anesthesiologists were outperformed by the Adaptive-DLM method over a wide range of σ (around 0.45-0.65 for Tf=10 and 0.3-0.55 for Tf=20).

The tradeoff between time delay and sensitivity was studied for Tf=10 and Tf=20. Each point in Figure 5.7 was obtained by calculating the average delay of the correctly detected increasing changes. It appears that Tf=20 resulted in a slightly longer delay than Tf=10: the delay for Tf=20 increased from around 8.2 points to 17.5 points, i.e., from 41 seconds to 1 minute 27 seconds, as σ increased from 0.1 to 1.0, indicating that with a fixed window size, a reduced false positive rate results in a longer delay.

Table 5.2: Mean and relative standard deviation of the Q, R estimates generated by the Adaptive-DLM method

             HR              EtCO2             MVexp              RR
  Q(1,1)   1.18 ± 73.1%    0.023 ± 156.3%    0.0021 ± 118.0%    0.020 ± 29.0%
  Q(2,2)   0.092 ± 64.0%   0.001 ± 97.1%     0.00059 ± 98.8%    0.016 ± 67.5%
  R        0.23 ± 100.3%   0.0038 ± 153.5%   0.00042 ± 110%     0.013 ± 45.1%

Figure 5.7: Time delay of the Adaptive-DLM method on the HR signals.
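The ROC construction used in the sensitivity analysis above — sweep the sensitivity parameter, record (1−specificity, sensitivity) at each setting, and integrate the resulting curve — can be sketched as below. The per-event detection scores and the trapezoid integration here are generic illustrations, not the thesis's detector or data.

```python
def roc_points(scores, labels, thresholds):
    """Trace (1-specificity, sensitivity) as the detection threshold varies.

    scores: per-event detection statistic (higher = more likely a change)
    labels: 1 for annotated true changes, 0 for no-change events
    """
    pos = sum(1 for y in labels if y == 1)
    neg = len(labels) - pos
    pts = []
    for h in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= h and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= h and y == 0)
        pts.append((fp / neg, tp / pos))   # (1-specificity, sensitivity)
    return sorted(pts)

def auc(pts):
    """Area under the ROC curve by the trapezoid rule."""
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
```

A perfectly separable toy example yields an AUC of 1.0; sweeping a real detector's σ over annotated cases gives the curves compared in Figure 5.6.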
5.3 Results on EtCO2, MVexp, and Ppeak

The approach was also tested on EtCO2, MVexp, and Ppeak following the same procedure as for HR [163]. Table 5.2 lists the means of the Q, R estimates and their standard deviations relative to the mean, obtained from the 20 training cases.

For each variable, three ROC curves, each with a different window size Tf=10, 20, or 30, were constructed (see Figure 5.8). Both the increasing and decreasing changes were included in the performance evaluation. The entire AUC and the Area Under Part of the Curve (AUPC) with true positive rate above 0.7 and false positive rate below 0.3 were calculated for each ROC curve. Table 5.3 lists the AUC and the AUPC of interest for each ROC curve, as a percentage of the corresponding coverage of a perfect ROC curve, i.e., 1.00 for the AUC and 0.09 for the AUPC. The optimal cumulation window sizes Tf found by the AUC and the AUPC are consistent for MVexp and EtCO2. For RR, the AUC indicates that Tf=30 is optimal but the AUPC indicates that Tf=20 is optimal. The optimal performance point, in terms of the distance to the upper left corner (0,1) of the ROC figure, is sensitivity=0.90 and specificity=0.01 for EtCO2, obtained with σ=0.9 and Tf=20; sensitivity=0.95 and specificity=0.90 for MVexp, obtained with σ=1 and Tf=10; and sensitivity=0.80 and specificity=0.88 for RR, obtained with σ=0.9 and Tf=30.

Table 5.3: Area under the entire or part of the ROC curve of the Adaptive-DLM method

           EtCO2 (%)        MVexp (%)        RR (%)
           AUC    AUPC      AUC    AUPC     AUC    AUPC
  Tf=10    90.3   25.3      96.2†  66.0§    86.1   18.0
  Tf=20    94.6†  54.2§     95.6   53.7     88.3   28.5§
  Tf=30    93.0   44.1      91.1   27.9     90.1†  23.6

(1) The total number of annotated change points, including both increases and decreases, is 81 for EtCO2, 69 for MVexp, and 51 for RR; (2) AUC: area under the entire ROC curve; AUPC: area under the part of the ROC curve with sensitivity > 0.7 and specificity > 0.7; (3) †: the largest AUC among the different Tf; §: the largest AUPC among the different Tf.
5.4 Discussion

The large standard deviations of the Q, R estimates for HR, EtCO2, MVexp, and Ppeak (see Table 5.2) demonstrate that the Q, R estimates are not well concentrated. This confirms that fixed, empirically chosen parameters can be biased for many patients, and that using fixed values could degrade the performance of a statistical change-detection method. The Adaptive-DLM method solves this problem by estimating the signal characteristics online. The EM estimation and the EWMA prediction are both realized recursively; the computational overhead is therefore acceptable for online use.

Figure 5.8: ROC curves with different cumulation window sizes Tf. Left: entire ROC curves after spline smoothing; right: enlarged areas of interest with sensitivity >0.7 and specificity >0.7. The largest area under the curve is 94.6%, obtained with Tf=20, for EtCO2; 96.2%, obtained with Tf=10, for MVexp; and 90.1%, obtained with Tf=30, for RR.

The tests on HR demonstrated that the performance of the proposed Adaptive-DLM method is more consistent between patients than that of the TTS approach. The TTS approach tends to generate false detections for long variations with negligible amplitudes, since the mean absolute deviation and the mean deviation of a long trend segment have a similar magnitude, no matter how small the change amplitude is. Hope [62] addressed this limitation by multiplying the TTS by a weighting function related to the change amplitude; however, this solution introduced other problems (see Section 2.2 for details).
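The TTS limitation just described can be reproduced numerically. The sketch below assumes the usual Trigg tracking-signal form — smoothed forecast error divided by smoothed mean absolute deviation — with an arbitrary smoothing constant of 0.1; a long drift of negligible amplitude still drives the ratio to 1.

```python
# For a long, sustained drift of negligible amplitude, the smoothed error
# and the smoothed mean absolute deviation converge to the same value, so
# the tracking signal approaches 1 regardless of the change amplitude.
def tracking_signal(residuals, alpha=0.1):
    e = mad = 1e-9          # smoothed error and mean absolute deviation
    out = []
    for r in residuals:
        e = alpha * r + (1 - alpha) * e
        mad = alpha * abs(r) + (1 - alpha) * mad
        out.append(e / mad)
    return out

tiny_drift = [0.01] * 200   # long change, negligible amplitude
# the tracking signal still saturates at 1, triggering a (false) detection
```

This is exactly the behavior seen in Signal B of Figure 5.5, where small sustained variations kept the TTS near 1 after the large increase.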
The purpose of the Adaptive-DLM method is to detect changes in the trend direction. Signal segmentation as an intermediate procedure is performed according to changes in both the incremental rate and the signal level; therefore, the statistical test used for segmentation does not reflect the degree of certainty of the final decision about the trend direction. The results of signal segmentation carry much information about other trend features in addition to the trend direction. More detailed descriptions of signal patterns, for example "convex/concave increase", can be derived from the results of signal segmentation.

During data annotation, the anesthesiologists were blind to the detection results generated by the Adaptive-DLM method; their annotations in this study were therefore more objective than the annotations made on the detection results in Chapter 4. However, the post-operative annotations can only be used as the gold standard if the annotations perfectly identify true physiological changes. Although the inter-rater agreement strategy was used to improve the reliability of annotation, any inconsistency between the annotations and the true clinically relevant changes may lead to biased results.

Chapter 6

Adaptive change detection and pattern recognition based on the generalized hidden Markov model

In the previous chapters, the EWMA-Cusum method and the Adaptive-DLM method have been proposed to detect changes in the trend direction. Although "direction" is the most important feature of a trend segment, in some clinical situations "incremental rate" and "duration" are also essential for evaluating the patient status. In this chapter, trend variations are sorted into several patterns according to these features, and two detection methods are proposed to detect pattern transitions online.
As pointed out in the Discussion of Chapter 4, signal segments of a similar shape may have different degrees of clinical significance depending on the characteristics of the signal in the recent past. For instance, in Figure 6.1-(a), the increase at t'2 and the decrease at t'3 in the non-invasive mean blood pressure trend signal are both significant changes, but the changes of similar temporal shape at t'2 and t'3 in Figure 6.1-(b) are most likely insignificant, given the large amplitude and long duration of the preceding decrease. This is a typical manifestation of intraoperative physiological variability.

For variables measured at a high sampling rate, the intraoperative variability can be handled by estimating the signal's high-order characteristics online, as in Chapter 5. However, some variables cannot be measured very frequently due to restrictions of device design. For example, NIBP is measured during anesthesia by non-invasive occlusion of the brachial artery with a cuff. The systolic pressure is first detected by slowly releasing the cuff pressure until a distal pulse occurs; the diastolic and mean BPs are then measured by processing the turbulence in the artery. A measurement cycle takes 30 seconds. Since frequent inflation causes congestion of the arm, NIBP during surgery is generally measured only every 3 to 5 minutes. At such a rate, a typical trend change is sustained for only 2-6 sampling intervals. The number of samples is not sufficient for the EM algorithm to estimate the statistical properties in real time. Therefore, the Adaptive-DLM method proposed in Chapter 5 is not applicable to NIBP.

Figure 6.1: Two example NIBPmean trend signals: similar patterns have different degrees of clinical significance.
Anesthesiologists usually recognize trend patterns visually, and use their expert knowledge about the transitions between these patterns, together with the values of the most recent data, to evaluate the probability of occurrence for every possible event. To mimic this cognitive process, a signal's intraoperative variability should be modeled so that information about the variability can be learnt from population data.

The Hidden Markov Model (HMM) was initially introduced in the late 1960s and has been used in a wide range of applications to describe the transition between different states [105]. However, because each state in an HMM generates only one observation, the state duration follows a geometric distribution, which does not describe physiological trend signals well. By allowing one state to generate a sequence of observations and to have an explicit duration distribution, the HMM is generalized to the Generalized Hidden Markov Model (GHMM), also known as the segmental hidden semi-Markov model [98] or the variable-duration HMM [105].

The GHMM has been an active research topic since the late 1980s. Originally driven by applications in the fields of speech processing and recognition [34], it has attracted considerable research attention in recent years in the area of bioinformatics. The segmental structure of the GHMM provides a natural framework for describing human genomic sequences, and the GHMM has been used to find genes by parsing a DNA sequence into a set of putative coding segments [20, 80]. The GHMM has recently been used for physiological monitoring and event diagnosis [154].

In this chapter, physiological trend signals are described using the GHMM framework. Based on the GHMM, two methods are proposed for change detection and pattern recognition.
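The geometric-duration limitation of the plain HMM noted above can be checked directly. In the sketch below, the self-transition probability p=0.8 is an arbitrary illustrative value.

```python
# A single HMM state with self-transition probability p emits one observation
# per step, so its duration pmf is geometric: Pr(d) = p**(d-1) * (1 - p).
# The mode is always d = 1 and the mean is 1/(1-p) -- a poor fit for trend
# segments that typically persist for several samples before ending.
def hmm_duration_pmf(p, d_max):
    return [p ** (d - 1) * (1 - p) for d in range(1, d_max + 1)]

pmf = hmm_duration_pmf(0.8, 50)
# the shortest duration is always the most likely, whatever p is;
# the truncated mean approaches 1/(1-p) = 5 samples as d_max grows
```

The GHMM removes this restriction by attaching an explicit duration distribution to each segmental state.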
In the first method, the intra-segmental variations are described using a state-space model; a Bayesian inference process is then used with the fixed-point switching Kalman smoother to estimate the probability of occurrence for each pattern, as well as to estimate the true signal values online. Change points are detected by comparing the probability of change at every sampling instant with a fixed threshold. In the second method, the intra-segmental model transforms different trend patterns into forecast residuals of different magnitudes; an adaptive Cusum test is then used with a maximum-posterior state-path finding algorithm to recognize the signal patterns online.

Both methods have been tested on the NIBPmean trend signals. The results demonstrate that, by incorporating the pattern transition probability into a signal model, both methods performed better in change-point detection than the standard Cusum test. The adaptive Cusum test has been published in the Proceedings of the 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society [158]; the proposed fixed-point switching Kalman smoother has been published in the International Journal of Adaptive Control and Signal Processing [159].

6.1 Generalized hidden Markov model

A physiological trend signal can be viewed as a random series of temporal shapes [25] and described using a first-order GHMM, as shown in Figure 6.2-(a). According to the direction, duration, and incremental rate, the segmental shapes are classified into 9 patterns (see Table 6.1). Unlike the HMM, where each state generates only one data point, in the GHMM each segmental state corresponds to a sequence of data points. The values of these data are determined not only by the pattern of the segment they belong to, but also by the historical data within or before the segment. The GHMM is a three-layer model; the top layer is a Markov chain process.
Table 6.1: Segmental states in physiological trend signals

               increase                  decrease             steady
        abrupt       gradual       abrupt       gradual
      long  short   long  short  long  short   long  short
      z(1)  z(2)    z(3)  z(4)   z(5)  z(6)    z(7)  z(8)      z(9)

It is assumed that the probability of occurrence for each pattern, given all the patterns in the past, depends only on the previous pattern. The pattern transition is modeled as a first-order Markov chain with the parameter set {Z, π(0), A}, where Z={z(1), ..., z(N)} contains the N predefined pattern labels (N=9 in this thesis), π(0) stores the initial prior probabilities, and A : N×N is the transition matrix, with Aji representing the
probability that the segmental state will change from z(j) to z(i). In this chapter, unless stated otherwise, subscripts are used as time stamps, and parenthesized superscripts represent the indices of a pattern in Z.

Figure 6.2: Graphical model for the GHMM: (a) the generalized hidden Markov model; (b) the switching dynamic linear model.

The lower two layers of the GHMM describe the intra-segmental variations. The middle layer predicts the system states xt, and the bottom layer describes how the measurement yt is observed in the presence of noise. A segment with pattern z(i) that starts after the end of the previous segment at tp can be described as follows:

• Duration: l̄ ∼ D(i), l̄ ∈ [L(i)f, L(i)c]

• For each t ∈ (tp+1, ..., tp+l̄):

    (xt, yt) = model(Θ(i), yt−1, xt−1)    (6.1)

The temporal shape of state z(i) is determined by the duration l̄ and the intra-segmental model. l̄ follows a distribution D(i) over the interval [L(i)f, L(i)c], which is determined solely by the current segmental state z(i). The intra-segmental models for different patterns are assumed to have the same functional form; trend patterns are differentiated by the model parameters. The complexity of this model lies in the fact that the segment boundaries are not deterministic: for a segment with pattern z(i), the duration could be any value in the range [L(i)f, L(i)c].
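The three-layer generative process can be sketched as below. The two patterns, their slopes, the duration bounds, and the linear intra-segmental model are simplified stand-ins for the thesis's nine-pattern set and learnt models.

```python
import random

# Minimal generative sketch of the GHMM's three layers: a Markov chain over
# patterns (top), an explicit per-pattern duration draw (middle), and a
# noisy linear intra-segmental model (bottom). All numbers are illustrative.
PATTERNS = {
    "increase": {"slope": 0.5, "dur": (3, 8)},
    "steady":   {"slope": 0.0, "dur": (5, 15)},
}
TRANS = {"increase": {"steady": 1.0}, "steady": {"increase": 1.0}}

def sample_ghmm(n_segments, y0=70.0, noise=0.3, seed=1):
    random.seed(seed)
    z, y, signal, states = "steady", y0, [], []
    for _ in range(n_segments):
        lo, hi = PATTERNS[z]["dur"]
        duration = random.randint(lo, hi)        # explicit duration law
        for _ in range(duration):                # one state, many observations
            y = y + PATTERNS[z]["slope"]
            signal.append(y + random.gauss(0, noise))
            states.append(z)
        # first-order Markov transition between segmental patterns
        z = random.choices(list(TRANS[z]), weights=list(TRANS[z].values()))[0]
    return signal, states
```

Sampling a few segments produces a trend signal whose segment lengths follow the chosen duration bounds rather than a geometric law.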
If we introduce a variable l to represent the length from the latest end point and a variable f to indicate whether the current point is an end point, a variable group s={z, l, f} will fully describe the state of each data point: the point-state label st={z(i), l, 1} indicates that a pattern z(i) ends at time t with a duration of l sampling intervals (l=l̄); st={z(i), l, 0} indicates that a pattern z(i) has sustained for l sampling intervals and is still continuing (l<l̄). The point-state transition probability Pr(st|st−1) follows from the pattern-transition matrix A and the duration distributions D:

(1) if (zt=zt−1, lt=lt−1+1, ft=1) and (zt−1, lt−1, ft−1=0), then
    Pr(st|st−1) = D(zt−1)(d = lt−1+1) / D(zt−1)(d > lt−1)
(2) if (zt=zt−1, lt=lt−1+1, ft=0) and (zt−1, lt−1, ft−1=0), then
    Pr(st|st−1) = D(zt−1)(d > lt−1+1) / D(zt−1)(d > lt−1)
(3) if (lt=1, ft=1) and (zt−1, lt−1, ft−1=1), then
    Pr(st|st−1) = Azt−1zt D(zt)(d = 1)
(4) if (lt=1, ft=0) and (zt−1, lt−1, ft−1=1), then
    Pr(st|st−1) = Azt−1zt D(zt)(d > 1)
otherwise, Pr(st|st−1) = 0.    (6.4)

The segment duration for a pattern z(i) is determined by the duration density D(i), and is independent of the intra-segmental dynamic models. Pr(st|st−1) is used in the inference process defined below. In addition to Pr(st|st−1), the inputs and outputs of this inference process include:

• W^{sk,st−1,st}_{t−1}(yt) = Pr(yt|sk, st−1, st, y1...t−1): the conditional probability density of the current measurement yt, given the point states at instants k, t−1, and t and the previous observations y1...t−1. This input is generated by the Kalman filter at time t via Equation (6.3).

• Pr(st|st−1): the transition probability from st−1 to st at time t.

• Wt(sk) = Pr(sk|y1...t): the probability that the point state at time k is sk, given the observations y1...t.

• W^{sk}_t(st) = Pr(st|sk, y1...t): the probability that the point state at time t is st, given the observations y1...t and the point state at time k.

• W^{sk,st}_t(st−1) = Pr(st−1|sk, st, y1...t): the probability that the point state at t−1 is st−1, given y1...t and the point states at times k and t.
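The point-state transition law of Equation (6.4) can be implemented directly. In the sketch below, a point state is a (z, l, f) tuple; the uniform duration law over {1,...,5} and the two-pattern transition matrix are toy stand-ins for the learnt distributions.

```python
# Equation (6.4): D_pmf(z, d) = Pr(duration = d) and D_gt(z, l) =
# Pr(duration > l) encode the per-pattern duration law; A[z0][z1] is the
# pattern transition matrix.
def transition_prob(s_prev, s_curr, A, D_pmf, D_gt):
    z0, l0, f0 = s_prev
    z1, l1, f1 = s_curr
    if f0 == 0 and z1 == z0 and l1 == l0 + 1:
        if f1 == 1:                                   # case (1): segment ends here
            return D_pmf(z0, l0 + 1) / D_gt(z0, l0)
        return D_gt(z0, l0 + 1) / D_gt(z0, l0)        # case (2): segment continues
    if f0 == 1 and l1 == 1:
        if f1 == 1:                                   # case (3): new one-point segment
            return A[z0][z1] * D_pmf(z1, 1)
        return A[z0][z1] * D_gt(z1, 1)                # case (4): new segment, ongoing
    return 0.0                                        # all other transitions impossible

# toy example: uniform duration on {1,...,5}, two alternating patterns
D_pmf = lambda z, d: 0.2 if 1 <= d <= 5 else 0.0
D_gt = lambda z, l: max(0.0, (5 - l) / 5)
A = {"up": {"up": 0.0, "steady": 1.0}, "steady": {"up": 1.0, "steady": 0.0}}
# the outgoing probabilities from a running segment sum to 1
total = sum(transition_prob(("up", 2, 0), ("up", 3, f), A, D_pmf, D_gt) for f in (0, 1))
```

The normalization check at the end confirms that cases (1) and (2) together exhaust the successors of a running segment.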
The notation W^b_a(c) above represents the probability of event(s) c conditional on event(s) b and the observations y1...a. The inference process is:

{Wt(sk), W^{sk}_t(st), W^{sk,st}_t(st−1)} = infer{Wt−1(sk), W^{sk}_{t−1}(st−1), Pr(st|st−1), W^{sk,st−1,st}_{t−1}(yt)}

realized as:

W^{sk,st−1}_{t−1}(st, yt) = Pr(st|st−1) W^{sk,st−1,st}_{t−1}(yt)
W^{sk}_{t−1}(yt, st, st−1) = W^{sk}_{t−1}(st−1) W^{sk,st−1}_{t−1}(st, yt)
W^{sk}_{t−1}(st, yt) = Σ_{st−1} W^{sk}_{t−1}(yt, st, st−1)
W^{sk}_{t−1}(yt) = Σ_{st} W^{sk}_{t−1}(st, yt)
W^{sk}_t(st) = W^{sk}_{t−1}(st, yt) / W^{sk}_{t−1}(yt)
Wt−1(sk, yt) = Wt−1(sk) W^{sk}_{t−1}(yt)
Wt−1(yt) = Σ_{sk} Wt−1(sk, yt)
Wt(sk) = Wt−1(sk, yt) / Wt−1(yt)
W^{sk,st}_t(st−1) = W^{sk}_{t−1}(yt, st, st−1) / W^{sk}_{t−1}(st, yt)    (6.5)

The probability density W^{sk,st−1,st}_{t−1}(yt) calculated in Equation (6.3) is normalized with Wt−1(yt) in this step. The output Wt(sk) is used in the further processing for pattern recognition and change detection; W^{sk,st}_t(st−1) and W^{sk}_t(st) are used as the weighting factors in the "collapsing" step.

Collapsing Assume that for a switching DLM the previous unconditional estimate x̂t−1 follows a Gaussian distribution. After passing it through an M-state switching Kalman smoother, we obtain M updated estimates x̂(m)t, m ∈ 1...M, each conditional on a state label m. Since the smoothing process is a linear transformation, each of the updated estimates x̂(m)t follows a Gaussian distribution, and the probability of the updated unconditional estimate x̂t can be described by a Gaussian mixture model. Collapsing is a weighted summing process that finds the single Gaussian distribution with the smallest Kullback-Leibler distance to this Gaussian mixture [92]. The collapsing procedure for two Gaussian random variables X and Y is defined as in [92].
Given the conditional means X̄(m)=E(X|s=m) and Ȳ(m)=E(Y|s=m), the conditional cross-variance V(m)x,y = cov(X, Y|s=m), and the probability of occurrence for each condition W(m)=Pr(s=m), the unconditional means X̄, Ȳ and the unconditional cross-variance Vx,y are calculated as:

X̄ = collapseI(X̄(m), W(m)) = Σm X̄(m) W(m)    (6.6)

Vx,y = collapseII(V(m)x,y, X̄, X̄(m), Ȳ, Ȳ(m), W(m))
     = Σm V(m)x,y W(m) + Σm (X̄(m) − X̄)T (Ȳ(m) − Ȳ) W(m)    (6.7)

6.2.5 Overall procedure

From t0=1, in every iteration the switching Kalman smoother updates the intra-segmental system-state estimate x̂(sk)k|t and the probability Wt(sk) with the new observation yt, for every previous instant k in the smoothing window, k ∈ t−T+1...t. The overall process is realized as below:

————— Algorithm: fixed-point switching Kalman smoother —————

Initialization The switching fixed-point Kalman smoother is initialized using x0 ∼ N(x̂0, V0) with no state regime. At t=1, conditional on the point state s1, x̂(s1)1, x̂(s1)1|0, V(s1)1, and V(s1)1|0 are calculated using the Kalman filter: (x̂1, V1, K1, x̂1|0, V1|0, e1) = filter(x̂0, V0, Θ(s1), y1). The probability W(s1)1 for each candidate point state at t=1 and the inputs to the infer operator for the next iteration are calculated: W(s1)1 = π(0)s1 / Σi π(0)si, W(s0)1 = 1, W(s1)1(s1) = 1.

Recursion From t=2, in every iteration, given the new observation yt and the estimates from the previous iteration, including x̂(sk,st−1)k|t−1, x̂(sk,st−1)t−1, V(sk,st−1)k|t−1, V(sk,st−1)t−1, x̂(sk,st−1)t−1,k|t−2, Wt−1(sk), and W(sk)t−1(st−1) for every k ∈ max(1, t−T+1)...t, the fixed-point switching Kalman smoother follows the procedure shown in Figure 6.3. The results of the Kalman smoother are conditional on (sk, st−1, st); these estimates are collapsed to (sk, st) and fed back for the next iteration at t=t+1.

————— end of the fixed-point switching Kalman smoother —————
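The collapsing operator of Equations (6.6)-(6.7) can be sketched in the scalar case; the thesis applies the same formulas to state vectors and covariance matrices.

```python
# Moment-matching "collapse": a mixture of conditional Gaussian estimates
# N(mean_m, var_m) with weights W(m) is replaced by the single Gaussian with
# the same first two moments -- the Gaussian closest to the mixture in
# Kullback-Leibler distance.
def collapse(means, variances, weights):
    mean = sum(m * w for m, w in zip(means, weights))                  # (6.6)
    var = sum(w * (v + (m - mean) ** 2)                                # (6.7), with X = Y
              for m, v, w in zip(means, variances, weights))
    return mean, var

# an equal-weight mixture of N(0,1) and N(2,1) collapses to N(1, 2):
# the spread of the component means adds to the within-component variance
m, v = collapse([0.0, 2.0], [1.0, 1.0], [0.5, 0.5])
```

This is the step that keeps the number of Gaussian hypotheses from growing exponentially across iterations of the switching smoother.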
Figure 6.3: Overall procedure of the fixed-point switching Kalman smoother (the parentheses for the state labels are not displayed); please refer to Section 6.2.5 for a detailed description of the process.

To solve the problem of trend-change detection and pattern recognition, in every iteration Wt(sk) and x̂(sk)k|t are used to calculate the following estimates for every k ∈ max(1, t−T+1)...t:

1. Wt(fk=1), the certainty of change;
2. ẑk|t,fk=1, the pattern at time k, given the observations y1...t and the fact that a change point has occurred at time k;
3. ŷk|t, the unconditional signal estimate at time k.

Wt(fk=1) = Σ_{sk∈{sk|fk=1}} Wt(sk)    (6.8)

ẑk|t,fk=1 = argmax_{zk} {W^{fk=1}_t(zk)}    (6.9)
          = argmax_{zk} { Σ_{sk∈{sk|fk=1, zk}} Wt(sk) / Wt(fk=1) }    (6.10)

ŷk|t = H(i) Σ_{sk} x̂(sk)k|t Wt(sk)    (6.11)

The level of Wt(fk=1) is compared with a threshold h; Wt(fk=1) > h indicates that a change point has occurred at time k.
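Equations (6.8)-(6.11) can be sketched as a simple pooling of the point-state posteriors. The posterior table and estimates below are hypothetical values, and the observation matrix H is taken as 1 for scalars.

```python
# Pool the point-state posteriors W_t(s_k) into: the change certainty (6.8),
# the most likely pattern given that a change occurred (6.9)-(6.10), and the
# unconditional signal estimate (6.11). States are (z, l, f) tuples.
def summarize(W, x_hat, h=0.5):
    w_change = sum(w for (z, l, f), w in W.items() if f == 1)          # (6.8)
    best_z = None
    if w_change > 0:
        by_z = {}
        for (z, l, f), w in W.items():
            if f == 1:
                by_z[z] = by_z.get(z, 0.0) + w
        best_z = max(by_z, key=by_z.get)                               # (6.9)-(6.10)
    y_hat = sum(w * x_hat[s] for s, w in W.items())                    # (6.11), H = 1
    return w_change > h, best_z, y_hat

# hypothetical posterior over three point states at instant k
W = {("up", 3, 1): 0.5, ("up", 3, 0): 0.3, ("steady", 3, 1): 0.2}
x_hat = {("up", 3, 1): 80.0, ("up", 3, 0): 80.0, ("steady", 3, 1): 75.0}
changed, z, y = summarize(W, x_hat)
```

Here the pooled change certainty is 0.7 > h, so a change point is declared at k, with "up" as the most probable pattern.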
6.2.6 Computational and space complexity

The GPB2 algorithm has polynomial complexity [149]. If we treat the standard fixed-point Kalman smoothing process as one operation unit, the computational overhead at every sampling instant is determined by the number of possible combinations of (sk, st−1, st) and the smoothing window size T. If all the patterns have the same duration range [1, L], there are 2NL possible point states at every sampling instant. Naive state propagation could result in T(2NL)³ calculations, imposing a heavy demand on the processor. However, not all the point-state combinations are possible. A possible state combination (sk, st−1, st) needs to satisfy:

Pr(st, st−1, sk) = Pr(sk) Pr(st−1|sk) Pr(st|st−1) > 0    (6.12)

The transition st−1→st has to satisfy the conditions discussed in Equation (6.4). When ft−1=0, st−1 has only two possible propagations, both one-point extensions of the previous state, i.e., {zt=zt−1, lt=lt−1+1, f=0 or 1}; when ft−1=1, for each st−1 the state st can be any of the 2N possible segmental patterns, i.e., {zt=z(1)...(N), lt=1, f=0 or 1}. Similarly, given sk, the number of possible point states for st−1 is less than N(t−k−1) (assuming T ≤ L) (see Table 6.2). Since the point at time k has NL possible states for fk=1 and for fk=0 each, the numbers of possible (sk, st−1) combinations for all the points in the smoothing window k ∈ (t−T+1...t−1) add up to N²L(T−1)(T−2). Since point states with the same segmental pattern zt and the same duration lt can share one Kalman smoothing operation, the computational complexity for a T-point switching Kalman filter in every iteration is ≈ N³L(T−1)(T−2).

Table 6.2: Number of the possible point states for st−1 given sk

            ft−1=1                    ft−1=0
  fk=1    N·min(L−1, t−k−1)        N·min(L, t−k−1)
  fk=0    N·(min(L−1, t−k−2)+1)    N·(min(L, t−k−2)+1)
Figure 6.4: Linked lists for compact storage of the conditional estimates in the GHMM-based switching Kalman smoother.

A naive way of storing the conditional estimates also requires a vast amount of space. A complete collection of the estimates conditional on (sk, st−1, st) would occupy a space as large as (2NL)³. As explained above, not all the (sk, st−1, st) combinations are possible; the "naive matrix" is therefore very sparse. Linked lists are used to obtain compact storage. As shown in Figure 6.4, the memory is organized using a series of linked lists referred to as "legend tables" and "mapping tables". The legend table stores only the values of the non-zero estimates and their locations in the naive matrix, and assigns a new index to each record. The mapping tables for the estimates conditional on (sk, st) and on (sk, st−1, st) are built in a similar manner. In the mapping tables, the entries are organized in blocks, and each block has a unique key; for example, in the mapping table for (sk, st−1, st), the entries in the same block share the same (sk, st). This scheme improves the speed of lookup in the collapsing step.

The probability of occurrence of each possible point state at every instant in the smoothing window, Wt(sk), k ∈ t−T+1...t, is compared with a very small threshold hW in every step to remove the unpromising point states sk before the iteration reaches the end of the smoothing window. An appropriate hW may reduce the computational and space complexity without compromising the estimation accuracy.

6.3 Adaptive Cusum test based on the GHMM

6.3.1 Prerequisites for the intra-segmental model

The method presented in this section uses an SPRT method online as the heuristic to approximately find the pattern sequence with the maximum local posterior probability. To use this SPRT heuristic, the forecast residuals should have different magnitudes for different patterns.
It is also desirable that the predefined patterns can be grouped according to certain properties, so that the state trellis can be narrowed down according to these properties. The second condition is satisfied in this thesis, as the trend patterns defined in Table 6.1 can be grouped according to the direction of change into increasing, decreasing, and steady.

6.3.2 Offline solution: extended Viterbi algorithm

The extended Viterbi algorithm is a dynamic programming method that finds the optimal segmental state sequence as well as the segment boundaries for an observation series described by the GHMM, using the Maximum a Posteriori (MAP) criterion. A sequential data structure S` is defined to store one of the candidate end-point sequences that segments the entire observation series y1...y`:

S` = {(zt1, lt1)t1, ..., (ztm, ltm)tm, ..., (z`, l`)tM}    (6.13)

where t1 < ... < tM = ` are the candidate end points. In the forward recursion, at every time t the following steps are carried out.

1. Admissibility checking  The admissible (lt, zt−lt, zt) combinations must satisfy:

zt−lt ∈ {z(i) : Pt−lt(z(i)) > 0, i ∈ 1...N}    (6.16)

zt ∈ {z(i) : Azt−lt,i > 0, i ∈ 1...N} if t−lt > 1; {z(i) : π(0)i > 0, i ∈ 1...N} if t−lt = 0    (6.17)

L(zt)f ≤ lt ≤ min(L(zt)c, t)    (6.18)

where L(zt)f and L(zt)c are the lower and upper bounds for the duration of zt (see Section 6.1 for details). If the admissible (lt, zt−lt, zt) set is not empty, P and Ŝ are updated.

2. P updating  For every admissible (lt, zt−lt, zt), a function Γt is calculated:

Γt(lt, zt−lt, zt) = Pt−lt(zt−lt) Azt−lt,zt Pr{yt−lt+1...t | zt} D(zt)(lt)    (6.19)

where Pt−lt(zt−lt) was previously generated at t−lt, Azt−lt,zt is the transition probability for zt−lt→zt, Ps = Pr{yt−lt+1...t | zt} indicates how well the measurements yt−lt+1...t fit the trend shape zt, and Pd = D(zt)(lt) represents the probability that the current segment has a duration lt given its segmental state zt. Given the current segmental state zt, Ps is assumed to be independent of both the segmental states z0...t−lt and the system states x0...t−lt of the historical segments; Pd depends only on the duration distribution of zt.
Γt(lt, zt−lt, zt) is maximized over all the admissible (lt, zt−lt) combinations (see Equations (6.16)-(6.17)), resulting in a Pt(zt) for every possible end-point pattern zt at time t and the corresponding {ẑt−l̂t, l̂t}(zt):

Pt(zt) = max_{lt, zt−lt} Γt(lt, zt−lt, zt)
{ẑt−l̂t, l̂t}(zt) = argmax_{lt, zt−lt} Γt(lt, zt−lt, zt)    (6.20)

3. Ŝ updating  The newly identified {ẑt−l̂t, l̂t}(zt) is attached to the existing state path {Ŝt−l̂t−l̂t−lt, l̂t−l̂t}t−l̂t(ẑt−l̂t) that leads to ẑt−l̂t at time t−l̂t, resulting in the updated state path {Ŝt−l̂t, l̂t}t(zt):

{{Ŝt−l̂t−l̂t−lt, l̂t−l̂t}t−l̂t(ẑt−l̂t), {ẑt−l̂t, l̂t}(zt)}
⇒ {Ŝt−l̂t−l̂t−lt, (ẑt−l̂t, l̂t−l̂t)t−l̂t, l̂t}(zt)
⇒ {Ŝt−l̂t, l̂t}t(zt)    (6.21)

If there is no admissible (lt, zt−lt, zt), do nothing. Let t=t+1 and go to the next iteration.

Termination and backtracking  The forward recursion stops when it reaches the end of the signal, t=`. Many researchers suggest calculating P`(z`) and {Ŝ`−l̂`, l̂`}`(z`) as for the other observations, and finding the global MAP state sequence Ŝ` by tracking backward along the path stored in {Ŝ`−l̂`, l̂`}`(ẑ`), with ẑ` = argmax_{z`} P`(z`). However, this procedure may reduce the estimation accuracy if there is no guarantee that y` is the end of the last segment: given only the partial observations, the estimated ẑ` for the last segment is very likely to differ from the true pattern. We therefore suggest not recognizing the last pattern, and propose to estimate the MAP end-point sequence in Equation (6.13) using the following procedure.

First, the admissibility of the (z`−l`, l`, z`) combinations for the last observation is evaluated using the conditions in Equations (6.16) and (6.17), and the duration condition in Equation (6.18) is modified to l` ≤ min(L(z`)c, `), where the lower bound on l` is removed since ` is not required to be an end point.
Then, for every admissible (l`, z`−l`), a function Υ is calculated:

Υ(l`, z`−l`) = Σ_{z`} P`−l`(z`−l`) Az`−l`,z` Pr{y`−l`+1...` | z`} D(z`)(d > l`)    (6.22)

Υ is maximized with respect to (l`, z`−l`) to generate (l̂`, ẑ`−l`):

(l̂`, ẑ`−l`) = argmax_{l`, z`−l`} Υ(l`, z`−l`)    (6.23)

Finally, tracking backward along the state path stored in {Ŝ`−l̂`−l̂`, l̂`−l̂`}`−l̂`(ẑ`−l̂`) identifies the MAP end-point sequence Ŝ` for the entire observation series.

———————– end of the extended Viterbi algorithm ————————

The Viterbi algorithm is impractical for online change detection, because the backward tracking starts from the end of a signal. To use this method in real time, a stopping rule is needed to decide when to start backtracking and to initiate a new segment. In addition, the computational and space complexity of the Viterbi algorithm is very high with the GHMM: it propagates through all the possible (lt, zt−lt, zt) combinations in every iteration, although, given the observational evidence, some of these state combinations have little promise of becoming part of the final MAP state sequence. Including the unpromising states in the forward recursion produces little improvement in the estimation accuracy but unnecessarily increases the complexity. It is desirable to remove the unpromising states before reaching the end of the observation series, resulting in an approximation scheme called the beam search.

6.3.3 Online solution: beam search with Cusum pruning

Beam search is a widely used approximation scheme for finding the optimal path in a state trellis. In the beam search, a heuristic function is often used to evaluate the promise of each state node. Similar to how we search a dark room following a flashlight beam, the method only searches the promising nodes included in the promising state subset.
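The exact recursion that the beam search prunes can be sketched for a toy two-pattern GHMM. The Gaussian slope score below stands in for the intra-segmental likelihood Ps, the duration term Pd is taken as uniform (and so omitted), and the two patterns, shared duration bounds, and forced alternation in A are all simplifications of the thesis's model.

```python
import math

# Toy extended-Viterbi forward recursion: P_t(z) is maximized over the
# admissible (l_t, z_{t-l_t}) pairs, and back-pointers recover the MAP
# end-point sequence of Equation (6.13).
PAT = {"steady": 0.0, "increase": 0.5}         # pattern -> nominal slope
DUR = (2, 6)                                    # duration bounds [L_f, L_c]
A = {"steady": {"increase": 1.0}, "increase": {"steady": 1.0}}

def log_seg_score(y, start, end, slope, sigma=0.5):
    """log Ps: how well the first differences of y[start:end] fit `slope`."""
    return sum(-0.5 * ((y[t] - y[t - 1] - slope) / sigma) ** 2
               for t in range(start + 1, end))

def viterbi_segments(y):
    n = len(y)
    P = [{} for _ in range(n + 1)]     # P[t][z]: best log-prob, pattern z ends at t
    back = [{} for _ in range(n + 1)]
    P[0] = {z: 0.0 for z in PAT}       # uniform prior over the first pattern
    for t in range(DUR[0], n + 1):
        for z, slope in PAT.items():
            best, arg = -math.inf, None
            for l in range(DUR[0], min(DUR[1], t) + 1):
                for z0, p0 in P[t - l].items():
                    if t - l > 0 and A[z0].get(z, 0) == 0:
                        continue                      # inadmissible transition
                    score = p0 + log_seg_score(y, t - l, t, slope)
                    if score > best:
                        best, arg = score, (t - l, z0)
            if arg is not None:
                P[t][z], back[t][z] = best, arg
    t, z = n, max(P[n], key=P[n].get)  # backtrack from the best last pattern
    segs = []
    while t > 0:
        t0, z0 = back[t][z]
        segs.append((t0, t, z))
        t, z = t0, z0
    return segs[::-1]
```

On a flat-then-ramp toy signal this recursion recovers a steady segment followed by an increase; the beam search of the next section prunes the same trellis before the end of the series is reached.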
To approximate the MAP estimation described in Section 6.3.2, a beam search scheme should evaluate the promise of every (l_t, z_{t−l_t}, z_t) according to certain criteria, in addition to the admissibility conditions, and remove the unpromising (l_t, z_{t−l_t}, z_t) combinations. The promise of a (l_t, z_{t−l_t}, z_t) combination is measured by the Γ function (see Equation (6.19)). An appropriate criterion for selecting (l_t, z_{t−l_t}, z_t) should therefore be based on a minimum requirement for each component of the Γ function (P_{t−l_t}(z_{t−l_t}), A_{z_{t−l_t},z_t}, P_s and P_d).

The length-constrained Cusum test (see Section 4.1.2) is used to define the conditions that a (z_t, l_t) combination at time t must fulfill to be included in the forward recursion. In the Cusum test, the Cusum of forecast residuals is compared with a length-constrained test mask to determine whether a trend signal has changed its direction. Since the trend patterns defined in this chapter can be grouped by direction into three groups (increasing, decreasing, and steady), the Cusum test, by recognizing the change direction, can narrow z_t down to one of these groups. The Cusum test can also roughly locate t*, the starting point of the newly detected change. It is reasonable to assume that the previous segment ends around t*, i.e., t−l_t ∈ [t*−ΔT, t*+ΔT]. Similarly, z_{t−l_t} can be narrowed down to those patterns that have the same direction as the trend change previously detected by the Cusum test.

Furthermore, a newly detected trend change also suggests that enough observational evidence has accumulated for recognizing the previous segment. The backtracking process is therefore triggered after a change point is detected by the Cusum test. The Cusum test thresholds are adjusted online to reflect the influence of the newly received measurements on the forecasting probability for the upcoming data.
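The Cusum-based pruning just described can be sketched as follows. This is an illustrative simplification, not the thesis implementation: `Z_prev` and `Z_cur` are the direction-narrowed candidate pattern sets, `t_star` is the change onset located by the Cusum test, `dT` is the tolerance ΔT, `A` holds the pattern-transition probabilities, and `L_f`/`L_c` are the per-pattern duration bounds; all names are assumptions.

```python
def build_beam(t, t_star, dT, Z_prev, Z_cur, A, L_f, L_c):
    """Enumerate promising (l_t, z_prev, z_t) triples: the transition
    z_prev -> z_t must have nonzero probability, and the previous segment
    must end within dT of the Cusum-located change onset t_star, while
    respecting the duration bounds of pattern z_t."""
    beam = []
    for z_t in Z_cur:
        lo = max(L_f[z_t], t - t_star - dT)      # lower duration bound
        hi = min(L_c[z_t], t, t - t_star + dT)   # upper duration bound
        for l_t in range(lo, hi + 1):
            for z_prev in Z_prev:
                if A[z_prev][z_t] > 0:           # admissible transition
                    beam.append((l_t, z_prev, z_t))
    return beam
```

Only the triples surviving this filter need to be scored with the Γ function, which is where the computational saving over the full Viterbi recursion comes from.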
6.3.3.1 Beam search scheme

To realize the beam search process iteratively, an auxiliary function P̃(z_t) is defined to record the approximate local maximum probability leading to the current pattern z_t. P̃(z_t) is similar to P(z_t) in the Viterbi algorithm, but in every iteration P̃ is optimized only over the promising (l_t, z_{t−l_t}, z_t) combinations in the beam, instead of all the admissible combinations. Since in physiological monitoring the signal history from more than a few minutes ago has little clinical relevance, the beam search method only tracks back to identify the pattern before the newly detected change point. The data structure Ŝ_t used to store the local best paths in the Viterbi algorithm is therefore not needed. Instead, the promising patterns for the ongoing trend z_t are stored in Z*_m, where m is the total number of end points detected up to the current iteration; t* records the starting point of the ongoing trend located by the Cusum test; and the candidate patterns for the previous change point, stored in Z*_{m−1}, are used as the promising states for z_{t−l_t}. The beam search process is carried out recursively as below:

—————– Algorithm: beam search with Cusum pruning —————

Initialization. The algorithm is initialized with P̃_0 = 1, m = 0, Z*_m = Z, and t* = 0. The Cusum test mask is also initialized according to Section 3.1.3.

Recursion. From t = 1, in every iteration, given y_{1...t}, Z*_m, and P̃_k (k < t):

1. Beam construction. The promising (l_t, z_{t−l_t}, z_t) combinations forming beam_t are identified by the following conditions:

    z_{t-l_t} \in \{ z^{(i)} : \tilde{P}_{t-l_t}(z^{(i)}) > 0,\ z^{(i)} \in Z^*_{m-1} \}    (6.24)

    z_t \in \begin{cases} \{ z^{(i)} : A_{z_{t-l_t} i} > 0,\ z^{(i)} \in Z^*_m \} & t-l_t \geq 1 \\ \{ z^{(i)} : \pi_i(0) > 0,\ z^{(i)} \in Z^*_m \} & t-l_t = 0 \end{cases}    (6.25)

    \max\bigl(L_f^{(z_t)},\ t-t^*-\Delta T\bigr) \leq l_t \leq \min\bigl(L_c^{(z_t)},\ t,\ t-t^*+\Delta T\bigr)    (6.26)

where L_f^{(z_t)} and L_c^{(z_t)} are the lower and upper bounds for the duration of z_t, and the tuning parameter ΔT is the allowable deviation of an end point from the location determined by the Cusum test. If beam_t is not empty, P̃ is updated and the Cusum test is adjusted.

2.
P̃ updating. The function Γ̃_t (identical to Γ_t in Equation (6.19), but with P_t(z_t) replaced by P̃_t(z_t)) is calculated for every (l_t, z_{t−l_t}, z_t) in beam_t. Optimizing Γ̃_t(l_t, z_{t−l_t}, z_t) over all the candidate (l_t, z_{t−l_t}) combinations in beam_t gives the updated P̃_t(z_t) and the corresponding {ẑ_{t−l̂_t}, l̂_t}(z_t):

    \tilde{P}_t(z_t) = \max_{(l_t, z_{t-l_t}) \in \mathrm{beam}_t} \tilde{\Gamma}_t(l_t, z_{t-l_t}, z_t),
    \{\hat{z}_{t-\hat{l}_t}, \hat{l}_t\}(z_t) = \operatorname*{argmax}_{(l_t, z_{t-l_t}) \in \mathrm{beam}_t} \tilde{\Gamma}_t(l_t, z_{t-l_t}, z_t).    (6.27)

3. Pattern recognition. If a change is detected by the Cusum test, the previous trend pattern ẑ_{t−l̂_t} and the location of its end point t−l̂_t are identified. The sensitivity of the Cusum test is configured to detect the occurrence of a trend before it reaches the segmental end. As in the backward state-tracking step of the Viterbi algorithm, given only partial observations we do not select the single best pattern for the ongoing trend. Instead, the posterior probability of the state paths is summarized over the promising patterns for the ongoing trend, following the procedure below.

First, the promising (z_{t−l_t}, l_t, z_t) combinations are identified and recorded in beam′_t, using the conditions in Equations (6.24), (6.25), and (6.26), with the lower bound on l_t dropped, since the current instant t might not be the end point of the ongoing trend. Then, a function Υ̃_t(l_t, z_{t−l_t}) summarizes the local maximum posterior probability of an end-point sequence which ends at t−l_t with last pattern z_{t−l_t}, given y_{1...t}:

    \tilde{\Upsilon}_t(l_t, z_{t-l_t}) = \sum_{z_t} \tilde{P}_{t-l_t}(z_{t-l_t})\, A_{z_{t-l_t} z_t} \Pr\{ y_{t-l_t+1 \ldots t} \mid z_t \}\, D^{(z_t)}(d > l_t).    (6.28)

Υ̃_t is maximized with respect to (l_t, z_{t−l_t}) to identify the location and pattern of the segment before the ongoing trend change:

    (\hat{l}_t, \hat{z}_{t-l_t}) = \operatorname*{argmax}_{(l_t, z_{t-l_t}) \in \mathrm{beam}'_t} \tilde{\Upsilon}_t(l_t, z_{t-l_t}).    (6.29)

4. Adjusting the Cusum test. The shape of the test mask in the Cusum test is adapted online (see the next section). If the beam is empty, do nothing.
Let t = t+1 and go to the next iteration.

———————————— end of beam search ———————————

6.3.3.2 Adaptive Cusum test

If an intra-segmental model transforms different patterns into different deviations in the forecast residuals, the shape of the test mask for the Cusum test can be designed to capture these patterns. For instance, the target patterns of the upper arm are the decreasing patterns and the steady pattern, i.e., {z(5), ..., z(8), z(9)} (see Figure 6.5). Since the abrupt long decrease z(5) is a sustained version of the abrupt short decrease z(6), and the gradual long decrease z(7) is a sustained version of the gradual short decrease z(8), only
Figure 6.5: Design of the upper arm of the test mask for the GHMM-based adaptive Cusum test

z(6), z(8) and z(9) are considered. Following the guidelines described in Section 3.1.3 for V-mask design, a standard arm is generated for each of z(6), z(8), and z(9): for a target pattern z(i), the slope of the arm is set to d(i), half of the mean forecasting deviation caused by z(i), and the rise distance h(i) is set according to Equation (3.6). As the duration ranges of {z(6), z(8), z(9)} are connected head to tail with some overlap, the outline of these arms is used as the test mask. In this thesis the two arms in an overlapping area are simply connected, as shown in Figure 6.5. The test mask in the overlapping areas could instead be smoothed using a more sophisticated method, such as averaging the two arms with the duration probabilities as weighting factors.

To obtain consistent type I and type II error rates, the rise distance h of each constituent arm is adjusted according to the updated probability of the corresponding target pattern for the upcoming trend change. The ratio R^{(i)/(9)}_{t|k} of z(i), i ∈ 1...8, relative to the steady state z(9), for every point k, t−l̂_t ≤ k < t, is calculated as:

    R^{(i)/(9)}_{t|k} = \frac{ D^{(i)}(d > t-k) \sum_{z_k \in Z^*_m} \tilde{P}_k(z_k)\, A_{z_k z^{(i)}} }{ D^{(9)}(d > t-k) \sum_{z_k \in Z^*_m} \tilde{P}_k(z_k)\, A_{z_k z^{(9)}} }    (6.30)

where Pr(z_k, y_{1...k}, f_k = 1) is approximated by P̃_k(z_k), and Z*_m contains the promising states for the latest detected trend change. The updated ratio R^{(i)/(9)}_{t|k} is used to adjust the shape of the test mask at every point, from the initial point at time t retrospectively back to every point k, t−l̂_t ≤ k < t.

When T > 7, the estimation accuracy of the switching Kalman smoother appears to have reached its plateau and once again deviates from the curve of the standard Kalman smoother. The window size T also influences the distribution of the estimated change probability.
A Signal-to-Noise (SN) ratio was defined as the mean square of W_t(f_{t−T+1} = f*_{t−T+1}) over the mean square of W_t(f_{t−T+1} ≠ f*_{t−T+1}), where f* is the true value of f, which is 1 for change points and 0 for non-change points. A high SN ratio indicates that change points can be identified with high certainty. The SN ratio of the switching Kalman smoother improves as the window size T increases and reaches a plateau at T=4 (see Figure 6.7); the plateau value of the SN ratio is ≈62.

Figure 6.6: Root Mean Square (RMS) deviations of the smoothed estimates from the true values of the simulated signals as the smoothing window size T increases from 1 to 12 sampling intervals (curves: D2, observation; D1, with unknown point states; D0, with known point states; D1−D0)

The plateau indicates that the certainty of detection could not be further improved by increasing the size of the smoothing window. This residual uncertainty explains why the estimation accuracy of the switching Kalman smoother is below that of the smoother with known segmental states. The smoothing window size was set to T=4 for further testing on the simulated signals and clinical data.

Another important tuning parameter is the pre-threshold h_W on W_t(s_k). With T=4, the influence of h_W on the estimation accuracy and computational complexity of the switching Kalman smoother was evaluated. The number of smoothing computations performed per sampling instant increases from 45 to 550 as h_W decreases from 10^−2 to 10^−9, while the estimation error stops dropping once h_W ≤ 0.01% (see Figure 6.8). The pre-threshold was therefore set at h_W = 0.01%, to reduce the computational complexity without compromising the estimation accuracy.

The change probabilities W_t(f_{t−T+1}=1) generated with T=4 were tested against different thresholds, from h=30% to h=70%, with step size Δh=10%.
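The SN ratio defined above can be computed directly from the estimated change probabilities and the annotated change points; a minimal sketch, with all names illustrative:

```python
def sn_ratio(W, true_changes):
    """SN ratio of estimated change probabilities: mean square of W at the
    true change points over mean square of W at the non-change points.

    W: sequence of change probabilities, one per sampling instant;
    true_changes: set of indices annotated as change points.
    """
    sig = [W[i] ** 2 for i in range(len(W)) if i in true_changes]
    noi = [W[i] ** 2 for i in range(len(W)) if i not in true_changes]
    return (sum(sig) / len(sig)) / (sum(noi) / len(noi))
```

A ratio near 1 means the smoother assigns similar probability mass to change and non-change instants; large values indicate sharply localized detections.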
As demonstrated by the ROC curve (Figure 6.9), with a decreased threshold h the switching Kalman smoother generated more true positive detections, but also introduced more false positive detections. Among all the points on the ROC curve, the point corresponding to h=50% is closest to the upper left corner of the ROC figure; h=50% is therefore considered the optimal setting in the simulated test. With h=50%, the method detected 90.37% of the 270 change points, comprising 73.3% accurate detections and 17.0% acceptable detections. The false positive rate was 6.67%.

Figure 6.7: Signal-to-noise ratio of the change probability estimated using the switching Kalman smoother with different smoothing window sizes

6.5.1.2 Results on clinical data

The online pattern recognition process of the fixed-point switching Kalman smoother is demonstrated with an example signal in Figure 6.10. Excluding the end of the case, there are 7 segments with different patterns in the example signal. As the allowable time delay increases, the change probabilities increase toward 100% around the change points and decrease elsewhere. This is not surprising: by using the information carried by future data, the smoothing process improves the certainty of pattern recognition. When the allowable time delay was extended to 3 sampling intervals, all the change points were detected with a threshold h=50%. For the change points at t=11, 44, and 58, the recognized locations were one point after the annotation. At t=58 the probability of the short gradual decrease pattern z(8) was 42%, slightly higher than the 39% probability of the short abrupt decrease pattern z(6); therefore, z(6) was recognized as z(8). At t=31, the decreasing stroke of an artifact amidst a long steady segment was falsely recognized as a short abrupt decrease. Segments of more obvious patterns took a shorter period
of time to recognize. For example, the change points at t=45 and t=63 could be detected with a delay of only one sampling interval.

Figure 6.8: Estimation error and computational overhead of the switching Kalman smoother as the pre-threshold h_W decreases from 10^−2 to 10^−9

As in the simulation test, the clinical signals were tested repeatedly with different thresholds h. The change-detection performance obtained with each threshold was again recorded by plotting an ROC curve (see Figure 6.9). The curve changes in a similar manner as for the simulated data, i.e., the true positive rate and the false positive rate both increase as the threshold decreases. However, the performance on the clinical data was worse than that on the simulated data. With the optimal threshold h=50%, the method recognized 29 trend changes with accurate pattern descriptions and 16 with acceptable pattern descriptions, out of the 54 annotated segments. Two segments were missed and 4 were detected with a false direction. The total number of false detections was 8.

6.5.2 Results of the adaptive Cusum test

The standard Cusum test and the proposed adaptive Cusum test were both tested on the simulated signals and the clinical data. The detected changes were classified according to their directions and displayed at the detection locations in Figure 6.11. At t1, by tracking backward along the upper arm of the fixed mask, the standard Cusum test falsely detected a decrease. This
decrease was successfully avoided by the adaptive Cusum test, since the test mask in the adaptive Cusum test has a much wider opening at the same location. The segment before t2 was recognized as a long abrupt increase. Given that, the probability of a decrease for t > t2 became relatively lower than the initial prior probability, and the rise distance of the upper arm of the adaptive mask therefore became larger than that of the fixed mask. The change points detected at t=30 were false detections due to artifacts.

Figure 6.9: ROC curves summarizing the change-detection performance of the GHMM-based switching Kalman smoother with different thresholds h on simulated data and clinical data

The adaptive Cusum test and the standard Cusum test were tested on the simulated and clinical data, both repeatedly with different sensitivity settings. The initial rise distance for every pattern z(i) was set to h^{(i)}_{10%,1}, corresponding to a 10% false positive rate, and σ was varied from 0.7 to 1.1 in the simulated test, and from 0.6 to 1.0 in the clinical test, with step size Δσ=0.1. The ROC curves of the standard Cusum test and the adaptive Cusum test were compared in both the simulated study (see Figure 6.12) and the clinical-data study (see Figure 6.13). The performance of the adaptive Cusum test appeared to be better than that of the standard Cusum test on both the simulated and the clinical data.
Figure 6.10: Online pattern recognition process of the switching Kalman smoother demonstrated with the results on an example NIBPmean trend signal, with smoothing window size T=4 and certainty threshold h=50%

6.5.3 Summary of the results

The optimal performances of the standard Cusum test, the adaptive Cusum test, and the fixed-point switching Kalman smoother were compared with one another. The switching Kalman smoother and the adaptive Cusum test performed better than the standard Cusum test on the simulated data (Table 6.3). On the clinical data (Table 6.4), however, the change-detection performances of the three methods showed no obvious difference. The accuracy of pattern recognition of the switching Kalman smoother was clearly higher than that of the adaptive Cusum test in both the simulated test and the clinical test.

Figure 6.11: Change detection results of the GHMM-based adaptive Cusum test and the standard Cusum test on an NIBPmean trend signal

6.6 Discussion

The GHMM is a useful framework for incorporating prior knowledge about pattern transitions, learnt from experts or population data, into the online trend monitoring process. The results on the simulated data and the clinical signals suggest that the two proposed methods have great promise for trend-change detection.
Both methods provide detailed descriptions of trend patterns, which are desirable as inputs for a diagnostic system. The switching Kalman smoother performed better than the adaptive Cusum test in recognizing trend patterns, especially on the simulated data. This indicates that the second-order Generalized Pseudo Bayesian inference in the switching Kalman smoother provides a better summary of the signal history than the MAP estimation in the adaptive Cusum test. In addition, state pruning was more precisely controlled in the switching Kalman smoother than in the adaptive Cusum test: in the switching Kalman smoother, the potential of every point state is evaluated by the probability of occurrence W_t(s_k), whereas in the adaptive Cusum test, the goodness of state pruning depends on the accuracy of the Cusum test.

Figure 6.12: ROC curves of the standard Cusum test and the adaptive Cusum test on simulated data. The sensitivity parameter σ was set to values from 0.7 to 1.1. Both methods obtained their best performance with σ = 0.9.

Figure 6.13: ROC curves of the standard Cusum test and the adaptive Cusum test on clinical data. The sensitivity parameter σ was set to values from 0.6 to 1.0. Both methods obtained their best performance with σ = 0.8.

Table 6.3: Results on simulated data, unit: percentage (number)

                    Accurate       Acceptable     FP           FN
    Standard Cusum  —              87.8% (237)    15.9% (43)   11.9% (32)
    Adaptive Cusum  60.0% (162)    91.8% (244)    8.2% (25)    4.1% (11)
    Switching KS    73.3% (198)    90.4% (244)    6.7% (18)    6.7% (18)

    Switching KS: switching Kalman smoother; FP: false positive; FN: false negative

Table 6.4: Results on clinical data, unit: percentage (number)

                    Accurate      Acceptable    FP           FN
    Standard Cusum  —             79.6% (43)    14.8% (8)    5.6% (3)
    Adaptive Cusum  35.2% (19)    85.2% (46)    11.1% (6)    3.7% (2)
    Switching KS    53.7% (29)    83.3% (45)    14.8% (8)    3.7% (2)

    Switching KS: switching Kalman smoother; FP: false positive; FN: false negative

The existence of artifacts is one of the reasons for the false detections. Artifacts often have a very large amplitude and a brief duration. Artifacts of this type could be defined as an extra pattern in the GHMM and recognized during change detection. The performance of both methods could be improved if artifacts were effectively recognized and removed.

The performance of both methods on clinical data was not as good as on simulated data. One reason for the degraded performance is that the model parameters could not describe the signals accurately, owing to the limited number of training cases. It would be desirable to train the model using a large number of annotated cases and, based on the new parameter estimates, to re-evaluate the performance of the state-estimation methods. It is also possible to estimate the parameters from non-annotated data using the EM algorithm [93]; however, the EM estimates can only be trusted when initialized with good ML estimates.
The change-detection performance with the parameters obtained in this way should be compared with the performance obtained with the current settings. To reduce annotation bias, it would be desirable to have multiple anesthesiologists annotate the signals and to use the common annotations as the reference.

The tradeoff between temporal granularity and efficiency is an unavoidable problem for any computationally intensive algorithm. The computational and space complexity of both methods increase with the length of the segmental pattern. Many physiological variables in the current clinical setting are measured at a much higher frequency than NIBPmean. For example, HR is usually measured every 1 or 5 seconds. At such a sampling rate, a trend pattern typically lasts for over one hundred data points, and if the proposed algorithms were directly used to monitor variables recorded at such a high frequency, the computational and space complexity could become intractable. To handle this problem, we recommend that signals measured at a high sampling rate be segmented into small time slices and that the proposed methods be applied to features of these slices, such as the mean or median, instead of the original data. However, this preprocessing scheme should only be used with full awareness that it may reduce the resolution of pattern recognition. The size of the time slices should be decided according to the purpose of the specific application.

In contrast to the non-adaptive Cusum test used in Chapter 4, the adaptive Cusum test based on the GHMM does not generate repetitive detections for sustained trend changes. In the proposed GHMM, the probability of a transition between two segments of the same direction is very low. Therefore, after a trend change is detected, the rise distance of the V-mask arms designed to detect changes in the same direction automatically becomes very large. Repetitive detections are avoided in this way.
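The recommended time-slice preprocessing can be sketched as follows, here reducing each fixed-width slice to its median; this is a sketch only, and the slice width must be chosen per application:

```python
import statistics

def slice_features(signal, width, feature=statistics.median):
    """Reduce a high-rate signal to one feature value per fixed-width
    time slice (median by default), so the slower trend-monitoring
    methods can run on the slice features instead of the raw samples.
    Trailing samples that do not fill a complete slice are dropped."""
    return [feature(signal[i:i + width])
            for i in range(0, len(signal) - width + 1, width)]
```

For HR sampled every second, a width of one to three minutes of samples would trade temporal resolution for tractable complexity, in line with the discussion above.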
The performance of the standard Cusum test in this chapter appears worse than in Chapter 4. This can be explained by annotation bias. In Chapter 4, data annotation was performed on the detection results and was therefore highly susceptible to bias. In this chapter, the annotation was performed on the unprocessed data; the raters were not influenced by the detection results and were therefore more objective.

The proposed switching Kalman smoother can be used to monitor trend signals of other physiological variables, or time series from other application areas, as long as the signal can be described within the GHMM framework, i.e., there is a finite number of trend patterns, the transitions between these patterns follow a first-order Markov process, and the intra-segmental variations can be appropriately described by a state-space model. The switching DLM used with the switching Kalman smoother is transformed from the GHMM in this chapter; this specialization is reflected in the calculation of Pr(s_t | s_{t−1}). Apart from that, the proposed fixed-point switching Kalman smoother can be used for state estimation with general switching dynamic linear systems.

Chapter 7

Multivariate change detection based on factor analysis

7.1 Introduction

The previous chapters addressed the problem of adaptive trend-change detection in a single variable. The proposed univariate methods can be applied to each individual trend signal in the sensor matrix to monitor the adequacy of a patient's physiological status and the integrity of the monitoring system. More than twenty physiological variables are routinely measured during a standard surgical procedure. These variables are closely interrelated, providing an integrated observation of a patient's physiological state. Monitoring the signals separately is therefore not the best solution for physiological monitoring. First, the occurrence of a clinical event often causes trend changes in multiple variables.
For example, varying the depth of anesthesia causes trend changes in almost all cardiodynamic and respiratory variables. These trend changes will trigger a large number of alerts if not fused together, increasing the cognitive load of the attending anesthesiologist. Second, many adverse events can only be detected and identified by analyzing the interrelationship between the directions and amplitudes of these trend changes. For example, BP may give an early indication of a cardiovascular problem in an anesthetized patient. However, in a typical case of moderate intraoperative hemorrhage, the sympathetic autonomic nervous system will cause an increase in HR to compensate for a reduction in stroke volume, resulting in a steady BP. The drop in BP is therefore usually small until blood loss exceeds the compensatory mechanisms. Because of these homeostatic mechanisms, it is sometimes difficult for a univariate method to detect events effectively if the changes in other variables are ignored.

In this chapter, Factor Analysis (FA) is used to address these problems by extracting the linear structure in the relationship between trend variations in different variables. The extracted model represents the pharmacological effect of the anesthetics under normal conditions, and can be used to calculate predictions for the whole set of variables. As in the univariate study, investigating the forecast residuals reveals adverse events during surgery. The performance of the proposed method was evaluated using simulated signals for the scenarios of intraoperative hemorrhage and light anesthesia, generated using a surgical simulation software tool. The study is intended to evaluate the potential, and to identify the challenges, of applying FA to multivariate physiological monitoring. The work has been published in the Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society [162].
7.2 Factor analysis

As most intraoperative events can only be detected by investigating the trends of physiological variables, the first step is detrending. The trend signals are centered using an EWMA forecaster (see Section 4.1.1).

A patient's intraoperative status varies under the regulation of several homeostatic control systems. If we assume that the variations in each physiological variable are a linear combination of the variations in unobservable latent variables plus a specific additive component, the residuals after detrending can be modeled as in Equation (7.1):

    X = A f + e    (7.1)

where X (n×1) represents the observed forecast residuals; the loadings matrix A (n×m) is a constant matrix; the common factors f (m×1) represent independent latent variables that influence multiple observed variables; and the specific factors e (n×1) form a vector of residuals, each of which contributes to only one particular variable. If the training data set is large enough and includes trend variations of different magnitudes and directions, X, f and e can be assumed to follow multivariate Gaussian distributions with zero mean, with covariance matrices S = E(XX^T), φ = E(ff^T), and ψ = E(ee^T). FA separates the common factors f from the specific factors e and requires their covariances to satisfy:

1. φ = I: the common factors f are independent of each other and follow the normal distribution;
2. ψ is diagonal: the specific factors e are uncorrelated;
3. E(ef^T) = 0: f and e are uncorrelated.

Principal Component Analysis (PCA) is another popular technique for dimension reduction. However, many researchers have argued that PCA should not be used as a model, as it only summarizes the training data and is not suitable for representing data beyond the original measurements [147].
There have been a number of comparative simulation studies, such as [125], in which PCA was found to be inferior to FA in recovering the underlying structure of data simulated from a factor model. Although PCA and FA are conceptually different, the two techniques often give similar numerical results in empirical studies, especially when the number of latent variables m is far smaller than the number of observed variables n and the specific covariance ψ has only small terms [68, 147].

The factor model estimated using the ML method (see below) is independent of the units of measurement of the signals: if a variable y_i is multiplied by a constant, the ith row of the loading matrix A is multiplied by the same constant, and the corresponding specific variance by the square of the constant [66]. This property is very desirable for physiological monitoring, as some physiological variables, such as Ppeak, may be measured in different units in clinical practice.

The relationships between variations in the measured physiological variables during surgery are mainly influenced by the anesthetic agents. In this chapter it is assumed that the same anesthetic is used throughout the maintenance phase of anesthesia. The method is intended to detect changes either in the covariance structure or in the trend of the latent variables. This task is realized in two steps: model estimation and change detection.

7.2.1 Maximum likelihood estimation of a factor model

Many methods have been proposed to estimate a factor model, including canonical correlation analysis [108], principal factor analysis [110], the Maximum Likelihood (ML) method [66], and Bayesian methods [104]. The ML and Bayesian methods have gained great popularity in recent years with the growing availability of computational power. ML FA is used for model estimation in this chapter. The first step in ML FA estimation is to calculate the sample covariance matrix Ŝ.
Ŝ is calculated by averaging XX^T over the training set:

    \hat{S} = \frac{ \sum_{k=1}^{K} X_k X_k^T }{ K-1 }    (7.2)

where X_k is the kth sample vector of forecast residuals in the training set, and K is the total number of sample vectors. The correlation matrix can be obtained by normalizing each variable by its standard deviation.

Under the multivariate normality assumption, the ML method estimates the loading matrix A and the specific covariance ψ from Ŝ by maximizing the log-likelihood log Pr(X | A, ψ). This problem is equivalent to:

    (A, \psi) = \operatorname*{argmin}_{A, \psi} \{ \operatorname{tr}(b) - \log |b| - n \}    (7.3)

where b = (AA^T + ψ)^{-1} Ŝ. Many methods have been proposed for this optimization problem given the number of common factors [66, 67, 112].

Number of common factors. The Generalized Likelihood Ratio (GLR) test is often used with the ML method to determine the number of common factors m for a factor model [66]. In the GLR test, K(tr(b) − log|b| − n), where K is the sample size, has an asymptotic χ² distribution under the null hypothesis that m = m̂. The number of degrees of freedom of this χ² distribution is:

    d = \frac{1}{2}\left[ (n - \hat{m})^2 - (n + \hat{m}) \right].    (7.4)

The goodness-of-fit test usually needs to be carried out several times, with m̂ increased stepwise until a satisfactory significance level is obtained. PCA is often performed on the sample correlation matrix to generate an initial guess for m [30].

Structure of the loading matrix. The loading matrix A estimated by the ML method is not the only matrix that satisfies the conditions required by a FA model: any orthogonal rotation of the estimated A, together with the estimated ψ, also meets these conditions. Further constraints are therefore imposed on A to find a loading matrix with a desirable structure. We assume that the variables tend to load either strongly or weakly on the common factors. An orthogonal rotation called Varimax [71] is used to drive the terms in A toward −1, 0, or 1 and away from intermediate values.
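Equations (7.2) and (7.4) translate directly into code. A sketch using NumPy, where `X` is assumed to be the K×n matrix of detrended residual vectors stacked row-wise:

```python
import numpy as np

def sample_covariance(X):
    """Eq. (7.2): S_hat = sum_k x_k x_k^T / (K - 1), for the K residual
    vectors stored as the rows of X (assumed already detrended/centered)."""
    K = X.shape[0]
    return X.T @ X / (K - 1)

def glr_dof(n, m_hat):
    """Eq. (7.4): degrees of freedom of the asymptotic chi-square
    distribution of the GLR statistic, for n observed variables and
    m_hat hypothesized common factors."""
    return ((n - m_hat) ** 2 - (n + m_hat)) // 2
```

Note that `glr_dof` must stay positive for the test to be meaningful, which bounds how many factors can be tested for a given number of observed variables.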
Varimax maximizes:

Q = Σ_{j=1}^{m} { Σ_{i=1}^{n} A_{ij}^4 − (1/n) ( Σ_{i=1}^{n} A_{ij}^2 )² }.    (7.5)

7.2.2 Change detection based on the FA

Calculation of factor scores

As mentioned before, the relationship between the variations in physiological variables is mainly determined by the pharmacological effect of anesthetic agents. More than one FA model is needed to represent the normal responses of an average patient to a wide variety of anesthetic combinations. The first step of intraoperative monitoring is to select the right factor model according to contextual information. With the selected factor model, the common scores f and the specific scores e for a new observation X can be calculated as below [137]:

f̂ = A^T S^{−1} X
ê = X − A f̂    (7.6)

Test statistics for change detection

Intraoperative events cause variations in the common scores f (A0→A1 in Figure 7.1) and/or in the correlation structure (A0→B0 or A1→B1 in Figure 7.1).

Figure 7.1: Intraoperative events may cause physiological signals to change along the direction defined by the common factors (A0→A1) or in the covariance structure (A0→B0 or A1→B1). The structural changes A0→B0 and A1→B1 have the same degree of clinical significance in most scenarios.

In the latter case, the deviations A0→B0 and A1→B1, although with different magnitude |e|, have the same degree of clinical relevance. The test statistics F and E are designed to capture these changes:

F(t) = f̂^T f̂
E(t) = Σ_{k=t−T}^{t} √( ê(k)^T ψ^{−1} ê(k) )  /  Σ_{k=t−T}^{t} √( X(k)^T X(k) )    (7.7)

F measures the distance from the stable state where the common scores are zero, and E indicates the degree of rotation between the current covariance structure and the covariance under normal conditions.
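A minimal sketch of Equations (7.6)–(7.7) follows, assuming A, S, and ψ come from a previously estimated factor model and that ψ is stored as a vector of specific variances; the function names are illustrative, not from the thesis.

```python
import numpy as np

def factor_scores(A, S, x):
    """Common scores f and specific scores e for one observation x (Eq 7.6)."""
    f = A.T @ np.linalg.solve(S, x)
    return f, x - A @ f

def fa_statistics(A, S, psi, X_win):
    """F and E over a T-point window of residual vectors (Eq 7.7).

    X_win: (T, n) array, most recent observation in the last row.
    psi: (n,) vector of specific variances."""
    scores = [factor_scores(A, S, x) for x in X_win]
    f_now = scores[-1][0]
    F = float(f_now @ f_now)                      # distance from the stable state
    num = sum(np.sqrt(e @ (e / psi)) for _, e in scores)
    den = sum(np.sqrt(x @ x) for x in X_win)
    E = float(num / den)                          # degree of structural rotation
    return F, E
```

For an observation lying along the common-factor direction, the specific scores are small and E stays low, while F reflects the magnitude of the common scores.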
In E, the specific scores e are scaled by the amplitude of local variations, so as to highlight changes in the correlation structure. In Equation (7.7), the magnitudes of local variations in the observations X and in the specific scores e are both filtered using a T-point moving-average filter before the ratio of deviation is calculated. Otherwise, when random oscillations are not removed beforehand, the division may exaggerate the uncertainty in these variations. For the same reason, if the magnitude of local variations in X is too small, E should not be used for change detection (see the uncertain area in Figure 7.1).

7.3 Simulated test

7.3.1 Generation of simulated scenarios

Simulated clinical cases were generated using a commercially available anesthesia trainer software, Body Simulation (Advanced Simulation Corporation, WA, USA). The cardiopulmonary modeling in this software provides clinically realistic responses to drugs and other stimuli. An experienced anesthesiologist virtually performed general anesthesia using this software on healthy patients who underwent a surgical procedure for inguinal lymph node removal. All cases started with a modified rapid-sequence intravenous induction of anesthesia with propofol (100 mg), morphine (5 mg), and rocuronium (6 mg), followed by endotracheal intubation. The patients were all on controlled ventilation. Anesthesia was maintained with the infusion of propofol. The procedure was performed on simulated patients of three different ages (44, 65, and 95 years). The cardiovascular characteristic parameters, such as the venous compliance, were adjusted to reflect the typical baroreceptor reflex in each age group. The scenarios of light anesthesia and slight to moderate intraoperative hemorrhage were simulated in each patient after endotracheal intubation.
The propofol infusion rate was reduced from the baseline of 6 mg/kg/h to different rates 1-3 times over the duration of each case, in order to simulate the scenario of light anesthesia. In the hemorrhage cases, the propofol infusion rate was kept constant at 6 mg/kg/h, and a blood loss of 100 ml/min for 10 minutes (Hemorrhage-A), 50 ml/min for 10 minutes (Hemorrhage-B), or 25 ml/min for 10 minutes (Hemorrhage-C) was introduced during the maintenance period of anesthesia. For each age group, 10 light-anesthesia cases, 2 Hemorrhage-A cases, 2 Hemorrhage-B cases, and 2 Hemorrhage-C cases were generated. In total there were 30 light-anesthesia cases, including 84 episodes of varying depth of anesthesia (53 decreases and 31 increases), and 18 cases of intraoperative hemorrhage. The trend signals of 12 physiological variables, including SpO2, SvO2, HR, MAP, MCVP, BPsys, BPdia, TVexp, EtO2, EtCO2, Ppeak, and RR (see Appendix II for more details on these variables), were recorded at a sampling interval of one second. The starting and end points of each event were annotated by the same anesthesiologist using M-eViewer, with reference to the event log in Body Simulation. The signals were detrended using an EWMA forecaster as in Chapter 4. The forgetting parameter λ in the EWMA model for each physiological variable was set empirically by investigating the filtering residuals for the steady segments. With the chosen forgetting parameter, the mean absolute residual over the first 2 minutes after the start of every steady segment, averaged over all the annotated steady segments, should be less than twice the standard deviation of the measurement noise. Fifteen light-anesthesia cases, 5 from each patient group, were used for model estimation. RR and Ppeak were excluded from the study since these values were steady during the entire maintenance phase of anesthesia in all the simulated cases.
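The detrending step can be sketched as follows, using a simplified one-step-ahead EWMA forecaster; the per-variable tuning of λ described above is assumed to have been done already.

```python
import numpy as np

def ewma_residuals(y, lam):
    """One-step-ahead EWMA forecast residuals used to detrend a trend signal.

    lam is the forgetting parameter: larger lam discounts history faster.
    The forecast for time t is the EWMA of the data up to t-1."""
    level = y[0]                                  # initialize with first sample
    resid = np.empty(len(y))
    for t, yt in enumerate(y):
        resid[t] = yt - level                     # forecast residual
        level = lam * yt + (1 - lam) * level      # EWMA update
    return resid
```

A constant signal yields zero residuals, while a level shift appears as a nonzero residual that decays as the EWMA catches up, which is the property the change-detection tests exploit.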
The number of common factors was initially set to 2, determined by eigenanalysis of the correlation matrix S̄ with a 90% CPV threshold. The CPV method calculates the percentage of the overall variance captured by the m principal eigenvalues and accepts m if the percentage is larger than the predefined threshold. The ML model estimation was implemented in Matlab using the standard Optimization Toolbox. m=2 was found to meet the 10% significance level and was accepted as the number of common factors. The remaining 15 light-anesthesia cases, containing 47 episodes of varying depth of anesthesia, and the 18 intraoperative hemorrhage cases were used with the estimated FA model to verify the performance of the proposed test statistics.

7.3.2 Case study

A hemorrhage scenario and a light-anesthesia scenario in the same 65-year-old patient were selected to demonstrate the temporal variations of the proposed FA-based statistics in different situations. The first case (see Figure 7.2) is a typical hemorrhage scenario. In this case, the hemorrhage started at a rate of 100 ml/min after the cardiovascular vital signs reached the post-induction stable stage and then continued for 10 minutes until the patient eventually collapsed. Although intraoperative hemorrhage is a serious clinical event and demands immediate intervention, the resulting variations in the monitored physiological variables were very subtle. For example, the Mean Arterial Pressure (MAP) dropped from around 63 mmHg to around 48 mmHg before the collapse. The speed of change was around 1.5 mmHg/minute, a rate that can be very difficult to detect during standard clinical monitoring. The proposed statistics summarize the changes in multiple variables and capture the subtle deviation from the normal covariance structure. During hemorrhage, F did not show obvious changes, while the elevated E over the bleeding period indicated that the FA model was no longer valid.
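The CPV rule used for the initial guess of m can be sketched in a few lines (the function name is illustrative):

```python
import numpy as np

def n_factors_by_cpv(R, threshold=0.90):
    """Initial guess for the number of common factors via the Cumulative
    Percentage of Variance of the eigenvalues of a correlation matrix R."""
    w = np.sort(np.linalg.eigvalsh(R))[::-1]      # eigenvalues, descending
    cpv = np.cumsum(w) / np.sum(w)
    return int(np.argmax(cpv >= threshold)) + 1   # smallest m reaching the threshold
```

For example, a spectrum dominated by two large eigenvalues crosses a 90% threshold at m=2, which then seeds the GLR goodness-of-fit test.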
This observation is not surprising: F summarizes the changes in the magnitude of the common scores, and the variations in all the variables were very subtle during the bleeding, while the statistic E is designed to capture changes in the relationship structure without considering the amplitude of variations. At the collapsing point, both F and E increased dramatically due to very large variations in amplitude. (The transient decrease in E right after the patient collapsed may be explained by the fact that the changes in the signals are not synchronized.) In the second case (see Figure 7.3), the depth of anesthesia was decreased, and then increased, by changing the infusion rate of propofol. The F statistic increased significantly over the duration of varying depth of anesthesia. The E statistic stayed at a very low magnitude (<0.1), indicating that the variations were mainly due to the effects of drugs. The E statistic increased slightly at the beginning of the increase in the depth of anesthesia because the responses of different physiological variables to the increased propofol infusion rate were not synchronized, and thus resulted in a temporary deviation from the normal inter-variable relationship.

7.3.3 Performance of the Cusum test on F and E

The F and E statistics were calculated and then averaged over every annotated episode in all the simulated cases. These averaged values, denoted as F̄ and Ē, were grouped according to the patient status in the corresponding episode: SF̄s and SĒs contain the mean values F̄s and Ēs for the stable episodes, SF̄h and SĒh contain the mean values F̄h and Ēh for the hemorrhage episodes, and SF̄va and SĒva contain the mean values F̄va and Ēva for the episodes of varying depth of anesthesia. As demonstrated in the case study, when the patient status is stable, both F and E are close to zero.
In contrast, when a clinically relevant event occurs during anesthesia, either F or E increases (or both increase). To detect intraoperative events online, the problem becomes change-point detection in the test statistics F and E. The Cusum test was used to detect upshifts in F and E. Although the uncertainty components in F and E do not follow a Gaussian distribution, the Cusum test, as a robust technique, can still be used for change detection. The length-constrained Cusum C+ proposed in Chapter 4 was used here, with the forgetting length set to 180 sampling intervals (3 minutes) for both statistics. The definition of C+(t) is repeated below:

C+(t) = max( C+(t−1) + e(t) − d+/2, 0 )    (7.8)

where e(t) represents the signal to be tested, i.e., F(t) or E(t). The Cusum statistics C+_F and C+_E were tested against the corresponding thresholds h+_F and h+_E to detect changes in the amplitude of the common factors or in the correlation structure. The target upshift of minimum interest d+ in Equation (7.8) and the test thresholds were set empirically. To detect the occurrence of a varied depth of anesthesia, the magnitude of d+_F was set to half of the minimum of SF̄va plus the average of SF̄s, and the test threshold h+_F was determined by trial and error to obtain the highest true positive rate. The Cusum test was performed twice on the E statistic. In the first test, d+_E and h+_E were set similarly as in the test on the F statistic, and fixed for all the cases. In the second test, d+_E and h+_E were manually adjusted for each case. The detection results are listed in Table 7.1. A true positive TPF (TPE) detection refers to an event detected in an unstable episode by testing the statistic F (E). A false positive FPF (FPE) detection refers to an event generated during a stable episode by testing the statistic F (E). If more than one positive detection was generated in the same episode, only the first one was counted.
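As a sketch, the one-sided Cusum recursion of Equation (7.8) with a threshold test can be written as below; the 180-sample length constraint from Chapter 4 is omitted for brevity.

```python
def cusum_upshift(signal, d_plus, h_plus):
    """One-sided Cusum C+ (Eq 7.8) tested against a threshold.

    Returns the index of the first alarm (C+ > h_plus), or None.
    d_plus is the target upshift of minimum interest."""
    c = 0.0
    for t, e in enumerate(signal):
        c = max(c + e - d_plus / 2.0, 0.0)   # accumulate evidence of an upshift
        if c > h_plus:
            return t
    return None
```

With d_plus = 0.5 and h_plus = 2.0, a unit-level shift starting at sample 10 accumulates 0.75 per sample and raises an alarm a few samples later, while a zero-mean signal never alarms.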
Missed events refer to the episodes with a real event that was not detected by either F or E.

Table 7.1: Detection of hemorrhage and varying depth of anesthesia using the Cusum test on the statistics F and E based on a factor model

Scenario     | TPF        | TPE        | FPF | FPE        | Missed
Varying DA   | 41 (87.3%) | 2 (4.3%)   | 0   | 0          | 4 (8.5%)
Bleeding0    | 0          | 15 (83.3%) | 0   | 13 (72.2%) | 3 (16.6%)
Bleeding∗    | 0          | 15 (83.3%) | 0   | 3 (16.6%)  | 3 (16.6%)

The test cases contained 47 episodes of varying depth of anesthesia and 18 episodes of bleeding. DA: depth of anesthesia; TPF (TPE): true positive detection generated by testing on F (E); FPF (FPE): false positive detection generated by testing on F (E); Bleeding0: results with fixed settings for the Cusum test; Bleeding∗: results with the Cusum test manually adjusted for each case.

The Cusum of F appears to be an effective indicator of varying depth of anesthesia. As for hemorrhage detection, the performance of E with the fixed Cusum settings (see Bleeding0 in Table 7.1) was unsatisfactory. Investigation of the failed cases revealed that the magnitudes of the episode means Ē in the same status group (SĒs, SĒh, or SĒva) varied dramatically across cases. The magnitudes of Ēh for the hemorrhage episodes in some cases were even smaller than the magnitudes of Ēs for the stable episodes in other cases. If the magnitudes of d+_E and the threshold h+_E could both be adjusted for each case, the detection performance would be greatly improved (see Bleeding∗ in Table 7.1).

7.4 Discussion

The results of the Cusum test appear to be highly correlated with the occurrence of simulated intraoperative events. The two simulated clinical scenarios are different in nature: the physiological variables still follow the inter-relationship determined by the effect of anesthetic agents during variations in the depth of anesthesia, but not during bleeding.
In the factor model, a variation in the depth of anesthesia presents itself as a trend change in the magnitude of the common factors, while a hemorrhage presents itself as a sustained deviation from the normal correlation structure. The proposed statistics were able to capture and differentiate these events if the statistical test on them was appropriately configured. However, how to configure the statistical test remains a challenge. In all the simulated cases, the F statistics for the stable episodes were consistently close to zero, and the increases in F caused by variations in the depth of anesthesia were significantly larger than zero. In contrast, there was no constant baseline for the E statistics. Any variations not captured by the common factors, regardless of whether they were caused by intraoperative events, inter-patient variability, or unsynchronized responses of different physiological variables to a varied infusion rate, are all transferred to the residual part E, resulting in a great variability in E. Using fixed Cusum parameters generated a high rate of false detections. It is desirable to adapt the test online to the characteristics of each case. A potential solution is to detect trend changes instead of level shifts in the E statistics using the univariate methods proposed in the previous chapters. The statistic E summarizes the variations in multiple variables, but also discards some information. One of the discarded properties is the direction of change. As seen in Figure 7.3, the elevated E statistics over the annotated Span B were actually caused by two events: light anesthesia followed by an increase in the depth of anesthesia. The E statistics for the two events merged into one sustained increase and can only be differentiated by checking the contribution of each individual common factor. Most missed episodes of varying depth of anesthesia in Table 7.1 can be attributed to this cause.
There are a few challenges in applying the proposed method to real clinical data. The first challenge is data annotation. To extract an accurate factor model and to learn the characteristics of the proposed statistics in different scenarios, we need to collect a large volume of data from a variety of patient groups. All these data need to be classified according to their contextual information, and for each intraoperative event, the starting and end points of the relevant changes in each physiological signal need to be annotated. Secondly, the variables should be weighted according to their relative significance. Although weighting the variables does not change the structure of the FA model estimated using the ML method, the magnitude of the factor scores will be influenced by the weighting factors. Furthermore, in practice, the uncertainty in the signals does not follow a Gaussian distribution. For many events, the changes in different variables are not as well synchronized as in the simulated data. More sophisticated factor models, such as dynamic factor analysis [88], should be used to address this problem. The statistic F can summarize the variations in all the variables, and the statistic E can capture subtle events by testing the adequacy of a pre-learnt factor model. However, multivariate monitoring techniques, even with successful application to real clinical data, cannot completely replace univariate trend-change detection methods. Univariate methods usually perform better than multivariate methods in recognizing temporal features. How to fuse the results of multivariate methods with those of univariate methods is an interesting research topic.

Chapter 8: Artifact detection and data reconciliation

The existence of artifacts creates a major challenge for physiological trend monitoring.
At each sampling time, the measurement yi(t) of the ith physiological variable in an M-dimensional sensor matrix is the level of the physiological variable at the site of interest, y*i(t) (called the true signal level), plus the overall noise contamination, as described in Equation (8.1). The overall noise ni(t) refers to the distortion caused conjointly by background noise ηi(t) and artifacts ai(t). Since artifacts are often caused by interference or system malfunction, they are also referred to as gross errors.

yi(t) = y*i(t) + ηi(t) + ai(t),  ni(t) = ηi(t) + ai(t);  ηi(t) ~ N(0, Ri),  i ∈ 1…M    (8.1)

This chapter estimates the true signal value y*i in the presence of background noise ηi and artifacts ai. The background noise ηi(t) is assumed to follow a Gaussian distribution and to be independent over time and across sensors. The magnitude of the noise variance Ri can be different for different sensors but is assumed to be constant over time for the same sensor. As pointed out in Chapter 1, artifacts ai are grouped into transient artifacts and sustained short-peak artifacts according to their frequency power spectrum or their duration. The frequency spectrum of short-peak artifacts has a considerable overlap with that of physiological trend changes; therefore, the presence of short-peak artifacts may degrade the performance of the proposed trend-change detection methods significantly. The measurements contaminated with background noise still carry information about the true signal values and should be utilized in the process of signal estimation. However, if an artifact ai(t) is present in the ith sensor, the measurement yi(t) often deviates far from the true physiological value. Using the artifact in signal estimation will distort the estimates for other sampling instants or other variables. A signal estimation method in
physiological monitoring always consists of two processes: (1) artifact detection, to identify and eliminate the contaminated measurements, and (2) signal estimation, to derive the true signal level. If a variable is contaminated with artifacts, the true values should be estimated from other information sources if measurement redundancy exists. Measurement redundancy commonly exists in the sensor network of the current critical care environment. Most physiological variables are measured at a frequency much higher than that of normal physiological variations. The measurements of a variable in a short time window can be viewed as repetitive observations, or are related in a known manner. These relationships are referred to as temporal redundancy. Furthermore, some signals are measured by more than one independent sensor, or are related by physiological mechanism or measurement principle. These relationships are referred to as structural redundancy. Both temporal redundancy and structural redundancy exist in intraoperative HR measurements. This chapter uses the example of HR trend signals to demonstrate how temporal and structural redundancy can be used to solve the problem of artifact detection and signal estimation. This chapter begins with a brief review of the temporal filtering techniques used in the previous chapters. The framework of data reconciliation is then introduced to formalize the problem of signal estimation in the presence of measurement redundancy. In the end, both temporal and structural information are used in the framework of data reconciliation, generating the stochastic dynamic data-reconciliation process. A hybrid median filter is proposed as an implementation of the dynamic data-reconciliation process for HR measurements. The results demonstrate that the accuracy of signal estimation can be greatly improved by incorporating structural redundancy into the signal estimation process.
This is particularly true when signals are contaminated with short-peak artifacts. The proposed hybrid median filter provides a robust estimate of the HR signal without requiring strict assumptions about the signal's characteristics, and therefore has great potential for practical use. The work in this chapter has been published in the Journal of Clinical Monitoring and Computing [160].

8.1 Methods

8.1.1 Univariate temporal filtering

Univariate filtering techniques utilize temporal redundancy to identify artifacts and generate signal estimates. Temporal filters formalize the knowledge about redundancy as stochastic dynamic models. Statistical tests based on the forecasting models can then be used to detect artifacts and estimate the true signal levels. Models used for the purpose of noise reduction are very different from those used for trend monitoring. Models used in trend monitoring, as seen in the previous chapters, are designed to summarize clinically relevant variations, with insignificant physiological oscillations reflected as model uncertainty. For the purpose of signal estimation, the dynamic models used in this chapter should be more sensitive. An appropriate model for noise reduction should be able to quickly follow all the variations in the physiological variables, so that only measurement noise and artifacts are separated out in the forecast residuals. One option for noise reduction is to first detect artifacts and then estimate the signals from the uncontaminated measurements. Another possibility is to use robust estimators to directly estimate the signals from the artifact-contaminated measurements. Both schemes are explained in the following sections. The methods are based on different descriptions of the signals' dynamics, and their results are optimized with respect to different criteria.
8.1.1.1 Kalman filter as MMSE estimator

The Kalman filter is a recursive Minimum Mean Square Error (MMSE) estimator for signals described by a state-space dynamic linear model. The dynamic linear growth model in Equation (5.1) follows signal variations very quickly. This property is not desirable for trend monitoring, but serves the purpose of artifact detection very well, and is used here to describe the HR signal. Since the background noise is assumed to follow a Gaussian distribution, the estimates generated using the Kalman filter are also optimal in the sense of maximum likelihood. Artifact detection should be performed before the standard Kalman filtering process (see Section 5.1.2) is used. The forecast residuals e should be tested at every sampling instant. Only the measurements not contaminated with artifacts should be used for updating the signal estimates. Since the Kalman filter is a linear filter and the background noise follows a Gaussian distribution, many statistical tests used for statistical process control can be used for artifact detection. These include the Shewhart (threshold) chart and the Cusum test. The threshold method is the most widely used scheme because it introduces a minimal time delay. If a measurement is detected as an artifact, the prediction is used as the signal estimate. Most linear filters, including the moving-average filter and the EWMA filter, can be used to estimate signal values after artifact detection.
Figure 8.1: Comparison of the standard, structural, and hybrid median filters that generate the estimate ŷ(t). (a) Temporal median filter: Med[ y(t−2), y(t−1), y(t) ]. (b) Structural median filter: Med[ y3(t), y2(t), y1(t) ]. (c) Hybrid median filter, which combines current and previous measurements from several sensors (e.g., y1(t), y2(t), y1(t−1), y2(t−1)).

The estimation accuracy is partly determined by the threshold used in artifact detection and also by the accuracy with which the underlying dynamic models of the filters describe the signal.

8.1.1.2 Median filter as optimal L-1 filter

Standard median filter

A standard T-point moving median filter replaces the current measurement with the median of the most recent T raw data points (see Figure 8.1-(a), where T=3). The process is represented by:

ŷ(t) = Med[ y(t−T+1) … y(t) ]    (8.2)

where Med represents the median operator. To find the median, the numbers in the filter window are first arranged in increasing (or decreasing) order. The median is then the center value if T is odd, or the average of the middle two values if T is even. In this section, T is assumed to be odd, i.e., T = 2N+1, where N is a positive integer. The filter window moves one step forward after a new measurement is received. The properties of the (2N+1)-order standard median filter are described in [46] using three basic signal characteristics. A constant neighborhood is a region of at least N+1 consecutive constant values; an edge is a monotonic region of at least N+1 sampling intervals between two constant neighborhoods of different values; and an impulse is a maximum of N−1 points between two constant neighborhoods of the same value. The standard median filter completely removes impulses but preserves the shape of edges. The delay introduced by the (2N+1)-order median filter is N sampling intervals. In essence, an appropriately configured median filter can remove transient noise without attenuating signal edges. This property has established median filters as the filter of choice for denoising trend signals [22, 60, 81]. The standard median filter is used in the previous chapters for reducing transient artifacts.
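The impulse-removal and edge-preservation properties can be seen in a small sketch with hypothetical data; the partial-window handling at the start of the record is a simplification of the strict definition.

```python
import numpy as np

def moving_median(y, T=3):
    """Standard T-point moving median (Eq 8.2); the first T-1 outputs use a
    partial window instead of the full T points."""
    return np.array([np.median(y[max(0, t - T + 1):t + 1]) for t in range(len(y))])

# With T=3 (N=1): a 1-point impulse (< N+1 points) is removed entirely,
# while the edge at t=6 is preserved with a delay of N=1 sample.
y = np.array([0., 0., 0., 9., 0., 0., 5., 5., 5., 5.])  # impulse at t=3, edge at t=6
f = moving_median(y, T=3)
```

Running this, the impulse at t=3 disappears from the output while the step to 5 survives, illustrating why the median filter suits trend denoising.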
Median filter as L-1 estimator

The assumption underlying the standard median filter is that the measurements falling within a short window T are samples of the same signal level y*(t), as described in:

y*(t) − y(t) = ε(t)
y*(t) − y(t−1) = ε(t−1)
…
y*(t) − y(t−T+1) = ε(t−T+1)    (8.3)

where ε is a random variable with zero mean, representing the uncertainty about the equality relationship. The degree of variability of ε influences the credibility of the corresponding measurement. If the model in Equation (8.3) is valid, the result of the standard median filter is an optimal estimate of the true value in that it minimizes the sum of absolute deviations (the L-1 norm), i.e.:

ŷ(t) = argmin_ỹ { Σ_{k=1…T} |ỹ − y(t−k+1)| }.    (8.4)

Every measurement in the filter window is assigned the same degree of credibility in the standard median filter.

Weighted median filter

Different credibilities can be assigned to the measurements in the filter window by weighting the absolute deviations:

ŷ(t) = argmin_ỹ { Σ_{k=1…T} w(k) |ỹ − y(t−k+1)| }    (8.5)

where w(k) is a positive integer. Given W = Σ_{k=1}^{T} w(k), w(k)/W represents the relative weight of the kth measurement in the filter window. The weighted median filter [21] can be used to find the point that optimizes the criterion defined in Equation (8.5):

ŷ(t) = Med[ y(t−T+1) ¦ w(t−T+1) … y(t) ¦ w(t) ]    (8.6)

where the symbol ¦ represents duplication, i.e., y(t) ¦ w(t) stands for y(t) repeated w(t) times.    (8.7)
Figure 8.2: Structural redundancy in the HR measurements: the mechanical and electrical events associated with each heartbeat are transmitted to and detected by different sensors (the ECG device via the electrical potential, the pulse oximeter via the pulsatile plethysmographic waveform, and the pressure transducer via the pulsatile arterial pressure), producing HRecg, HRSpO2, and HRabp on the anesthesia monitor.

The weight assigned to each measurement in Equation (8.5) is expressed as the frequency with which this measurement appears in the filter data set. Temporal filters can effectively remove transients but have minimal effect on short-peak artifacts. Since the frequency spectrum of short-peak artifacts has a considerable overlap with that of physiological trend changes, any univariate dynamic model that can reduce short-peak artifacts will also distort the relevant trend changes. Reducing short-peak artifacts represents a critical problem for physiological monitoring.

8.1.2 Data reconciliation with structural redundancy

Structural redundancy in HR measurements

A typical example of structural redundancy exists in HR measurements. Intraoperative HR is routinely measured by averaging the reciprocals of R-R intervals from the ECG. HR is also calculated by measuring the interval between peaks in the plethysmographic waveform from the pulse oximeter and, when available, from the interval between peaks of the invasive arterial BP waveform. As the energy generated by the beating heart flows along different paths to these sensors (see Figure 8.2), the measurement of HR is exposed to different types of interference. Artifacts from different propagation paths are independent of one another. Electrocautery noise is a common artifact when the ECG (HRecg) is used to determine HR. HR measured by a pulse oximeter (HRSpO2) is sensitive to patient motion, low perfusion states, and interference from ambient light [58]. Artifacts in invasive arterial BP waveforms (HRabp) are frequently caused by sampling or flushing, disturbances of the sampling fluid column, and occasionally by catheter clotting [84].
The true pulse rates measured by HRSpO2 and HRabp are identical to the true HR. Situations where HR may differ considerably from pulse rate, such as fibrillation and other instances of abnormal rhythm, are not considered. The relationships between these measurements are described as:

y*1(t) − y*2(t) = 0
y*1(t) − y*3(t) = 0    (8.8)

where y*1, y*2, and y*3 represent the true levels of HRecg, HRSpO2, and HRabp, respectively. The dimension of the sensor matrix is M=3.

Framework of data reconciliation

The signal measurements can be rectified so that the estimates are consistent with the known relationships. This process is referred to as data reconciliation. The framework of data reconciliation for an M-dimensional sensor matrix is described below:

min Φ(t) = Σ_{i=1}^{M} φi( ŷi(t), yi(t) )
s.t.  H( ŷ1(t) … ŷM(t) ) = εH
      G( ŷ1(t) … ŷM(t) ) + εG ≤ 0    (8.9)

where the objective function Φ(t) is a distance from the original measurements. Prior knowledge about structural redundancy is formulated into equality or inequality constraints. The uncertainty factors εH and εG are included to allow for inexact knowledge about the constraints. Data reconciliation can be formulated as a problem of constrained optimization. Its implementation is determined by the formulation of the specific problem. If uncertainty exists in the constraints (εH ≠ 0 or εG ≠ 0), the estimates can be found by optimizing a penalized objective function, which is the original objective function plus a measure of the deviation of the estimates from the constraints [83]. For deterministic constrained optimization problems (εH = 0 and εG = 0), as with the HR measurements in Equation (8.8), one of the best-studied formulations is Weighted-Least-Squares (WLS) estimation. If there are only equality constraints and the constraints are all in linear or some special bilinear forms, the optimal ŷ(t) in a WLS problem can be found through linear decomposition [31, 133].
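For the purely equality-constrained HR case, the WLS problem has a simple closed form: minimizing Σ (ŷi − yi)²/Ri subject to ŷ1 = ŷ2 = ŷ3 reduces to a single unknown, whose solution is the inverse-variance weighted mean. A sketch with hypothetical readings and noise variances:

```python
import numpy as np

def reconcile_equal(y, R):
    """WLS reconciliation of M sensors measuring the same true value
    (constraints y1 = y2 = ... = yM): minimize sum_i (yhat - y_i)^2 / R_i.
    The closed-form solution is the inverse-variance weighted mean."""
    y = np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(R, dtype=float)      # weights = inverse noise variances
    yhat = float(np.sum(w * y) / np.sum(w))
    return np.full(len(y), yhat)              # all reconciled estimates are equal
```

A noisier sensor (larger Ri) is pulled toward the more reliable ones; with equal variances the estimate is simply the mean of the readings.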
Otherwise, the problem must be solved numerically.

Structural median filter

As in temporal filtering, artifact detection and data reconciliation can be accomplished in one step using a robust estimator. If all three sensors are available, the L-1 norm can be used as the objective function in the data-reconciliation process, resulting in:

    Φ(t) = ∑_{i=1}^{M=3} |ŷi(t) − yi(t)|
    s.t.  ŷ1(t) − ŷ2(t) = 0
          ŷ1(t) − ŷ3(t) = 0.                                   (8.10)

The constraints are incorporated into the objective function to generate:

    ŷ(t) = argmin_ỹ { ∑_{i=1}^{M=3} |ỹ(t) − yi(t)| }.          (8.11)

ŷ(t) can be found by using the structural median filter shown in Figure 8.1-(b). As in the temporal median filter, the measurements from different sensors can be weighted according to their credibility. The structural median filter introduces no time delay, but it is effective only when a single sensor is contaminated with artifacts. Another limitation is that at least three sensor inputs must be available for the structural median filter to work, yet HRabp is not routinely measured.

8.1.3 Dynamic data reconciliation

The data-reconciliation framework can be extended to include the signals' dynamic characteristics [165]. In this section, knowledge about both the temporal and the structural redundancy is used in the artifact-detection and data-reconciliation processes to improve estimation accuracy.

8.1.3.1 Kalman filter for data reconciliation

In this section, the Kalman filter is used to obtain the MMSE estimates of HR, subject to the dynamic characteristics (Equation (5.1)) and the structural redundancy (Equation (8.10)). The signals are estimated in two steps (Figure 8.3). The first step is to detect artifacts and exclude the contaminated measurements from the data-reconciliation process. Artifacts can be detected using global schemes, such as a χ2-test on the whole set of measurements, or using local schemes, such as a threshold test on each individual measurement channel [133].
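Before turning to the detection step, the structural median filter just described can be sketched concretely. Minimizing the L-1 objective in Equation (8.11) amounts to taking the (weighted) median of the simultaneous readings; the function name and weighting scheme below are illustrative, not from the thesis.

```python
def weighted_structural_median(readings, weights=None):
    """Fuse simultaneous readings y1(t)..yM(t) into one estimate.

    Minimizing the L-1 objective of Equation (8.11) yields the
    (weighted) median of the readings: a weight w_i means reading i
    appears w_i times in the filter data set."""
    if weights is None:
        weights = [1] * len(readings)
    expanded = []
    for y, w in zip(readings, weights):
        expanded.extend([y] * w)        # replicate each reading by its weight
    expanded.sort()
    n = len(expanded)
    mid = n // 2
    if n % 2:                           # odd count: middle element
        return expanded[mid]
    return 0.5 * (expanded[mid - 1] + expanded[mid])  # even: average the two middle

# An electrocautery spike in HRecg is rejected because the
# pulse oximeter and arterial BP channels agree:
print(weighted_structural_median([180.0, 72.0, 71.0]))  # -> 72.0
```

Because the median simply selects the agreeing majority, a gross error in one of the three channels has no influence on the estimate, which is the robustness property exploited throughout this section.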
Here artifacts are detected by testing the forecast residuals of each individual signal against a predetermined threshold σ. Only the measurements not contaminated with artifacts are then processed using a Kalman filter to find the estimate that satisfies the MMSE criterion and the structural constraints.

Figure 8.3: Kalman filter used for estimating heart rate from multiple measurement channels.

Given the equality constraints, the MMSE estimates can be found by augmenting the observation matrix and then performing the standard Kalman filter procedure (see Section 5.1.2) on the augmented system. For example, if HRSpO2 (denoted as y2) is contaminated with artifacts at time t, y2 will be removed from the measurement matrix, and the true value y∗(t) is estimated by filtering the following augmented system:

    x(t) = F x(t−1) + ζ(t),        ζ(t) ∼ N(0, Q(t))
    y(t) = H̃(t) x(t) + η̃(t),      η̃(t) ∼ N(0, R̃(t))
    H̃(t) = [H; H],                 R̃(t) = diag(R1, R3)         (8.12)

where Ri is the covariance matrix of the measurement noise in the ith sensor. H̃(t) and R̃(t) are constructed online according to the results of artifact detection. If all the sensors are contaminated with artifacts, the prediction is used as the signal estimate.

8.1.3.2 A novel hybrid median filter

In this section, the dynamic data-reconciliation problem is optimized with respect to the L-1 norm under the assumptions that (1) the current HR equals the estimate for the previous sampling instant; (2) HR remains constant over a short window T; and (3) HR is measured by multiple sensors. With T=2 and M=2, the optimal ŷ(t) is estimated through:

    min Φ(t) = ∑_{i=1}^{M=2} |ŷi(t) − yi(t)|
    s.t.  ŷ1(t) − y1(t−1) = ε1        Dynamics-(1)
          ŷ2(t) − y2(t−1) = ε2        Dynamics-(2)
          ŷ1(t) − ŷ(t−1) = ε3         Dynamics-(3)
          ŷ1(t) − ŷ2(t) = 0           Structural-(1)           (8.13)

where ε1, …, ε3 indicate the uncertainty about the dynamic constraints. The influence of the stochastic dynamic constraints on the final estimate can be realized by adding penalty terms to the original objective function. Here the penalty terms are also based on the L-1 norm. The penalized objective function Φ̃ is:

    Φ̃(t) = λ0 ∑_{i=1}^{M=2} |ŷi(t) − yi(t)| + λ1 |ŷ1(t) − y1(t−1)|
            + λ2 |ŷ2(t) − y2(t−1)| + λ3 |ŷ1(t) − ŷ(t−1)|
    s.t.  ŷ1(t) − ŷ2(t) = 0                                    (8.14)

where λ0, …, λ3 reflect the relative credibility of each term. If λ0, …, λ3 are all set to one, then replacing ŷ1(t) and ŷ2(t) with ŷ(t) gives:

    Φ̃(t) = |ŷ(t) − ŷ(t−1)| + ∑_{k=0}^{1} ∑_{i=1}^{2} |ŷ(t) − yi(t−k)|.   (8.15)

Minimizing Equation (8.15) generates the result of the hybrid median filter, Med[ŷ(t−1), y1(t−1), y2(t−1), y1(t), y2(t)] (see Figure 8.1-(c)). The hybrid median filter can be used to fuse the measurements from M>2 sensors, and the filter window can be extended to T>2, as long as the assumption about the signal dynamics still holds. The penalized objective function then becomes:

    Φ̃(t) = |ŷ(t) − ŷ(t−T+1)| + ∑_{k=0}^{T−1} ∑_{i=1}^{M} |ŷ(t) − yi(t−k)|.   (8.16)

Weighting of the measurements

Since the hybrid median filter is recursive, its estimate is based not only on the measurements in the filter window but also on all the historical data. If a hybrid median filter with a T-point window is used to fuse the measurements from M independent sensors, the influence of historical data decreases exponentially in a block-wise fashion, with the base of the exponential rate being 1/(TM+1). The proof of this observation is given below.

Statement:

    argmin_{ỹ(t)} Φ̃(t) = argmin_{ỹ(t)} { |ỹ(t) − ŷ(t−T+1)| + Ψ }
                        = argmin_{ỹ(t)} { (1/W) |ỹ(t) − ŷ(t−2(T−1))|
                          + (1/W) ∑_{k=T−1}^{2(T−1)} ∑_{i=1}^{M} |ỹ(t) − yi(t−k)| + Ψ }   (8.17)

where Ψ = ∑_{k=0}^{T−1} ∑_{i=1}^{M} |ỹ(t) − yi(t−k)| is the non-recursive part of the filter data set, and W = TM+1 is the number of elements in the filter data set.

Proof: Equation (8.17) is equivalent to:

    argmin_{ỹ(t)} { W |ỹ(t) − ŷ(t−T+1)| + WΨ }
    = argmin_{ỹ(t)} { |ỹ(t) − ŷ(t−2(T−1))| + ∑_{k=T−1}^{2(T−1)} ∑_{i=1}^{M} |ỹ(t) − yi(t−k)| + WΨ }   (8.18)

So the problem becomes to prove that

    SL : Med[ŷ(t−T+1) ¦ W, S]
    = SR : Med[ŷ(t−2(T−1)), Y(t−2(T−1)), …, Y(t−T+1) (denoted S̃), S]   (8.19)

where S represents [Y(t−T+1) ¦ W, Y(t−T+2) ¦ W, …, Y(t) ¦ W], and Y(t) represents all the measurements from the M sensors at time t.
From now on, the left-hand data set in Equation (8.19) is denoted as SL, and the right-hand data set as SR. In Equation (8.19), the W data points in ŷ(t−T+1) ¦ W on the left side are replaced with the W readings in S̃ on the right side. It should also be noted that Med[SL] = ŷ(t) and Med[S̃] = ŷ(t−T+1). Equation (8.19) is proven for the following situations:

1. If ŷ(t) = ŷ(t−T+1): the number of readings in S̃ greater than ŷ(t) is equal to the number of readings less than or equal to ŷ(t); therefore replacing ŷ(t−T+1) ¦ W in SL with S̃ does not change the median of SL.

2. If ŷ(t) < ŷ(t−T+1) or ŷ(t) > ŷ(t−T+1), Equation (8.19) can be proven in a similar manner.

Therefore, the statement in Equation (8.17) is proven.

The above analysis is for the hybrid median filter with each element in the filter window equally weighted, including ŷ(t). As in the temporal and structural median filters, measurements from different sensors or different sampling instants can be weighted according to their credibility.

Degree of redundancy

The degree of redundancy provided by the dynamic and structural constraints determines the maximum number of measurements that are allowed to be contaminated with artifacts; too many artifacts may distort the estimate. For an L-1 estimator, the degree of redundancy is half of (the number of independent constraints − the number of variables to be estimated). Therefore, to obtain a correct estimate, a maximum of 2 measurements out of [y1(t−1), y2(t−1), y1(t), y2(t)] are allowed to be contaminated with artifacts: either each of the sensors at time t is contaminated with a single-point artifact, or both samples from one of the sensors are contaminated. The degree of redundancy can be increased by incorporating more independent sensors or, if a longer delay is acceptable, by increasing the length of the filter window. The performance of the hybrid median filter with T=2 and M=2 is demonstrated in Figure 8.4.
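As a concrete illustration, the T=2, M=2 filter of Equation (8.15) selects the median of five values at each step: the previous estimate, both sensors' previous readings, and both sensors' current readings. A minimal sketch (function names and the initialization choice are illustrative, not from the thesis):

```python
def hybrid_median_step(prev_est, prev_meas, curr_meas):
    """One step of the hybrid median filter with T=2 and M=2
    (Equation (8.15)): the estimate is
    Med[y^(t-1), y1(t-1), y2(t-1), y1(t), y2(t)]."""
    window = [prev_est, prev_meas[0], prev_meas[1], curr_meas[0], curr_meas[1]]
    window.sort()
    return window[2]            # median of 5 values

def hybrid_median_filter(y1, y2):
    """Run the recursive filter over two synchronized channels."""
    # Initialization: average of the first readings (a choice made
    # here for the sketch, not specified in the thesis).
    est = [0.5 * (y1[0] + y2[0])]
    for t in range(1, len(y1)):
        est.append(hybrid_median_step(est[-1], (y1[t-1], y2[t-1]), (y1[t], y2[t])))
    return est

# A short peak in one sensor (cf. Figure 8.4-(a)) is removed:
y1 = [70, 70, 200, 70, 70]      # transient artifact at t=2
y2 = [70, 71, 70, 71, 70]
est = hybrid_median_filter(y1, y2)
print(est)
```

Because the previous estimate re-enters the window, a single contaminated sensor cannot pull the estimate away from the consensus, consistent with the degree-of-redundancy argument above.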
If a short peak occurs in one sensor (Figure 8.4-(a)), or transients occur in both sensors (Figure 8.4-(b)), the variations are recognized as artifacts and removed. If short peaks occur in both sensors (Figure 8.4-(c)), a short peak is retained with a 1-point delay.

Figure 8.4: Effect of the hybrid median filter on two channels of measurements: (a) a short peak in one sensor, (b) transients in both sensors, and (c) short peaks in both sensors.

8.2 Evaluation of performance

The hybrid median filter with the default window size T=2 was tested using both simulated signals and clinical data.

8.2.1 Simulation test

The simulation study was performed in two phases, with three scenarios in each phase. In Phase I, three scenarios using one, two, and three sensors were used, with the sensors contaminated solely with background noise. In Phase II, one to three sensors were used and, in addition to background noise, the measurements were contaminated with artifacts.

Thirty cases were generated for each of the six scenarios. For each case, the measurement signal was a combination of a true HR series and a noise series. For scenarios involving more than one sensor, multiple measurements were simulated for the same case by contaminating the same true HR series separately with different noise series. The linear growth DLM in Chapter 5 was used to generate the true HR series. Each series contained 1,000 sample points. Background noise for the Phase I scenarios was generated using the Gaussian distribution N(0, δ²) with δ=5. For Phase II, in addition to the background noise, the HR signals were contaminated with 100 randomly located 1-point transient artifacts and 10 randomly located short-peak artifacts. The amplitude of the transient artifacts was randomly generated using a uniform distribution over [-50 50].
The amplitude of the short-peak artifacts followed a uniform distribution over [-50 50], and their duration was uniformly distributed over [2 10] sampling intervals.

The hybrid median filter with T=2 proposed in Section 8.1.3.2 and the Kalman filter described in Section 8.1.3.1 were tested on the simulated data in both phases. For the hybrid median filter, no specific information about the test signals was required. For the Kalman filter method, the model parameters were all set to the true values used for generating the signals, and the threshold σ used for artifact detection was set to different magnitudes, from 1δ to 20δ with a step size of ∆σ=δ, where δ is the standard deviation of the background noise.

Figure 8.5: The relative Root Mean Square Errors (RMSE) of the Kalman filter with the model covariances set to the true values and the artifact threshold σ changing from 1δ to 20δ, compared with the RMSE of the hybrid median filter. δ is the standard deviation of the background noise. The signals are contaminated with transient and short-peak artifacts.

The performance of the two filters was evaluated by comparing their Root Mean Square Errors (RMSE). The RMSE quantifies the difference between the signal estimates ŷ and the true signal level y∗. The RMSE was calculated for each case and then averaged over all the cases for each scenario. The RMSE levels of the unfiltered data indicate the overall noise levels. The relative RMSE ratios are defined as the RMSE levels relative to the RMSE level of the unfiltered data.
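The relative-RMSE metric is straightforward to reproduce. The sketch below generates a noisy series with 1-point transients in [-50, 50] and scores a simple filter against the unfiltered data; for brevity, a random walk stands in for the thesis's linear growth DLM and a 3-point temporal median stands in for the filters under test (both substitutions are mine).

```python
import math
import random

def relative_rmse(estimates, truth, measurements):
    """Relative RMSE: RMSE of the estimates divided by the RMSE of the
    unfiltered measurements, both taken against the true signal level."""
    def rmse(series):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(series, truth)) / len(truth))
    return rmse(estimates) / rmse(measurements)

random.seed(0)
n, delta = 1000, 5.0
truth = [0.0]
for _ in range(n - 1):                      # random-walk stand-in for the true HR series
    truth.append(truth[-1] + random.gauss(0, 0.5))
meas = [x + random.gauss(0, delta) for x in truth]   # background noise N(0, delta^2)
for k in random.sample(range(1, n), 100):   # 100 one-point transient artifacts
    meas[k] += random.uniform(-50, 50)

# 3-point temporal median as a simple single-sensor filter
est = [meas[0]] + [sorted(meas[k-1:k+2])[1] for k in range(1, n - 1)] + [meas[-1]]
print(round(relative_rmse(est, truth, meas), 2))   # < 1.0: filtering reduced the error
```

A ratio below 1.0 means the filter reduced the error relative to the raw measurements; the tables later in this section report exactly this quantity for the two filters under test.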
The relative RMSE ratios were calculated for the two filters to compare their performance in different scenarios.

For the scenarios with artifacts (Phase II), the estimation accuracy of the Kalman filter obtained with different σ thresholds was compared with that of the hybrid median filter (see Figure 8.5). For both filters, the accuracy of signal estimation improved as the number of sensors increased. In all the scenarios, the magnitude of the threshold σ has a great influence on the performance of the Kalman filter. When σ is too small, the estimates of the Kalman filter deviate far from the true signal values, due to severe signal distortion. For σ of larger magnitudes, the estimation accuracy of the Kalman filter varies across the three scenarios. The performance in each scenario is described below.

If only one sensor is available, the performance of the Kalman filter deteriorates as σ increases from 3δ to 9δ, indicating that the Kalman filter misses more artifacts at the larger σ thresholds. However, when the threshold σ becomes larger than the amplitude of most artifacts (σ>10δ), the method has little effect on artifact removal; instead, the increased σ thresholds further reduce signal distortion. Therefore, the estimation accuracy of the Kalman filter improves as σ increases beyond 10δ and soon reaches a plateau. Except in the situation of severe signal distortion (σ≤2δ), the optimal relative RMSE for the Kalman filter with one sensor (marked with ∗ in Table 8.2) is 63%, obtained at the plateau with σ=20δ, and the worst relative RMSE (marked with − in Table 8.2) is 111%, obtained with σ=9δ. The performance obtained at the plateau is equivalent to the performance obtained without artifact removal (marked with 0 in Table 8.2).
If more than one sensor is available, the influence of missed artifacts on the estimation accuracy of the Kalman filter is reduced for 3δ ≤ σ ≤ 9δ. The estimation accuracy of the Kalman filter improves as σ increases for σ>3δ. If two sensors are available, the optimal relative RMSE is 32%, obtained with σ=3δ; the worst relative RMSE is 50%, obtained with σ=8δ; and the relative RMSE without artifact removal is 47%, obtained with σ>11δ. If three sensors are available, the optimal relative RMSE is 24%, obtained with σ=3δ; the worst relative RMSE is 41%, obtained with σ>11δ; and the relative RMSE without artifact removal is also 41%, obtained with σ>11δ.

The results of the simulation test in the two phases are listed in Table 8.1 and Table 8.2, respectively. In all six scenarios (Phases I and II), the overall RMSEs of both the Kalman filter and the hybrid median filter are smaller than those of the unfiltered data. As the number of sensors increases, the overall RMSEs of both filters decrease. In the cases where the signals were contaminated with background noise only (Phase I), the Kalman filter performed better than the hybrid median filter. In the cases where the signals were contaminated with both background noise and artifacts (Phase II), the optimal performance obtained by the Kalman filter with the true model parameters is only marginally better than that of the proposed hybrid median filter. If the threshold for artifact removal is not set appropriately, the estimation accuracy of the Kalman filter can be significantly worse than that of the hybrid median filter.
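The σ-sensitivity discussed above comes directly from the gating step of Section 8.1.3.1: a reading is dropped whenever its forecast residual exceeds σ, and the survivors are fused. A scalar sketch of that scheme (a random-walk state replaces the thesis's linear growth DLM, and the values of q, r, and sigma are illustrative):

```python
def gated_kalman_fusion(channels, q, r, sigma):
    """Scalar Kalman filter over M synchronized channels.

    At each step the one-step prediction is formed first; any reading
    whose forecast residual exceeds `sigma` is treated as an artifact
    and dropped.  The surviving readings are then fused, which for
    uncorrelated noise is equivalent to the stacked-observation update
    of Equation (8.12) applied one measurement at a time.  If every
    channel is rejected, the prediction is kept as the estimate."""
    x = sum(c[0] for c in channels) / len(channels)   # initial state
    p = r
    estimates = [x]
    for t in range(1, len(channels[0])):
        x_pred, p_pred = x, p + q                     # random-walk prediction
        good = [c[t] for c in channels if abs(c[t] - x_pred) <= sigma]
        x, p = x_pred, p_pred
        for y in good:                                # sequential scalar updates
            k = p / (p + r)
            x, p = x + k * (y - x), (1 - k) * p
        estimates.append(x)
    return estimates

y1 = [70.0, 70.5, 200.0, 70.0]    # electrocautery-like spike at t=2
y2 = [70.0, 70.0, 70.5, 70.2]
est = gated_kalman_fusion([y1, y2], q=0.1, r=1.0, sigma=10.0)
print([round(e, 1) for e in est])
```

With sigma far below the spike amplitude the artifact is rejected and the estimate stays near 70; setting sigma above the spike would let the artifact be averaged in, which is the failure mode seen in the one-sensor results above.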
Table 8.1: Root mean square errors and relative root mean square errors (relative to the unfiltered data) for the simulated cases in Phase I: cases without artifacts

  Number of sensors       one            two            three
  Unfiltered data         2.23 (100%)    2.23 (100%)    2.23 (100%)
  Kalman filter           1.43 (64%)     1.09 (49%)     0.96 (43%)
  Hybrid median filter    1.75 (78%)     1.34 (60%)     1.17 (52%)

The true model parameters were used in the Kalman filtering process.

Table 8.2: Root mean square errors and relative root mean square errors (relative to the unfiltered data) for the simulated cases in Phase II: cases with artifacts

  Number of sensors       one             two            three
  Unfiltered data         11.57 (100%)    11.57 (100%)   11.57 (100%)
  Kalman filter−          12.84 (111%)    5.79 (50%)     4.74 (41%)
  Kalman filter0          7.29 (63%)      5.44 (47%)     4.74 (41%)
  Kalman filter∗          7.29 (63%)      3.70 (32%)     2.78 (24%)
  Hybrid median filter    7.65 (66%)      4.28 (37%)     3.24 (28%)

Kalman filter−: the worst estimation accuracy of the Kalman filter with a poorly configured artifact-removal threshold; Kalman filter0: the performance of the Kalman filter without artifact removal; Kalman filter∗: the optimal estimation accuracy of the Kalman filter with an appropriately configured artifact-removal threshold. The true model parameters were used in the Kalman filtering process.

8.2.2 Case studies with clinical data

Clinical cases were used to demonstrate the performance of the proposed hybrid median filter on intraoperative HR trend measurements. Case I included multiple and prolonged bursts of electrocautery during surgery. The HRecg measurements were corrupted with a significant number of electrocautery artifacts (Figure 8.6-(a)). Many of the electrocautery artifacts were short peaks, and the longest lasted for 26 sampling intervals (2 minutes and 10 seconds). HRSpO2 measurements were contaminated with movement artifacts at around t=2370. The longest short-peak artifact in the pulse oximeter lasted for 5 sampling intervals (25 seconds).
For a univariate median filter, the window size required to remove these short-peak artifacts would be at least 53 points for HRecg and 11 points for HRSpO2. The remaining oscillations at the beginning and end of the case were caused by surgical intervention. The hybrid median filter fused HRecg and HRSpO2 into a single HR series (Figure 8.6-(b)). The movement artifacts present in the pulse oximeter and most of the electrocautery artifacts in the ECG were effectively removed from the HR estimates.

Figure 8.6: The hybrid median filter has fused the HR measurements from the ECG (HRecg) and pulse oximeter (HRSpO2) into a single HR series, and removed the electrocautery artifacts from HRecg and the movement artifacts from HRSpO2.

In Case II, HRabp was measured from the invasive arterial BP monitor, in addition to HRecg and HRSpO2. Artifacts were present in all sensors (see Figure 8.7-(a)). The oscillations in HRabp at the beginning of the case were produced by flushing of the arterial line and movement of the patient and fluid column. During the procedure, adjustment of the patient's position introduced movement artifacts, including those at around t=390 in HRSpO2 and t=700 in HRabp. Flushing of the arterial line introduced additional artifacts in HRabp at around t=380. In this case, the hybrid median filter removed most artifacts from the HR estimates (see Figure 8.7-(b),(c)).
Figure 8.7: The hybrid median filter has fused the HR measurements from the ECG (HRecg), pulse oximeter (HRSpO2), and arterial BP monitor (HRabp) into a single HR series.

8.3 Discussion

When measurements from more than one sensor are used, both the hybrid median filter and the Kalman filter generate more robust estimates of HR than would be possible from a single sensor, by using both the signal's temporal characteristics and the relationship between the measurements from multiple sensors. The hybrid median filter and the Kalman filter optimize the signal estimates with respect to different criteria. As a robust estimator, the hybrid median filter provides an integrated solution to the problem of artifact removal and signal estimation. The Kalman filter, as a linear filter, averages the measurements rather than selecting the artifact-free sensor; an effective artifact-removal step is therefore required for this method.

In the simulated scenarios free of artifacts, the Kalman filter method with the true model parameters performed marginally better than the hybrid median filter. This is not surprising, as the Kalman filter is the optimal filter for a Gaussian linear model. However, when artifacts were present, the hybrid median filter performed better than the Kalman filter over a wide range of artifact-detection thresholds, even when the model parameters for the Kalman filter were set to the true values. In reality, it is unlikely that the true model parameters will be known. The parameters estimated from population data often deviate significantly (>100%) from the true settings (refer to the results in Chapter 5). In practice, this uncertainty in the model parameters may cause the performance of the Kalman filter to degrade.
In the first clinical case, a few short-peak variations were retained in the estimates of the hybrid median filter. This does not represent a failure of the hybrid median filter: these short variations were present in more than half of the sensors, and it is undesirable to remove them, as they may well have clinical significance.

One of the fundamental assumptions of the proposed hybrid median filter is that the sampling rate is sufficiently high that the physiological variation within the filter window can be ignored. This assumption might not hold if the sampling rate is very low (such as one sample every few minutes). In that case, a temporal model should be used to describe the physiological dynamics between samples, and model predictions should replace the historical estimates and historical measurements used in the hybrid median filter.

The hybrid median filter has a few advantages over a previously described method [40], which is a two-step solution based on artifact detection and Kalman filtering (see Section 2.5). Firstly, the hybrid median filter is computationally simpler: the only computation required is ordering the samples in the data set, and the median filter intrinsically selects the best measurement and generates the estimate. Secondly, the hybrid median filter does not require any explicit assumptions about the distribution of artifacts. The Gaussian and uniform distributions were used only for generating signals in the simulation testing; in fact, median filters are able to remove most artifacts that follow long-tailed distributions. Finally, the hybrid median filtering process is intuitive, and its performance can be easily adjusted. These benefits make the method highly suitable for clinical use.
Chapter 9

Software implementation and clinical testing

The EWMA-Cusum method and the Adaptive-DLM method proposed in Chapter 4 and Chapter 5 have been implemented in a software system called iAssist. iAssist allows for real-time evaluation of the proposed trend-change-detection methods in the operating room. The software implementation and clinical study presented in this chapter were mostly conducted by Chris Brouse as part of his Master's project, and the results have been published in the Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society [19].

9.1 Software implementation: iAssist

The iAssist software system was developed in Java (Sun Microsystems, Santa Clara, CA), an object-oriented programming language. iAssist supports the following functions:

1. Data acquisition from multiple input sources;
2. Signal processing using multiple algorithms;
3. Display of results and reception of feedback using a graphical interface;
4. Data storage to save the results to external devices.

The system is realized in a modular framework, which allows for different implementations of each of the function modules mentioned above. These modules communicate via the common interfaces predefined in the framework and exchange data during intraoperative monitoring. The framework of iAssist provides great extensibility and flexibility.

In the current version, the real clinical data are read from the data-collecting software S/5 Collect (GE Healthcare, Chalfont St Giles, UK) via the TCP/IP communication protocol. Our group is developing other data-acquisition modules, aiming to read data directly from a wide range of physiological monitors. The EWMA-Cusum method proposed in Chapter 4 is implemented with a one-level threshold and applied to SpO2 and NIBPmean. The Adaptive-DLM method proposed in Chapter 5 is implemented and applied to HR, MVexp, EtCO2, and RR.
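The detection principle behind the implemented methods — forecast each sample with an EWMA of the history, then test the forecast residuals — can be sketched in a few lines. This is an illustrative reconstruction in the spirit of the EWMA-Cusum method, not the iAssist implementation; lam, kappa, and h are made-up tuning parameters.

```python
def ewma_cusum(series, lam=0.2, kappa=0.5, h=5.0):
    """Illustrative trend-change detector: an EWMA forecast of the
    signal, with a two-sided Cusum test on the forecast residuals.
    Returns (sample index, direction) pairs for detected changes."""
    ewma = series[0]            # EWMA forecast, seeded with the first sample
    c_pos = c_neg = 0.0         # one-sided Cusum statistics
    alerts = []
    for t, y in enumerate(series[1:], start=1):
        resid = y - ewma                        # forecast residual
        c_pos = max(0.0, c_pos + resid - kappa)
        c_neg = max(0.0, c_neg - resid - kappa)
        if c_pos > h:
            alerts.append((t, "up"))
            c_pos = c_neg = 0.0                 # reset after an alert
        elif c_neg > h:
            alerts.append((t, "down"))
            c_pos = c_neg = 0.0
        ewma += lam * resid                     # update the EWMA forecast
    return alerts

flat = [100.0] * 30
shifted = [100.0] * 15 + [104.0] * 15
print(ewma_cusum(flat), ewma_cusum(shifted))
```

A level shift accumulates in one of the Cusum statistics until the threshold h is crossed, while the EWMA slowly re-adapts to the new level, so a sustained change produces a small number of directional alerts rather than a continuous alarm.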
All the data are preprocessed using a median filter to reduce transient artifacts before being processed by the trend-change-detection methods. The detected trend changes are displayed as directional arrows on the signal trend trace (Figure 9.1). In addition to the visual display, the detected trend changes are encoded as different vibration patterns and sent via Bluetooth to a vibro-belt to provide tactile alerts to clinicians. A touch screen is used to receive feedback from clinicians. The algorithm configurations and detection results are formatted according to the ISO/IEEE 11073 international standard for modeling point-of-care medical device data [1], using XML-based deployment descriptors, and stored in ASCII text files.

9.2 Clinical study

Following ethics approval, the performance of iAssist was evaluated in real time alongside the physiological monitors with S/5 Collect in the operating room at BC Children's Hospital. The median filter was optimized for each variable. The sensitivity parameters of the trend-change-detection methods were configured before the start of each case, according to the results of the offline testing; however, the participating anesthesiologists were allowed to change the sensitivity configurations during surgery.

Each event detected by iAssist in the maintenance period of anesthesia was graded by the attending anesthesiologist as "artifact" (A), "clinically insignificant" (CI), "clinically significant" (CS), "clinically significant with action taken" (CSAT), or "clinically significant due to intervention" (CSI), with reference to the trends and events marked on the trend display (Figure 9.1). Missed events were also noted. The usefulness of each event was rated on a 10-cm visual analogue scale from "frustrating" (0) to "very useful" (10).
Fifteen anesthesiologists completed the clinical evaluation in a variety of pediatric surgical cases of at least one hour in duration (38 cases; mean duration 103 (STD 4.25) minutes). The mean number of events per case was 22.8 (STD 23.4). The clinical evaluations of the generated alerts are shown in Table 9.1. The proposed methods detected 13 events per hour of anesthesia. In total, over 50% of the events iAssist detected were rated as clinically significant (CS, CSAT, or CSI), and fewer than 7% were rated as artifacts. Only 6 significant events were reported as having been missed by the algorithms. The usefulness ratings ranged from 0 to 9.3 (9.3 for an EtCO2 event).

Figure 9.1: Clinicians evaluate the trend-change-detection results generated by iAssist during clinical testing.

Table 9.1: Clinical evaluation of the real-time performance of iAssist

              Total (N)   Artifact (%)   Insignificant (%)   Significant (%)   Not Rated (%)
  EtCO2       212         5.2            24.1                52.3              18.4
  HR          253         6.3            36.4                43.9              13.4
  MVexp       145         5.5            20.7                54.5              19.3
  RR          86          2.3            9.3                 60.5              27.9
  SpO2        48          37.5           20.8                18.8              22.9
  NIBPmean    124         3.2            28.2                62.1              6.5
  Total       868         6.8            26.0                50.6              16.6

EtCO2, HR, MVexp, and RR were monitored using the Adaptive-DLM method; SpO2 and NIBPmean were monitored using the EWMA-Cusum method with a one-level certainty threshold.

9.3 Discussion

In the clinical test, over 50% of the alerts generated by iAssist were considered clinically significant. In contrast, the false alarm rate of standard physiological monitors can be as high as 90%, according to a survey [74]. The results demonstrate that the detection accuracy of the EWMA-Cusum method and the Adaptive-DLM method was superior to that of the basic threshold-based alarms used in current physiological monitors. However, compared with the performance in the offline testing (see Table 4.1, Table 5.3 and Figure 5.6), both the sensitivity and specificity decreased in the clinical test.
One of the contributing factors to the degraded performance is the frequent occurrence of artifacts. In the offline testing, we followed the Gaussian assumption on signal characteristics imposed by the change-detection methods and did not use any case with heavy artifact contamination. In the real-time clinical test, however, some signals were corrupted by artifacts. For example, the HR signals in electrocautery surgeries were all significantly disturbed by electrocautery artifacts, resulting in many false detections. Future inclusion of the hybrid median filter proposed in Chapter 8, or the wavelet-based denoising approach proposed by our group [18], is expected to significantly reduce the number of false alerts due to electrocautery noise.

Furthermore, the tuning parameters were not fixed consistently at the optimal settings throughout the testing. In some cases, the attending clinicians adjusted the tuning parameters away from the recommended sensitivity setting in order to reduce time delay at the cost of detection accuracy. For example, the sensitivity parameter for SpO2 was set to detect changes of 3-5% over only a few minutes throughout the test, resulting in many artifacts (false detections).

Some clinically significant trend changes due to intervention (one type of true-positive detection) were annotated incorrectly by clinicians as clinically insignificant (CI) or artifacts (A). This evaluation bias is not surprising, since most CSI changes are anticipated responses and do not require immediate clinical intervention. However, from the perspective of trend-change detection, CSI detections notify the clinician who initiated the intervention about the physiological responses of the patient to the treatment, and should be treated as true-positive detections. Had all the results been classified according to the classification system described in Section 9.2, the detection performance would appear better than in Table 9.1.
The participating clinicians were given a high degree of freedom during the test, as one purpose of this pilot study was to promote the concept of trend-change detection and to encourage clinicians to explore the functionality of iAssist. We observed the attitudes of the participating clinicians toward iAssist throughout the test. As clinicians became more familiar with iAssist and the underlying change-detection methods, they showed more interest in using the software and participating in the clinical testing. This observation indicates that the acceptance of a clinical decision-support system among its target users (clinicians) can be significantly improved by education and training, which agrees with the finding of a previous survey [13]. On the other hand, the technical complexity of a system, if too high, may hinder its acceptance among clinicians. Using methods based on intuitive concepts, such as the EWMA-Cusum method realized in the current version of iAssist, is very important in the first stage of a clinical study.

Given the freedom granted to the participating clinicians, the results of this pilot study should be taken only as a preliminary evaluation of the real-time performance of the proposed methods. Many contributing factors were neither controlled while conducting the test nor taken into account in the data analysis. One example is the influence of workload on the clinician's view of trend-change detections. A significant proportion of alerts were not annotated (up to 27.9% for RR; see Table 9.1). These non-annotated alerts very likely occurred during critical events: high-workload situations where the results of trend-change detection are expected to facilitate decision making, but where the attending clinicians could not divert their attention to iAssist to evaluate the results.
To reduce the influence of raters' workload on the results of evaluation, it is desirable to have a research assistant help with alert annotation, or to use a wireless headset instead of a keyboard or touch-screen to receive feedback from clinicians. To obtain a better evaluation of the methods' real-time performance, more clinical tests should be performed following a better-controlled testing procedure. Moreover, after the software system gains further acceptance among clinicians, the other methods proposed in this thesis should be implemented in iAssist and evaluated in real time.

Chapter 10
Conclusion and future work

10.1 Summary: work accomplished

This thesis has addressed the signal-estimation and trend-change-detection problems in physiological monitoring. The proposed univariate methods are designed to detect changes in different trend features. The EWMA-Cusum method (Chapter 4) can detect changes in trend direction. The Adaptive-DLM method (Chapter 5) also detects changes in trend direction, and can additionally capture changes in the incremental rate by adjusting the sensitivity setting as well as the segment-congregation step. In the GHMM-based methods (Chapter 6), trend changes are classified according to their direction, incremental rate, and duration. More detailed trend abstractions can be generated online, together with a quantitative estimation of the occurrence probability. The results of the proposed methods summarize the signals' temporal dynamics, and can be used as inputs for a high-level diagnostic inference system. The accuracy of change detection, as demonstrated in both offline testing and real-time clinical testing, indicates that the proposed methods have great potential for clinical use. In clinical testing, the participating clinicians evaluated over 50% of the alerts generated by the EWMA-Cusum and Adaptive-DLM methods as clinically relevant.
Compared with the over-90% proportion of irrelevant alarms generated by standard alarm systems [74], the methods proposed in this thesis achieved a much higher detection accuracy. The proposed methods use different techniques to handle the challenge of intraoperative variability. The EWMA-Cusum method is based on a simple ARIMA(0,1,1) model and has the advantage of flexibility and robustness. Following adjustment, it can be applied to most physiological variables. However, the performance of this method is limited by the fact that the second-order statistics of the monitored signals are unrealistically assumed to be constant during surgery and between patients. In contrast, the Adaptive-DLM method and the GHMM-based methods (the switching fixed-point Kalman smoother and the adaptive Cusum test) can adapt online to the intraoperative variability. The Adaptive-DLM method requires a good initial estimate of the second-order statistics (Q and R), and estimates the statistical characteristics of the signal online based on the recent data. The estimation process in the Adaptive-DLM method is based on the EM algorithm. If a change occurs in the second-order statistics, the Q-R estimation process needs to run a number of iterations before converging to a local optimum. Therefore, the Adaptive-DLM method is only appropriate for physiological variables whose second-order statistics change on a time scale much longer than the sampling interval. In the GHMM-based methods, the intraoperative transition between different patterns is modeled as a Markov-chain process. The formalized transition relationship can be learned from annotated population data, and then used online for pattern recognition. The methods based on the GHMM, including the switching Kalman smoother and the adaptive Cusum, are suitable for monitoring physiological variables sampled at a lower rate, such as NIBPmean, and can also be extended to monitor variables measured at a higher sampling rate, such as HR.
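The forecast-residual principle shared by these univariate methods can be illustrated with the simplest of them. The sketch below is not the thesis implementation; `lam`, `k`, and `h` are illustrative tuning values. An EWMA one-step forecast transforms a trend change into nonzero residuals, and a two-sided Cusum test on the residuals signals the change.

```python
# Sketch of the EWMA-Cusum idea: an EWMA one-step forecast turns trend
# changes into nonzero residuals, and a two-sided Cusum test flags them.
# lam (EWMA weight), k (reference value), and h (decision threshold) are
# illustrative values, not the settings used in the thesis.

def ewma_cusum(signal, lam=0.3, k=0.5, h=4.0):
    """Return indices where the Cusum statistic signals a trend change."""
    forecast = signal[0]          # initialise the EWMA with the first sample
    s_pos = s_neg = 0.0           # one-sided Cusum statistics
    alarms = []
    for t, y in enumerate(signal[1:], start=1):
        residual = y - forecast                    # forecast residual
        s_pos = max(0.0, s_pos + residual - k)     # accumulates upward drift
        s_neg = max(0.0, s_neg - residual - k)     # accumulates downward drift
        if s_pos > h or s_neg > h:
            alarms.append(t)
            s_pos = s_neg = 0.0                    # restart after an alarm
        forecast = lam * y + (1 - lam) * forecast  # EWMA update
    return alarms
```

On a flat signal the residuals hover near zero and both Cusum statistics stay at zero; a sustained level shift or directional trend drives one of them past `h`.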
The two GHMM-based methods have the highest computational complexity. To reduce the complexity, it is recommended that signals measured at a high sampling rate be down-sampled or segmented by linear approximation before being processed by the GHMM-based methods. The multivariate analysis (Chapter 7) performed in this thesis is a pilot study, intended to investigate the potential of linear dimensionality-reduction techniques to represent the inter-variable relationships. The proposed statistics based on the factor model appear to be effective at separating two different types of changes: changes in the magnitude of common factors and changes in the covariance structure. Therefore, these statistics provide a solid ground for summarizing repetitive alerts due to a common cause and for detecting clinical events with subtle manifestations. However, the covariance structure between physiological variables is sensitive to the contextual information, and may vary significantly between patients. This inter-patient variability is not addressed in the proposed method. The method should be tested using additional clinical data to find directions for further improvement. Finally, a solution for artifact removal is proposed for HR using a novel hybrid median filter (Chapter 8) that utilizes the temporal and structural redundancy in the sensor network in the current clinical environment. Artifact removal, as a fundamental challenge in physiological monitoring, requires further investigation.

10.2 Future work: the road ahead

The ultimate goal of intelligent physiological monitoring is to improve patient safety by providing decision-making support to the clinicians who are dedicated to the total care of the individual patient. In a topic so closely related to the well-being of human subjects, every advance in the area of physiological monitoring should be validated using stringent evaluation.
In the following sections I propose potential future work that can be carried out based on the methods proposed in this thesis. In addition, I discuss the issues of clinical testing and software design, and the ethical and legal issues that a clinical decision-support system should address before clinical application.

Other subproblems in physiological monitoring

Physiological monitoring is a multi-level task. Although this thesis has addressed one of the most important subproblems in this task, trend-change detection and pattern recognition, subproblems at other levels of interpretation (see Figure 1.2 for more details) still need to be addressed. Finally, the solutions to all the subproblems need to be integrated to build a complete clinical decision-support system.

Artifact removal is a fundamental step for any physiological monitoring system. An appropriate artifact-removal solution should address three issues: artifact detection, signal estimation, and artifact explanation. As demonstrated in Chapter 8, artifact detection together with signal estimation is intended to improve the quality of input for the higher-level modules, and consequently to reduce the uncertainty of final decision making. Artifact explanation is intended to identify the underlying cause of artifacts and to decide whether to trigger an alert. Artifact explanation is necessary because some artifacts indicate device malfunction and may well deserve immediate attention. In addition to HR measurements, measurement redundancy also exists in other signals in the current clinical sensor network. For example, gas flows sampled from the same breathing circuit must satisfy mass-transfer equations. Utilizing the correlation between different variables is a promising direction for artifact removal and signal estimation. For the problem of artifact explanation, signal measurements alone might not provide sufficient information to determine the clinical relevance of artifacts.
Indeed, it may be necessary to utilize the contextual information and the trend dynamics of other variables in order to derive a diagnostic interpretation of the detected artifacts.

Semantic trend abstracts provide a solid basis for diagnostic inference. It is desirable to integrate temporal abstracts with the expert knowledge commonly used in physiological monitoring (such as the critical threshold for each variable) to derive alerts more relevant to clinicians. Clinicians naturally comprehend information from all sources and make a final diagnosis/decision without clear recognition of the individual cognitive processes. Early alerts may cause distraction or confusion to clinicians who are not familiar with the concept of trend-change detection. We found in the preliminary clinical test that simply triggering an alert for every trend change introduces a heavy cognitive load for anesthesiologists. To address these issues, we are developing a rule-based expert system. The system will be used to fuse trend abstracts with the standard threshold-based alarm scheme to provide alerts of higher clinical relevance [6, 39]. Although information integration is desirable, researchers need to decide on the target level of interpretation for the final results. Increased automated diagnosis may not necessarily provide better cognitive support and improved outcomes. As a matter of fact, clinicians demonstrate obvious reluctance to trust a decision-support system that produces specific diagnostic or therapeutic suggestions [13]. This attitude could reduce user compliance with a decision-support system. Furthermore, some contextual information required for diagnostic inference can only be obtained by manual input in the current clinical setting. The increased workload counteracts the benefits provided by automatic information processing. Another concern for diagnostic inference is knowledge completeness.
As pointed out in Chapter 2, it is very difficult, if not impossible, to build a knowledge reservoir for detecting all the potential clinical events. Biomedical researchers should work closely with clinicians to identify the clinical subcategories in which a complete collection of clinical scenarios is relatively easier to obtain.

Alert design and display

The information extracted by the data-interpretation methods needs to be transformed into some form and delivered to clinicians. The efficiency of information expression and the quality of information transmission influence the amount of relevant information that clinicians can finally receive from a physiological monitoring system. A multi-level alert system (such as the EWMA-Cusum method) usually triggers more alerts than a single-level alert system, and may increase clinicians' cognitive load. The display of multi-level alerts is an important topic that requires further study. Visual communication remains the predominant method currently used to relay information to clinicians. Our group has been investigating the relatively under-utilized sense of touch to transmit information during physiological monitoring [11]. This tactile display uses the body's largest sensory organ, the skin, to provide subtle cues to clinicians based on features extracted from the data. Compared with visual and auditory displays, alerts sent via tactile devices may not distract from patient observation and do not disturb other individuals in the clinical environment.

Clinical testing

Extensive testing should be performed in the clinical environment to evaluate the performance of every component of a physiological monitoring system. We have conducted preliminary testing to verify the real-time performance of the proposed trend-change-detection methods, as well as the efficiency of the tactile display.
More real-time testing will be performed in the operating room to evaluate the combined performance of the proposed trend-change-detection methods and the tactile display. After a system is confirmed to be able to effectively detect and deliver early alerts to a clinician, the next question is whether adopting such a system in the clinical environment can enhance the situation awareness of the attending clinician, and eventually improve patient safety. Many human-factor studies have been performed to establish the link between situation awareness and patient outcomes [103, 136]. However, extensive clinical testing will be needed to verify whether the alerts generated by our methods improve the situation awareness of clinicians in the operating room. Situation awareness in physiological monitoring refers to the perception and comprehension of any change in the clinical environment related to the well-being of patients, and the ability to evaluate a patient's status based on the perceived changes [43]. Situation awareness can be assessed using a variety of approaches, including direct and indirect measures of performance, mental-workload measures, and analytical measures of specific aspects of performance such as movement or communication [155]. In order to learn whether the proposed methods improve clinicians' situation awareness, we should first choose an assessment approach to measure the degree of situation awareness. We can then perform clinical testing to compare the degrees of situation awareness with and without the presence of early alerts.

Ethical and legal issues

Significant ethical and legal obstacles will need to be overcome before the work completed in this thesis can be adopted in the clinical environment. The software implementation of the proposed methods, iAssist, is currently in its prototype stage and is mainly used for conducting preliminary clinical testing. iAssist could be converted into
a fully developed software product for use in clinical practice. Here we discuss the ethical and legal requirements with which a medical software product should comply, using the US Food and Drug Administration (FDA) regulations as an example. The Federal Food, Drug, and Cosmetic Act (FD&C Act) [2] defines a medical device as: “an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including any component, part, or accessory, which is . . . intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease . . . or intended to affect the structure or any function of the body”. According to this definition, a software implementation of any signal-processing or decision-support system should be considered a medical device and should be subject to all the FDA medical-device regulations and guidelines. The FDA groups medical devices into three classes based on the level of control necessary to assure the safety and effectiveness of the device [111]. These classes range from Class I, which has a minimal potential for harm to the user, to Class III, which needs premarket approval. Under the current FDA policy, software devices are grouped, depending on the purpose of application, into stand-alone devices and components, parts, or accessories to a pre-existing medical device. The level of concern for a software device is determined by its inherent risk, the degree of involvement of human intervention, and the level of regulation of the parent device. The level of risk involved with a physiological monitoring system is mainly determined by the relevance of its outcomes to the diagnosis and control of patient status. A system that contains built-in decision rules and generates diagnostic suggestions is, in the opinion of some experts, a stand-alone Class III device [16].
However, if a system only removes artifacts, and is only intended to be used as a supplementary module in an existing physiological monitor, the system is subject to less restrictive regulations. A software medical device differs from regular medical devices in many ways. Software is often used to realize complex functions, and the implementation is often easy to change. It is difficult to test a software device and to keep track of its changes. The FDA currently requires a substantial volume of documentation to be submitted and examined for a premarket notification of a software product. The FDA is now also considering Software Quality Audits (SQA) and certification as a replacement for premarket notification [111], based on the belief that a well-managed software life cycle has an effect on the integrity of the resulting software system. Satisfying independent third-party SQA standards, such as IEC 62304 [69] (to be recognized by the FDA), may remove the obligation to describe the software processes in detail in regulatory submissions to the FDA [111]. In addition to the FDA regulations, there are other ethical issues, such as patients' attitudes toward the application of a decision-support system. Any issue that may influence patients, clinicians, or the clinical environment during the planning, development, validation, and application phases of a software device should be handled with extreme care.

It has been almost half a century since the first wave of research on automatic physiological monitoring emerged in the 1960s. Over the years, many problems in physiological monitoring have been investigated and many solutions proposed to assist clinical decision making. The computational power of a bedside monitor has increased to a level that makes advanced signal processing possible in the clinical environment.
Moreover, the ever-increasing healthcare burden has significantly increased awareness of the potential benefits of a clinical decision-support system among clinicians, medical-device manufacturers, and the government. For all these reasons, we optimistically anticipate the clinical application of more advanced physiological monitoring systems in the near future.

Bibliography

[1] ISO/IEEE 11073 Committee. http://www.ieee1073.org.
[2] Federal Food, Drug, and Cosmetic Act, as Amended, Sec. 201(h). U.S. Government Printing Office, Washington, DC, 1993.
[3] A. K. Akobeng. Understanding diagnostic tests 3: receiver operating characteristic curves. Acta Paediatrica, 96(5):644–647, 2007.
[4] R. Allen. Time series methods in the monitoring of intracranial pressure I: Problems, suggestion for a monitoring scheme and review of appropriate techniques. Journal of Biomedical Engineering, 5(1):5–18, 1983.
[5] G. Alterovitz, D. Staelin, and J. H. Philip. Temporal patient state characterization using Iterative Order and Noise (ION) estimation: applications to anesthesia patient monitoring. Journal of Clinical Monitoring and Computing, 17(5):471–486, Oct 2003.
[6] J. M. Ansermino, J. Lim, G. A. Dumont, C. Brouse, P. Yang, D. Dunsmuir, J. Daniels, and S. Schwarz. Clinical decision support in physiological monitoring. International Journal of Biomedical Engineering and Technology. To appear.
[7] J. M. Ansermino, P. Yang, G. A. Dumont, J. Lim, N. Pegram, and C. Ries. Detecting an increase in heart rate in children using an adaptive change point detection algorithm. In IARS 79th Clinical and Scientific Congress, Honolulu, Hawaii, Mar 2005.
[8] R. K. Avent and J. D. Charlton. A critical review of trend-detection methodologies for biomedical monitoring systems. Critical Reviews in Biomedical Engineering, 17(6):621–659, 1990.
[9] Y. Bar-Shalom and X. R. Li. Estimation and Tracking: Principles, Techniques, and Software. Artech House, 1993.
[10] G.
A. Barnard. Control charts and stochastic processes. Journal of the Royal Statistical Society, Series B (Methodological), 21(2):239–271, 1959.
[11] P. Barralon, G. Ng, G. Dumont, S. K. W. Schwarz, and M. Ansermino. Development and evaluation of multidimensional tactons for a wearable tactile display. In Proceedings of the 9th International Conference on Human Computer Interaction with Mobile Devices and Services, pages 186–189, Singapore, 2007.
[12] M. Basseville. Detecting changes in signals and systems—a survey. Automatica, 24(3):309–326, 1988.
[13] P. C. Beatty. Preliminary survey into clinical attitudes to computer based decision support systems. British Journal of Anaesthesia, 79:140–141, 1997.
[14] K. Becker, B. Thull, H. Käsmacher-Leidinger, J. Stemmer, G. Rau, G. Kalff, and H.-J. Zimmermann. Design and validation of an intelligent patient monitoring and alarm system based on a fuzzy logic process model. Artificial Intelligence in Medicine, 11(1):33–53, 1997.
[15] I. A. Beinlich and D. M. Gaba. The ALARM monitoring system: intelligent decision making under uncertainty. Anesthesiology, 71(A):336, 1989.
[16] E. S. Berner, K. J. Hannah, and M. J. Ball, editors. Clinical Decision Support Systems: Theory and Practice. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1998.
[17] A. H. Briggs, A. E. Ades, and M. J. Price. Probabilistic sensitivity analysis for decision trees with multiple branches: Use of the Dirichlet distribution in a Bayesian framework. Medical Decision Making, 23(4):341–350, Aug 2003.
[18] C. Brouse, G. A. Dumont, F. J. Herrmann, and J. M. Ansermino. A wavelet approach to detecting electrocautery noise in the ECG. IEEE Engineering in Medicine and Biology Magazine, 25(4):76–82, Jul-Aug 2006.
[19] C. Brouse, G. A. Dumont, P. Yang, J. Lim, and J. M. Ansermino. iAssist: A software framework for intelligent patient monitoring. In Proceedings of the 29th Annual International Conference of the IEEE
Engineering in Medicine and Biology Society, pages 3790–3793, Aug 2007.
[20] C. Burge and S. Karlin. Prediction of complete gene structures in human genomic DNA. Journal of Molecular Biology, 268(1):78–94, Apr 1997.
[21] A. Burian and P. Kuosmanen. Tuning the smoothness of the recursive median filter. IEEE Transactions on Signal Processing, 50(7):1631–1639, Jul 2002.
[22] D. Calvelo, M. C. Chambrin, D. Pomorski, and P. Ravaux. Towards symbolization using data-driven extraction of local trends for ICU monitoring. Artificial Intelligence in Medicine, 19(3):203–223, 2000.
[23] M.-C. Chambrin. Alarms in the intensive care unit: how can the number of false alarms be reduced? Critical Care, 5(4):184–188, 2001.
[24] S. Charbonnier. On line extraction of temporal episodes from ICU high-frequency data: A visual support for signal interpretation. Computer Methods and Programs in Biomedicine, 78(2):115–132, 2005.
[25] S. Charbonnier, G. Becq, and L. Biot. On-line segmentation algorithm for continuously monitored data in intensive care units. IEEE Transactions on Biomedical Engineering, 51(3):484–492, Mar 2004.
[26] C. Chatfield. The Analysis of Time Series, 5th ed. Chapman & Hall, 1996.
[27] A. I. Cohn, S. Rosenbaum, M. Factor, and P. L. Miller. DYNASCENE: an approach to computer-based intelligent cardiovascular monitoring using sequential clinical scenes. Methods of Information in Medicine, 29(2):122–131, 1990.
[28] E. Coiera, V. Tombs, and T. Clutton-Brock. Attentional overload as a fundamental cause of human error in monitoring. Technical report, Hewlett Packard Laboratories, 1996.
[29] J. Cooper, R. Newbower, and R. Kitz. An analysis of major errors and equipment failures in anesthesia management: considerations for prevention and detection. Anesthesiology, 60:34–42, 1984.
[30] M. D. Coovert and K. McNelis.
Determining the number of common factors in factor analysis — a review and program. Educational and Psychological Measurement, 48(3):687–692, 1988.
[31] C. M. Crowe. Data reconciliation — progress and challenges. Journal of Process Control, 6:89–98, 1996.
[32] M. Daumer and M. Falk. On-line change-point detection (for state space models) using multi-process Kalman filters. Linear Algebra and its Applications, 284(1–3):125–135, 1998.
[33] J. G. De Gooijer and R. J. Hyndman. 25 years of time series forecasting. International Journal of Forecasting, 22(3):443–473, 2006.
[34] S. Deligne. Inference of variable-length acoustic units for continuous speech recognition. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, volume 3, pages 1731–1734, Munich, Germany, 1997.
[35] L. Devroye. Non-uniform Random Variate Generation. Springer-Verlag, New York, 1986.
[36] S. R. Dixon, C. D. Wickens, and J. S. McCarley. On the independence of compliance and reliance: Are automation false alarms worse than misses? Human Factors, 49(4):564–572, Aug 2007.
[37] M. Dojat, F. Pachet, Z. Guessoum, D. Touchard, and A. Harf. NeoGanesh: A working system for the automated control of assisted ventilation in ICUs. Artificial Intelligence in Medicine, 11(2):97–117, Oct 1997.
[38] S. Dreiseitl and M. Binder. Do physicians value decision support? A look at the effect of decision support systems on physician opinion. Artificial Intelligence in Medicine, 33(1):25–30, 2005.
[39] D. Dunsmuir, J. Daniels, C. Brouse, S. Ford, and J. M. Ansermino. A knowledge authoring tool for clinical decision support. Journal of Clinical Monitoring and Computing, 22(3):189–198, Jun 2008.
[40] M. H. Ebrahim, J. M. Feldman, and I. Bar-Kana. A robust sensor fusion method for heart rate estimation. Journal of Clinical Monitoring and Computing, 13(6):385–393, 1997.
[41] Z. Elghazzawi.
Method and apparatus for removing artifact from physiological signals. United States Patent 5971930, Oct 1999.
[42] J. Endresen and D. W. Hill. The present state of trend detection and prediction in patient monitoring. Intensive Care Medicine, 3:15–26, 1977.
[43] M. R. Endsley. Design and evaluation for situation awareness enhancement. In Proceedings of the Human Factors Society 32nd Annual Meeting, pages 385–388, Sep 1988.
[44] L. M. Fagan. VM: Representing Time-dependent Relations in a Medical Setting. PhD Thesis. Stanford University, Stanford, CA, USA, 1980.
[45] R. Fried and M. Imhoff. Alarm algorithms in critical care monitoring. Biometrical Journal, 46(1):90–102, 2004.
[46] N. J. Gallagher and G. Wise. A theoretical analysis of the properties of median filters. IEEE Transactions on Acoustics, Speech, and Signal Processing, 29(6):1136–1141, Dec 1981.
[47] D. Garfinkel, P. Matsiras, T. Mavrides, J. McAdams, and S. Auckburg. Patient monitoring in the operating room: Validation of instrument reading by artificial intelligence methods. In Proceedings of the 13th Symposium on Computer Applications in Medical Care, pages 575–579, 1989.
[48] U. Gather, R. Fried, V. Lanius, and M. Imhoff. Online monitoring of high dimensional physiological time series - a case study. Estadística (Revista del Instituto Interamericano de Estadística), 53:259–298, 2003.
[49] X. Ge. Segmental Semi-Markov Models and Applications to Sequence Analysis. PhD Thesis. University of California, Irvine, Dec 2002.
[50] G. Goodwin and K. Sin. Prediction and Control. Prentice-Hall, first edition, 1984.
[51] K. Gordon. The multi-state Kalman filter in medical monitoring. Computer Methods and Programs in Biomedicine, 23:147–154, 1986.
[52] I. J. Haimowitz. Knowledge-based trend detection and diagnosis. PhD Thesis. Massachusetts Institute of Technology, Cambridge, MA, USA, 1994.
[53] I. J.
Haimowitz and I. S. Kohane. Managing temporal worlds for medical trend diagnosis. Artificial Intelligence in Medicine, 8(3):299–321, 1996.
[54] I. J. Haimowitz, P. P. Le, and I. S. Kohane. Clinical monitoring using regression-based trend templates. Artificial Intelligence in Medicine, 7(6):473–496, 1995.
[55] J. A. Hanley and B. J. McNeil. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143(1):29–36, 1982.
[56] M. J. Harrison and C. W. Connor. Statistics-based alarms from sequential physiological measurements. Anaesthesia, 62(10):1015–1023, 2007.
[57] P. J. Harrison and C. F. Stevens. Bayesian forecasting (with discussion). Journal of the Royal Statistical Society: Series B, 38(3):205–247, 1976.
[58] M. J. Hayes and P. R. Smith. A new method for pulse oximetry possessing inherent insensitivity to artifact. IEEE Transactions on Biomedical Engineering, 48(4):452–461, Apr 2001.
[59] D. J. Hitchings, M. J. Campbell, and D. E. Taylor. Trend detection of pseudo-random variables using an exponentially mapped past statistical approach: an adjunct to computer assisted monitoring. Journal of Clinical Monitoring, 6(2):73–88, Apr 1975.
[60] S. Hoare. Automatic artifact identification in anesthesia patient record keeping: a comparison of techniques. Medical Engineering and Physics, 22(8):547–553, 2000.
[61] C. C. Holt. Forecasting seasonals and trends by exponentially weighted moving averages (originally published in 1957). International Journal of Forecasting, 20(1):5–10, 2004.
[62] C. E. Hope, C. D. Lewis, I. R. Perry, and A. Gamble. Computed trend analysis in automated patient monitoring systems. British Journal of Anaesthesia, 45(5):440–449, 1973.
[63] M. Imhoff, M. Bauer, U. Gather, and D. Lohlein. Time series analysis in intensive care medicine. Applied Cardiopulmonary Pathophysiology, 6:203–281, 1997.
[64] A. Ismail.
On Dual Control and Adaptive Kalman Filtering with Application in the Pulp and Paper Industry. PhD Thesis. Department of Electrical & Computer Engineering, The University of British Columbia, Vancouver, Canada, Sep 2001.
[65] A. Jawad and J. G. Cundill. An expert system for the mechanical ventilator settings. IEE Colloquium on Intelligent Decision Support Systems and Medicine, pages 1–3, Jun 1992.
[66] K. Jöreskog. Some contributions to maximum likelihood factor analysis. Psychometrika, 32(4):443–482, Dec 1967.
[67] R. Jennrich and S. Robinson. A Newton-Raphson algorithm for maximum likelihood factor analysis. Psychometrika, 34(1):111–123, Mar 1969.
[68] I. T. Jolliffe. Principal Component Analysis. Springer, second edition, Oct 2002.
[69] P. Jordan. Standard IEC 62304 - medical device software - software lifecycle processes. In The Institution of Engineering and Technology Seminar on Software for Medical Devices, pages 41–47, Nov 2006.
[70] M. G. Kahn. Modeling time in medical decision-support programs. Medical Decision Making, 11(4):249–264, 1991.
[71] H. Kaiser. The varimax criterion for analytic rotation in factor analysis. Psychometrika, 23:187–200, 1958.
[72] R. Kennedy. A modified Trigg's Tracking Variable as an 'advisory' alarm during anesthesia. International Journal of Clinical Monitoring and Computing, 12:197–204, 1995.
[73] G. Klein and R. Calderwood. Decision models: some lessons from the field. IEEE Transactions on Systems, Man and Cybernetics, 21(5):1018–1026, Sep/Oct 1991.
[74] E. Koski, A. Mäkivirta, T. Sukuvaara, and A. Kari. Frequency and reliability of alarms in the monitoring of cardiac postoperative patients. Journal of Clinical Monitoring and Computing, 7(2):129–133, 1990.
[75] C. A. Kulikowski. History and development of artificial intelligence methods for medical decision making. In J. D. Bronzino, editor, The Biomedical Engineering Handbook, pages 2681–2698. IEEE Press, 1995.
[76] J.
Larsson and B. Hayes-Roth. Guardian: intelligent autonomous agent for medical monitoring and diagnosis. IEEE Intelligent Systems and Their Applications, 13(1):58–64, Jan/Feb 1998.
[77] S. T. Lawless. Crying wolf: False alarms in a pediatric intensive care unit. Critical Care Medicine, 22(6):981–985, 1994.
[78] A. Lowe, R. W. Jones, and M. J. Harrison. Temporal pattern matching using fuzzy templates. Journal of Intelligent Information Systems, 13(1-2):27–45, 1999.
[79] I. MacGill, J. Cade, R. Siganporia, and J. Packer. VAD: ventilation management in the ICU. In Proceedings of the Third Annual IEEE Symposium on Computer-Based Medical Systems, pages 345–349, Jun 1990.
[80] W. H. Majoros, M. Pertea, A. L. Delcher, and S. L. Salzberg. Efficient decoding algorithms for generalized hidden Markov model gene finders. BMC Bioinformatics, 6:16, 2005.
[81] A. Makivirta, E. Koski, A. Kari, and T. Sukuvaara. The median filter as a preprocessor for a patient monitor limit alarm system in intensive care. Computer Methods and Programs in Biomedicine, 34(2-3):139–144, 1991.
[82] V. Manfredi, S. Mahadevan, and J. Kurose. Switching Kalman filters for prediction and tracking in an adaptive meteorological sensing network. In Proceedings of the Second Annual IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks, pages 197–206, Santa Clara, California, USA, Sep 2005.
[83] D. Maquin, O. Adrot, and J. Ragot. Data reconciliation with uncertain models. ISA Transactions, 39:35–45, Feb 2000.
[84] B. H. McGhee and M. E. J. Bridges. Monitoring arterial blood pressure: What you may not know. Critical Care Nurse, 22(2):66–70, Apr 2002.
[85] W. W. Melek, Z. Lu, A. Kapps, and W. D. Fraser. Comparison of trend detection algorithms in the analysis of physiological time-series data. IEEE Transactions on Biomedical Engineering, 52(4):639–651, Apr 2005.
[86] C. Meredith and J. Edworthy. Are there too many alarms in the intensive care unit?
an overview of the problems. Journal of Advanced Nursing, 21(1):15–20, 1995.
[87] G. Miller. The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychological Review, 63:81–97, 1956.
[88] P. Molenaar, J. Gooijer, and B. Schmitz. Dynamic factor analysis of nonstationary multivariate time series. Psychometrika, 57(3):333–349, Sep 1992.
[89] M. K. Molyneux, A. K. McIndoe, A. T. Lovell, and P. K. T. Mok. Continuous auditory monitoring. How well can we estimate absolute and changing heart rates? Can this be improved? In International Meeting on Medical Simulation, Albuquerque, NM, USA, Jan 2004.
[90] F. A. Mora, G. Passariello, G. Carrault, and J. P. Le Pichon. Intelligent patient monitoring and management systems: a review. IEEE Engineering in Medicine and Biology Magazine, 12(4):23–33, Dec 1993.
[91] H. J. Murff, V. L. Patel, G. Hripcsak, and D. W. Bates. Detecting adverse events for patient safety research: a review of current methodologies. Journal of Biomedical Informatics, 36(1-2):131–143, 2003.
[92] K. Murphy. Switching Kalman filter. Technical report, Compaq Cambridge Research Lab, Cambridge, MA, 1998.
[93] K. Murphy. Hidden semi-Markov models (segment models). Technical report, MIT Artificial Intelligence Lab, Nov 2002.
[94] I. J. Myung. The importance of complexity in model selection. Journal of Mathematical Psychology, 44(1):190–204, 2000.
[95] M. J. Navabi, K. C. Mylrea, and R. C. Watt. Detection of false alarms using an integrated anesthesia monitor. In Proceedings of the 11th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, volume 6, pages 1774–1775, Nov 1989.
[96] K. C. Ng and B. Abramson. Uncertainty management in expert systems. IEEE Expert [see also IEEE Intelligent Systems and Their Applications], 5(2):29–48, Apr 1990.
[97] J. Orr and D. Westenskow. A breathing circuit alarm system based on neural networks.
Anesthesiology, 71(A):339, 1989.
[98] M. Ostendorf, V. V. Digalakis, and O. A. Kimball. From HMM's to segment models: A unified view of stochastic modeling for speech recognition. IEEE Transactions on Speech and Audio Processing, 4(5):360–378, 1996.
[99] J. Otterman. The properties and methods for computation of exponentially-mapped-past statistical variables. IEEE Transactions on Automatic Control [see also IRE Transactions on Automatic Control], 5:11–17, Jan 1960.
[100] E. S. Page. Continuous inspection schemes. Biometrika, 41:100–115, 1954.
[101] W. A. Perkins and A. Austin. Adding temporal reasoning to expert-system-building environments. IEEE Expert: Intelligent Systems and Their Applications, 5(1):23–30, 1990.
[102] R. R. Pitre, V. P. Jilkov, and X. R. Li. A comparative study of multiple-model algorithms for maneuvering target tracking. In I. Kadar, editor, Proceedings of Signal Processing, Sensor Fusion, and Target Recognition XIV, volume 5809, pages 549–560, May 2005.
[103] C. Pott, A. Johnson, and F. Cnossen. Improving situation awareness in anaesthesiology. In Proceedings of the 2005 Annual Conference on European Association of Cognitive Ergonomics, pages 255–263. University of Athens, 2005.
[104] S. J. Press and K. Shigemasu. Bayesian inference in factor analysis. In L. J. Glesser, editor, Contributions to Probability and Statistics, New York, 1989. Springer.
[105] L. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, Feb 1989.
[106] N. Ramaux, D. Fontaine, and M. Dojat. Temporal scenario recognition for intelligent patient monitoring. In Proceedings of the 6th Conference on Artificial Intelligence in Medicine in Europe, pages 331–342, 1997.
[107] I. Rampil. Intelligent detection of artifact. In J. Gravenstein, editor, The Automated Anesthesia Record and Alarm Systems, pages 175–190, Boston, 1987. Butterworths.
[108] C. Rao.
Estimation and tests of significance in factor analysis. Psychometrika, 20(2):93–111, Jun 1955.
[109] G. Rau. Man-machine communication for monitoring, recording and decision support in an anesthesia information system. In Proceedings of the 9th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 427–428, Boston, MA, 1987.
[110] A. C. Rencher. Multivariate Statistical Inference and Applications. Wiley Interscience, Dec 1997.
[111] Rockville. FDA regulation of medical device software (background). In the FDA/NLM Software Policy Workshop, Sep 1996.
[112] D. Rubin and D. Thayer. EM algorithms for ML factor analysis. Psychometrika, 47(1):69–76, Mar 1982.
[113] A. Salatian and J. Hunter. Deriving trends in historical and real-time continuously sampled medical data. Journal of Intelligent Information Systems, 13(1-2):47–71, 1999.
[114] T. Schecke, G. Rau, H. J. Popp, H. Kasmacher, G. Kalff, and H. J. Zimmermann. A knowledge-based approach to intelligent alarms in anesthesia. IEEE Engineering in Medicine and Biology Magazine, 10(4):38–44, Dec 1991.
[115] Y. Shahar. A framework for knowledge-based temporal abstraction. Artificial Intelligence, 90(1-2):79–133, 1997.
[116] Y. Shahar and M. A. Musen. RESUME: A temporal-abstraction system for patient monitoring. Computers and Biomedical Research, 26(3):255–273, 1993.
[117] S. Sharshar, L. Allart, and M.-C. Chambrin. A new approach to the abstraction of monitoring data in intensive care. Lecture Notes in Computer Science, 3581:13–22, 2005.
[118] E. H. Shortliffe. Computer-Based Medical Consultations: MYCIN. Elsevier, North-Holland, New York, 1976.
[119] R. H. Shumway and D. S. Stoffer. Dynamic linear models with switching. Journal of the American Statistical Association, 86(415):763–769, 1991.
[120] T. Sibanda and N. Sibanda. The Cusum chart method as a tool for continuous monitoring of clinical outcomes using routinely collected data.
BMC Medical Research Methodology, 7(1):46, 2007.
[121] D. F. Sittig and M. Factor. Physiologic trend detection and artifact rejection: a parallel implementation of a multi-state Kalman filtering algorithm. Computer Methods and Programs in Biomedicine, 31:1–10, 1990.
[122] D. F. Sittig, N. L. Pace, R. M. Gardner, E. Beck, and A. H. Morris. Implementation of a computerized patient advice system using the HELP clinical information system. Computers and Biomedical Research, 22(5):474–487, 1989.
[123] D. A. Sitzman and R. M. Farrell. System and method for selecting physiological data from a plurality of physiological data sources. United States Patent 6801802, Oct 2004.
[124] A. Smith and M. West. Monitoring renal transplants: an application of the multiprocess Kalman filter. Biometrics, 39:867–878, 1983.
[125] S. Snook and R. Gorsuch. Principal component analysis versus common factor analysis: A Monte Carlo study. Psychological Bulletin, 106(1):148–154, Jul 1989.
[126] T. Söderström. Discrete-time Stochastic Systems: Estimation and Control. Springer, second edition, 2002.
[127] Y. Sowb, R. Loch, and E. Roth. Cognitive modeling of intraoperative critical events. In IEEE International Conference on Systems, Man, and Cybernetics, volume 3, pages 2533–2538, Oct 1998.
[128] D. Spiegelhalter, O. Grigg, R. Kinsman, and T. Treasure. Risk-adjusted sequential probability ratio tests: applications to Bristol, Shipman and adult cardiac surgery. International Journal for Quality in Health Care, 15(1):7–13, 2003.
[129] M. Stacey and C. McGregor. Temporal abstraction in intelligent clinical data analysis: A survey. Artificial Intelligence in Medicine, 39(1):1–24, 2007.
[130] F. Steimann. Diagnostic Monitoring of Clinical Time Series. PhD thesis, Technical University of Vienna, Austria, 1995.
[131] K. Stoodley and M. Mirnia. The automatic detection of transients, step changes and slope changes in the monitoring of medical time series.
The Statistician, 28:163–170, 1979.
[132] G. Takla, J. H. Petre, D. J. Doyle, M. Horibe, and B. Gopakumaran. The problem of artifacts in patient monitor data during surgery: a clinical and methodological review. Anesthesia and Analgesia, 103(5):1196–1204, Nov 2006.
[133] A. C. Tamhane and R. S. H. Mah. Data reconciliation and gross error detection in chemical process networks. Technometrics, 27(4):409–422, 1985.
[134] S. Tani, M. Fujikake, K. Minamitani, and Y. Yamanaka. Electronic aid in routine observation in hospital ward. In Digest of Papers for Scientific Program, 6th International Conference on Medical Electronics and Biological Engineering, Aug 1965.
[135] D. E. M. Taylor, J. S. Whamond, and D. J. Hitchings. A probabilistic approach to patient monitor alarm system. In MEDINFO'74, page 746, North-Holland, Amsterdam, 1974.
[136] S. Taylor-Adams, A. Brodie, and C. Vincent. Safety skills for clinicians: An essential component of patient safety. Journal of Patient Safety, 4(3), Sep 2008.
[137] G. H. Thomson. The Factorial Analysis of Human Ability. University of London Press, 1951.
[138] J. Tinker, D. Dull, R. Caplan, R. Ward, and F. Cheney. Role of monitoring devices in prevention of anesthetic mishaps: a closed claims analysis. Anesthesiology, 71(4):541–546, 1989.
[139] D. Trigg. Monitoring a forecasting system. Operational Research Quarterly, 15:271–274, 1964.
[140] C. L. Tsien. TrendFinder: Automated Detection of Alarmable Trends. PhD thesis, Massachusetts Institute of Technology, 2000.
[141] C. L. Tsien and J. C. Fackler. Poor prognosis for existing monitors in the intensive care unit. Critical Care Medicine, 25(4):614–619, Apr 1997.
[142] N. S. Uckun. An Ontology for Model-based Reasoning in Physiological Domains. PhD thesis, Vanderbilt University, Nashville, TN, USA, 1992.
[143] N. S. Uckun. Artificial intelligence in medicine: state-of-the-art and future prospects. Artificial Intelligence in Medicine, 5(2):89–91, 1993.
[144] S. Uckun. Intelligent systems in patient monitoring and therapy management. International Journal of Clinical Monitoring and Computing, 11:11–241, 1994.
[145] S. Uckun and B. M. Dawant. Qualitative modeling as a paradigm for diagnosis and prediction in critical care environments. Artificial Intelligence in Medicine, 4(2):127–144, 1992.
[146] O. Uluyol, X. Wang, and L. H. Tsoukalas. A connectionist approach to trend detection. In Proceedings of International Conference on Information Intelligence and Systems, pages 255–259, 1999.
[147] W. F. Velicer and D. N. Jackson. Component analysis versus common factor analysis: Some further observations. Multivariate Behavioral Research, 25(1):97–114, 1990.
[148] A. Wald. Sequential tests of statistical hypotheses. The Annals of Mathematical Statistics, 16(2):117–186, 1945.
[149] K. Watanabe and S. G. Tzafestas. Generalized pseudo-Bayes estimation and detection for abruptly changing systems. Journal of Intelligent and Robotic Systems, 7(1):95–112, February 1993.
[150] C. K. Waterson. Development directions in integrated anesthesia monitoring. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, volume 4, pages 1765–1766, Nov 1988.
[151] J. P. Welch and N. M. Sims. Network for portable patient monitoring devices. United States Patent 5319363, Jun 1994.
[152] D. Westenskow, J. Orr, F. Simon, H. Bender, and H. Frankenberger. Intelligent alarms reduce anesthesiologist's response time to critical faults. Anesthesiology, 77(6):1074–1079, 1992.
[153] J. Whittaker and S. Fruhwirth-Schnatter. A dynamic change point model for detecting the onset of growth in bacteriological infections. Applied Statistics, 43(4):625–640, 1994.
[154] C. K. I. Williams, J. Quinn, and N. McIntosh. Factorial switching Kalman filters for condition monitoring in neonatal intensive care.
In Advances in Neural Information Processing Systems 18, pages 1513–1520, Cambridge, MA, 2006. MIT Press.
[155] M. Wright, J. Taekman, and M. Endsley. Objective measures of situation awareness in a simulated medical environment. Quality and Safety in Health Care, 13:65–71, October 2004.
[156] W. Wu, M. Black, D. Mumford, E. Bienenstock, and J. Donoghue. Modeling and decoding motor cortical activity using a switching Kalman filter. IEEE Transactions on Biomedical Engineering, 51(6):933–942, Jun 2004.
[157] J. Yamron, L. Gillick, S. Knecht, S. Lowe, and P. van Mulbregt. Statistical models for tracking and detection, 2000.
[158] P. Yang, G. Dumont, and J. M. Ansermino. An adaptive Cusum test based on a hidden semi-Markov model for change detection in noninvasive mean blood pressure trend. In Proceedings of the 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 3395–3398, New York, USA, Aug 2006.
[159] P. Yang, G. A. Dumont, and J. M. Ansermino. Online pattern recognition based on a generalized hidden Markov model for intraoperative vital sign monitoring. International Journal of Adaptive Control and Signal Processing, in press, 2008.
[160] P. Yang, G. A. Dumont, and J. M. Ansermino. Sensor fusion using a hybrid median filter for artifact removal in intraoperative heart rate monitoring. Journal of Clinical Monitoring and Computing, 23(2):75–83, Apr 2009.
[161] P. Yang, G. A. Dumont, and J. M. Ansermino. Adaptive change detection in heart rate trend monitoring in anesthetized children. IEEE Transactions on Biomedical Engineering, 53(11):2211–2219, 2006.
[162] P. Yang, G. A. Dumont, S. Ford, and J. M. Ansermino. Multivariate analysis in clinical monitoring: Detection of intraoperative hemorrhage and light anesthesia. In Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 6215–6218, Aug 2007.
[163] P. Yang, G. A.
Dumont, J. Lim, and J. M. Ansermino. Adaptive change point detection for respiratory variables. In Proceedings of the 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Shanghai, China, Sep 2005.
[164] D. Young and J. Griffiths. Clinical trials of monitoring in anesthesia, critical care and acute ward care: a review. British Journal of Anaesthesia, 97(1):39–45, 2006.
[165] L. Zhou, H. Su, and J. Chu. A study of nonlinear dynamic data reconciliation. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pages 1360–1365, Netherlands, Oct 2004.
[166] T. Zhou and R. Shumway. One-step approximations for detecting regime changes in the state space model with application to the influenza data. Computational Statistics and Data Analysis, 52(5):2277–2291, 2008.
[167] M. H. Zweig and G. Campbell. Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine. Clinical Chemistry, 39(4):561–577, 1993.

Appendices

I. Covariance estimation in the adaptive Kalman filter

This appendix describes the Q-R estimation process used in the adaptive Kalman filter in Chapter 5. Refer to [64] for more details.

Analytical forms of the Q and R estimates

The probability density of (x_{0...t}, y_{0...t}) is

V(x_{0\ldots t}, y_{0\ldots t} \mid \theta) = V(y_{0\ldots t} \mid x_{0\ldots t}, \theta) \cdot V(x_{0\ldots t} \mid \theta)   (I-1)

where \theta denotes (Q, R). Given the proposed DLM (5.1), the expectation of the log-likelihood E in Equation (5.7) becomes

f = -\frac{t+1}{2}\log|R| - \frac{t}{2}\log|Q| - \frac{1}{2}\sum_{k=0}^{t} \mathrm{tr}\{R^{-1}(\varepsilon_{k|t}\varepsilon_{k|t}^{T} + H V_{k|t} H^{T})\} - \frac{1}{2}\sum_{k=1}^{t} \mathrm{tr}\{Q^{-1}(\alpha_{k|t} + F\alpha_{k-1|t}F^{T} - \beta_{k|t}F^{T} - F\beta_{k|t}^{T})\} + o   (I-2)

where o represents terms independent of Q and R, tr denotes the matrix trace, and

\alpha_{k|t} = \hat{x}_{k|t}\hat{x}_{k|t}^{T} + V_{k|t}, \quad \beta_{k|t} = \hat{x}_{k|t}\hat{x}_{k-1|t}^{T} + V_{(k-1,k)|t}, \quad \varepsilon_{k|t} = y_k - H\hat{x}_{k|t}   (I-3)

with \hat{x}_{k|t} and V_{k|t} denoting the state estimate and the estimation-error covariance of x_k, both conditional on y_{0...t} (k \le t), and V_{(k-1,k)|t} denoting E\{(x_{k-1} - \hat{x}_{k-1|t})(x_k - \hat{x}_{k|t})^{T}\}.
Maximizing f with respect to Q and R gives the analytical form of the Q and R estimates:

\hat{Q}_t = \frac{1}{t}\sum_{k=1}^{t} \underbrace{(\alpha_{k|t} + F\alpha_{k-1|t}F^{T} - \beta_{k|t}F^{T} - F\beta_{k|t}^{T})}_{L_{k|t}}, \qquad \hat{R}_t = \frac{1}{t+1}\sum_{k=0}^{t} \underbrace{(\varepsilon_{k|t}\varepsilon_{k|t}^{T} + H V_{k|t} H^{T})}_{V_{k|t}}   (I-4)

Recursive update of the Q and R estimates

Substituting the recursive realization of \hat{x}_{k|t}, V_{k|t} and V_{(k-1,k)|t} into Equation (I-3) yields

\alpha_{k|t} = \alpha_{k|t-1} + W_{k|t}, \quad \beta_{k|t} = \beta_{k|t-1} + U_{k|t}, \quad \varepsilon_{k|t} = \varepsilon_{k|t-1} - H K_a(t,k)\varepsilon_{t|t-1}   (I-5)

where W_{k|t} = M(t,k,k) and U_{k|t} = M(t,k,k-1), with

M(t,k,m) = K_a(t,k)\varepsilon_{t|t-1}\varepsilon_{t|t-1}^{T}K_a(m,t)^{T} - V_a(t,k)H^{T}K_a(m,t)^{T} + K_a(t,k)\varepsilon_{t|t-1}\hat{x}_{m|t-1}^{T} + \hat{x}_{k|t-1}\varepsilon_{t|t-1}^{T}K_a(m,t)^{T}

Using Equation (I-4) we get

\hat{Q}_t = \frac{1}{t}\{(t-1)\hat{Q}_{t-1} + L_{t|t} + \sum_{k=1}^{t-1}\gamma_{k|t}\}, \qquad \hat{R}_t = \frac{1}{t+1}\{t\,\hat{R}_{t-1} + V_{t|t} + \sum_{k=0}^{t-1}\chi_{k|t}\}   (I-6)

where

\gamma_{k|t} = H K_a(t,k)\varepsilon_{t|t-1}\varepsilon_{t|t-1}^{T}K_a(t,k)^{T}H^{T} - \varepsilon_{k|t-1}\varepsilon_{t|t-1}^{T}K_a(t,k)^{T}H^{T} - H K_a(t,k)\varepsilon_{t|t-1}\varepsilon_{k|t-1}^{T} - H V_a(t,k)H^{T}K_a(t,k)^{T}H^{T}

and \chi_{k|t} = W_{k|t} + F W_{k-1|t}F^{T} - U_{k|t}F^{T} - F U_{k|t}^{T}. Since \gamma and \chi vanish quickly as k moves away from t, a window T with a size of 2-3 times the dominant time constant of the Kalman filter is applied to finally obtain Equation (5.8).

Proof of Equation (5.5)

Consider an augmented state vector [x_t\ p_t\ q_t]^{T} for t > k, where p_t = x_k and q_t = x_{k-1}. The system model is

\begin{bmatrix} x_{t+1} \\ p_{t+1} \\ q_{t+1} \end{bmatrix} = \begin{bmatrix} F & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{bmatrix} \begin{bmatrix} x_t \\ p_t \\ q_t \end{bmatrix} + \begin{bmatrix} I \\ 0 \\ 0 \end{bmatrix} \zeta_t   (I-7)

The prediction-error covariance matrix of the augmented state vector is derived in the same manner as in the standard Kalman filter. One of its elements, V^{p,q}_{t+1|t}, the cross-covariance of \hat{p}_{t+1|t} and \hat{q}_{t+1|t}, is recursively updated by

V^{p,q}_{t+1|t} = V^{p,q}_{t|t-1} - [V^{x,p}_{t|t-1}]^{T} H^{T} [K^{x,q}_t]^{T}   (I-8)

Let V_a(t,k) represent [V^{x,p}_{t|t-1}]^{T} and K_a(k-1,t) represent [K^{x,q}_t]. As V_{(k,k-1)|t} = V^{p,q}_{t+1|t}, Equation (5.5) is confirmed. The equations in (5.6) follow the standard Kalman filter iteration.
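The windowed innovation-based adaptation above can be illustrated numerically. The following is a minimal hypothetical sketch (not the thesis implementation, and not Equation (I-6) itself): a scalar local-level DLM in which only the measurement-noise variance R is re-estimated, by moment-matching the forecast residuals over a sliding window; all names and parameter values are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: scalar local-level DLM
#   x_{t+1} = x_t + w_t,  y_t = x_t + v_t,
# with Q = var(w) held fixed and R = var(v) adapted online from a
# sliding window of innovations (forecast residuals), in the spirit
# of the windowed recursive Q-R update described above.
def adaptive_kalman(y, q=1.0, r0=1.0, window=30):
    """Filter y, adapting the measurement-noise variance R online."""
    x, P, R = float(y[0]), 1.0, r0
    innovations = []
    estimates = [x]
    for t in range(1, len(y)):
        x_pred, P_pred = x, P + q          # time update
        e = y[t] - x_pred                  # innovation (forecast residual)
        K = P_pred / (P_pred + R)          # Kalman gain
        x = x_pred + K * e                 # measurement update
        P = (1.0 - K) * P_pred
        innovations.append(e)
        # Moment matching: E[e^2] = P_pred + R, so R ~ mean(e^2) - P_pred,
        # clipped to stay positive.
        if len(innovations) >= window:
            e_win = np.array(innovations[-window:])
            R = max(float(np.mean(e_win ** 2)) - P_pred, 1e-6)
        estimates.append(x)
    return np.array(estimates), R

rng = np.random.default_rng(0)
y = 5.0 + rng.normal(0.0, 1.0, size=300)   # noisy constant signal
xs, R_hat = adaptive_kalman(y)
```

The full Adaptive-DLM method also adapts Q, using the smoothed quantities alpha and beta of Equation (I-3); adapting only R keeps this sketch short while showing the same windowed residual-moment idea.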
II. List of physiological variables

Variable        Unit         Description
HR              beats/min    Number of heart beats per minute
HRecg           beats/min    HR from electrocardiogram monitor
HRSpO2          beats/min    HR from pulse oximeter
HRabp           beats/min    HR from arterial blood pressure monitor
NIBPmean        mmHg         Noninvasive mean blood pressure
NIBPdia         mmHg         Noninvasive diastolic blood pressure
NIBPsys         mmHg         Noninvasive systolic blood pressure
MAP             mmHg         Invasive mean arterial blood pressure
BPsys           mmHg         Invasive systolic arterial blood pressure
BPdia           mmHg         Invasive diastolic arterial blood pressure
MCVP            mmHg         Mean central venous blood pressure
SpO2            %            Oxygen saturation of arterial blood
SvO2            %            Mixed venous oxygen saturation
MVexp           ml           End tidal minute volume
RR(CO2)         breaths/min  Respiratory rate from capnograph
MVinsp          ml           Inspiratory minute volume
TVexp           ml           End tidal volume
TVinsp          ml           Inspiratory tidal volume
Ppeak           mmHg         Peak airway pressure
EtCO2 (FeCO2)   %            End tidal concentration of carbon dioxide
EtO2 (FeO2)     %            End tidal concentration of oxygen
FeAA            %            End tidal concentration of anesthetic agent
FeN2O           %            End tidal concentration of nitrous oxide
FeN2            %            End tidal concentration of nitrogen
FiO2            %            Inspiratory concentration of oxygen
FiAA            %            Inspiratory concentration of anesthetic agent
FiN2O           %            Inspiratory concentration of nitrous oxide
FiN2            %            Inspiratory concentration of nitrogen"""@en ;
edm:hasType "Thesis/Dissertation"@en ;
vivo:dateIssued "2009-11"@en ;
edm:isShownAt "10.14288/1.0067275"@en ;
dcterms:language "eng"@en ;
ns0:degreeDiscipline "Electrical and Computer Engineering"@en ;
edm:provider "Vancouver : University of British Columbia Library"@en ;
dcterms:publisher "University of British Columbia"@en ;
dcterms:rights "Attribution-NonCommercial-NoDerivatives 4.0 International"@en ;
ns0:rightsURI "http://creativecommons.org/licenses/by-nc-nd/4.0/"@en ;
ns0:scholarLevel "Graduate"@en ;
dcterms:title "Adaptive trend change detection and pattern recognition in physiological monitoring"@en ;
dcterms:type "Text"@en ;
ns0:identifierURI "http://hdl.handle.net/2429/8932"@en .