UBC Theses and Dissertations
A multifaceted quantitative validity assessment of laparoscopic surgical simulators
Kinnaird, Catherine
Abstract
The objective of this work was to design an experimental surgical tool and data acquisition protocol to quantitatively measure surgeon motor behaviour in a human operating room (OR). We use expert OR behaviour data to evaluate the concurrent validity of two types of laparoscopic surgical simulators. Current training and evaluation methods are subjective and potentially unreliable, and surgical simulators have been recognized as potential objective training and measurement tools, even though their validity has not been quantitatively established. We compare surgeon motor behaviour in the OR to that in a ~$50 000 virtual reality simulator and a ~$1 physical "orange" simulator. Our contention is that if expert behaviour in a simulator is the same as in the OR, then that simulator is a valid measurement tool. A standard laparoscopic surgical tool is instrumented with optical, magnetic, and force/torque sensors to create a hybrid system. We use this hybrid tool in a pilot study to collect continuous kinematic and force/torque profiles in a human OR. We compare the position, velocity, acceleration, jerk, and force/torque profiles of two expert surgeons across analogous tasks in the three settings (OR, VR, and physical) using the Kolmogorov-Smirnov statistic. We find that intra- and intersubject differences within each setting are small (D < 0.3), which indicates that the experts exhibit the same motor behaviour as each other in each setting; this also helps to validate our choice of performance measures and analysis method. However, we find larger intersetting expert differences (0.3 < D < 1) between the OR and the simulators. We suspect that experts behave the same as each other in all settings, but that OR behaviour is considerably different from simulator behaviour. In other words, for this preliminary study we find that both the VR and physical simulators demonstrate poor performance validity.
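The comparison method described in the abstract, two-sample Kolmogorov-Smirnov tests over kinematic and force/torque profiles, can be illustrated with a minimal sketch. The variable names, sample sizes, and the synthetic speed data below are illustrative assumptions only, not the thesis's actual data or pipeline.

```python
# Minimal sketch: two-sample Kolmogorov-Smirnov comparison of two
# recorded tool-tip speed profiles (e.g. an expert in the OR vs. the
# same expert on a simulator). Data here are synthetic placeholders.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical tool-tip speed samples (mm/s) from two settings.
or_speed = rng.normal(loc=35.0, scale=10.0, size=2000)   # operating room
sim_speed = rng.normal(loc=28.0, scale=12.0, size=2000)  # VR simulator

# D is the maximum distance between the two empirical CDFs; the abstract
# reads D < 0.3 as "same behaviour" and 0.3 < D < 1 as a substantial
# inter-setting difference.
result = ks_2samp(or_speed, sim_speed)
print(f"KS statistic D = {result.statistic:.3f}, p = {result.pvalue:.3g}")
```

The same comparison would be repeated for each performance measure (position, velocity, acceleration, jerk, force/torque) and each pair of settings.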
Item Metadata

Title | A multifaceted quantitative validity assessment of laparoscopic surgical simulators
Creator | Kinnaird, Catherine
Publisher | University of British Columbia
Date Issued | 2004
Extent | 16905819 bytes
Genre |
Type |
File Format | application/pdf
Language | eng
Date Available | 2009-11-23
Provider | Vancouver : University of British Columbia Library
Rights | For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use.
DOI | 10.14288/1.0080797
URI |
Degree |
Program |
Affiliation |
Degree Grantor | University of British Columbia
Graduation Date | 2004-11
Campus |
Scholarly Level | Graduate
Aggregated Source Repository | DSpace