Guidance, autonomy, and gaze for surgical robots
Banks Gadbois, Alexandre
Abstract
Robot-assisted surgery (RAS) is now in standard use for procedures ranging from cardiac surgery to prostatectomy, with more than 6,500 clinical systems worldwide. RAS education, however, lags behind this widespread adoption, highlighting the need for specialized tools that support novice learning and enhance human-robot interaction. This thesis develops and validates three open-source systems to support training and improve perception in RAS. The first, dV-STEAR, addresses guidance by introducing an augmented reality (AR) platform that records and plays back expert surgeon motions. Our AR system achieves a mean end-effector pose estimation error of 3.86 mm ± 2.01 mm. A 24-participant user study demonstrates that, on a path-following task, dV-STEAR leads to improved completion speed (p=0.03), fewer errors (p=0.01), and a more balanced use of the dominant and non-dominant hands (p=0.04). On a precision pick-and-place task, participants using AR show better overall performance (p=0.005) and improved hand balance (p=0.004). Participants also report lower frustration and mental demand when learning with AR demonstrations. The second contribution, AutoCam, addresses the limited field of view faced by both novice and expert surgeons in RAS. This system autonomously tracks a salient feature in a surgical training scene using an auxiliary camera controlled by the da Vinci Surgical System. In a six-participant study, AutoCam maintains feature visibility of 99.84% while respecting workspace and joint-limit constraints. The system's positional error is 4.66 mm ± 3.03 mm. The final contribution of this thesis is a method that compensates for head motion during gaze tracking in RAS. Because gaze is an index of learning and attention, integrating gaze tracking into surgical systems is essential for education. Through a 24-participant user study, we show that our method reduces angular error by 1.20 degrees (p=0.037) for the left eye and 1.26 degrees (p=0.079) for the right eye compared with previous gold-standard approaches. All three systems presented in this thesis are open-source and implemented on a da Vinci robot. Together, these contributions improve guidance and feedback mechanisms for surgical training and intraoperative perception.
Item Metadata
Title | Guidance, autonomy, and gaze for surgical robots
Creator | Banks Gadbois, Alexandre
Supervisor |
Publisher | University of British Columbia
Date Issued | 2025
Genre |
Type |
Language | eng
Date Available | 2025-04-22
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0448485
URI |
Degree |
Program |
Affiliation |
Degree Grantor | University of British Columbia
Graduation Date | 2025-05
Campus |
Scholarly Level | Graduate
Rights URI | http://creativecommons.org/licenses/by-nc-nd/4.0/
Aggregated Source Repository | DSpace