UBC Theses and Dissertations
Human skill augmentation in robot-assisted surgery
Abdelaal, Alaa Eldin
Abstract
This thesis addresses the problem of assisting humans in robot-assisted surgery. Our research investigates this problem using two approaches: (i) designing interfaces that facilitate human control of the surgical robotics platform, and (ii) developing autonomous systems that perform repetitive parts of the surgical task, allowing humans to focus on the more demanding ones.

Following the first approach, we explored how multiple types of data can facilitate the surgeon’s control of surgical robots: motion data of expert surgeons, video data from an additional camera, and eye gaze data. The main application area of this approach is surgical training and skill assessment. Our results show that combining hand-over-hand and trial-and-error training approaches, based on expert motion data, enables trainees to balance the speed and accuracy of performing tasks better than either approach alone. Furthermore, our results show that a two-view system can improve both training and skill assessment compared with the traditional single-view case, with its application in training showing the most promise. In addition, we discovered a gaze-based phenomenon called “Quiet Eye” in multiple minimally invasive surgery settings. We report how this phenomenon changes with surgeons’ experience level and/or successful task completion, which opens the door to leveraging it in surgical training and skill assessment.

Following the second approach, in the context of autonomous robotic surgery, we worked on automating tasks such as moving the surgical camera and suturing. To automate the surgical camera, we proposed a rule-based method that uses both the position and 3D orientation of structures in the surgical scene. We tested the effectiveness of our autonomous camera method in video-based surgical skill assessment. To automate surgical tasks such as suturing, we leveraged the surgical robot’s capability to move multiple arms in parallel, devising autonomous execution models that go beyond the human way of performing these tasks. Our simulation experiments show that our proposed parallel execution models can lead to at least a 40% decrease in task completion time, compared with the state-of-the-art ones.
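As a toy illustration of why parallel arm execution shortens completion time (this is not code or data from the thesis; all sub-step names, phases, and durations below are hypothetical), a sequential schedule pays the sum of all sub-step durations, while a dual-arm schedule pays only the slowest arm per concurrent phase:

```python
# Toy model: durations in seconds for hypothetical sub-steps of one suture
# throw. "phase" groups sub-steps that a dual-arm robot could execute
# concurrently, one per arm.
from collections import defaultdict

steps = [
    {"phase": 1, "arm": "left",  "duration": 2.0},  # e.g., grasp needle
    {"phase": 1, "arm": "right", "duration": 1.5},  # e.g., tension tissue
    {"phase": 2, "arm": "left",  "duration": 3.0},  # e.g., drive needle
    {"phase": 2, "arm": "right", "duration": 2.5},  # e.g., reposition grasper
]

def sequential_time(steps):
    # Human-style execution: one sub-step at a time, so the total is the sum.
    return sum(s["duration"] for s in steps)

def parallel_time(steps):
    # Dual-arm execution: each phase lasts as long as its slowest arm.
    phase_max = defaultdict(float)
    for s in steps:
        phase_max[s["phase"]] = max(phase_max[s["phase"]], s["duration"])
    return sum(phase_max.values())

print(sequential_time(steps))  # 9.0
print(parallel_time(steps))    # 5.0 -> ~44% shorter for these made-up numbers
```

With these made-up durations the parallel schedule cuts completion time by roughly 44%, which is consistent in spirit with the at-least-40% reduction reported above, though the thesis results come from its own simulation experiments.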
Item Metadata
Title | Human skill augmentation in robot-assisted surgery
Creator | Abdelaal, Alaa Eldin
Supervisor |
Publisher | University of British Columbia
Date Issued | 2022
Genre |
Type |
Language | eng
Date Available | 2023-01-03
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0422943
URI |
Degree |
Program |
Affiliation |
Degree Grantor | University of British Columbia
Graduation Date | 2023-05
Campus |
Scholarly Level | Graduate
Rights URI |
Aggregated Source Repository | DSpace