Self-supervised learning at the sensor layer in robotics
Yuan, Kaiwen
Abstract
Modern robots are generally equipped with diverse sensor modalities, both for various functionalities and for safety redundancy. Recent breakthroughs in deep learning (DL) have demonstrated superior performance on many high-level tasks, especially when using multi-sensor fusion. While the majority of multi-modal DL methods assume that sensors are well calibrated, synchronized, and denoised, achieving this at the sensor layer is non-trivial and becomes increasingly expensive as robotic systems grow in complexity. Currently dominant approaches rely heavily on specific hardware setups or high-end sensors, which are generally not cost-effective. This cost concern could become a bottleneck for the wide adoption of low-cost robots in the near future. Even though DL has great potential at the sensor layer, the difficulty of acquiring sufficient and accurate annotations for the related tasks remains a major challenge. This thesis first formulates key problems at the robot sensor layer from a machine learning perspective, and then systematically proposes efficient self-supervised learning approaches. In our work, the popular and representative LiDAR-camera-inertial system is used as the study target. First, the challenging task of online LiDAR-camera extrinsic calibration is examined, and we investigate a self-supervised learning approach equipped with Riemannian metrics, trained on synthetic data. This was the first work in the literature to demonstrate that data-driven methods can compete with conventional approaches at the sensor layer; it lays the foundation for, and shows the potential of, the deeper explorations that follow. Second, we address several overlooked limitations of conventional synchronization pipelines and propose the first DL-based LiDAR-camera synchronization framework, built on a novel self-supervised learning scheme.
Third, the problem of Inertial Measurement Unit (IMU) denoising for navigation is studied, and we propose a self-supervised multi-task framework. This work demonstrates the superiority of data-driven approaches for IMU denoising and presents a realistic self-supervised learning implementation. These explorations initiate the adoption of deep learning for robot sensor-layer tasks and showcase how self-supervised learning can be applied. Our work helps push self-supervised learning at the sensor layer to a usable stage, demonstrates the potential of this direction, and sheds light on future research.
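To give a concrete sense of the "Riemannian metrics" mentioned for the extrinsic calibration task, a common such metric (assumed here purely for illustration; the abstract does not specify the thesis's exact choice) is the geodesic distance on the rotation group SO(3), which can serve as a rotation-error term when scoring a predicted LiDAR-to-camera extrinsic against a reference. The function name below is hypothetical:

```python
import numpy as np

def so3_geodesic_distance(R1, R2):
    """Geodesic (Riemannian) distance between two rotation matrices in SO(3):
    the rotation angle, in radians, of the relative rotation R1^T R2."""
    R_rel = R1.T @ R2
    # Trace identity for rotations: tr(R) = 1 + 2*cos(theta)
    cos_theta = (np.trace(R_rel) - 1.0) / 2.0
    cos_theta = np.clip(cos_theta, -1.0, 1.0)  # guard against numerical drift
    return np.arccos(cos_theta)

# Example: identity vs. a 90-degree rotation about the z-axis
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
print(so3_geodesic_distance(np.eye(3), Rz90))  # ~1.5708 (pi/2)
```

Unlike a naive element-wise (Euclidean) difference between rotation matrices, this distance respects the curved geometry of SO(3), which is why Riemannian metrics are attractive as loss terms for calibration networks.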
Item Metadata
Title: Self-supervised learning at the sensor layer in robotics
Creator:
Supervisor:
Publisher: University of British Columbia
Date Issued: 2023
Genre:
Type:
Language: eng
Date Available: 2023-03-01
Provider: Vancouver : University of British Columbia Library
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
DOI: 10.14288/1.0427378
URI:
Degree:
Program:
Affiliation:
Degree Grantor: University of British Columbia
Graduation Date: 2023-05
Campus:
Scholarly Level: Graduate
Rights URI:
Aggregated Source Repository: DSpace