Open Collections
UBC Theses and Dissertations
Prediction and production of legible human-like robot reaching motions : applications for collaborative human–robot interaction
Sheikholeslami, Sara
Abstract
Fluent collaboration between humans and robots depends on the ability to infer and communicate intent through motion. Humans naturally anticipate each other’s actions from partial movements, but this predictive fluency remains limited in current robotic systems. This thesis addresses that gap by developing a unified framework that models, predicts, and reproduces human reaching trajectories to enable both predictive inference and legible motion generation in collaborative contexts. Natural human reaching motions were collected using a marker-based Vicon motion capture system in a semi-structured placement task. Analyses revealed that unconstrained reaching trajectories exhibit consistent planarity and spatial regularities that can be effectively represented using an elliptical model. Building on this geometric representation, a snippet-based goal prediction framework was developed to infer motion intent from partial trajectories, demonstrating that early segments of an ongoing reach provide sufficient structure for accurate goal inference. The derived human-driven elliptical model was then mapped onto a redundant seven-degree-of-freedom robotic manipulator using an optimization-based inverse kinematics formulation, enabling the robot to reproduce human-like trajectories while maintaining kinematic feasibility. Finally, an online perceptual study evaluated the legibility of these motions by examining whether human observers could correctly and confidently infer motion goals from visualized trajectories. Results showed that the human-derived elliptical reaching trajectories supported legibility across both human and robot embodiments, although the strength of this effect depended on the specific embodiment and goal context. Together, these findings establish a computationally efficient and perceptually grounded approach to modeling human reaching motion, bridging predictive understanding and expressive motion generation for more intuitive human–robot collaboration.
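The abstract's pipeline of an elliptical reach model feeding a snippet-based goal predictor can be sketched in a few lines. Everything below is a hypothetical illustration, not the thesis's actual formulation: `elliptical_arc` uses one arbitrary quarter-ellipse parameterization in 2D (the thesis fits its model to recorded Vicon data), and `predict_goal` scores candidate goals by nearest-point distance from the observed partial trajectory to each goal's model arc.

```python
import numpy as np

def elliptical_arc(start, goal, n=50):
    """Hypothetical planar elliptical reach model: a quarter-ellipse
    p(t) = C + cos(t)*(start - C) + sin(t)*(goal - C), t in [0, pi/2],
    with the 'corner' C placed above the start at the goal's height.
    Illustrative only; the thesis derives its ellipse from motion data."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    C = np.array([start[0], goal[1]])          # corner of the start-goal box (2D)
    t = np.linspace(0.0, np.pi / 2.0, n)
    return C + np.outer(np.cos(t), start - C) + np.outer(np.sin(t), goal - C)

def predict_goal(snippet, start, goals, n=50):
    """Snippet-based goal inference (sketch): score each candidate goal by
    the mean nearest-sample distance from the observed partial trajectory
    to that goal's model arc, and return the index of the best goal."""
    snippet = np.asarray(snippet, float)
    scores = []
    for g in goals:
        arc = elliptical_arc(start, g, n)
        # distance of each observed point to every arc sample, keep nearest
        d = np.linalg.norm(snippet[:, None, :] - arc[None, :, :], axis=2)
        scores.append(d.min(axis=1).mean())
    return int(np.argmin(scores))

# Usage: the first ~30% of a reach toward goal 0 already identifies goal 0.
start = [0.0, 0.0]
goals = [[1.0, 0.5], [1.0, -0.5]]
snippet = elliptical_arc(start, goals[0])[:15]
print(predict_goal(snippet, start, goals))  # → 0
```

This captures the abstract's central claim in miniature: because the candidate arcs diverge early, a short prefix of the motion suffices to disambiguate the goal.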
Item Metadata
| Field | Value |
| --- | --- |
| Title | Prediction and production of legible human-like robot reaching motions : applications for collaborative human–robot interaction |
| Creator | Sheikholeslami, Sara |
| Publisher | University of British Columbia |
| Date Issued | 2026 |
| Language | eng |
| Date Available | 2026-04-16 |
| Provider | Vancouver : University of British Columbia Library |
| Rights | Attribution-NonCommercial-NoDerivatives 4.0 International |
| DOI | 10.14288/1.0451985 |
| Degree Grantor | University of British Columbia |
| Graduation Date | 2026-05 |
| Scholarly Level | Graduate |
| Aggregated Source Repository | DSpace |